Here’s a plan for an AI discussion.

TODO: get a bunch of sticky notes and pencils to facilitate this, and print the instructions.

Do the following twice:

  • People choose a topic (framed as a statement about AI) that they’re interested in discussing, and split into groups by topic. Each person then privately writes, on sticky notes, a reason the statement might be true and a reason it might be false, and puts the notes somewhere the group can see them. (4 mins)

    • Ideally the groups are roughly equally sized.
      • Each group should also get a “facilitator” (someone who has thought about the AI problem before).
    • Because the activity repeats, tell people to pick randomly between their first- and second-choice topics; they’ll get a chance at the other one in the second round.
  • Each person gets two sticky notes:

    • Write on one a compelling reason the statement might be true, and on the other a compelling reason it might be false.
  • Poll the group on the statement: thumbs up for strong agree, thumbs down for strong disagree, shrug for pretty unsure.

  • One person decides which sticky note the group will discuss first (ideally based in part on how often similar sentiments showed up across the group’s notes).

  • Discuss for a bit.

    • The facilitator should occasionally cold-call people for their take on things. Also, after explaining your point of view, check whether the other person understood it and whether it actually addressed their concern.
  • After about 5 minutes, the facilitator ends the discussion of this sticky note and asks for a show of hands from anyone whose opinion changed or shifted, then cold-calls one of those people to say what changed it.

  • Then pick a random person to choose the next sticky note!

  • A note: maybe this is just me, but it’s hard to pay attention if things are too loud. Try to spread the different groups out enough to mitigate this.

  • Wrapping up after the discussions:

    • Invite people to think more about AI risk and then write down their thoughts sometime next week.
      • Suggest that they text someone, e.g. the person sitting next to them (and/or Alek), saying “I’ll write up my thoughts about X and send them to you next week.”

Here’s what I could say beforehand:

  • Introduce myself.
  • Lots of people think AI could be pretty dangerous, e.g. the signatories of the CAIS Statement on AI Risk.
    • But even among these people, there’s a lot of disagreement.
    • And some people say that AI isn’t too dangerous.
  • It’s worth investing some time into figuring out what’s true here.
  • I hope that you’ll be open to updating your beliefs based on this conversation.
    • One fun norm that I’d like to institute: when someone says something surprising, write it down!
  • I hope that as a result of this activity:
    • You’ll put more effort into being informed about the issue.
      • Resources I recommend: AI 2027; “How AI could take over in 2 years”; “What failure looks like”; Eliezer Yudkowsky on the Bankless podcast; the Tegmark/Bengio vs. LeCun/Mitchell debate; Zvi’s blog; LessWrong / the Alignment Forum.
    • You’ll spend some more time thinking about how we can solve these problems, and do some things to help. If you want ideas, come chat with me later.
    • You’ll help raise awareness of these problems and increase common knowledge that people care about them, e.g. by publicly sharing your beliefs. Be careful here that you’re not just saying “AI will be powerful,” because that elicits the wrong response.
  • Recap the argument for why AI could be scary and explain the activity.

Here are some slides that could be useful for a similar activity.