Sometimes I feel that I’ve had very limited success in communication efforts. In this post I’ll do three things:
- Share a new thought about communication that I found helpful.
- Share a few reasons why I will continue to communicate.
- Brainstorm some very local measures that I can take to communicate.
Sorry if this is a low-quality post; I'm low on energy right now.
Sec 1: A thought about communication
Sometimes I'll get into discussions about AI and people will say something like "the burden of proof is on you to prove that you are correct" (this happens fairly frequently).
Of course, if I want to convince people, then I should spend time making my arguments compelling and figuring out how best to present them.
However, something I’d like to point out is that my failure to convince person P about xrisk does not mean that AI will not kill person P and everyone that they care about. It could merely mean that I’m bad at convincing people of things, or that person P has an aversion to accepting an extremely uncomfortable and unpopular opinion.
Put another way: something I wish I could make clearer to people when I'm discussing xrisk with them is that we're hopefully on the same team.
I am willing to make it my responsibility to convince people of this issue. But to be convinced, you must take personal responsibility for getting the right answer.
Sec 2: Why communicate?
- The most obvious reason is that I believe this makes it more likely that the future is good.
- Another reason is that people deserve to know.
- Another reason is that my being louder about risks from AI can decrease the social burden on other people who want to care about AI. People can say, "look, there are lots of other people who care about this too."
Sec 3: What should communication look like?
- I'm quite excited about my project of communicating with senators / reaching out to congressional staffers about this.
But I think it's a really good idea to reach out more locally as well. One compelling reason is that it should be pretty easy. Another is that more local communication has higher bandwidth. Another is that there are a few people at MIT who care about making the future good, and, as discussed above, hearing more voices is helpful for them. The way I think about it: I'd be super excited if I became aware of someone else spreading awareness about risks from AI (in a productive way). Is this an invitation to the industrious skyspace follower? Your choice :).
Anyway, let's spend 10 minutes brainstorming what this should look like.
Plan:
"Mitigating xrisk from AI should be a priority." Join the CEOs of OpenAI, Anthropic, and DeepMind in endorsing this statement!
- Space for signatures on the paper.
- A QR code linking to the Hinton science article (generating the QR codes themselves is easy; see the sketch after this list).
- Make a version of the AI-xrisk post that "sounds more reasonable" (i.e., deliberately be a bit vaguer, at least at the beginning, about how high I actually believe the risk is, since stating it outright supposedly turns people off; although I also consistently get dramatically different reactions from people once they realize how serious I believe the situation is, so I'll find a balance). Post a QR code linking to that as well, and add my contact info to it.
- On the AI-xrisk post I can add my contact info and say something like: if you disagree, please send me your argument for why I'm wrong; if you do, I'm happy to meet up and discuss! If you agree and are willing to share that publicly, please please please post your argument on the internet so I can link to it. I can show you how to do this if you aren't tech savvy; alternatively, you can send your argument to me and I'll put it on the internet and link to it.
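A concrete aside on the QR codes: generating them takes a few lines. Here's a minimal sketch using the qrcode Python library (installed via `pip install qrcode[pil]`); the URL and filename below are placeholders I made up, not the actual article link.

```python
# Minimal sketch: generate a QR code image for the flyer.
# Assumes the qrcode library with PIL support is installed.
import qrcode

ARTICLE_URL = "https://example.com/hinton-article"  # placeholder, swap in the real link

img = qrcode.make(ARTICLE_URL)        # returns a PIL-backed image of the QR code
img.save("hinton_article_qr.png")     # drop this PNG onto the printed flyer
```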
Question: Isn't this kind of doomed if people who have had an extremely high-bandwidth connection to the arguments don't believe it?
A: This is a bit different: I'm trying to get people to see that other people believe it. I think there's a chance this group-dynamic thing can work. Honestly, I don't really know, but I'm happy to try.