Lately, I’ve been more aware than usual that my words are consequential. Often, I think, this means it’s a good idea to be more cautious about when I speak and which words I pick than I normally would be. But sometimes it means the opposite: speaking more, and being more direct.

The event that got me thinking about this was getting a haircut. Kim---a nice middle-aged hairdresser with (a little bit of) red hair---and I were chatting about AI and stuff. First we talked about her son, who’s a senior in college studying CS. She was worried about him finding a job (because of AI making junior SWEs much less economically valuable). Later we chatted about the last time she went back to Korea, to visit her parents’ graves. And finally, we said goodbye.

What were the stakes in that hair salon?

The first consequential decision was to chat at all. Now obviously most hairdressers are pretty chatty, but I maintain that it was still totally possible for me to have acted differently and not made a good connection. Why is this consequential? Life has many such chances---chances to choose to have interesting interactions with people, or to choose not to. But passing up these chances is habit-forming. And we only get a finite number of these opportunities. So it seems like a big deal.

The second thing that felt consequential was chatting about her concerns (e.g., about her son finding a job). Should I offer false words of comfort? There’s an instinct to do so---you see someone afraid, you want to give reassurance. I didn’t go that route, though, instead opting to validate the fear. But am I too aggressive in validating fear, too pessimistic about the future? Probably not, but it feels like it matters a lot (more on this later).

But the main thing that made me appreciate the stakes of my actions was actually her feelings of powerlessness over the future, and the discussion of death.

Some of the things I do might impact how AI, and the future, will go.

For instance, suppose I tell a friend “you should/shouldn’t join xAI’s safety team”. There’s some chance that my words sway them. And there’s some chance that this matters a lot. Maybe if they hadn’t joined, the lab would’ve collapsed (thereby decreasing x-risk) because people were too embarrassed to work somewhere with so many “incidents”. Maybe if they had joined, they would’ve caught a misaligned AI’s takeover attempt, or prevented the company’s CEO from instilling secret (or not-so-secret) loyalties in the AI. What’s the actual right thing to do? I don’t know!

Maybe I tell a lab friend “I think you should modify your safety project in way X”. Maybe they actually do. Maybe this is good/bad.

Maybe I do some research, and other people build on it. Maybe my research was terribly wrong, but people trust it, and it betrays them.

Maybe I talk to a friend on the AI 2027 team and convince them that an AI pause is a good idea. Maybe this propagates into actually achieving a pause. And maybe, if we hadn’t paused, AI would have progressed fine and revolutionized medicine. Maybe then Kim (and lots of other old people) could live for 200 years, instead of ~80.

(Note: I’m not a biologist, but I don’t think it’s crazy to imagine AI doubling life expectancy. Life expectancy has roughly doubled over the last 200 years, and I think AI could be more transformative than those 200 years were. In fact, I think AI could plausibly extend life expectancy by even more than that.)

If I’m wrong about AI risk, I’m condemning people to unnecessary death. Can I look my hairdresser in the eye and say---sorry, you don’t make the cut?


On the other hand, what if I’m wrong in the opposite direction?

I was talking to a friend at an AI lab and voiced some disagreement with the lab’s approach to ‘safety’, and they were like “you’re being way too pessimistic about our lab acting responsibly when we have powerful AI”. Am I? Or am I being way too diplomatic, not telling this friend how things really are, because I don’t want to offend them / because the lab’s insane AI-optimism epistemics have infected me?


Or maybe I’m thinking about this all wrong. Maybe my usual mode of instantly saying what I think isn’t strategic. Maybe I’ll want to work with these people in the future. Maybe it’s key that they not see me as an enemy, and honest but ill-considered words could alienate them, or convince them that I’m crazy.


Maybe I alienate someone by saying that I think their research project is misguided / not asking interesting questions / flawed in some big way. Or by (accidentally) making fun of their company for being xAI.


Clip, clip, clip.


Yes, there are stakes.

No, that doesn’t excuse inaction.

Yes, I’ll be wrong sometimes.


Conclusion:

I should be more thoughtful, especially when doing things that affect other people. I should act on my current best guesses about the truth, and actively seek out ways to discover more of it.