Effective altruism, rationality, transhumanism, utilitarianism, etc. empirically seem to have a fairly bad reputation among the wider population. If you mention any of these in conversation, people will instantly say "oh, so you should rob a bank, or be SBF", or talk about how cold calculation gets the wrong answers to moral questions. I was previously pretty confused about this, because most of these ideas seem like common sense to me.
For instance, take EA. My understanding of what "EA" means is:
It’s good to do good stuff, and better to do stuff that is even more good.
To me, this seems pretty obviously correct.
But it seems that this was an issue where my favorite linguistic tool, tabooing your words, could have helped.
When I said “I think EA sounds pretty cool”, and X responded “oh, I’ve heard EA is really dangerous”, we were talking about two different things.
What I meant was "it's cool to aspire to do good, to consider opportunity cost in your calculations, and not to be constrained by what feels best but to do stuff that actually impacts the world in a good way".
What my friend meant was probably “it seems like historically lots of people have used the banner of ‘utilitarianism’ / ‘for the greater good’ to justify really bad things. Most humans are pretty bad at consequentialism and bad stuff happens when they stray too far from deontology”.
It's also almost certainly true, and not even that weird, that people's perception of this group is shaped by news stories (e.g., FTX). I think people may also claim that utilitarianism is too demanding / has incorrect conclusions partly in order to avoid feeling cognitive dissonance about not donating to charity or whatever.
Takes:
- I'm pretty glad that these communities (EA, AI safety, rationality) exist.
- I think they're generally composed of really thoughtful, smart, altruistic people, who are right about lots of things and have pretty good epistemic norms.
- I think they do a lot of good.
- It is accurate to describe me as someone who aspires to be altruistic, rational, etc.
- I'm pretty worried about people justifying doing bad things because they think they're the good guys.
- For instance, Anthropic and OpenAI racing towards AGI under the banner of being the safe guys (or at least "better than" OpenAI and China, respectively) seems really unfortunate.
- If you are at Anthropic or OpenAI: please work harder at begging the government to stop the race to see who can annihilate humanity first.
- I'm not really suggesting you unilaterally pause. But you could work towards a multilateral pause. I guess this is part of Anthropic's stated strategy, but in practice their behavior fuels the race. (Although overall I think it's fairly likely that Anthropic's existence is net-positive: they really are trying much harder than all the other labs, and I think they could be a key part of waking up the world / getting a reasonable government intervention to happen.)
It feels like there's some general lesson here: I don't care what words you attach to a thing; I care about the content of the thing.
- If an idea is good, it doesn’t matter who said it.
- If an action is bad, it doesn’t matter who did it.
- Just don’t do bad stuff.