Effective altruism (EA), rationality, transhumanism, utilitarianism, and similar movements empirically seem to have a pretty bad reputation. The first thing people will point to is FTX. Next, people will say that utilitarianism is too demanding or leads to incorrect conclusions.

I was previously pretty confused by people’s opinions about this. For instance, take EA. My understanding of what “EA” means is:

It’s good to do good stuff, and better to do stuff that is even more good.

To me, this seems pretty obviously correct. But it turns out this was an issue where my favorite linguistic tool, taboo, could have helped.

When I said “I think EA sounds pretty cool” and a friend responded “oh, I’ve heard EA is really dangerous”, we were talking about two different things.

What I meant was “it’s cool to aspire to do good, to consider opportunity cost in your calculations, and not to be constrained by what feels best but to do stuff that impacts the world in a good way”.

What my friend meant was probably “it seems like historically lots of people have used the banner of ‘utilitarianism’ / ‘for the greater good’ to justify really bad things. Most humans are pretty bad at consequentialism, and bad stuff happens when they stray too far from deontology”.


Takes:

  • I’m pretty glad that these communities (EA, AI safety, rationality) exist

    • I think that they’re generally composed of really thoughtful, smart, altruistic people, who are right about lots of things + have pretty good epistemic norms.
    • I think they do a lot of good.
    • It is accurate to describe me as someone who aspires to be altruistic, rational, etc.
  • I’m pretty worried about people justifying doing bad things because they think they’re the good guys.

For instance, Anthropic and OpenAI racing towards AGI under the banner of being the safe guys (Anthropic claiming to be better than OpenAI, and OpenAI better than China) seems really unfortunate.
If you are at Anthropic or OpenAI, please work harder at begging the government to stop the race to see who can annihilate humanity first.

It feels like there’s some general lesson here: I don’t care what words you attach to a thing; I care about the content of the thing.

  • If an idea is good, it doesn’t matter who said it.
  • If an action is bad, it doesn’t matter who did it.
  • Just don’t do bad stuff.