REVISED VERSION available here: ai-xrisk.surge.sh
Planning to share the revised version widely pretty soon, probably tomorrow. I think it's correct and at the point where it will spark a good discussion. But still, please send feedback!
If we don't communicate to policy makers the need to make AI development safer, I expect AI to annihilate humanity.
It’s reasonable to be skeptical of claims with such big implications — they’re often false. However, remember that human extinction isn’t impossible — for instance, we could annihilate ourselves via nuclear war. I ask that you read my explanation of why AI development poses a large risk to humanity, and that you help spread the cry for caution.
The AI situation:
We (humanity) are racing at breakneck speed to build super-intelligent AI. That is, AI surpassing humans in strategy, persuasion, research, coding, and most economically valuable tasks. Trillions of dollars are being poured towards this goal. Many AI experts expect us to create super-intelligent AI within 2-10 years.
It's easy to feel that this is all just hype, that current models are just a bunch of tricks, that AI progress will stop, that the future will not look radically different from the past. But five years ago, you would have told me that an AI that could sound vaguely human was science fiction. Now we have AI systems that can perform complex coding and reasoning tasks in addition to holding engaging conversations. The evidence suggests that AI will rapidly become increasingly powerful.
What happens if we do create such super-intelligent AI?
CEOs of AI companies claim that this will usher in a utopia: one where AI cures all diseases, creates global prosperity, and obviates the need for human labor.
We are not on track to achieve this utopia. Instead, we are on track to build powerful misaligned AIs. Maybe these will be AIs whose primary goal is to maximize profit for a company, or AIs that want to "understand the universe" or prove math theorems. More likely, AIs with some far more alien goal.
What happens if we build a powerful misaligned super-intelligence? Humans are the masters of the earth due to our intelligence. If a new species more intelligent than humans were created, we should expect to lose our place as masters of the earth. More precisely, we should expect to die.
But AI takeover happens in science fiction, so it won't happen in real life! Unfortunately, the fact that AI takeover happens in books doesn't imply that it won't happen in reality. In fact, the books differ from reality in some large ways:
- In books, the AIs often have enmity towards humans. This seems pretty unlikely. It's far more likely that an AI decides that all the land and resources used by humans should actually be covered with computing infrastructure, and wipes us out dispassionately, just as we would chop down a forest to build a parking lot.
- In books, AIs are stupid: they build a robot army, let the humans fight back, and eventually lose. In real life, AIs will be smart. There will be no cinematic fight. The AIs will simply wait until they have a robust plan, then carry it out with precision. At that point, we lose.
But computers just do what we tell them to! Unfortunately, no. We do not know how to reliably instill desired goals into an AI, and we do not have goals that we would feel comfortable with an AI optimizing for. We do not have methods for detecting or correcting misalignment. And we are developing AI in a "race", which incentivizes dismissing safety concerns.
But, I'm sure that tech companies have a plan to align AIs! They do not have one that works. As noted above, we lack methods for instilling goals and for detecting or correcting misalignment, and the race dynamics reward shipping capabilities over solving safety.
How would the AIs take over? I don't know, but I do know that a smart AI will be able to come up with something. Think of playing chess against a grandmaster: you can't predict the exact moves they will use to beat you, but you can confidently predict that they will win. In case it helps your intuition, one plausible path is an AI quietly accumulating money and influence by hacking computer systems and persuading people to act on its behalf, then striking decisively once its position is secure.
BUT CHINA!!! This is the most common argument people give for doing nothing about the situation. Some people are willing to nod along about the risks, but then explain that their hands are tied: CHINA!!!
This is not a very good objection; it's just the easiest excuse for not changing course. If US leadership becomes convinced that we will lose control of AI unless we stop now, they would immediately start urgent talks with China and other world powers to coordinate a global pause. The US has a lot of clout; we could get other countries on board.
What should I do about it? This is not a good situation. By default, we will lose control to AI. That probably looks like all of us being dead: me, you, your friends, and your children.
But it's not over yet. We can still choose a different path, although it will not be the most comfortable one. What is this path? We need the government to hit the brakes on frontier AI development until we have reason to believe it is safe to proceed. We need international coordination to enforce this slowdown, and to work towards a stable long-term solution. To accomplish this, policy makers need to know that this issue is important to the public.
What is my ask? Please discuss the risks with your friends and with policy makers.
You can start by sharing this post, and scheduling a time to email / talk to a policy maker to tell them your concerns.
Here are some additional resources for learning about the risks: an article, another article, or [Max Tegmark's Web Summit talk].
Also, please, please let me know (e.g., email me) if you do communicate about this; hearing what you did will easily make my day!
It's easy to feel that this is not your responsibility. I'm sorry. It is yours anyway.
Thanks in advance for your help! Let’s do this!
(Happy to clarify/discuss things in comments, on the phone, or in-person.)