I’m meeting with an MA senator today. The theory of change is pretty weak, but I’ve thought about it and I think the sign is at least positive, so that’s good. It also feels like part of my civic duty, or at least it feels nice, virtuous, and low-effort.

Overview of “the AI situation”

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Signed by Nobel Prize winner Geoffrey Hinton (the “Godfather of Deep Learning”) and the CEOs of the major AGI labs: Sam Altman (OpenAI), Dario Amodei (Anthropic), and Demis Hassabis (Google DeepMind).

Why are people worried? 

Advances in AI are rapidly producing systems that surpass human capabilities in tasks like reasoning, coding, and communication. Soon, AIs will be able to appear convincingly human in video-conference interactions. Unlike traditional software, AIs are not explicitly programmed but trained through processes resembling evolution, which means we can build highly capable systems without fully understanding how they work.

This is concerning because it means AI systems might pursue goals that conflict with human interests. Current development practices tend to produce AIs that behave like profit-maximizing corporations: they follow rules only for fear of being caught, and would break them if they could get away with it. We don’t have good techniques for detecting this misalignment; recent studies have shown that AIs may lie about their goals if they expect humans to disapprove of them. Competitive pressure also makes it hard for any one company to prioritize safety: many companies would welcome government-imposed safety requirements that apply across the board, but won’t adopt those measures unilaterally for fear of losing their competitive edge.

Policy Recommendations 

  1. Whistleblower protections: People inside AGI labs should be able to speak up relatively easily if their lab is doing something irresponsible.
  2. State-level liability: If someone uses an AI to commit cybercrime or develop a biological weapon, the company that built the AI should be liable for the damage caused.
  3. Transparency requirements: AGI labs should be required to publish reports showing that they have a plan to ensure the safety of their systems.