Brainstorming dump:

Red line

A major obstacle to coordination is the fact that there is "no fire alarm for AGI" (Eliezer Yudkowsky).

Some counterarguments:

  • some people argue that the public will get very angry once/if AI starts displacing jobs at scale.

  • some people argue that, "by continuity" or something similar, we will definitely get some pretty compelling warning shots before anything catastrophic happens.


my take: I think it genuinely is likely (although not guaranteed) that people will lose jobs and get annoyed, and also that there will be some non-catastrophic disasters along the way.

it seems like most people thinking about AI policy already have this in their model, and drafting ambitious legislation in advance, ready for the moment when the public is willing to accept such regulation, seems like a good and worthwhile goal.


however, I think it is not strongly overdetermined that humanity will "wake up" one day.

Q: conditional on no AI winter, will humans lose jobs because of AI before getting toasted?

TODO: finish this