Discussion about this post

George Menyhei:

Yudkowsky's panic is very similar to the panic that made us abandon nuclear power and overreact to Covid.

John of Orange:

This post really helped me make sense of a recent interaction I had on another Substack. Some people with seemingly reasonable object-level concerns about near-term AI risks were bizarrely insistent on treating others with closely related longer-term concerns not merely as competitors or even enemies, but as beneath contempt, dismissible with name-calling and handwaving. And when I pointed out how baffling and politicized this was, they responded that no, it's not baffling and politicized, and also the other team started it.

My own view is that Chernobylism is obviously correct, and we are basically certain to experience at least *some* level of "AI Chernobyl" type catastrophe, both literal discrete AI calamities and deleterious longer-term social trends. But I don't see how this is incompatible with, or even in competition with, classical "doomerist" worries about Skynet killing us all. That stuff is just not obviously dismissible in the way that my friends from the other Substack wanted to pretend, and since the losses in a completed "Skynet" scenario would be *literally a million Chernobyls at a conservative estimate, and arguably a lot more*, the possibility only needs to be "not obviously dismissible" to automatically become an extremely serious concern.

