Will artificial intelligence spell the end of humanity? The concept has been embedded in American culture through dystopian phenomena like Terminator and The Matrix, but how real is this possibility? Since the public release of OpenAI's ChatGPT in late 2022, AI doomerism has played a key role in shaping the discourse around this rapidly advancing technology. "Artificial intelligence could lead to extinction," blares the BBC. "The race to win the AI competition could doom us all," warns The Japan Times. Some commentators have even said that we may need to bomb data centers to stop or slow AI development.

Is so-called AI "doomerism" simply an outgrowth of AI-related science fiction? Or is there a concerted PR effort to frame the conversation? How does doomerism shape the debate over whether and how to regulate AI, and what positive applications of AI aren't receiving enough attention? Evan is joined by Perry Metzger, CEO of a stealth AI startup and founder of the Alliance for the Future. You can read his work on his Substack, Diminished Capacity. Evan is also joined by Jon Askonas, a professor of politics at Catholic University and Senior Fellow at the Foundation for American Innovation. He has written broadly on tech and culture for outlets like Foreign Policy and American Affairs, and his work has been discussed at length in the New York Times.