Read the full transcript here.

How can we find and expand the limitations of our imaginations, especially with respect to possible futures for humanity? What sorts of existential threats have we not yet even imagined? Why is there a failure of imagination among the general populace about AI safety? How can we make better decisions under uncertainty and avoid decision paralysis? What kinds of tribes have been forming lately within AI fields? What are the differences between alignment and control in AI safety? What do people most commonly misunderstand about AI safety? Why can't we just turn a rogue AI off? What threats from AI are unique in human history? What can the average person do to help mitigate AI risks? What are the best ways to communicate AI risks to the general populace?

Darren McKee (MSc, MPA) is the author of the just-released Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. He is a speaker and sits on the Board of Advisors for AIGS Canada, the leading safety and governance network in the country. McKee also hosts the international award-winning podcast, The Reality Check, a top 0.5% podcast on Listen Notes with over 4.5 million downloads. Learn more about him on his website, darrenmckee.info, or follow him on X / Twitter at @dbcmckee.

Staff

Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Miles Kestran — Marketing

Music

Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates

Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
