
Future of Life Institute Podcast

210 episodes - English - Latest episode: about 23 hours ago - ★★★★★ - 100 ratings

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.


Episodes

Liron Shapira on Superintelligence Goals

April 19, 2024 14:29 - 1 hour - 119 MB

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power 05:18 Will LLMs imitate human values? 07:15 Why would AI develop dangerous goals? 09:55 Goal-completeness 12:53 Alignment to which values? 22:12 Is AI just another technology? 31:20 What is FOOM? 38:59 Risks from centralized power 49:18 Can AI defen...

Annie Jacobsen on Nuclear War - a Second by Second Timeline

April 05, 2024 14:22 - 1 hour - 119 MB

Annie Jacobsen joins the podcast to lay out a second-by-second timeline of how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack? 13:50 Detecting nuclear attacks 19:37 The first critical seconds 29:42 Decisions under time pressure 34:27 Lessons from insiders 44:18 S...

Katja Grace on the Largest Survey of AI Researchers

March 14, 2024 17:59 - 1 hour - 94.5 MB

Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/. Timestamps: 0:20 AI Impacts surveys 18:11 What...

Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

February 29, 2024 14:25 - 1 hour - 133 MB

Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 Technological progress 37:00 Safety research during a pause 54:42 Social dynamics of AI risk 1:10:00 What prevents cooperation? 1:18:21 What...

Sneha Revanur on the Social Effects of AI

February 16, 2024 15:22 - 57 minutes - 79.7 MB

Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org Timestamps: 00:00 Encode Justice 06:11 AI ethics and AI safety 15:49 Humans in the loop 23:59 AI in social media 30:42 Deteriorating social skills? 36:00 AIs iden...

Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

February 02, 2024 15:21 - 1 hour - 126 MB

Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Is AI like a Shoggoth? 09:50 Scaling laws 16:41 Are humans more general than AIs? 21:54 Are AI models explainable? 27:49 Using AI to explain ...

Special: Flo Crivello on AI as a New Form of Life

January 19, 2024 18:11 - 47 minutes - 65.7 MB

On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years. Timestamps: 00:00 Technological progress 07:59 Regulatory capture and AI 11:53 AI as a new form of life 15:44 Can AI development be paused? 20:12 Biden's executive order on AI 22:54 How would a GPU kill switch work? 27:00 Regulating models or applicati...

Carl Robichaud on Preventing Nuclear War

January 06, 2024 11:50 - 1 hour - 137 MB

Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/ Timestamps: 00:00 A new nuclear arms race 08:07 How much do world leaders matter? 18:04 How much does ideology matter? 22:14 Do nuclear weapons cause stable peace? 31:29 North Korea 34:01 Have we overestimated nuclear risk? 43:...

Frank Sauer on Autonomous Weapon Systems

December 14, 2023 18:10 - 1 hour - 141 MB

Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/ Timestamps: 00:00 Autonomy in weapon systems 12:19 Balance of offense and defense 20:05 Killer drone systems 28:53 Is autonomy like nuclear weapons? 37:20 Low-tech defenses against dr...

Darren McKee on Uncontrollable Superintelligence

December 01, 2023 17:38 - 1 hour - 138 MB

Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment. Timestamps: 00:00 Uncontrollable superintelligence 16:41 AI goals and the "virus analogy" 28:36 Speed of AI cognition 39:25 Narrow AI and autonomy 52:23 Reliability of current and future AI 1:02:33 Planning for multiple AI scenarios 1:18:57 Will AIs seek self-preservation? 1:27:57 Is there a unified solut...

Mark Brakel on the UK AI Summit and the Future of AI Policy

November 17, 2023 17:57 - 1 hour - 150 MB

Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems. Timestamps: 00:00 AI Safety Summit in the UK 12:18 Are officials up to date on AI? 23:22 Objections to AI policy 31:27 The EU AI Act 43:37 The right level of regulation 57:11 Risks and regulatory tools 1:04:44 Open-source AI 1:14:56...

Dan Hendrycks on Catastrophic AI Risks

November 03, 2023 16:51 - 2 hours - 176 MB

Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai Timestamps: 00:00 X.ai - Elon Musk's new AI venture 02:41 How AI risk thinking has evolved 12:58 AI bioengineering 19:16 AI agents 24:55 Preventing ...

Samuel Hammond on AGI and Institutional Disruption

October 20, 2023 15:04 - 2 hours - 186 MB

Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca Timestamps: 00:00 Is AGI close? 06:56 Compute versus data 09:59 Information theory 20:36 Universality of learning 24:53 Hard steps in evolution 30:30 Governments and advanced AI 40:33 How will AI transform the economy? 55:26 How will AI change transaction costs? 1:00:31 Isolated thinking about AI ...

Imagine A World: What if AI advisors helped us make better decisions?

October 17, 2023 13:00 - 59 minutes - 54.7 MB

Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the eighth and final episode of Imagine A World, we explore the fict...

Imagine A World: What if narrow AI fractured our shared reality?

October 10, 2023 13:00 - 50 minutes - 46.3 MB

Let’s imagine a future where AGI is developed but kept from directly impacting the world, while narrow AI remakes the world completely. Most people don’t know or care about the difference and have no idea how they could distinguish between a human and an artificial stranger. Inequality sticks around and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and t...

Steve Omohundro on Provably Safe AGI

October 05, 2023 11:59 - 2 hours - 169 MB

Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf Timestamps: 00:00 Provably safe AI systems 12:17 Alignment and evaluations 21:08 Proofs about language model behavior 27:11 Can we formalize safety? 30:29 Provable contracts 43:13 Digital replicas of actual systems 46:32 Proof-carrying code 56:25 Can language models think logically? 1:00:44 Can AI d...

Imagine A World: What if AI enabled us to communicate with animals?

October 03, 2023 13:00 - 1 hour - 58.7 MB

What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last y...

Imagine A World: What if some people could live forever?

September 26, 2023 13:18 - 58 minutes - 53.9 MB

If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought...

Johannes Ackva on Managing Climate Change

September 21, 2023 16:39 - 1 hour - 138 MB

Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate Timestamps: 00:00 Johannes's journey as an environmentalist 13:21 The drivers of climate change 23:00 Oil, coal, and gas 38:05 Solar, wind, and hydro 49:34 Nuclear energy 57:03 Geothermal energy 1:00:41 Most promising technologies 1:05:40 Government subsidies 1:13:28 ...

Imagine A World: What if we had digital nations untethered to geography?

September 19, 2023 14:17 - 55 minutes - 50.9 MB

How do low-income countries affected by climate change imagine their futures? How do they overcome these twin challenges? Will all nations eventually choose or be forced to go digital? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the fourth epis...

Imagine A World: What if global challenges led to more centralization?

September 12, 2023 13:44 - 1 hour - 55.4 MB

What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year. In the third episode of Imagine A World, we explore the fictional world...

Tom Davidson on How Quickly AI Could Automate the Economy

September 08, 2023 13:06 - 1 hour - 162 MB

Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky. Timestamps: 00:00 The current pace of AI 03:58 Near-term risks from AI 09:34 Historical analogies to AI 13:58 AI benchmarks vs. economic impact 18:30 AI takeoff speed and bottlenecks 31:09 Tom's model of AI takeoff speed 36:21 How AI could automate AI research 41:49 Bottlenecks to AI automating AI hardware 46:15 How much of AI research is aut...

Imagine A World: What if we designed and built AI in an inclusive way?

September 05, 2023 13:05 - 52 minutes - 48.4 MB

How does who is involved in the design of AI affect the possibilities for our future? Why isn’t the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worl...

Imagine A World: What if new governance mechanisms helped us coordinate?

September 05, 2023 13:00 - 1 hour - 57.3 MB

Are today's democratic systems equipped well enough to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We intervi...

New: Imagine A World Podcast [TRAILER]

August 29, 2023 13:58 - 2 minutes - 1.84 MB

Coming Soon… The year is 2045. Humanity is not extinct, nor living in a dystopia. It has averted climate disaster and major wars. Instead, AI and other new technologies are helping to make the world more peaceful, happy and equal. How? This was what we asked the entrants of our Worldbuilding Contest to imagine last year. Our new podcast series digs deeper into the eight winning entries, their ideas and solutions, the diverse teams behind them and the challenges they faced. You might love so...

Robert Trager on International AI Governance and Cybersecurity at AI Companies

August 20, 2023 15:55 - 1 hour - 144 MB

Robert Trager joins the podcast to discuss AI governance, the incentives of governments and companies, the track record of international regulation, the security dilemma in AI, cybersecurity at AI companies, and skepticism about AI governance. We also discuss Robert's forthcoming paper, "International Governance of Civilian AI: A Jurisdictional Certification Approach." You can read more about Robert's work at https://www.governance.ai Timestamps: 00:00 The goals of AI governance 08:38 Incentive...

Jason Crawford on Progress and Risks from AI

July 21, 2023 08:48 - 1 hour - 118 MB

Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org Timestamps: 00:00 Eras of human progress 06:47 Flywheels of progress 17:56 Main causes of progress 21:01 Progress and risk 32:49 Safety as part of progress 45:20 Slowing down specific technologies? 52:29 Four lenses on AI risk 58:48 Analogies causing disagreement...

Special: Jaan Tallinn on Pausing Giant AI Experiments

July 06, 2023 07:00 - 1 hour - 139 MB

On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments. Timestamps: 0:00 Nathan introduces Jaan 4:22 AI safety and Future of Life Institute 5:55 Jaan's first meeting with Eliezer Yudkowsky 12:04 Future of AI evolution 14:58 Jaan's investments in AI companies 23:06 The emerging danger paradigm 26:53 Economic transformation with AI 32:31 AI supervising itself 34:06 Language mo...

Joe Carlsmith on How We Change Our Minds About AI Risk

June 22, 2023 13:32 - 2 hours - 199 MB

Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com. Timestamps: 00:00 Predictable updating on AI risk 07:27 Abstract models versus gut feelings 22:06 How Joe began believing in AI risk 29:06 Is AI risk falsifiable? 35:39 Types of skepticisms about AI risk 44:51 Are we fundamentally confused? 53:35 Becoming...

Dan Hendrycks on Why Evolution Favors AIs over Humans

June 08, 2023 10:59 - 2 hours - 202 MB

Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai Timestamps: 00:00 Corporate AI race 06:28 Evolutionary dynamics in AI 25:26 Why evolution applies to AI 50:58 Deceptive AI 1:06:04 Competition erodes safety 1:17:40 Evolutionary fitness: humans versus AI 1:26:32 Different paradigms of AI risk 1:42:57 Interpreting AI systems 1:58:03 Honest AI and uncertain ...

Roman Yampolskiy on Objections to AI Safety

May 26, 2023 08:17 - 1 hour - 141 MB

Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/ Timestamps: 00:00 Objections to AI safety 15:06 Will robots make AI risks salient? 27:51 Was early AI safety research useful? 37:28 Impossibility results for AI 47:25 How much risk should we accept? 1:01:21 Exponential or S-curve? 1:12...

Nathan Labenz on How AI Will Transform the Economy

May 11, 2023 16:44 - 1 hour - 92.1 MB

Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 Economic transformation from AI 11:15 Productivity increases from technology 17:44 AI effects on employment 28:43 Life without jobs 38:42 Losing contact with reality 42:31 Catastrophic risks from AI 53:52 Sc...

Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI

May 04, 2023 17:36 - 59 minutes - 82.2 MB

Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai Timestamps: 00:00 The cognitive revolution 07:47 Red teaming GPT-4 24:00 Coming to believe in transformative AI 30:14 Is AI depth or breadth most impressive? 42:52 Potential near-term dangers from AI Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITT...

Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology

April 27, 2023 11:18 - 1 hour - 107 MB

Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures Timestamps: 00:00 How does venture capital work? 09:01 Failure and success for startups 13:22 Is overconfidence necessary? 19:20 Repeat entrepreneurs 24:38 Long-term investing 30:36 Feedback loops from investments 35:05 Timing investments 38:35 ...

Connor Leahy on the State of AI and Alignment Research

April 20, 2023 16:10 - 52 minutes - 71.8 MB

Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 Landscape of AI research labs 10:13 Is AGI a useful term? 13:31 AI predictions 17:56 Reinforcement learning from human feedback 29:53 Mechanistic interpretability 33:37 Yudkowsky and Christiano 41:39 Cognitive Emulations 43:11 Public ...

Connor Leahy on AGI and Cognitive Emulation

April 13, 2023 13:00 - 1 hour - 133 MB

Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev Timestamps: 00:00 GPT-4 16:35 "Magic" in machine learning 27:43 Cognitive emulations 38:00 Machine learning vs. explainability 48:00 Human data = human AI? 1:00:07 Analogies for cognitive emulations 1:26:03 Demand for human-like AI 1:31:50 Aligning superintelligence Social Media Links: ➡...

Lennart Heim on Compute Governance

April 06, 2023 09:09 - 50 minutes - 69.5 MB

Lennart Heim joins the podcast to discuss options for governing the compute used by AI labs and potential problems with this approach to AI safety. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 00:37 AI risk 03:33 Why focus on compute? 11:27 Monitoring compute 20:30 Restricting compute 26:54 Subsidising compute 34:00 Compute as a bottleneck 38:41 US and China 42:14 Unintended consequences 46:50 Will AI be like nuclear energy? S...

Lennart Heim on the AI Triad: Compute, Data, and Algorithms

March 30, 2023 18:37 - 47 minutes - 65.9 MB

Lennart Heim joins the podcast to discuss how we can forecast AI progress by researching AI hardware. You can read more about Lennart's work here: https://heim.xyz/about/ Timestamps: 00:00 Introduction 01:00 The AI triad 06:26 Modern chip production 15:54 Forecasting AI with compute 27:18 Running out of data? 32:37 Three eras of AI training 37:58 Next chip paradigm 44:21 AI takeoff speeds Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/FLIxr...

Liv Boeree on Poker, GPT-4, and the Future of AI

March 23, 2023 18:31 - 51 minutes - 71 MB

Liv Boeree joins the podcast to discuss poker, GPT-4, human-AI interaction, whether this is the most important century, and building a dataset of human wisdom. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 00:36 AI in Poker 09:35 Game-playing AI 13:45 GPT-4 and generative AI 26:41 Human-AI interaction 32:05 AI arms race risks 39:32 Most important century? 42:36 Diminishing returns to intelligence? 49:14 Dataset of human wisdom/meaning ...

Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI

March 16, 2023 18:23 - 42 minutes - 58.2 MB

Liv Boeree joins the podcast to discuss Moloch, beauty filters, game theory, institutional change, and artificial intelligence. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 01:57 What is Moloch? 04:13 Beauty filters 10:06 Science citations 15:18 Resisting Moloch 20:51 New institutions 26:02 Moloch and WinWin 28:41 Changing systems 33:37 Artificial intelligence 39:14 AI acceleration Social Media Links: ➡️ WEBSITE: https://futureofl...

Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence

March 09, 2023 17:19 - 43 minutes - 59.8 MB

Tobias Baumann joins the podcast to discuss suffering risks, space colonization, and cooperative artificial intelligence. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Suffering risks 02:50 Space colonization 10:12 Moral circle expansion 19:14 Cooperative artificial intelligence 36:19 Influencing governments 39:34 Can we reduce suffering? Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/F...

Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering

March 02, 2023 15:18 - 47 minutes - 64.9 MB

Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Introduction 00:52 What are suffering risks? 05:40 Artificial sentience 17:18 Is reducing suffering hopelessly difficult? 26:06 Can we know how to reduce suffering? 31:17 Why are suffering risks neglected? 37:31 How do we...

Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI

February 23, 2023 14:09 - 34 minutes - 48 MB

Neel Nanda joins the podcast for a lightning round on mathematics, technological progress, aging, living up to our values, and generative AI. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:55 How useful is advanced mathematics? 02:24 Will AI replace mathematicians? 03:28 What are the key drivers of tech progress? 04:13 What scientific discovery would disrupt Neel's worldview? 05:59 How should humanity view aging? 08:03 How can we live up to our ...

Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability

February 16, 2023 18:09 - 1 hour - 84.9 MB

Neel Nanda joins the podcast to talk about mechanistic interpretability and how it can make AI safer. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:46 How early is the field of mechanistic interpretability? 03:12 Why should we care about mechanistic interpretability? 06:38 What are some successes in mechanistic interpretability? 16:29 How promising is mechanistic interpretability? 31:13 Is machine learni...

Neel Nanda on What is Going on Inside Neural Networks

February 09, 2023 11:48 - 1 hour - 89.3 MB

Neel Nanda joins the podcast to explain how we can understand neural networks using mechanistic interpretability. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Who is Neel? 04:41 How did Neel choose to work on AI safety? 12:57 What does an AI safety researcher do? 15:53 How analogous are digital neural networks to brains? 21:34 Are neural networks like alien beings? 29:13 Can humans think like AIs? 35:00 Can AIs hel...

Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education

February 02, 2023 16:33 - 1 hour - 90.7 MB

Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/FLIxrisk ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ ➡️ META: https://www.facebook.com/futureoflifeinstitute ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

Connor Leahy on AI Safety and Why the World is Fragile

January 26, 2023 13:23 - 1 hour - 89.5 MB

Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 00:47 What is the best way to understand AI safety? 09:50 Why is the world relatively stable? 15:18 Is the main worry human misuse of AI? 22:47 Can humanity solve AI safety? 30:06 Can we slow down AI develop...

Connor Leahy on AI Progress, Chimps, Memes, and Markets

January 19, 2023 13:52 - 1 hour - 88.3 MB

Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 01:00 Defining artificial general intelligence 04:52 What makes humans more powerful than chimps? 17:23 Would AIs have to be social to be intelligent? 20:29 Importing humanity's memes into AIs 23:07 How do we measure progress in AI? 42:39 Gut feelings about AI progress 47:29 Connor's predictions abo...

Sean Ekins on Regulating AI Drug Discovery

January 12, 2023 16:12 - 36 minutes - 50.3 MB

On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery. Timestamps: 00:00 Introduction 00:31 Ethical guidelines and regulation of AI drug discovery 06:11 How do we balance innovation and safety in AI drug discovery? 13:12 Keeping dangerous chemical data safe 21:16 Sean’s personal story of voicing concerns about AI drug discovery 32:06 How Sean will continue working on AI drug discovery

Sean Ekins on the Dangers of AI Drug Discovery

January 05, 2023 18:14 - 39 minutes - 53.9 MB

On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean discovered an extremely toxic chemical (VX) by reversing an AI drug discovery algorithm. Timestamps: 00:00 Introduction 00:46 Sean’s professional journey 03:45 Can computational models replace animal models? 07:24 The risks of AI drug discovery 12:48 Should scientists disclose dangerous discoveries? 19:40 How should scientists handle dual-use tec...

Twitter Mentions

@flixrisk 20 Episodes
@lucasfmperry 3 Episodes
@ostgutton 2 Episodes
@samvoltek 2 Episodes
@anthropicai 1 Episode
@jjding99 1 Episode
@alanrobock 1 Episode