Future of Life Institute Podcast
210 episodes - English - Latest episode: 14 days ago - ★★★★★ - 100 ratings
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes
Anders Sandberg on the Value of the Future
December 29, 2022 19:29 - 49 minutes - 68.4 MB
Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future. Learn more about Anders' work: https://www.fhi.ox.ac.uk Timestamps: 00:00 Introduction 00:54 Humanity as an immature teenager 04:24 How should we respond to our values changing over time? 18:53 How quickly should we change our values? 24:58 Are there limits to what future morality could become? 29:45 Could the universe contain infinite value? 36:00 How do we balance weird philo...
Anders Sandberg on Grand Futures and the Limits of Physics
December 22, 2022 19:15 - 1 hour - 86.3 MB
Anders Sandberg joins the podcast to discuss how big the future could be and what humanity could achieve at the limits of physics. Learn more about Anders' work: https://www.fhi.ox.ac.uk Timestamps: 00:00 Introduction 00:58 Does it make sense to write long books now? 06:53 Is it possible to understand all of science now? 10:44 What is exploratory engineering? 15:48 Will humanity develop a completed science? 21:18 How much of possible technology has humanity already invented? 25:22 ...
Anders Sandberg on ChatGPT and the Future of AI
December 15, 2022 13:29 - 58 minutes - 80.1 MB
Anders Sandberg from The Future of Humanity Institute joins the podcast to discuss ChatGPT, large language models, and what he's learned about the risks and benefits of AI. Timestamps: 00:00 Introduction 00:40 ChatGPT 06:33 Will AI continue to surprise us? 16:22 How do language models fail? 24:23 Language models trained on their own output 27:29 Can language models write college-level essays? 35:03 Do language models understand anything? 39:59 How will AI models improve in the futur...
Vincent Boulanin on Military Use of Artificial Intelligence
December 08, 2022 18:48 - 48 minutes - 66.2 MB
Vincent Boulanin joins the podcast to explain how modern militaries use AI, including in nuclear weapons systems. Learn more about Vincent's work: https://sipri.org Timestamps: 00:00 Introduction 00:45 Categorizing risks from AI and nuclear 07:40 AI being used by non-state actors 12:57 Combining AI with nuclear technology 15:13 A human should remain in the loop 25:05 Automation bias 29:58 Information requirements for nuclear launch decisions 35:22 Vincent's general conclusion abou...
Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems
December 01, 2022 16:27 - 44 minutes - 61.7 MB
Vincent Boulanin joins the podcast to explain the dangers of incorporating artificial intelligence in nuclear weapons systems. Learn more about Vincent's work: https://sipri.org Timestamps: 00:00 Introduction 00:55 What is strategic stability? 02:45 How can AI be a positive factor in nuclear risk? 10:17 Remote sensing of nuclear submarines 19:50 Using AI in nuclear command and control 24:21 How does AI change the game theory of nuclear war? 30:49 How could AI cause an accidental nu...
Robin Hanson on Predicting the Future of Artificial Intelligence
November 24, 2022 13:21 - 51 minutes - 71.3 MB
Robin Hanson joins the podcast to discuss AI forecasting methods and metrics. Timestamps: 00:00 Introduction 00:49 Robin's experience working with AI 06:04 Robin's views on AI development 10:41 Should we care about metrics for AI progress? 16:56 Is it useful to track AI progress? 22:02 When should we begin worrying about AI safety? 29:16 The history of AI development 39:52 AI progress that deviates from current trends 43:34 Is this AI boom different than past booms? 48:26 Different ...
Robin Hanson on Grabby Aliens and When Humanity Will Meet Them
November 17, 2022 12:01 - 59 minutes - 82.5 MB
Robin Hanson joins the podcast to explain his theory of grabby aliens and its implications for the future of humanity. Learn more about the theory here: https://grabbyaliens.com Timestamps: 00:00 Introduction 00:49 Why should we care about aliens? 05:58 Loud alien civilizations and quiet alien civilizations 08:16 Why would some alien civilizations be quiet? 14:50 The moving parts of the grabby aliens model 23:57 Why is humanity early in the universe? 28:46 Couldn't we just be alone in ...
Ajeya Cotra on Thinking Clearly in a Rapidly Changing World
November 10, 2022 12:28 - 44 minutes - 61.5 MB
Ajeya Cotra joins us to talk about thinking clearly in a rapidly changing world. Learn more about the work of Ajeya and her colleagues: https://www.openphilanthropy.org Timestamps: 00:00 Introduction 00:44 The default versus the accelerating picture of the future 04:25 The role of AI in accelerating change 06:48 Extrapolating economic growth 08:53 How do we know whether the pace of change is accelerating? 15:07 How can we cope with a rapidly changing world? 18:50 How could the future...
Ajeya Cotra on How Artificial Intelligence Could Cause Catastrophe
November 03, 2022 12:45 - 54 minutes - 74.8 MB
Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org Timestamps: 00:00 Introduction 00:53 AI safety research in general 02:04 Realistic scenarios for AI catastrophes 06:51 A dangerous AI model developed in the near future 09:10 Assumptions behind dangerous AI development 14:45 Can AIs learn long-term planning? 18:09 Can AIs understand human psychology? 22:32 Training an AI mo...
Ajeya Cotra on Forecasting Transformative Artificial Intelligence
October 27, 2022 00:22 - 47 minutes - 65.5 MB
Ajeya Cotra joins us to discuss forecasting transformative artificial intelligence. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org Timestamps: 00:00 Introduction 00:53 Ajeya's report on AI 01:16 What is transformative AI? 02:09 Forecasting transformative AI 02:53 Historical growth rates 05:10 Simpler forecasting methods 09:01 Biological anchors 16:31 Different paths to transformative AI 17:55 Which year will we get transformative AI? 25:54 Expert opinion on t...
Alan Robock on Nuclear Winter, Famine, and Geoengineering
October 20, 2022 08:45 - 41 minutes - 47.3 MB
Alan Robock joins us to discuss nuclear winter, famine and geoengineering. Learn more about Alan's work: http://people.envsci.rutgers.edu/robock/ Follow Alan on Twitter: https://twitter.com/AlanRobock Timestamps: 00:00 Introduction 00:45 What is nuclear winter? 06:27 A nuclear war between India and Pakistan 09:16 Targets in a nuclear war 11:08 Why does the world have so many nuclear weapons? 19:28 Societal collapse in a nuclear winter 22:45 Should we prepare for a nuclear winter? 28:1...
Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity
October 13, 2022 10:03 - 49 minutes - 45.2 MB
Brian Toon joins us to discuss the risk of nuclear winter. Learn more about Brian's work: https://lasp.colorado.edu/home/people/brian-toon/ Read Brian's publications: https://airbornescience.nasa.gov/person/Brian_Toon Timestamps: 00:00 Introduction 01:02 Asteroid impacts 04:20 The discovery of nuclear winter 13:56 Comparing volcanoes and asteroids to nuclear weapons 19:42 How did life survive the asteroid impact 65 million years ago? 25:05 How humanity could go extinct 29:46 Nuclear w...
Philip Reiner on Nuclear Command, Control, and Communications
October 06, 2022 14:12 - 47 minutes - 43.4 MB
Philip Reiner joins us to talk about nuclear command, control, and communications systems. Learn more about Philip’s work: https://securityandtechnology.org/ Timestamps: [00:00:00] Introduction [00:00:50] Nuclear command, control, and communications [00:03:52] Old technology in nuclear systems [00:12:18] Incentives for nuclear states [00:15:04] Selectively enhancing security [00:17:34] Unilateral de-escalation [00:18:04] Nuclear communications [00:24:08] The CATALINK System [00:31:25]...
Daniela and Dario Amodei on Anthropic
March 04, 2022 23:29 - 2 hours - 278 MB
Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. Topics discussed in this episode include: -Anthropic's mission and research strategy -Recent research and papers by Anthropic -Anthropic's structure as a "public benefit corporation" -Career opportunities You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/ Wa...
Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest
February 09, 2022 02:19 - 33 minutes - 76.2 MB
Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest. Topics discussed in this episode include: -Motivations behind the contest -The importance of worldbuilding -The rules of the contest -What a submission consists of -Due date and prizes Learn more about the contest here: https://worldbuild.ai/ Join the discord: https://discord.com/invite/njZyTJpwMz You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-...
David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy
January 26, 2022 19:42 - 1 hour - 235 MB
David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy. Topics discussed in this episode include: -Virtual reality as genuine reality -Why VR is compatible with the good life -Why we can never know whether we're in a simulation -Consciousness in virtual realities -The ethics of simulated beings You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers...
Rohin Shah on the State of AGI Safety Research in 2021
November 02, 2021 00:48 - 1 hour - 95.1 MB
Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI Researcher might decide whether to work on AI Safety; and why we don't know that AI systems won't lead to existential risk. Topics discussed in this episode include: - Inner Alignment versus Outer Alignment - Foundation Models - Structural AI Risks - Unipolar versus Multipolar Scenarios - The Most Important Thing That Impacts the Future of Life You can find the page f...
Future of Life Institute's $25M Grants Program for Existential Risk Reduction
October 18, 2021 22:41 - 24 minutes - 56.6 MB
Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program. Topics discussed in this episode include: - The reason Future of Life Institute is offering AI Existential Safety Grants - Max speaks about how receiving a grant changed his career early on - Daniel and Andrea provide details on the fellowships and future grant priorities Check out our grants programs here: https://g...
Filippa Lentzos on Global Catastrophic Biological Risks
October 01, 2021 15:50 - 58 minutes - 133 MB
Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and life sciences, and governance in biological risk. Topics discussed in this episode include: - The most pressing issue in biosecurity - Stories from when biosafety labs failed to contain dangerous pathogens - The lethality of pathogens being worked on at biolaboratories - Lessons from COVID-19 You can find t...
Susan Solomon and Stephen Andersen on Saving the Ozone Layer
September 16, 2021 08:34 - 1 hour - 96 MB
Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster. Topics discussed in this episode include: -The industrial and commercial uses of chlorofluorocarbons (CFCs) -How we discovered the atmospheric effects of CFCs -The Montreal Protocol and its significance -Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial ...
James Manyika on Global Economic and Technological Trends
September 07, 2021 04:53 - 1 hour - 89.9 MB
James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it. Topics discussed in this episode include: -The modern social contract -Reskilling, wage stagnation, and inequality -Technology induced unemployment -The structure of the global economy -The geographic concentration of economic growth You can find the page for this podcast here: https://futureoflife.org/2021...
Michael Klare on the Pentagon's View of Climate Change and the Risks of State Collapse
July 30, 2021 22:21 - 1 hour - 87.2 MB
Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great powers conflict and state collapse. Topics discussed in this episode include: -How the US military views and takes action on climate change -Examples of existing climate related difficulties and what they tell us about the future -Threat multiplication from climate change -The risk...
Avi Loeb on UFOs and Whether They're Alien in Origin
July 09, 2021 18:22 - 40 minutes - 37.1 MB
Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat. Topics discussed in this episode include: -Evidence counting for the natural, human, and extraterrestrial origins of UAPs -The culture of science and how it deals with UAP reports -How humanity should respond if we discover UAPs are alien in origin -A project for collecting high quality data on UAPs You can find the...
Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures
July 09, 2021 18:14 - 2 hours - 284 MB
Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, if we've already encountered alien technology, and whether we're ultimately alone in the cosmos. Topics discussed in this episode include: -Whether 'Oumuamua is alien or natural in origin -The culture of science and how it affects fruitful inquiry -Looking for signs of alien life throughout the solar system and beyond -Alien artefacts and galactic treaties -How humanity should handle a ...
Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI
June 01, 2021 02:44 - 1 hour - 156 MB
Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology and ideas in the 21st century. Topics discussed in this episode include: -What wisdom consists of -The role of ideas in society and civilization -The increasing concentration of power and wealth -The technological displacement of human labor -Democracy, universal basic income, and universal basic capital -Living an examined life You can find the page for this podcast here: https:...
Bart Selman on the Promises and Perils of Artificial Intelligence
May 20, 2021 18:07 - 1 hour - 231 MB
Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence. Topics discussed in this episode include: -Negative and positive outcomes from AI in the short, medium, and long-terms -The perils and promises of AGI and superintelligence -AI alignment and AI existential risk -Lethal autonomous weapons -AI governance and racing t...
Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century
April 21, 2021 01:00 - 1 hour - 119 MB
Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century. Topics discussed in this episode include: -Intelligence and coordination -Existential risk from AI, synthetic biology, and unknown unknowns -AI adoption as a delegation process -Jaan's investments and philanthropic efforts -International coordination and incent...
Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures
April 01, 2021 00:34 - 1 hour - 225 MB
Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures. Topics discussed in this episode include: -Understanding the universe through digital physics -How human consciousness operates and is structured -The path to aligned AGI and bottlenecks to beneficial futures -Incentive structures and collective coordination You...
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI
March 20, 2021 00:39 - 1 hour - 165 MB
Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety. Topics discussed in this episode include: -Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI -The relationship between AI safety, control, and alignment -Virtual worlds as a proposal for solving multi-multi alignment -AI security You can find...
Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons
February 25, 2021 22:28 - 1 hour - 228 MB
Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest risk and most destabilizing aspects of lethal autonomous weapons. Topics discussed in this episode include: -The current state of the deployment and development of lethal autonomous weapons and swarm technologies -Drone swarms as a potential weapon of mass destruction -The risks of escalation, unpredictability, and proliferation with regards to a...
John Prendergast on Non-dual Awareness and Wisdom for the 21st Century
February 09, 2021 22:40 - 1 hour - 243 MB
John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and exis...
Beatrice Fihn on the Total Elimination of Nuclear Weapons
January 22, 2021 02:53 - 1 hour - 178 MB
Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear weapons free world. Topics discussed in this episode include: -The current nuclear weapons geopolitical situation -The risks and mechanics of accidental and intentional nuclear war -Policy proposals for reducing the risk...
Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year
January 08, 2021 22:47 - 1 hour - 139 MB
Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021. Topics discussed in this episode include: -FLI's perspectives on 2020 and hopes for 2021 -What our favorite projects from 2020 were -The biggest lessons we've learned from 2020 -What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety You can ...
Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox
December 11, 2020 02:39 - 1 hour - 262 MB
The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their personal experience of the events. Topics discussed in this episode include: -William Foege's and Victor Zhdanov's efforts to eradicate smallpox -Personal stories from Foege's and Zhdanov's lives -The history of sma...
Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress
December 02, 2020 02:43 - 1 hour - 207 MB
Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far. Topics discussed in this episode include: -Important intellectual movements and their merits -The evolution of metaphysical and epistemological views over human history -Consciousness, free will, and philosophic...
Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat to Academic Integrity
November 17, 2020 23:41 - 1 hour - 188 MB
Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation. Topics discussed in this episode include: -How Big Tobacco uses its wealth to obfuscate the harm of tobacco and appear socially responsible -The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation -How Big Tech and Big Tobacco work to in...
Maria Arpa on the Power of Nonviolent Communication
November 02, 2020 18:14 - 1 hour - 166 MB
Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication. Topics discussed in this episode include: -What nonviolent communication (NVC) consists of -How NVC is different from normal discourse -How NVC is composed of observations, feelings, needs, and requests -NVC for systemic change -Foundational assumptions in NVC -An NVC exercise You can find the page fo...
Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism
October 15, 2020 23:46 - 1 hour - 228 MB
Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats. Topics discussed in this episode include: -The projects of awakening and growing the wisdom with which to manage technologies -What might be possible of embarking on the project of waking up -Facets of human nature that contribute to existential risk -The...
Kelly Wanser on Climate Change as a Possible Existential Threat
September 30, 2020 23:51 - 1 hour - 242 MB
Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Topics discussed in this episode include: - The risks of climate change in the short-term - Tipping points and tipping cascades - Climate intervention via marine cloud brightening and releasing particles in the stratosphere - The benefits and risks of climate intervention techniques - The international politics of climate change and weather modifi...
Andrew Critch on AI Research Considerations for Human Existential Safety
September 16, 2020 00:01 - 1 hour - 255 MB
In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being th...
Iason Gabriel on Foundational Philosophical Questions in AI Alignment
September 03, 2020 21:32 - 1 hour - 263 MB
In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings...
Peter Railton on Moral Learning and Metaethics in AI Systems
August 18, 2020 17:25 - 1 hour - 233 MB
From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate our way through complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasing...
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI
July 01, 2020 17:05 - 1 hour - 222 MB
It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences f...
Barker - Hedonic Recalibration (Mix)
June 26, 2020 16:05 - 43 minutes - 40 MB
This is a mix by Barker, Berlin-based music producer, that was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape. You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/ Tracklist: Delta Rain Dance - 1 John Beltran - A ...