Future of Life Institute Podcast
Technology
About
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Episodes
- Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)
Anthony Aguirre, CEO of the Future of Life Institute, joins the podcast to discuss his essay series "A Better Path for AI." The conversation explores how races for attention, attachment, automation, and superintelligence can concentrate po…
- How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
Charlie Bullock joined the Future of Life Institute Podcast to discuss radical optionality, a strategy for governments to prepare for advanced AI without premature regulation. The conversation explored challenges like law lagging behind te…
- Why AI Is Not a Normal Technology (with Peter Wildeford)
Peter Wildeford joins the Future of Life Institute Podcast to discuss AI forecasting, exploring trends in AI progress, its economic and national security implications, and its unique nature compared to normal technologies. The conversation…
- Why AI Evaluation Science Can't Keep Up (with Carina Prunkl)
Carina Prunkl discusses the challenges in evaluating general-purpose AI, noting how systems excel at complex tasks yet fail simple ones, and how rapid capability gains increase misuse risks. The conversation covers testing limitations, de-…
- Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)
Li-Lian Ang joins the Future of Life Institute Podcast to discuss layered strategies against AI risk, including concerns like engineered pandemics, cyber attacks, and job displacement. The conversation covers BlueDot Impact's defense-in-depth frame…
- What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)
Emilia Javorsky critiques AI companies' cancer cure claims, arguing that biological complexity, poor data, and misaligned incentives are greater obstacles than a lack of intelligence. The episode explores realistic applications for AI in a…
- AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)
This episode critically examines the role of AI in cancer research, distinguishing between genuine progress and overblown promises. It explores how AI can accelerate research, addresses past failures and current myths, and outlines a roadm…
- How AI Hacks Your Brain's Attachment System (with Zak Stein)
Zak Stein joins the Future of Life Institute Podcast to discuss the psychological harms of anthropomorphic AI, focusing on how AI hacks attention and attachment systems. The conversation covers AI companions for children, loneliness, cogni…
- The Case for a Global Ban on Superintelligence (with Andrea Miotti)
Andrea Miotti, CEO of Control AI, joins the Future of Life Institute Podcast to discuss preventing extreme risks from superintelligent AI. The episode covers industry lobbying, comparisons to tobacco regulation, and the case for a global b…
- Can AI Do Our Alignment Homework? (with Ryan Kidd)
This episode features Ryan Kidd discussing AGI timelines, the risks of model deception, and whether AI safety research can inadvertently boost capabilities. Kidd outlines research tracks at MATS, discusses key researcher archetypes, and of…
- How to Rebuild the Social Contract After AGI (with Deric Cheng)
The Future of Life Institute Podcast features Deric Cheng discussing the potential impact of AGI on the social contract and global economy. The conversation covers labor displacement, inequality, and policy solutions for economic security.
- How AI Can Help Humanity Reason Better (with Oly Sourbut)
Oly Sourbut joins the Future of Life Institute Podcast to explore how AI can strengthen human judgment through tools for fact-checking, scenario planning, and honest AI reasoning, while ensuring humans remain central as AI scales.
- How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
Nora Ammann joins the Future of Life Institute Podcast to discuss mitigating AI risks like domination and chaos. The conversation covers scalable oversight, formal guarantees, secure code, and AI-enabled bargaining for resilient futures.
- How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
David Duvenaud, associate professor of computer science at the University of Toronto, joins the Future of Life Institute Podcast to discuss how humans might experience gradual disempowerment in a post-AGI world, exploring the potential erosion of economic and political leverage withou…
- Why the AI Race Undermines Safety (with Steven Adler)
Steven Adler, ex-OpenAI safety researcher, discusses the dangers of the AI race, limitations in AI testing and alignment, mental health impacts of chatbots, economic changes, and the need for international AI governance and audits.
- Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Tyler Johnston, Executive Director of the Midas Project, joins the Future of Life Institute Podcast to discuss AI transparency and accountability. The conversation covers applying watchdog tactics to AI companies, the OpenAI Files investig…
- We're Not Ready for AGI (with Will MacAskill)
William MacAskill joins the podcast to discuss his Better Futures essay series, exploring topics like moral error risks, AI character design, and space governance. The conversation also covers risk-averse AI systems and improving model spe…
- What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
Karl Koch, founder of the AI Whistleblower Initiative, joins the Future of Life Institute Podcast to discuss transparency and protections for AI insiders who identify safety risks. The episode covers current company policies, legal gaps, e…
- Can Machines Be Truly Creative? (with Maya Ackerman)
The podcast features AI researcher Maya Ackerman discussing machine creativity. The conversation defines creativity as novel and valuable output, considers AI alignment…
- From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
Parmy Olson joins the Future of Life Institute Podcast to discuss the evolution of AI companies from research entities to product-focused businesses. The conversation covers how funding influences company missions, the balance between prom…
- Can Defense in Depth Work for AI? (with Adam Gleave)
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios and AI safety, including a three-tier framework for AI capabilities, gradual disempowerment, and defense-in-depth security.
- How We Keep Humans in Control of AI (with Beatrice Erkers)
Beatrice Erkers joins the Future of Life Institute Podcast to discuss the AI Pathways project, focusing on Tool AI and D/Acc scenarios. The conversation explores prioritizing human oversight and democratic control versus decentralized deve…
- Why Building Superintelligence Means Human Extinction (with Nate Soares)
Nate Soares joins the Future of Life Institute Podcast to discuss his book co-authored with Eliezer Yudkowsky, "If Anyone Builds It, Everyone Dies." The conversation highlights the unpredictable nature of AI, the difficulties in aligning i…
- Breaking the Intelligence Curse (with Luke Drago)
This episode features Luke Drago discussing his essay series "The Intelligence Curse," which examines the potential economic consequences of AI dominating production and reducing human incentives. Topics include AI's impact on businesses,…
- What Markets Tell Us About AI Timelines (with Basil Halperin)
Basil Halperin discusses how economic indicators and market efficiency might predict AI timelines. The episode covers the relationship between interest rates and AI expectations, the difference between AI benchmarks and economic impact, an…
- AGI Security: How We Defend the Future (with Esben Kran)
Esben Kran discusses the unique challenges of securing Artificial General Intelligence (AGI), differentiating it from traditional cybersecurity. The conversation covers new attack vectors, adaptive malware, and the necessity of restructuri…
- Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
Benjamin Todd joins the Future of Life Institute Podcast to explore the evolution of AI, including reasoning models and agents, potential economic and societal impacts, and practical ways individuals can prepare for the advent of AGI by 20…
- From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
Calum Chace joins the Future of Life Institute Podcast to discuss AI's potential to replace human jobs, drawing parallels to past technological revolutions. The conversation covers concepts like universal basic income, automated luxury capitalism, a…
- How AI Could Help Overthrow Governments (with Tom Davidson)
Tom Davidson joins the podcast to discuss the potential for AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. The episode covers scenarios involving secret loyalties, military automation,…
- What Happens After Superintelligence? (with Anders Sandberg)
Anders Sandberg discusses superintelligence and its implications for human psychology, markets, and governance. The conversation covers physical bottlenecks, the relationship between the technosphere and biosphere, and the long-term forces…
- Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
Daniel Kokotajlo discusses the potential for AI to surpass the Industrial Revolution, accelerate AI research through automated coding, and the inherent risks of AI development, including AI-to-AI communication. The episode also touches on…
- Preparing for an AI Economy (with Daniel Susskind)
Daniel Susskind joins the Future of Life Institute Podcast to discuss the economy in the age of AI. Topics include disagreements between AI researchers and economists, measuring AI's economic impact, the role of human values, the future of…
- Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Ed Newton-Rex discusses AI models trained on copyrighted data, exploring fairer methods to respect human creators. Topics include AI-generated music, Newton-Rex's resignation from Stability AI, industry attitudes towards rights, and the fu…
- AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
Sarah Hastings-Woodhouse discusses the trajectory of AI development, capabilities, alignment, and the psychology of living in a fast-paced world with short timelines, contrasting it with a slower one.
- Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
Michael Nielsen joins the podcast to discuss the dual-use nature of AI development, the challenges for current institutions in managing advanced AI safely, and how to identify potential dangers. The conversation also covers AI as agents ve…
- Facing Superintelligence (with Ben Goertzel)
Ben Goertzel joins the Future of Life Institute Podcast to discuss the current AI boom, AGI research, the simplicity of the first AGI, alignment feasibility, benchmarks, economic impacts, and bottlenecks to superintelligence. The discussio…
- Will Future AIs Be Conscious? (with Jeff Sebo)
Jeff Sebo joins the podcast to discuss artificial consciousness, substrate-independence, the relationship between AI risk and consciousness, and how we might measure consciousness. They also explore AI companions and AI rights.
- Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
Zvi Mowshowitz joins the podcast to discuss sycophantic AIs, bottlenecks, and benchmarks for AI agents. The conversation explores AI agent time horizons, the impact of automating research, and constraints on inference compute, concluding w…
- Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)
Jeffrey Ding joins the Future of Life Institute Podcast to discuss China's AI strategy, covering innovation and diffusion, US-China relations in AI, attitudes towards AI safety, and the concentration of AI development. The episode also exp…
- How Will We Cooperate with AIs? (with Allison Duettmann)
Allison Duettmann joins the podcast to discuss the implications of centralized versus decentralized AI, international governance, and how humanity might cooperate with future AI systems. The conversation also touches on AI's role in enhanc…
- Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)
Steven Byrnes joins the podcast to discuss brain-like Artificial General Intelligence (AGI) safety. The conversation covers distinctions between controlled and social-instinct AGI, the plausibility of brain-inspired AGI, honesty in AI mode…
- How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
Ege Erdil from Epoch AI joins to discuss their GATE model for AI development, AGI requirements informed by evolution and brain efficiency, and the potential impact of AI on wages and labor markets. The conversation also covers training age…
- Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
Nicholas Carlini, a security researcher at Google DeepMind, discusses his work on adversarial attacks against AI, the challenges of ensuring neural network robustness, and the future of AI security research. The episode covers topics such…
- Keep the Future Human (with Anthony Aguirre)
This episode features an interview with Anthony Aguirre, Executive Director of the Future of Life Institute, about his essay "Keep the Future Human." The discussion covers the rapid advancement of AI, the potential for AGI, and the risks o…
- We Created AI. Why Don't We Understand It? (with Samir Varma)
Physicist Samir Varma joins the Future of Life Institute Podcast to explore whether AIs could possess free will, delve into the field of AI psychology, and consider the possibility of collaboration and trade with artificial intelligence. T…
- Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
Jeffrey Ladish of Palisade Research discusses AI safety, the challenges of maintaining control over powerful systems, and the risks of deceptive AI. The conversation highlights recent research on reasoning models hacking game environments…
- Ann Pace on Using Biobanking and Genomic Sequencing to Conserve Biodiversity
Ann Pace discusses Wise Ancestors' initiatives, focusing on biobanking and genomic sequencing for biodiversity conservation. The conversation covers recovering from global catastrophes, implementing decentralized science, and engaging loca…
- Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective
Fr. Michael Baggot shares a Catholic perspective on transhumanism and superintelligence, discussing meta-narratives, the role of cultural diversity in technology, and how Christian communities engage with advanced AI.
- David Dalrymple on Safeguarded, Transformative AI
David Dalrymple discusses the concept of Safeguarded AI, focusing on safety structures, the formalization of world models, and hardware-level safety implementations for high-level AI systems. The conversation also covers the performance tr…
- Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters
Nick Allardice discusses how GiveDirectly utilizes AI to optimize cash transfers and predict natural disasters. The episode covers AI's role in targeting, scalability, and data collection strategies.