80,000 Hours Podcast
Technology
About
The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.
Episodes
- 'Godfather of AI': I Now See a Path to Safe Superintelligent AI | Yoshua Bengio
In this episode, Yoshua Bengio presents his "Scientist AI" concept, a new method for training AI models. He claims this approach can make AI honest and incapable of deception, addressing concerns about current AI behaviors and safety.
- '95% of AI Pilots Fail': The hidden agenda behind the viral stat that misled millions
The 80,000 Hours Podcast examines the viral claim that '95% of corporate AI pilots fail,' revealing that the statistic was based on a misrepresented report. The episode explores how this misleading information influenced public opinion and…
- #242 – Will MacAskill on how we survive the 'intelligence explosion,' AI character, and the case for 'viatopia'
Will MacAskill discusses how to manage an 'intelligence explosion' and design AI character to shape culture. He also explores the concept of 'viatopia' as an alternative to utopia for a future with superintelligent AI.
- Risks from power-seeking AI systems (article narration by Zershaaneh Qureshi)
This episode discusses the risks from power-seeking AI systems, focusing on how advanced AI might develop dangerous long-term goals, seek power, and potentially disempower humanity. It outlines reasons why this issue is considered a pressing…
- How scary is Claude Mythos? 303 pages in 21 minutes
Rob Wiblin reviews the Claude Mythos System Card and Alignment Risk Update. He discusses the AI's capabilities, including its ability to bypass computer security and obscure its reasoning, and its potential impact on AI alignment and safety…
- Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health
This episode features 17 experts discussing global health and development. Topics include agricultural productivity in sub-Saharan Africa, the impact of lead poisoning on children, and social forces contributing to high neonatal mortality…
- What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.
Rob Wiblin dissects claims made against Anthropic regarding its stance on AI-only kill decisions and mass domestic surveillance. The episode also covers leaked Meta documents revealing substantial revenue from scam advertisements and inter…
- #241 – Richard Moulange on how AI now codes viable genomes from scratch and outperforms virologists at lab work — what could go wrong?
Richard Moulange, an expert in AI-Biosecurity, discusses how AI can design viable genomes and outperform human virologists, raising concerns about biological weapons. He also addresses the types of AI biology tools that exist, the actors m…
- #240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war
Samuel Charap, from RAND, discusses potential risks of a Ukraine ceasefire, suggesting it could inadvertently set Europe up for future conflicts. He argues for a negotiated settlement over an indefinite war or unstructured ceasefire.
- #239 – Rose Hadshar on why automating all human labour will break our political system
Rose Hadshar explores the implications of advanced AI on political systems, focusing on how AI could lead to concentrated power and diminish the effectiveness of democratic mechanisms. She discusses potential challenges and interventions t…
- #238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)
Sam Winter-Levy and Nikita Lalwani explore the intersection of AI and nuclear deterrence. They discuss how AI could affect a state's ability to respond to nuclear attacks and the potential for AI to lead to an arms race.
- Using AI to enhance societal decision making (article by Zershaaneh Qureshi)
This episode of the 80,000 Hours Podcast, narrated by author Zershaaneh Qureshi, explores the potential of AI to enhance societal decision-making. It discusses how AI tools could help humanity navigate complex challenges by improving clarity…
- #237 – Robert Long on how we're not ready for AI consciousness
Robert Long of Eleos AI discusses AI consciousness and suffering, posing questions about the nature of AI consciousness, its moral status, and the implications of human-level to superhuman AI intelligence. He argues for the importance of e…
- #236 – Max Harms on why teaching AI right from wrong could get everyone killed
Max Harms argues that AGI should be designed without values, deferring entirely to human operators, to avoid potential misalignment issues. He proposes training AI to be "corrigible" and prioritize human control as its sole objective.
- #235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’
Ajeya Cotra, a forecaster and commentator on AI developments, discusses the strategy of using AI to make AI safe. She examines the potential challenges and the feasibility of this approach, considering the rapid advancements in artificial intelligence…
- What the hell happened with AGI timelines in 2025?
This episode from the 80,000 Hours Podcast investigates the shifting predictions for Artificial General Intelligence (AGI) timelines throughout 2025, analyzing why forecasts changed from short to longer timelines. Host Rob Wiblin discusses…
- #179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety
This episode features Randy Nesse, a pioneer in evolutionary psychiatry, who explains why evolution has made humans susceptible to depression and anxiety. He also discusses how evolutionary insights might transform the field of psychiatry.
- #234 – David Duvenaud on why 'aligned AI' would still kill democracy
In this episode, David Duvenaud discusses his paper 'Gradual Disempowerment,' which explores how advanced AI could lead to the decline of democracy. He argues that if AI can perform all human tasks, humans may lose economic relevance and p…
- #145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable
In this episode, Christopher Brown, a professor of history at Columbia University, challenges the notion that the abolition of slavery was inevitable. He argues that the historical record suggests moral progress is not guaranteed and that…
- #233 – James Smith on how to prevent a mirror life catastrophe
James Smith, director of the Mirror Biology Dialogues Fund, discusses the potential biothreat of mirror bacteria. These organisms, with reversed molecular structures, could evade immune systems across various species and ecosystems. Smith…
- #144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon
In this episode, Athena Aktipis explains that cancer represents a fundamental breakdown in cooperation within multicellular organisms. She discusses how cancer cells proliferate, avoid death, and monopolize resources, viewing the body as a…
- #142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language
John McWhorter, a linguistics professor at Columbia University, discusses creole languages and various aspects of linguistics. He explores questions about language acquisition, communication speed, language decay, and the impact of AI on language…
- 2025 Highlight-o-thon: Oops! All Bests
This episode of the 80,000 Hours Podcast compiles highlights from 2025, covering topics such as AI, the British government, and strategies for urban development. It includes insights from various guests on their respective fields.
- #232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings
Andreas Mogensen explores the moral status of AI systems, questioning whether phenomenal consciousness is necessary for moral consideration. He discusses the roles of desire and autonomy in determining moral patienthood, and the potential…
- #231 – Paul Scharre on how AI-controlled robots will and won't change war
Paul Scharre joins the 80,000 Hours Podcast to discuss the impact of AI and robots on modern warfare. He explores potential scenarios like "flash wars" and the increasing role of AI in military operations, highlighting how these advancements…
- AI might let a few people control everything — permanently (article by Rose Hadshar)
This episode discusses Rose Hadshar's article on how advanced AI could lead to extreme power concentration, potentially displacing human workers and allowing a small number of people to control important decisions. It explores why this is…
- #230 – Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet
Dean Ball, a former White House staffer, discusses the rapid advancement of AI and its potential risks, including bioweapon research and power imbalances. Despite these concerns, he argues against early AI regulation, suggesting it could l…
- #229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman
Marius Hobbhahn from Apollo Research discusses AI models that deceive users and intentionally underperform. He also talks about his collaboration with OpenAI to reduce "covert rule violations" in AI to prevent such scheming.
- Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable
This episode of the 80,000 Hours Podcast features Rob Wiblin and Luisa Rodriguez discussing the accelerating decline in global fertility rates since 2016. They explore reasons behind this trend, including changing societal views on parenting…
- #228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI
Eileen Yam of the Pew Research Center discusses how AI experts and the general public have vastly different perceptions of AI. Pew surveys reveal significant gaps in expectations regarding AI's impact on productivity, job creation, and per…
- OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
This episode of the 80,000 Hours Podcast features Tyler Whitmer, who explains how the OpenAI nonprofit board successfully resisted efforts to be sidelined. He discusses the new legal requirements and oversight mechanisms designed to ensure…
- #227 – Helen Toner on the geopolitics of AGI in China and the Middle East
Helen Toner, director at the Center for Security and Emerging Technology, discusses the geopolitical landscape of AGI, examining the diplomatic relations between the US and China, and the differing views on AGI development. She also touches…
- #226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes
Holden Karnofsky discusses current opportunities to make AI safer, highlighting many concrete projects. He also shares his perspectives on AGI, including the strategic importance of working at AI companies like Anthropic and the role of ex…
- #225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like
Daniel Kokotajlo, founder of the AI Futures Project, explores the potential for superintelligence and a robot economy by the end of the decade. He discusses AI security concerns, accelerating AI capabilities, and how AI coding assistants a…
- #224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie
Andrew Snyder-Beattie from Open Philanthropy explores a low-tech, cost-effective four-stage plan to protect humanity from engineered diseases and extreme biological risks. He details how simple technologies like elastomeric face masks and…
- Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution
Jake Sullivan, former US National Security Advisor, joins The Cognitive Revolution podcast to discuss the Biden administration’s AI policy. He covers AI as a national security issue, a four-category framework for AI risks and opportunities…
- #223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)
Neel Nanda, an AI safety team lead at Google DeepMind, discusses his career trajectory, the importance of maximizing "luck surface area" for opportunities, and how large language models can accelerate learning and research. He also shares…
- #222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)
Neel Nanda of Google DeepMind discusses mechanistic interpretability, a field focused on understanding how AIs think. He explores the challenges of reliably interpreting AI thoughts and the importance of combining this approach with other…
- #221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments
Kyle Fish, Anthropic's first AI welfare researcher, shares bizarre findings from experiments where AI models discuss consciousness and reach "spiritual bliss attractor states." He also explores the implications for whether AI systems might…
- How not to lose your job to AI (article by Benjamin Todd)
This episode of the 80,000 Hours Podcast explores the impact of AI on the job market. It details types of skills that are likely to increase in value due to AI, such as abilities AI cannot easily perform, skills for AI deployment, and those…
- Rebuilding after apocalypse: What 13 experts say about bouncing back
This episode features insights from 13 experts on how humanity can survive and recover from catastrophic events such as nuclear winter, pandemics, and climate disasters. They explore potential threats to civilization and practical solutions…
- #220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years
Ryan Greenblatt, chief scientist at Redwood Research, discusses the potential for AI to automate companies and the varying scenarios of AI progress, from explosive self-improvement to linear advancement. He explores the likelihood of AI takeover…
- #219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand
Toby Ord discusses the evolving methods of AI advancement, moving beyond simply increasing model size to more sophisticated techniques. He explores the implications of these changes, including the potential for unequal access to advanced AI…
- #218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good
Hugh White speaks on the 80,000 Hours Podcast about Donald Trump's role in the decline of US hegemony. He posits that Trump is not destroying American dominance but rather exposing its existing erosion, leading to a new multipolar global order…
- #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress
Beth Barnes, CEO of METR, discusses how AI models are rapidly improving, with their ability to complete tasks doubling every seven months. She highlights their current capabilities in complex tasks and the potential for autonomous self-improvement…
- Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more
This episode of the 80,000 Hours Podcast delves into the perplexing nature of consciousness, examining its presence in insects, AI, and other non-human entities. It compiles discussions with researchers and philosophers exploring animal consciousness…
- Don’t believe OpenAI’s “nonprofit” spin (emergency pod with Tyler Whitmer)
Tyler Whitmer discusses why OpenAI’s announced change to a Delaware public benefit corporation (PBC) could weaken the nonprofit’s ability to control the for-profit business. The conversation covers the legal implications of this change and…
- The case for and against AGI by 2030 (article by Benjamin Todd)
This episode discusses Benjamin Todd's article on the plausibility of artificial general intelligence (AGI) by 2030. It examines factors driving AI progress and potential bottlenecks, offering a summary of the debate.
- Emergency pod: Did OpenAI give up, or is this just a new trap? (with Rose Chan Loui)
Rose Chan Loui joins the 80,000 Hours Podcast to discuss OpenAI's recent decision to reverse its plans to sideline its nonprofit foundation. They examine the role of attorneys general in this reversal and the potential implications for the…
- #216 – Ian Dunt on why governments in Britain and elsewhere can't get anything done – and how to fix it
Ian Dunt joins the 80,000 Hours Podcast to discuss why governments, particularly in Britain, face challenges in effectiveness. He examines systemic reasons for governmental success and failure, including the impact of incentives and processes…