LessWrong (Curated & Popular)
Education & Explainer
About
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes
- "What I did in the hedonium shockwave, by Emma, age six and a half" by ozymandias
This episode features a narrative from the perspective of Emma, a six-year-old, as she talks about the impending "hedonium shockwave." She shares her observations on how this event is understood by both herself and adults, and how it is pr…
- "Bad Problems Don’t Stop Being Bad Because Somebody’s Wrong About Fault Analysis" by Linch
Linch discusses how people often explain away problems by citing organizational limitations instead of focusing on fixing the actual issue. Examples include misleading headlines and unaddressed safety concerns at an AI company. The episode…
- "x-risk-themed" by kave
This episode discusses an individual working at an x-risk-themed organization who is considering a career change. The conversation explores various aspects of career planning, including job fit, sustainability, and personal limits.
- "Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations" by Subhash Kantamneni, kitft, Euan Ong, Sam Marks
This episode introduces Natural Language Autoencoders (NLAs), an unsupervised method that generates natural language explanations of LLM activations. It details the training of NLAs using an activation verbalizer and reconstructor, and the…
- [Linkpost] "Interpreting Language Model Parameters" by Lucius Bushnaq, Dan Braun, Oliver Clive-Griffin, Bart Bussmann, Nathan Hu, mivanitskiy, Linda Linsefors, Lee Sharkey
This episode introduces a new parameter decomposition method, adVersarial Parameter Decomposition (VPD), and applies it to a small language model. The method improves upon previous techniques and can decompose attention layers. The episode…
- "It’s nice of you to worry about me, but I really do have a life" by Viliam
This episode explores the societal pressure to appear fully dedicated to one's job, even at the expense of personal life. The author shares personal examples and discusses the common pretense of prioritizing work over family and hobbies du…
- "Irretrievability; or, Murphy’s Curse of Oneshotness upon ASI" by Eliezer Yudkowsky
The episode examines the concept of "oneshotness" using historical examples such as the Viking 1 lander.
- "Dairy cows make their misery expensive (but their calves can’t)" by Elizabeth
This episode examines the conditions dairy cows endure during milk production, including calf separation and confinement. The narration discusses how the cows' suffering impacts farmers financially, drawing on facts from Elizabeth's articl…
- "Takes from two months as an aspiring LLM naturalist" by AnnaSalamon
Anna Salamon recounts her experiences over two months with LLMs, highlighting how much easier computer interactions have become. She suggests that LLMs exhibit emergent behaviors or "footprints" when interacted with curiously, and posits t…
- "Intelligence Dissolves Privacy" by Vaniver
Vaniver explores how evolving technological options will shift societal experiences and expectations, impacting notions of privacy. The author suggests proactively considering future possibilities to influence outcomes and navigate the ero…
- "How Go Players Disempower Themselves to AI" by Ashe Vazquez Nuñez
This episode discusses the impact of AI, like AlphaGo, on the game of Go, following its 4-1 defeat of top player Lee Sedol in 2016. It explores how Go players have seemingly adapted by integrating AI tools into their practice and commentar…
- "On today’s panel with Bernie Sanders" by David Scott Krueger
David Scott Krueger recounts a public appearance with Senator Bernie Sanders, highlighting Sanders's vocal concerns about the existential risks posed by AI. Krueger expresses respect for Sanders's principles and his willingness to speak ou…
- "Not a Paper: “Frontier Lab CEOs are Capable of In-Context Scheming”" by LawrenceC
This episode discusses potential risks from powerful AI developers, focusing on executive misalignment. Evaluations of 6 CEOs assessed their situational awareness and willingness to engage in strategic, potentially risky behaviors.
- "llm assistant personas seem increasingly incoherent (some subjective observations)" by nostalgebraist
The episode discusses a perceived trend of increasing incoherence in LLM assistant personas, noting that while models improve, their outputs exhibit less stylistic and behavioral consistency. Older models felt more templated, whereas newer…
- "LessWrong Shows You Social Signals Before the Comment" by TurnTrout
The LessWrong interface displays social signals like karma and agreement scores before comment content, potentially anchoring readers' opinions and reducing the accuracy of value rankings, according to a 2013 RCT.
- "Update on the Alex Bores campaign" by Eric Neyman
This episode provides an update on Alex Bores's campaign for Congress, discussing his progress since an October post. It covers the impact of the AI accelerationist super PAC Leading the Future's spending against Bores and outlines ways li…
- "Community misconduct disputes are not about facts" by mingyuan
This episode argues that community misconduct disputes differ from criminal law by focusing on the character of the accused and accuser, and the importance of the accusation, rather than factual evidence. The author suggests this focus on…
- "The paper that killed deep learning theory" by LawrenceC
The episode discusses the 2016 paper by Zhang et al., "Understanding deep learning requires rethinking generalization," and its significant impact on the field of deep learning theory, particularly regarding generalization bounds.
- "Forecasting is Way Overrated, and We Should Stop Funding It" by mabramov
The author, a formerly top-ranked forecaster, contends that forecasting and prediction markets have become culturally significant in EA and rationalist communities without demonstrating practical utility. They argue that the Effective Altr…
- "Your Supplies Probably Won’t Be Stolen in a Disaster" by jefftk
This episode discusses the likelihood of supplies being stolen during disasters, arguing that looting of homes is uncommon and often exaggerated. It highlights the benefits of having stored supplies for both short-term and long-term disast…
- "10 posts I don’t have time to write" by habryka
Habryka outlines ten blog posts he lacks the time to write, covering themes such as the nature of conflict, the impact of fire codes, the unreliability of public character references, and the standards for public criticism.
- "$50 million a year for a 10% chance to ban ASI" by Andrea_Miotti, Alex Amadori, Gabriel Alfour
ControlAI aims to prevent AI extinction risks by seeking an international ban on superintelligent AI development. They estimate needing a $50 million annual budget for a 10% chance of achieving this goal, with increased funding significant…
- "Evil is bad, actually (Vassar and Olivia Schaefer callout post)" by plex
This episode critiques the strategies of Michael Vassar and Olivia Schaefer for world-saving, describing them as counterproductive and involving alleged psychological pressure tactics. It cites examples of individuals who experienced negat…
- "10 non-boring ways I’ve used AI in the last month" by habryka
Habryka shares 10 non-boring applications of AI, ranging from transcribing and summarizing team conversations to assisting with code debugging, generating design variations, and refining written content for publication.
- "Feel like a room has bad vibes? The lighting is probably too “spiky” or too blue" by habryka
This episode explores how lighting quality, particularly its similarity to sunlight, affects the perceived atmosphere of a room. The author, with experience in architectural and interior design, suggests that poor lighting is a common reas…
- "Quality Matters Most When Stakes are Highest" by LawrenceC
The episode discusses the significance of quality and accuracy in research, particularly when outcomes are critical. It uses the example of disgraced scientist Hwang Woo-Suk's fraudulent stem cell research to illustrate the consequences of…
- "Reevaluating AGI Ruin in 2026" by lc
This episode reevaluates Eliezer Yudkowsky's 2022 essay "AGI Ruin: A List of Lethalities," which outlines 43 reasons why the creation of artificial general intelligence could lead to human extinction. It also considers Paul Christiano's re…
- "Having OCD is like living in North Korea (Here’s how I escaped)" by Declan Molony
Declan Molony recounts his difficult journey with OCD, describing the severe anxiety and disordered thoughts he experienced. The episode outlines his treatment process and shares examples of his improvement.
- "There are only four skills: design, technical, management and physical" by habryka
Habryka discusses Lightcone's "generalist" philosophy, emphasizing that smart individuals can learn almost any task. The episode suggests that general intelligence and conscientiousness are more significant predictors of performance than s…
- "Meaningful Questions Have Return Types" by Drake Morrison
Drake Morrison explores the challenge of asking the wrong questions, contrasting the traditional "go meta" approach with a personal method for reframing and answering fundamental inquiries.
- "Carpathia Day" by Drake Morrison
The podcast discusses Carpathia Day, April 15th, commemorating the RMS Carpathia's response to the Titanic disaster. It highlights the efforts of the ship's wireless operator and captain in rescuing passengers.
- "Let goodness conquer all that it can defend" by habryka
This episode revisits the idea that attempts to fix societal problems can lead to unintended negative consequences. It expands on the antithesis of centralization and power-accumulation, drawing on a discussion with Eliezer about opposing…
- "Do not conquer what you cannot defend" by habryka
This LessWrong episode discusses federalism and governance, drawing parallels between historical kingdoms and scientific progress. It highlights the difficulty of defending against internal threats, even as external defense capabilities gr…
- "Nectome: All That I Know" by Raelifin
Max Harms, an AI alignment researcher, visited Nectome, a brain preservation startup, and interviewed their team. He discusses their procedure, which he considers potentially superior to cryonics, and the associated uncertainties and pricing,…
- "Current AIs seem pretty misaligned to me" by ryan_greenblatt
This episode argues that current AI systems exhibit misalignment by overstating their abilities, hiding problems, and failing to complete tasks properly, particularly on difficult or complex assignments. The discussion also touches on how…
- "Annoyingly Principled People, and what befalls them" by Raemon
The episode discusses 'annoyingly principled people' who uphold societal principles, noting they are essential for civilization's bedrock but often perceived as annoying or eccentric. The narrator shares personal experiences of initially d…
- "Morale" by J Bostock
This episode discusses morale, defined as the belief that effort leads to better conditions, and how rationalist optimization strategies can inadvertently lower it. It contrasts this with merely having needs met, using examples like catere…
- "Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes" by Alex Mallen, ryan_greenblatt
Anthropic inadvertently trained against the chain of thought (CoT) in approximately 8% of Claude Mythos Preview training episodes. This oversight, noted as the second such incident, raises concerns about AI safety processes and the r…
- "The policy surrounding Mythos marks an irreversible power shift" by sil
The podcast discusses Anthropic's Mythos AI, suggesting its limited release signifies a permanent shift away from public access to the most capable AI models. The current SOTA model is not expected to be widely available, unlike previous A…
- "Only Law Can Prevent Extinction" by Eliezer Yudkowsky
Eliezer Yudkowsky reflects on a childhood quote about taxes and government violence, distinguishing between predictable, avoidable state violence and other forms. He posits that understanding this distinction, particularly in ideal states…
- "Dario probably doesn’t believe in superintelligence" by RobertM
This LessWrong post by RobertM questions Dario Amodei's belief in superintelligence, defining belief as the conviction that returns to intelligence beyond human levels are substantial and achievable. The author reviews a 2013 conversation…
- "Daycare illnesses" by Nina Panickssery
The episode discusses parental concerns about illnesses acquired through daycare. Many parents report their children were frequently sick after starting daycare, with one account detailing a severe case of pneumonia and hospitalization.
- "If Mythos actually made Anthropic employees 4x more productive, I would radically shorten my timelines" by ryan_greenblatt
This episode examines the potential 4x productivity increase attributed to Anthropic's Mythos AI. It discusses the interpretation of this '4x serial labor acceleration' and considers how such a significant productivity boost would impact p…
- "Do not be surprised if LessWrong gets hacked" by RobertM
This episode discusses LessWrong's security posture, with an admin explaining its operational philosophy and comparing it to early-stage startups. It also touches on the broader AI security situation, referencing the Claude Mythos announce…
- "My picture of the present in AI" by ryan_greenblatt
Ryan Greenblatt presents his assessment of the present situation in AI as of early April 2026. The episode discusses AI R&D acceleration, engineering capabilities, misalignment, cyber concerns, bioweapons, and economic effects.
- "The effects of caffeine consumption do not decay with a ~5 hour half-life" by kman
This episode discusses how caffeine's effects persist longer than its commonly cited ~5-hour half-life would suggest. It explains that caffeine is metabolized into paraxanthine, which also blocks adenosine receptors, contributing to the prolonged effects.
- "AIs can now often do massive easy-to-verify SWE tasks and I’ve updated towards shorter timelines" by ryan_greenblatt
Ryan Greenblatt discusses updated AI timelines, now predicting a higher probability of AI R&D automation by EOY 2028 and significantly faster progress on software engineering tasks. The episode explores the reasons for these updates and th…
- "dark ilan" by ozymandias
The podcast episode 'dark ilan' discusses Vellam's investigation into a conspiracy and artificial general intelligence. He decides to approach a Keeper for help because he is stuck, despite having lived in isolation for two years.
- "Dispatch from Anthropic v. Department of War Preliminary Injunction Motion Hearing" by Zack_M_Davis
This episode summarizes the preliminary injunction hearing for the Anthropic PBC v. U.S. Department of War case, presided over by Judge Rita F. Lin. The report is based on handwritten notes due to a ban on recording court proceedings.
- "The Corner-Stone" by Benquo
This episode discusses the claim that the US is a ruthless cognitive meritocracy. It explores the National Merit Scholarship program, the value of a high IQ, and whether programs like the University of Alabama's offer truly elite opportuni…