Future of Life Institute Podcast

Can AI Do Our Alignment Homework? (with Ryan Kidd)

In this episode, Ryan Kidd discusses AGI timelines, the risks of model deception, and whether AI safety research can inadvertently boost capabilities. Kidd outlines research tracks at MATS, describes key researcher archetypes, and offers advice for those interested in AI…