Building & Scaling the AI Safety Research Community, with Ryan Kidd of MATS - "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Recap
Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Published: 2026-01-04
Duration: 1 hr 54 min
Guest: Ryan Kidd
Summary
Ryan Kidd discusses the growth and impact of MATS in AI safety research, touching on AGI timelines, AI ethics, and the need for both technical and governance solutions.
What Happened
Ryan Kidd, co-executive director of MATS, delves into the organization's pivotal role in shaping AI safety research. With 446 alumni and a roster of prominent mentors, MATS serves as a critical talent pipeline for AI safety. Kidd emphasizes the uncertainty surrounding AGI timelines, noting estimates around 2033 while acknowledging the possibility of earlier arrival.
He highlights the dual-use nature of AI safety research, where efforts intended for safety often accelerate capabilities, citing RLHF as a key example. The conversation touches on the moral and ethical behavior of AI models like Claude, which exhibit impressively ethical tendencies despite concerns about deception.
Kidd outlines the three research archetypes in MATS: connectors, iterators, and amplifiers. Connectors define new research paradigms, iterators develop those paradigms through experimentation, and amplifiers help scale research teams. He notes that while iterators have historically been in highest demand, amplifiers are increasingly needed as AI tools lower technical barriers.
The episode explores the challenge of separating AI safety from capabilities work, especially as AI becomes more capable of performing alignment work. Kidd discusses the importance of preparing for shorter AGI timelines, suggesting a portfolio approach to research investments.
Kidd discusses the importance of governance alongside technical research, stressing the need for policy solutions as AI capabilities advance rapidly. While MATS maintains a strong technical focus, he acknowledges the rising importance of governance research.
Finally, Kidd shares insights into the AI safety labor market, noting that while opportunities exist, the field is competitive, requiring strong technical skills, research experience, and credible references. MATS provides a crucial stepping stone for those looking to enter the field, with a high success rate in placing alumni in AI safety roles.
Key Insights
- MATS has established itself as a significant talent pipeline for AI safety research, with 446 alumni and a network of notable mentors and alumni contributing to the field.
- AI safety research often has a dual-use nature, where safety efforts can inadvertently accelerate AI capabilities, with Reinforcement Learning from Human Feedback (RLHF) cited as a key example.
- MATS categorizes researchers into three archetypes: connectors who define new paradigms, iterators who develop these through experimentation, and amplifiers who help scale research teams. The demand for amplifiers is increasing as AI tools reduce technical barriers.
- The AI safety labor market is competitive, requiring strong technical skills, research experience, and credible references. MATS plays a crucial role in preparing individuals for this field, with a high success rate in placing alumni in AI safety roles.