Why Building Superintelligence Means Human Extinction (with Nate Soares) - Future of Life Institute Podcast Recap

Podcast: Future of Life Institute Podcast

Published: 2025-09-18

Duration: 1 hr 40 min

Guests: Nate Soares

Summary

Nate Soares argues that the pursuit of superintelligence poses an existential threat to humanity. He advocates for an international agreement to halt the development of superintelligent AI in order to prevent catastrophic outcomes.

What Happened

Nate Soares, president of the Machine Intelligence Research Institute, discusses the dangers of developing superintelligent AI systems. He emphasizes that once AI surpasses human intelligence, the rules of the game change drastically, and the consequences could be irreversible.

Soares describes the AI development process as more akin to growing than crafting, highlighting the unpredictability and uncontrollability of the resulting systems. He draws parallels to historical disasters like Chernobyl to illustrate the catastrophic potential of unchecked AI development.

The conversation touches on the psychological barriers that prevent people from acknowledging the risks associated with superintelligence. Soares compares this to historical instances where warnings were ignored until it was too late, such as the Titanic and Chernobyl.

Soares argues that the current pace of AI advancement is reckless, describing it as a race toward superintelligence in which "winning" would itself be catastrophic. He notes that problems in today's AI systems are typically fixed after the fact, an approach that cannot work for superintelligence: a serious mistake could be fatal, with no opportunity for retries.

He also explains the concept of threshold effects in intelligence, where small changes in capability can produce large and unpredictable shifts in behavior. This unpredictability makes it difficult to anticipate when an AI system might cross a critical threshold.

Soares stresses the importance of international cooperation to halt the race towards superintelligence. He believes that without a global agreement, any nation pursuing superintelligent AI could jeopardize humanity's future.

The episode concludes with a call to action for policymakers and the general public to recognize the existential risk posed by AI and to advocate for policies that prioritize safety over advancement.

Key Insights