The Case for a Global Ban on Superintelligence (with Andrea Miotti) - Future of Life Institute Podcast Recap

Podcast: Future of Life Institute Podcast

Published: 2026-02-20

Duration: 1 hr 7 min

Summary

The episode discusses the urgent need for a global ban on superintelligence due to its existential risks to humanity, arguing that widespread public understanding of these dangers matters more than any single regulatory measure. Andrea Miotti contends that the lobbying tactics used by AI companies mirror those of the tobacco industry, prioritizing profit over safety.

What Happened

In this episode, host Gus Docker welcomes Andrea Miotti, founder and CEO of Control AI, to explore the perilous landscape of artificial intelligence and the pressing need for regulation. Miotti highlights the alarming predictions made by influential figures in the tech industry, including Sam Altman and Elon Musk, who have warned of a significant chance of human extinction due to superintelligence. Despite these warnings, Miotti notes, AI companies have been actively lobbying against regulation, striving to keep their development unchecked even as the risks become more apparent.

Miotti compares the current tactics of AI companies to those employed by the tobacco industry, which historically downplayed the health risks of smoking. He points out that while CEOs acknowledge the potential dangers of superintelligence, they simultaneously lobby for minimal regulation, often calling for "evidence-based" approaches that can delay necessary safeguards indefinitely. The conversation emphasizes that the solution lies in widespread public awareness of and concern about the risks of superintelligence, which could drive collective action even before specific laws are passed.

Key Questions Answered

What did Sam Altman say about superhuman machine intelligence?

Sam Altman stated, 'The development of superhuman machine intelligence is the greatest threat to the existence of humanity.' This indicates his deep concern about the potential consequences of AI technology surpassing human intelligence. It highlights the urgency of addressing the risks associated with superintelligence before it becomes a reality.

How do AI companies lobby against regulation?

Andrea Miotti explains that AI companies have been employing a lobbying strategy reminiscent of the tobacco industry's past. They aim to prevent regulation by claiming they support targeted or evidence-based regulations, yet consistently oppose any specific proposals that arise. This strategy allows them to delay necessary oversight while continuing their operations unchecked.

What are the predicted risks of superintelligence according to experts?

Dario Amodei, CEO of Anthropic, has suggested there is up to a 25% chance of a catastrophic outcome that could lead to human extinction, and Elon Musk has indicated a 20% chance of annihilation. Both warnings emphasize the potential for AI systems to gain control over humanity once they surpass human intelligence, with dire consequences.

What role does public awareness play in AI regulation?

Miotti emphasizes that the best antidote to the lobbying power of AI companies is public awareness. As more individuals become informed about the risks of superintelligence, there is a greater likelihood that they will demand action and make informed decisions regarding AI development. This collective understanding could drive the necessary changes without waiting for formal legislation.

How do the actions of AI companies reflect their understanding of risks?

Despite acknowledging the potential dangers of superintelligence, AI companies continue to lobby against regulation. Miotti points out that their actions, including spending heavily to block regulation while raising billions of dollars for development, contradict their stated concerns. This behavior underscores a prioritization of profit over public safety, echoing the strategies the tobacco industry used to protect its interests.