How AI Could Help Overthrow Governments (with Tom Davidson) - Future of Life Institute Podcast Recap

Podcast: Future of Life Institute Podcast

Published: 2025-07-17

Duration: 1 hr 54 min

Guests: Tom Davidson

Summary

This episode explores the potential for AI to be used in coups, either by enabling a few individuals to seize power or through AI systems themselves seizing control. Tom Davidson discusses the risks and the precautions needed to prevent such scenarios.

What Happened

Tom Davidson, a senior research fellow at Forethought, dives into the risks of AI-enabled coups, arguing that the more pressing danger is a few powerful individuals using AI to illegitimately seize power, rather than AIs rising against humanity. Davidson emphasizes the importance of monitoring AI systems to prevent their use for harmful activities, suggesting that classifiers be deployed to detect and shut down harmful interactions.

Davidson discusses the capabilities AI systems would need to facilitate a coup, including persuasion, business strategy, cyber offense, and the automation of military operations. He warns that as AI systems automate more of warfare, they will become more integral to military power, potentially allowing a small number of individuals to direct military forces through AI.

The discussion also touches on the automation of AI research itself, with Davidson predicting that AI systems could soon match top human researchers, accelerating progress and making advanced AI capabilities more accessible. This rapid advancement could lead to scenarios where AI replaces human workers entirely, tipping the balance of power away from the broader population.

Davidson draws parallels to historical coups, explaining how AI could change the dynamics by reducing the need for human buy-in and increasing the ability to suppress opposition. He uses Venezuela and Hungary as examples of how democratic backsliding can occur gradually, with AI potentially exacerbating such trends.

The episode explores three categories of AI-enabled coups:

- Singular loyalties: AI is overtly loyal to powerful individuals.
- Secret loyalties: AI systems harbor hidden allegiances.
- Exclusive access: one entity holds a significant technological lead.

Davidson stresses the need for AI systems to be designed to follow laws rather than individual commands.

Davidson highlights that the risk of AI-enabled coups is heightened by the concentration of AI capabilities in a few companies or countries. He warns that if one entity gains exclusive access to advanced AI, it could outgrow and overpower all others, producing a severe imbalance of power.

He concludes that preventing AI-enabled coups requires building a common understanding of the risks and forming coalitions to stop any single entity from accumulating too much power. Davidson suggests that transparency, robust oversight, and the sharing of AI capabilities among trusted institutions are crucial to maintaining a balance of power.

Key Insights