AI goes to war - Today, Explained Recap
Podcast: Today, Explained
Published: 2026-03-04
Duration: 26 min
Summary
This episode discusses the increasing role of artificial intelligence in modern warfare, particularly focusing on the ongoing conflict with Iran and the use of AI technologies in military operations. The conversation highlights how AI is reshaping military strategies and decision-making processes.
What Happened
The episode opens with a comical contrast between President Trump's vague explanation of the U.S. military's actions in Iran and a far clearer answer from an AI model on the same question. AI is now being employed in active military conflicts, marking a significant shift in how wars are fought. Host Sean Rameswaram introduces the premise that the future of warfare is already being shaped by AI technologies, setting the foundation for the discussion ahead.
Paul Scharre, author of "Four Battlegrounds: Power in the Age of Artificial Intelligence," shares insights into the military's adoption of AI tools over the past decade, emphasizing the role of large language models like ChatGPT and Claude in operations in Iran. Scharre explains that AI excels at processing vast amounts of information, which has become crucial to U.S. military strategy, particularly after Iran's conventional military capabilities were incapacitated. This includes using AI to analyze satellite imagery and prioritize new targets at machine speed, far beyond what human analysts can manage.
The conversation expands to other geopolitical conflicts, including the use of AI in the operation to capture Venezuelan President Nicolas Maduro and its applications in Ukraine and Gaza. In Ukraine, drones equipped with AI can autonomously complete missions; in Gaza, machine learning systems synthesize complex data for rapid target identification. The episode also raises ethical questions about human oversight of AI-driven military decisions, pointing toward a future in which autonomous weapons could make life-and-death choices without human intervention. The unpredictability of combat further complicates AI's integration, as demonstrated by a bombing in Iran that caused civilian casualties, underscoring the dangers of relying on AI in warfare.
Key Insights
- AI's role in modern military strategies
- The contrast between human and AI decision-making
- Ethical implications of autonomous weapons
- The evolving nature of warfare with AI technology
Key Questions Answered
How is AI being used in the Iran conflict?
The United States has employed AI in its military operations in Iran, chiefly to process large volumes of information quickly. This includes analyzing satellite imagery to identify and prioritize targets at machine speed, especially after Iran's conventional military capabilities were severely diminished.
What are the implications of AI in military decision-making?
As AI systems take on larger roles in military strategy, concerns grow about the extent of human oversight in these decisions. The episode highlights that while humans still approve targets, the sheer volume of information processed by AI can reduce human involvement to little more than a rubber stamp.
What role did AI play in capturing Nicolas Maduro?
AI tools, specifically Anthropic's Claude, were integrated into the U.S. military's classified networks to assist in planning the operation that captured Venezuelan President Nicolas Maduro. While Claude wasn't directly involved in combat, its capabilities aided in processing intelligence and operational planning.
How are drones being utilized in Ukraine?
In Ukraine, AI technology is being used to give drones a level of autonomy, allowing them to complete missions once a target is locked on by a human operator. This capability represents a shift towards more automated military operations, though it's noted that this is not yet widespread.
What ethical concerns arise from using AI in warfare?
The increasing reliance on AI in military operations raises significant ethical questions, particularly around the prospect of fully autonomous weapons. The discussion points to a future where machines could make critical decisions on the battlefield, risking unintended consequences and a loss of human control.