How Is AI Being Used In The Iran War? - Science Friday Recap

Podcast: Science Friday

Published: 2026-03-12

Duration: 14 min

Guest: Karen Hao

Summary

The episode examines the role of large language models in military decision-making, particularly in the Iran war, and critiques the ethical and practical implications of using tools like Anthropic's Claude to identify bombing targets.

What Happened

AI's integration into military operations has reached a critical moment, with large language models like Anthropic's Claude reportedly being used to analyze intelligence data and identify bombing targets in the Iran war. Reports suggest that Claude identified around a thousand targets, but the technology's inherent inaccuracies raise concerns about its reliability in life-and-death decisions.

The use of AI in the bombing of a school in Iran—followed by a second strike on first responders—has sparked speculation about whether Claude misidentified civilian locations as military targets. While U.S. officials have stated it's unclear if AI was responsible, the lack of transparency and accountability in these decisions highlights the risks of deploying these tools in warfare.

Anthropic's relationship with the Pentagon adds another layer of complexity. Despite the company's resistance to fully autonomous weapons, Anthropic allowed Claude to be used as a decision-support system for identifying bombing targets, a practice criticized for fostering automation bias in human decision-makers.

Anthropic's CEO, Dario Amodei, has further complicated the ethical debate by signaling openness to developing autonomous weapons in the future while opposing such use of the current iteration of Claude. This stance has drawn criticism for contradicting the company's self-proclaimed ethical approach.

The episode also explores the broader public resistance to unchecked AI development. Polls show that 80% of Americans support AI regulation, and grassroots movements against data center expansion have become an effective means of slowing the industry's growth and demanding accountability.

Karen Hao highlights the imperial nature of AI companies like Anthropic, comparing the company's ethical branding to 'clean coal': an attempt to position fundamentally harmful work as responsible. She argues for stronger public pushback against AI's reckless deployment in the military and other sectors.

Looking ahead, the episode points to the growing coalition of public resistance as a source of optimism in the fight against the unchecked expansion of AI technologies. Hao emphasizes the importance of applying lessons from local grassroots movements to challenge AI's broader impacts, including its use in defense and unethical data practices.

Key Questions Answered

How is Anthropic's Claude being used in the Iran war?

Claude, a large language model developed by Anthropic, has reportedly been used to analyze intelligence data and identify bombing targets. However, its inaccuracies may have contributed to tragic errors, such as the bombing of a school and the subsequent strike on first responders.

What is automation bias, and how does it affect AI in warfare?

Automation bias occurs when humans overly trust AI outputs, believing them to be more accurate than they may actually be. In warfare, this bias can lead to unquestioned acceptance of AI-identified targets, increasing the risk of civilian casualties.
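
To make this dynamic concrete, here is a minimal toy simulation (not from the episode; the error and review rates below are invented for illustration) contrasting a reviewer who rubber-stamps every automated flag with one who independently vets each of them:

```python
import random

random.seed(0)  # reproducible illustration

N = 10_000             # items the automated system screens
FALSE_POSITIVE = 0.05  # assumed rate at which the AI wrongly flags an item
HUMAN_CATCH = 0.80     # fraction of AI errors a vigilant reviewer catches

# Number of items the AI wrongly flags.
ai_errors = sum(random.random() < FALSE_POSITIVE for _ in range(N))

# Automation bias: every AI error is accepted and becomes a final error.
biased_errors = ai_errors

# Independent review: each AI error is caught with probability HUMAN_CATCH.
vigilant_errors = sum(random.random() >= HUMAN_CATCH for _ in range(ai_errors))

print(f"AI false flags:          {ai_errors}")
print(f"Final errors (biased):   {biased_errors}")
print(f"Final errors (vigilant): {vigilant_errors}")
```

Under these assumed numbers, rubber-stamping passes every false flag through, while independent review removes roughly four in five of them. The point is not the specific rates but that a human check adds safety only when it is genuinely independent of the model's output.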

What resistance exists to the expansion of AI technologies?

Grassroots movements across the U.S. are protesting data center expansions, pressuring local governments, and voting out officials who support unchecked AI growth. This resistance reflects growing public demand for regulation and accountability in the AI industry.