At 1:15 a.m. ET on February 28, 2026, the United States and Israel launched Operation Epic Fury — a massive, multi-domain assault on Iranian military and nuclear targets. Within 12 hours, nearly 900 strikes were carried out from land, air, and sea. At every layer of the operation, artificial intelligence was present.
Autonomous Drones: The LUCAS Swarm
One of the most notable AI-enabled assets deployed was the Low-Cost Unmanned Combat Attack System (LUCAS) — one-way attack drones operated by CENTCOM's Task Force Scorpion Strike. Costing roughly $35,000 each (compared to millions for an MQ-9 Reaper), these autonomous, long-range drones were reverse-engineered from captured Iranian Shahed-136 technology and pre-positioned in the region months before the strikes began.
LUCAS drones represent a paradigm shift: AI-guided, expendable, and deployable by catapult, rocket-assist, or mobile ground vehicle. They don't need a pilot, a runway, or a satellite link to find their target. This was their first confirmed use in combat.
AI for Target Identification & Battle Simulation
According to Wall Street Journal reporting, U.S. Central Command used large language models (LLMs) — specifically Anthropic's Claude, operating through the Palantir defence platform — to identify targets and run combat simulations ahead of the strikes. Claude was, at the time, the only advanced model cleared for use on classified military networks, and had reportedly already been employed in the January operation to capture Venezuelan President Nicolás Maduro.
The use of AI for target generation is not new. Israel previously deployed systems like Habsora (for identifying buildings of military interest) and Lavender (for flagging suspected operatives using pattern-of-life analysis) during the Gaza conflict. But Epic Fury may represent the most extensive integration of general-purpose AI into a large-scale conventional military operation by the United States.
AI-Powered Air Defence
On the defensive side, AI was equally critical. The U.S. deployed Patriot missiles, THAAD batteries, and ship-launched Standard Missiles — all systems that rely on AI-assisted threat detection, tracking, and interception. Israel's Iron Dome, which uses AI to detect, classify, and autonomously intercept incoming threats, was activated as Iran launched retaliatory missile and drone salvos. The UAE reported defending against 137 missiles and 209 drones within the first 24 hours alone.
The Anthropic–Pentagon Showdown
Perhaps the most extraordinary subplot of Operation Epic Fury isn't about what happened on the battlefield — it's about what happened in Washington 19 hours before the first strike.
Anthropic signs a $200 million contract with the Pentagon. Claude becomes the only advanced AI model on classified military networks, deployed via Palantir.
The Pentagon demands that Anthropic make Claude available for "all lawful purposes," without restrictions, and sets a compliance deadline. Anthropic insists on two guardrails: no fully autonomous weapons and no mass domestic surveillance.
The deadline passes and Anthropic refuses to comply. Trump orders all federal agencies to cease using Anthropic's technology, and Defence Secretary Hegseth designates Anthropic a "supply chain risk to national security."
Operation Epic Fury begins. Reports indicate Claude was still used for target identification and simulation despite the blacklisting hours earlier.
The U.S. government was simultaneously using an AI system to plan its largest military operation in decades — while officially declaring that same AI system a threat to national security — because its creators refused to remove safety guardrails.
Anthropic CEO Dario Amodei stated that the company "cannot in good conscience" allow unrestricted military use. The Pentagon's response, as one commentator wryly noted, was to launch the airstrikes anyway. The government has since begun transitioning to OpenAI as an alternative provider, though it remains unclear whether the same demands are being made of the new vendor.
What This Means Going Forward
Operation Epic Fury is a watershed moment for military AI — not just for the technology deployed, but for the questions it forces us to confront:
Speed vs. oversight. AI enables the compression of decision cycles from hours to seconds. In the opening salvo of Epic Fury, autonomous drones, AI-selected targets, and AI-guided interceptors operated at a tempo no human command chain could match alone. But faster isn't always safer — the risk of algorithmic escalation, where machines accelerate a crisis faster than humans can intervene, is now a concrete operational reality.
Who governs the algorithm? The Anthropic–Pentagon dispute exposed a fundamental question: once the military buys an AI tool, should the company that built it have any say in how it's used? The Pentagon says no. Anthropic — backed by hundreds of employees from Google and OpenAI — says there must be limits. This debate will define AI governance for a generation.
The democratisation of autonomous warfare. LUCAS drones at $35,000 apiece, built from reverse-engineered adversary technology, signal a future where sophisticated autonomous weapons are cheap and abundant. The barrier to entry for AI-enabled warfare is dropping rapidly.
"It is only a matter of time before drones are fighting drones, attacking critical infrastructure and targeting people fully autonomous — all by themselves. No human involved." — President Volodymyr Zelenskyy, addressing the UN General Assembly, Sept 2025
Operation Epic Fury will be studied for decades — as a military operation, as a geopolitical turning point, and as the moment AI moved from supporting role to lead actor on the modern battlefield. The algorithms are at war. The question now is whether humans still hold the reins.