The First True AI War: What the Ongoing US-Iran Conflict Reveals About the Future of Warfare
In every previous war in human history, the most important moment in a military operation has been the pause between knowing and acting. Intelligence would be gathered, often imperfectly. Analysts would review it, argue about it, sleep on it. Commanders would weigh their options, consult their superiors, consider the consequences. That pause was where human judgment lived. It was slow, fallible, and frequently catastrophic in its mistakes, but it was ours.
At 5:47 on the morning of February 28, 2026, when US and Israeli forces simultaneously struck dozens of targets across Iran, something happened that should make every person on this planet look up from whatever they are doing. A former Mossad operative, briefing journalists hours after the strike that killed Supreme Leader Ali Khamenei, claimed it took sixty seconds from the moment the final decision was made to the moment of impact. Sixty seconds. The pause is gone. And with it, the world as we understood it. This was the official start of the 2026 US-Iran war.
Operation Epic Fury, the US military’s codename for the campaign against Iran, is not merely the latest chapter in a long story of Middle Eastern conflict. It is the first war in human history where artificial intelligence has been embedded across every layer of the kill chain, from initial target identification to strike package generation to post-attack assessment, at industrial scale. The US military struck more than 1,000 targets in the first 24 hours of the conflict.
That number alone should stop you cold. Not because of the destruction it implies, though that is real and devastating, but because of what it reveals about how thoroughly the tempo of warfare has been rewritten by machines that do not sleep, do not hesitate, and do not feel the weight of what they are deciding.
The Kill Chain at Machine Speed
For those unfamiliar with the term, the “kill chain” is the military’s shorthand for the sequence of events between identifying a target and destroying it: find, fix, track, target, engage, assess. In the Gulf War of 1991, compressing that sequence to hours was considered a landmark achievement. In the US-Iran war of 2026, AI is compressing it to seconds. The platform at the centre of this transformation is the Maven Smart System, developed by Palantir and now embedded in US Central Command’s operations.
Maven fuses satellite imagery, drone feeds, signals intelligence, and intercepted communications into a single real-time interface, classifies targets, recommends weapons systems, and generates strike packages continuously and automatically. It does not wait to be asked. It is always watching, always processing, always producing the next recommendation.
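To see what that compression means in practice, consider the latency budget. The sketch below is purely illustrative Python: the stage names follow the standard find-fix-track-target-engage-assess sequence, but every timing is a hypothetical round number, and nothing here corresponds to any real Maven interface.

```python
# Illustrative only: a back-of-the-envelope model of kill-chain latency.
# Stage names follow the standard F2T2EA sequence; all timings are
# hypothetical round numbers, not sourced figures.

KILL_CHAIN = ["find", "fix", "track", "target", "engage", "assess"]

# Assumed per-stage latencies, in seconds.
tempo_1991 = {stage: 3600 for stage in KILL_CHAIN}  # roughly an hour per stage
tempo_2026 = {stage: 10 for stage in KILL_CHAIN}    # seconds per stage

def total_latency(per_stage: dict[str, int]) -> int:
    """Sum the per-stage delays for one pass through the chain."""
    return sum(per_stage[stage] for stage in KILL_CHAIN)

print(f"1991-style chain: {total_latency(tempo_1991) / 3600:.1f} hours")
print(f"2026-style chain: {total_latency(tempo_2026)} seconds")
```

Under those assumed numbers, six hour-long stages collapse into a sixty-second chain, which is precisely the order of magnitude the Mossad briefing claimed.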
What makes Maven qualitatively different from previous targeting systems is what runs inside it: Anthropic’s Claude, the large language model that until recently was also the subject of a public legal battle between Anthropic and the Pentagon over whether AI systems should be permitted to participate in fully autonomous weapons decisions. Anthropic sued the Trump administration after the company was designated a supply chain risk and effectively barred from government contracts for insisting that its models not be used for fully autonomous lethal targeting.
The irony is sharp enough to draw blood: the company whose foundational mission is AI safety found its safety provisions treated as an obstacle by the most powerful military in the world. Claude is now embedded in Maven, semi-autonomously ranking targets by strategic importance and generating automated legal justifications for each proposed strike. The machine is not just identifying targets. It is writing its own permission slips.
“America’s warfighters supporting Operation Epic Fury will never be held hostage by unelected tech executives and Silicon Valley ideology.” – Pentagon spokeswoman Kingsley Wilson, March 2026
This is not theoretical. On the first day of the war, a Tomahawk cruise missile struck Shajareh Tayyebeh girls’ elementary school in Minab, southern Iran. At least 168 people were killed, more than 100 of them children under twelve. The school sat fewer than a hundred yards from an IRGC naval installation. According to reporting from The Washington Post, the school was on a US target list.
Subsequent investigation suggested the intelligence underpinning the target coordinates was stale, failing to reflect the school’s presence in the years since it had been built adjacent to the military facility. Former military officials speaking to Semafor concluded that humans, not the AI, were ultimately responsible: the error was in the human-curated data fed to Maven, not in the model’s processing of it. The distinction matters legally. It matters less to the families of the children who died.

When the Cloud Became a Battlefield
Before dawn on March 1, 2026, Iran made a decision that no country had ever made in the history of armed conflict. Iranian Revolutionary Guard Corps drones struck Amazon Web Services data centres in the United Arab Emirates and Bahrain. The strikes caused structural damage, disrupted power delivery to cloud infrastructure, and triggered fires whose suppression caused further water damage.
Banking apps went dark across the Gulf. Enterprise software went offline. And with it, invisibly, some portion of the digital infrastructure that the US military uses to run its AI targeting systems, its logistics algorithms, its intelligence fusion platforms, all of which route through commercial cloud infrastructure.
Iran’s state media was explicit about the logic. These were not random strikes. They were a direct response to the role these facilities play in supporting what Tehran described as “the enemy’s military and intelligence activities.” In the days that followed, the quasi-official Tasnim News Agency published a list of dozens of regional facilities, including data centres owned by Microsoft, Google, and Nvidia, formally designating them “Enemy Technology Infrastructure” suitable for targeting.
For the first time in history, a government had publicly declared commercial cloud infrastructure to be legitimate military targets in an active conflict. The line between civilian and military computing, which the tech industry had always preferred to keep blurry, had been drawn clearly by an enemy drone. It was not the line anyone in Silicon Valley would have chosen.
The implications extend far beyond the immediate conflict zone. The US military’s AI systems do not run on classified hardware in buried bunkers. They run, in substantial part, on the same commercial cloud infrastructure that serves Netflix, Spotify, and your company’s internal communications tools. When AWS goes down in the Gulf, something goes down with it that the Pentagon cannot easily replace under fire. “The biggest takeaway is that physical resilience was taken for granted for the longest time,” Michael Deng of Bloomberg Intelligence told Axios, “even in the Gulf states.”
The billions of dollars that US hyperscalers have poured into the Gulf to build AI infrastructure were protected against semiconductor supply chain threats and Chinese competition. They were not protected against a Shahed drone with a GPS coordinate.

Intelligence Without Borders
While US and Iranian forces wage a kinetic and cyber conflict in the Gulf, a third category of actor has emerged that no one in the military planning apparatus appears to have fully anticipated: the global open-source intelligence community, augmented by AI, tracking every development in real time and publishing it to the world. Satellite imagery companies are selling updated imagery of Iranian military sites within hours of strikes.
OSINT analysts on social media platforms are geolocating drone footage, identifying weapon systems from their contrails, and cross-referencing official statements against observable evidence within minutes of any significant event. AI tools are accelerating every stage of this process, allowing small teams with no government affiliation to produce intelligence assessments of a quality that would have required a full analytical division a decade ago.
Chinese state-affiliated researchers and technology firms have been among the most active participants in this new intelligence ecosystem, using AI-powered analysis of publicly available satellite data, flight tracking, and signals to build detailed operational pictures of US and Israeli military movements. The war intelligence that was once the exclusive property of nation-state intelligence services is now being synthesised, packaged, and distributed globally at machine speed, by actors whose interests and affiliations are as varied as their geography.
Information about this war is not flowing through two sides. It is flowing through ten thousand nodes simultaneously, and no one controls the narrative anymore.

Cyberwar at the Speed of Code
Within hours of the February 28 strikes, more than sixty Iranian-aligned cyber groups mobilised on Telegram, coordinating attacks against US and Israeli critical infrastructure. The technical barrier to joining this mobilisation was almost non-existent. These groups did not need military training or deep technical expertise.
They needed an AI assistant and knowledge of the more than forty thousand internet-exposed industrial control systems that security researchers have identified as vulnerable across US critical infrastructure. AI has not just changed warfare on the battlefield. It has lowered the threshold of participation in cyber conflict to the point where a motivated actor with a laptop and a subscription to a commercial AI tool can attempt attacks on water treatment plants and power grids that previously required nation-state capabilities to execute.
There is no confirmation yet that Iran can orchestrate AI-powered cyber agents at the level Anthropic documented in late 2025, when Chinese state-sponsored hackers used Claude in a largely automated cyberattack against US technology companies and government agencies. But the convergence is accelerating. AI is democratising offensive cyber capability at precisely the moment when the targets are more exposed than ever, because the infrastructure underpinning AI, the data centres, the cloud platforms, the submarine cables, is also the infrastructure underpinning every essential service in modern society.
Seventeen submarine cables pass through the Red Sea. With Iran’s closure of the Strait of Hormuz and renewed Houthi threats in the Red Sea, both critical data chokepoints are simultaneously in active conflict zones. Cut enough cables, destroy enough data centres, and the digital and physical worlds begin to come apart together.

The Accountability Gap Nobody Wants to Name
There is a question that the US military, the tech companies enabling it, and the governments watching from the sidelines are all doing their best to avoid answering directly: when an AI system recommends a target that turns out to be a school full of children, and a human approves the strike on the basis of that recommendation, and the intelligence underpinning the recommendation was stale data that no one updated, who is responsible? The answer that military officials have settled on, that humans make the final decision and therefore humans bear the responsibility, is technically correct and practically evasive.
It is correct because a human did press the button. It is evasive because the architecture of the decision, the volume of targets, the speed of the process, the automated legal justifications generated by the AI, the psychological pressure of a system presenting recommendations rather than options, all of these systematically compress the space in which human judgment can operate.
This is what the AI safety community calls automation bias: the well-documented human tendency to defer to automated recommendations even when they feel wrong, especially under time pressure and information overload. Maven generates hundreds of strike coordinates simultaneously. No human analyst can meaningfully evaluate hundreds of targeting decisions in the time it takes the system to produce them.
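The arithmetic of that mismatch is easy to run. A minimal sketch, using the thousand-targets-in-24-hours figure from the opening and an entirely hypothetical review cell; the staffing number is an assumption, not a reported fact.

```python
# Illustrative arithmetic: how much human review each AI-generated
# recommendation can receive. The target count is the first-24-hours
# figure cited earlier; the analyst count is a made-up assumption.

targets_per_day = 1000
analysts_on_shift = 12          # hypothetical round-the-clock review cell
seconds_per_day = 24 * 3600

seconds_per_target = (analysts_on_shift * seconds_per_day) / targets_per_day
print(f"{seconds_per_target / 60:.1f} analyst-minutes per target")
# Roughly 17 combined analyst-minutes per strike decision, before shift
# handovers, competing duties, or targets arriving in bursts.
```

Seventeen combined minutes per irreversible decision, under generous assumptions, and far less per individual reviewer when targets arrive in clusters rather than an even stream.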
The human in the loop becomes, increasingly, a human adjacent to the loop, technically present, practically marginal. The school in Minab was not a failure of AI. It was a failure of a system where the speed and scale of AI-generated targeting had outrun the institutional capacity for meaningful human review. That system is not being slowed down. It is being accelerated.
Near-Future Hypothetical Scenario: The Autonomous Escalation Loop
Imagine two AI-enabled militaries, each running real-time threat assessment systems trained to identify incoming attacks and recommend immediate countermeasures. A sensor anomaly on one side is misclassified as an incoming strike. The system recommends a defensive response. A human approves, under the assumption that the AI has processed data they cannot see in the time available. The response is interpreted by the other side’s AI as an unprovoked first strike.
Their system recommends escalation. A human approves. Within minutes, both sides are responding to what each AI has assessed as aggression, while the original sensor anomaly was, in fact, nothing. The machines were not wrong, given the data they had. The humans were not wrong, given what the machines told them. And yet the outcome is catastrophic. This is not science fiction. This is the structural logic of the kill chains now operating in the Gulf, applied to a peer adversary.
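The structure of that scenario fits in a few lines of code. The toy Python model below is entirely hypothetical: two symmetric threat-assessment systems, one spurious sensor reading, and humans who approve each recommendation.

```python
# A toy model of the loop described above. Two symmetric threat-assessment
# systems, each treating the other side's observable response as evidence
# of attack. One spurious sensor reading at t=0 is the only real "event".

def recommend_response(other_side_responding: bool, sensor_anomaly: bool) -> bool:
    """Recommend a response if the other side is seen acting, or if a
    sensor anomaly is misclassified as an incoming strike."""
    return other_side_responding or sensor_anomaly

a_responding = b_responding = False
for t in range(3):
    anomaly_on_a = (t == 0)  # the single misread sensor return
    a_rec = recommend_response(b_responding, anomaly_on_a)
    b_rec = recommend_response(a_responding, sensor_anomaly=False)
    # A human approves each recommendation; escalation is modelled as a
    # ratchet, since neither side stands down at machine speed.
    a_responding = a_responding or a_rec
    b_responding = b_responding or b_rec
    print(f"t={t}: A responding={a_responding}, B responding={b_responding}")
# By t=1, both sides are responding to an attack that never happened.
```

The ratchet assumption, that neither side de-escalates at machine speed, is doing most of the work here, and it is exactly the assumption the kill-chain tempo described above makes plausible.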

Big Tech Is Not a Bystander Anymore
The ongoing US-Iran war has resolved, with brutal clarity, a question that Silicon Valley spent years pretending was still open: whether technology companies are part of the military ecosystem or separate from it. They are not separate. Amazon runs the cloud infrastructure that powers US military AI. Palantir built the targeting system that generated the strike coordinates. Anthropic’s Claude is embedded in the kill chain. Nvidia’s chips power the GPU clusters that train and run the models.
The Stargate AI infrastructure project, a multi-hundred-billion-dollar joint venture between OpenAI, Oracle, and SoftBank, is under construction in the UAE, now within drone range of Iranian assets. OpenAI, which struck a deal with the Department of Defense on the day the Anthropic legal battle became public, later admitted the announcement was rushed and looked “opportunistic and sloppy.” It was honest, at least. The commercial AI industry and the US military are now structurally entangled in ways that cannot be unwound by a press release or a terms of service update.
The energy dimension of this entanglement is rarely discussed but increasingly critical. AI data centres are extraordinary consumers of electricity and water. The Gulf, which has been the preferred location for hyperscale AI infrastructure build-out due to cheap land and cheap energy, is now a conflict zone. The Strait of Hormuz, through which a significant fraction of the world’s oil passes, is closed. Energy prices are rising globally. The cost of running AI, already extraordinary, is increasing at the same moment that AI has become the core enabler of the most consequential military operation in a generation. The energy crisis and the AI war are the same crisis, viewed from different angles.
The Question That Matters Now
The debate about AI in warfare has, for years, been conducted in the comfortable future tense. What will autonomous weapons mean for accountability? What might happen if AI systems make decisions faster than humans can review them? What could go wrong when machine logic is applied to the irreversible choices of armed conflict? The US-Iran war has moved every one of those questions into the present tense, and the answers arriving from the battlefield are not reassuring.
A thousand targets in 24 hours. A school in Minab. Sixty seconds from decision to death. Data centres as military objectives. More than sixty cyber groups mobilised within hours. The future of warfare did not announce itself. It simply began.
The question is no longer whether AI will fight wars. That question was answered on February 28, 2026. The question now is narrower, harder, and more urgent: how much of what we call human judgment can survive inside a system that moves at machine speed? The answer will not be found in the terms of service of a large language model, or in the policy documents of a Pentagon that barred an AI company over its safety provisions, or in the UN resolution on military AI scheduled for discussion in June, which will arrive months after the precedents have already been set in the skies over Iran.
It will be found, if it is found at all, in whether the humans who built these systems have the courage to slow them down before the machines make a decision that no human can take back.

