
How AI Changed Modern Warfare — A Medical Student's Perspective on the 2026 Iran War
Summary: On February 28, 2026, the United States and Israel launched a large-scale military attack on Iran. The conflict has been called "the first AI war." The Maven Smart System, Anthropic's Claude, and Israel's own AI systems played central roles, from identifying targets to executing strikes. This article analyzes these military AI technologies in detail, and records how one student, learning the life-saving discipline of medicine at a Bulgarian medical school, watched the evolution of technology that takes lives.
Introduction: Watching a War Unfold from a NATO Country
On February 28, 2026, I woke up to the news that the United States and Israel had launched a massive joint military operation against Iran. I'm a 33-year-old former IT engineer from Tokyo, now studying at a medical university in Bulgaria — a NATO member state that borders Turkey, where Iranian missiles have been intercepted four times in the past month.
For me, the Iran war sits at the intersection of two worlds I inhabit: the world of technology, which I spent years working in, and the world of medicine, which I'm now learning to enter. The former is being used to take lives at machine speed. The latter exists to save them. This article is my attempt to make sense of that contradiction.

Part 1: The Maven Smart System — From Drone Footage to Kill Chains
The Origins
Project Maven began in April 2017 as a Pentagon initiative to solve a deceptively simple problem: surveillance drones were generating more video footage than human analysts could process. A single Predator drone could produce 900 hours of footage per mission. The initial budget was $70 million, and the AI's job was straightforward — detect and classify objects like vehicles, weapons, and people in drone video using convolutional neural networks trained on over four million labeled military images.
Google was the original contractor. That partnership ended dramatically in 2018 when approximately 4,000 Google employees signed a letter demanding withdrawal, and about a dozen resigned. Google published AI principles prohibiting weapons development and walked away. At the time, it felt like a clear moral line had been drawn.
Looking back from 2026, that line seems almost quaint.
Evolution into a Battlefield Intelligence Engine
After Google's departure, Palantir Technologies took over and transformed Maven into the Maven Smart System (MSS). In May 2024, Palantir won a $480 million contract to expand it. By 2025, that ceiling was raised to $1.3 billion through 2029.
The modern MSS operates through three integrated layers:
Layer 1 — Data Ingestion: Over 150 sources feed into the system: satellite imagery, drone video, signals intelligence, communications intercepts, vessel tracking, infrared sensors, and geolocation metadata.
Layer 2 — AI Analysis: Computer vision scans imagery for targets. Pattern recognition flags deviations from historical baselines. Large language models process text-based intelligence — reports, translated communications, and historical records.
Layer 3 — Decision Support: A unified interface where yellow-outlined boxes mark potential targets and blue boxes mark friendly forces or no-strike zones. Fire decisions can be transmitted directly to weapons systems via tactical data links.
The performance gains are staggering. Fire-support coordinators can now approve 80 targets per hour with AI assistance, compared to 30 without. Targeting data processing dropped from 12 hours in 2020 to under one minute. During exercises, MSS enabled 20 soldiers to match the targeting output that required 2,000 people during the Iraq War.
By 2025, over 20,000 military personnel were using MSS across nearly every US combatant command. NATO adopted its own version, MSS NATO, calling it "NATO's first AI-enabled warfighting command and control system."
Part 2: Operation Epic Fury — AI Goes to War
The First 24 Hours
On February 28, 2026, the US operation codenamed "Epic Fury" and Israel's parallel "Roaring Lion" struck approximately 900 targets in the first 12 hours. Supreme Leader Ali Khamenei was killed in the initial wave along with dozens of senior officials. The targets spanned Iranian leadership, missile sites, air defenses, naval assets, and nuclear facilities.
Cameron Stanley, the DoD's Chief Digital and AI Officer, later confirmed that MSS had consolidated "eight or nine systems into a single visualization tool," stating: "We've gone from identifying the target to now coming up with a course of action, to now actioning that target, all from one system. This is revolutionary."
Anthropic's Claude on the Battlefield
This is where the story becomes deeply uncomfortable for me. I use Claude — the AI made by Anthropic — every day. I used it to build this blog. I use it to study. And according to the Wall Street Journal, the same underlying technology was deployed on classified Pentagon networks to synthesize intelligence, prioritize targets, and simulate battle scenarios during the strikes on Iran.
Anthropic had tried to draw ethical lines. The company maintained prohibitions against using Claude for mass surveillance or fully autonomous weapons. When the Pentagon demanded unrestricted access, CEO Dario Amodei refused. One day before the Iran strikes began, the Trump administration designated Anthropic a "supply chain risk" — a classification historically reserved for foreign adversaries.
Yet Claude continued operating in the war. The technology was too deeply embedded in classified military networks to remove overnight. Palantir CEO Alex Karp confirmed on March 12: "The Department of War is planning to phase out Anthropic; currently, it's not phased out."
The tool I use to study medicine was simultaneously being used to help select targets in a war that has killed over 1,900 people.
Israel's AI Arsenal
Israel deployed its own AI systems. "The Gospel" generates targets among buildings and structures. "Lavender" analyzes behavioral patterns to flag individuals. Israeli cyber units hacked Iran's BadeSaba prayer app — with over five million downloads — to broadcast anti-regime messages. Footage from hacked traffic cameras was fused with billions of other data points to build target banks.
Evidence from Israel's earlier Gaza operations showed that AI systems like Lavender marked approximately 37,000 Palestinians as suspected militants, and intelligence officers reportedly reviewed AI targeting recommendations in as little as 20 seconds. "Human in the loop" had become, in practice, a rubber stamp.
Part 3: The Numbers That Keep Me Awake
As of late March 2026:
- Iran: Over 1,900 killed, including approximately 170 schoolgirls at the Shajareh Tayyebeh elementary school in Minab — struck on the first day of the war
- Lebanon: Over 1,100 killed since Israel intensified strikes against Hezbollah on March 2
- Israel: 19 civilians killed, over 5,700 injured
- US military: 13 service members killed
- Gulf states: Over 30 killed, many of them migrant workers
- Allied forces have struck over 15,000 targets — a pace of 1,000+ per day that experts say would be physically impossible without AI assistance
- Iran's internet has been blacked out for over 30 days, cutting off 90 million people from information
The Minab school strike haunts me. Whether it resulted from outdated targeting data, an algorithmic error, or a deliberate calculation, it demonstrates what happens when machines help select targets at scale. AI doesn't feel the weight of those lives. It processes coordinates.
Part 4: The Strait of Hormuz — Where Technology Meets Geography
One of the most striking aspects of this war is how Iran has leveraged geography against technological superiority. The Strait of Hormuz, through which roughly one-fifth of the world's oil passes, has been effectively closed to commercial shipping since the war began.
President Trump threatened to "obliterate" Iran's power plants if the strait wasn't reopened. Iran responded that it would close the strait completely and target all US and Israeli energy infrastructure in the region. The deadline for reopening the strait has been pushed to April 6.
Brent crude has surged nearly 50% to over $100 per barrel. The Nikkei 225 dropped 2,800 points in a single morning. Qatar's LNG exports have been knocked out. The International Energy Agency has called it "the largest supply disruption in the history of the global oil market."
All the AI in the world cannot change the fact that a narrow waterway, some cheap drones, and sea mines can bring the global economy to its knees.
Part 5: The Question Only Humans Can Answer
I'm studying to become a doctor. The Hippocratic tradition begins with "first, do no harm." Meanwhile, the technology I use daily is being deployed in systems designed to do maximum harm with maximum efficiency.
Small Wars Journal proposed three benchmarks for meaningful human control over military AI in March 2026:
- Can the operator explain why the AI recommends a particular target?
- Does the operator have adequate time for genuine judgment?
- Can the operator meaningfully override the system?
With kill chains compressed to under 10 minutes and over 1,000 targets struck per day, these questions become increasingly difficult to answer affirmatively.
The international regulatory framework is barely keeping up. The UN's Group of Governmental Experts on Lethal Autonomous Weapons Systems won't produce results until late 2026 at the earliest. A December 2024 UNGA resolution calling for restrictions passed with 166 votes in favor — but the United States and Russia were among five nations opposing binding constraints. Secretary-General Guterres has called autonomous weapons "politically unacceptable" and "morally repugnant."
Conclusion: The View from a Small Desk in Bulgaria
I sit at a desk in a small classroom in Bulgaria. I study chemistry, biology, and English — the building blocks of a medical education. Outside this room, AI systems are processing intelligence, generating targets, and compressing the time between "identify" and "kill" to mere minutes.
The 2026 Iran war has been called "the first AI war." What began as a 2017 project to help analysts watch drone footage has become an architecture for warfare at machine speed. The Maven Smart System, Claude, The Gospel, Lavender — these are no longer experimental tools. They are operational realities shaping who lives and who dies.
I don't have a neat conclusion. I don't think one exists. But I believe the question is no longer whether AI will be central to warfare. It already is. The question — the only one that matters — is whether humans can maintain genuine control over systems that operate faster than human judgment allows.
And whether a world that builds such systems can also build enough doctors, enough teachers, enough people who choose to heal rather than harm.
I hope so. I have to.
Yuichi is a former IT engineer from Tokyo, currently studying medicine in Bulgaria. He writes about technology, life abroad, and the places where they intersect at yuichi.blog.