Abstract
In January 2026, the United States deployed a commercial artificial intelligence model on classified networks during the raid to capture Venezuelan President Nicolás Maduro, the first confirmed use of a large language model in a kinetic military operation. Weeks later, during Operation Epic Fury, AI was used at theatre scale for intelligence fusion, target identification, and multi-domain synchronisation across strikes on Iran. On the night of 28 February, an Iranian drone struck Al Minhad Air Base in the UAE, where approximately 80 Australian Defence Force personnel were stationed. Australia did not officially participate in the initial campaign. This essay argues that the operational deployment of AI in two major combat operations within 60 days has rendered the ADF’s current AI integration timelines obsolete and exposed a critical interoperability gap with our primary ally. Every ADF member now has an urgent professional obligation to understand the technology shaping the battlespace in which they will operate.
Inside the wire
On the night of 28 February 2026, a Shahed-type one-way attack drone launched by the Islamic Revolutionary Guard Corps penetrated United Arab Emirates airspace and detonated within the perimeter of Al Minhad Air Base. Camp Baird, the Australian Defence Force’s (ADF) Headquarters Middle East, sits inside that base. Approximately 80 to 100 ADF personnel were on the ground: logistics specialists, air operations support staff, intelligence coordinators. All of them sheltered. All of them survived.
The drone was one component of Iran’s Operation True Promise IV, a retaliatory barrage of over 1,200 loitering munitions and 770 ballistic missiles launched across the Persian Gulf at targets in Israel, the UAE, Qatar, Kuwait, and Bahrain. Debris from intercepted missiles damaged the Burj Al Arab and hotels on Palm Jumeirah, killing foreign nationals in Dubai.
The strikes that provoked this retaliation, Operation Epic Fury, were conducted by the United States and Israel. On the first day the US-Israeli joint operation killed Iran’s Supreme Leader, destroyed ballistic missile infrastructure, and sank Iranian naval vessels. The Australian Government explicitly supported the strategic objectives of the United States. Prime Minister Albanese, Defence Minister Marles, and Foreign Minister Wong issued a joint statement on 28 February: ‘We support the United States acting to prevent Iran from obtaining a nuclear weapon and to prevent Iran continuing to threaten international peace and security.’
Australia did not participate in the strikes. Australia’s troops were targeted anyway.
That paradox is confronting enough. But the deeper issue is not the drone. It is what powered the campaign that provoked it.
Sixty days, two operations, one threshold crossed
On 3 January 2026, United States special operations forces conducted a massive raid on Caracas, capturing Venezuelan President Nicolás Maduro. In the weeks that followed, reporting by the Wall Street Journal, Axios, and Reuters confirmed that Anthropic’s Claude, a commercial large language model (LLM), had been deployed on classified networks via Palantir Technologies during the active operation. This was the first confirmed use of a commercial AI model in a classified, kinetic military operation.
The specific role of Claude remains classified, but system architecture analysis indicates it functioned as an intelligence synthesiser: an embedded, trusted system for decision and action superiority. Traditional intelligence cycles – gathering satellite imagery, intercepting communications, synthesising signals and human intelligence – typically require hours of human analysis, despite decades of automation, to formulate an actionable briefing for a joint task force commander. Palantir’s automated agent loops bypassed this entirely, continuously feeding real-time battlefield data into the model and returning instant pattern analysis, translation, and threat prediction. The result was a compression of the sensor-to-commander timeline from hours to minutes, and in discrete instances, to what defence analysts project as sub-90-second decision loops.[1] That is the 90-second war: not a metaphor, but the analytical reality of an Observe, Orient, Decide, Act (OODA) loop collapsing to a speed that traditional human-centric command structures are unable to match. At that tempo, a ‘human in the loop’ is no longer a decision-maker. They are a bottleneck. The operational reality is a shift to ‘human on the loop’: a commander who supervises and retains the authority to intervene, but who can no longer meaningfully originate every decision. Western military doctrine has not yet reconciled itself to this fact. The battlespace already has. Our major ally has completed this revolutionary shift in the speed of applying military power; we remain interested observers.
Less than eight weeks later, Operation Epic Fury demonstrated AI employment at theatre scale. United States Central Command (CENTCOM) used AI models to process real-time sensor data, prioritise high-value targets, run operational simulations, and synchronise kinetic strikes with cyber and space effects. Dr Tehilla Shwartz Altshuler of the Institute for National Security Studies described this as the moment AI moved from ‘back-office analytical tool to operational embed in battlefield decision-making.’[2]
The operational tempo was staggering. AI systems recommended over 1,000 discrete targets in the first 24 hours; approximately 900 strikes were executed within the first 12 hours alone.[3] By 11 March, Admiral Brad Cooper confirmed coalition forces had struck more than 5,500 targets inside Iran.[4] By Day 13, that figure had reached 15,000.[5] The mathematical ceiling for human review at the opening tempo was under 48 seconds per target. Before kinetic strikes hit, cyber operations compromised Iranian state media and a religious calendar application with over five million downloads, delivering targeted psychological operations directly to Iranian citizens. Joint Chiefs Chairman General Dan Caine confirmed that Space Command and Cyber Command acted first, layering non-kinetic effects to disrupt, degrade, and blind Iranian air defences. The orchestration of these multi-domain effects at that speed is only possible with AI-enabled battle management.
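The arithmetic behind that ceiling is simple enough to verify. A back-of-envelope check, using only the strike figures cited above, shows where the sub-48-second number comes from:

```python
# Back-of-envelope check of the human-review ceiling implied by the
# reported Epic Fury opening tempo (figures from the cited reporting).

strikes_first_12_hours = 900          # strikes executed in the opening 12 hours
seconds_in_12_hours = 12 * 60 * 60    # 43,200 seconds

# If every target received serial human review, this is the maximum
# average time available per target before the queue falls behind.
seconds_per_target = seconds_in_12_hours / strikes_first_12_hours
print(f"{seconds_per_target:.0f} seconds per target")  # 48 seconds
```

Forty-eight seconds is the ceiling, not the actual review time: any target that takes longer forces another to receive less, or none at all.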
Between these two operations, a highly public dispute erupted between the Pentagon and Anthropic over the model’s safety guardrails. Anthropic maintained two non-negotiable restrictions: no mass domestic surveillance, and no fully autonomous weapons. When the company refused to remove these restrictions, President Trump ordered all federal agencies to cease using Anthropic technology on 27 February 2026, one day before Epic Fury commenced.[6] Secretary of War Pete Hegseth designated Anthropic a ‘supply chain risk to national security’, a classification previously reserved for foreign adversaries. Yet Claude remained operational during the opening strikes. Pentagon officials admitted that disentangling the model from the Maven architecture would take three to twelve months.[7] Anthropic has since filed a civil lawsuit alleging unconstitutional retaliation, while OpenAI has announced a deal to replace Anthropic in classified environments.[8] The six-month phase-out window means Anthropic’s architecture remains embedded in active combat operations today. OpenAI CEO Sam Altman subsequently told employees regarding military applications: ‘You do not get to make operational decisions.’
The underlying message from the United States defence establishment is unambiguous: operational speed and decision advantage will not be sacrificed for civilian-mandated safety frameworks. AI is in the kill chain. The debate about whether it should be is already over. How it operates in the kill chain is likely to differ between coalition partners.
The gap
For the ADF, these events expose a capability and literacy gap that is no longer theoretical. It is measurable. If the ADF cannot operate at allied tempo, it loses meaningful sovereignty, even if it retains legal authority.
Australia’s Defence Artificial Intelligence Centre (DAIC) was established in July 2024 to coordinate AI adoption across Defence. In its first year of operation, the DAIC delivered a Defence AI Playbook, a Defence AI Register, an AI Standards Profile, accountable-officer training for senior leaders, and broad user training. This is important foundational work. It is also the kind of work an organisation does when it believes it has time.
It does not.
In the same month that the Advanced Strategic Capabilities Accelerator awarded $40 million across 14 ‘decision advantage’ AI projects in Canberra, the United States used a commercial AI model to help capture a sitting head of state. The GenAI.mil platform, launched in December 2025, reached one million US Defence users within its first month, providing frontier AI capability across both classified and unclassified networks. The ADF has no equivalent. Enterprise AI adoption within Defence remains concentrated almost entirely in administrative applications: drafting procurement contracts, streamlining citizen services, optimising budget management. The gap between Microsoft Copilot and AI-enabled intelligence fusion on a classified operational network is not incremental. It is a chasm. And it is precisely the chasm that separates the ADF’s current posture from the capability the United States deployed in combat. Until it is closed, the ADF cannot conduct combined operations at its ally’s speed.
The workforce timelines are similarly dislocated. The Defence Innovation Lifecycle Program targets 80 per cent baseline AI literacy across the defence workforce by 2030. The Royal Australian Navy’s RAS-AI Strategy 2040 aims to equip the entire naval workforce with foundational AI knowledge by 2040. These were reasonable targets when they were set. They are no longer feasible. The United States has already deployed AI in two major combat operations. A 2030 literacy target and a 2040 workforce aspiration are not plans; they are admissions that the institution has not yet grasped the speed at which the environment has changed. Stability is no longer available, and the continuous enterprise adaptation that must replace it remains elusive.
Coalition drag
The AI capability gap becomes a strategic liability the moment Australia operates alongside the United States. The ADF’s value to the alliance rests on interoperability: the ability to plug seamlessly into US-led operations and contribute meaningfully to combined effects. If the US is running Combined Joint All-Domain Command and Control (CJADC2) on AI-accelerated decision cycles, and the ADF is relying on manual intelligence synthesis and traditional staff processes, Australia does not multiply the force. It slows it down.
This is not hypothetical. CJADC2 is transitioning to a data-centric architecture where security controls are applied to the data itself, not the network. Every data object must carry standardised metadata tags. An AI algorithm reads the user’s digital profile, determines their access, and instantly decides what they can see. If an Australian sensor generates a track without US-compliant metadata, the CJADC2 algorithms will discard it as untrusted data. The Australian contribution becomes invisible to the coalition.
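The gating logic described above can be sketched in a few lines. This is an illustrative toy, not the actual CJADC2 implementation; every field name and rule here is an assumption made for the example:

```python
# Illustrative sketch of data-centric security: access is decided per data
# object by comparing its metadata tags against the requesting user's
# digital profile. All schema details below are hypothetical.

REQUIRED_TAGS = {"classification", "origin", "releasability"}

def is_trusted(track: dict) -> bool:
    """A track missing the mandated metadata tags is discarded as untrusted."""
    return REQUIRED_TAGS.issubset(track.get("tags", {}))

def visible_to(track: dict, user: dict) -> bool:
    """The access decision attaches to the data object, not the network."""
    if not is_trusted(track):
        return False
    tags = track["tags"]
    return (user["clearance"] >= tags["classification"]
            and user["nation"] in tags["releasability"])

# A compliant Australian track is visible; an untagged one simply vanishes.
compliant = {"tags": {"classification": 2, "origin": "AUS",
                      "releasability": {"USA", "AUS"}}}
untagged = {"tags": {"origin": "AUS"}}
analyst = {"clearance": 3, "nation": "USA"}
print(visible_to(compliant, analyst))  # True
print(visible_to(untagged, analyst))   # False
```

The point of the sketch is the failure mode: the untagged track is not rejected with a warning a human might catch; it is silently filtered out before any analyst or algorithm ever sees it.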
The ADF is investing heavily in the systems that must eventually interface with this architecture. AIR 6500 explicitly relies on AI and machine learning to fuse sensor data across domains. LAND 200 has selected the SitaWare platform, widely used by Five Eyes partners, with open architecture designed for future AI integration. The Google Australia sovereign cloud agreement signed in December 2025 provides the air-gapped hyperscale infrastructure necessary for classified AI processing. These are the right investments. They are also incomplete. The gap between ‘designed for future AI integration’ and ‘operational under fire’ is precisely the gap that Epic Fury exposed.
AUKUS Pillar II is the primary vehicle for closing this gap. The Maritime Big Play exercise in February 2026 tested autonomous undersea capabilities across all three AUKUS nations. Talisman Sabre 2025 included AI-enabled air battle management experimentation with US forces. But experimentation is not operational deployment. The United States Studies Centre has warned that Pillar II initiatives remain ‘stove-piped’ and that elements of the Australian bureaucracy are hesitant about the maturity of AI technologies. The events of January to March 2026 prove that this technology is not immature. It is combat tested.
The consequences of AI-accelerated targeting are not confined to Western countries. During Epic Fury, a Chinese geospatial startup, MizarVision, used automated object detection algorithms to identify US F-22 Raptors and allied air defence postures from commercial satellite imagery within hours of acquisition.[9] That intelligence appeared to align temporally with Iranian retaliatory targeting sequences. AI is not solely a Western advantage; it is a ubiquitous surveillance mechanism available to any actor with access to commercial imagery and a capable algorithm. Separately, Human Rights Watch has confirmed that the AI-directed strike on the Shajareh Tayyebeh Primary School in Minab on 28 February killed at least 48 children, struck by precision-guided munitions consistent with a deliberately selected target rather than a misfire.[10] The ICRC has condemned the attack as a violation of International Humanitarian Law.[11] These are the stakes of algorithmic warfare at machine speed: the same targeting architecture that enables 15,000 strikes in 13 days also creates the conditions for catastrophic error at a tempo that precludes meaningful human review.
The interoperability question also carries a harder edge. Australian joint facilities and regional sensor networks contribute to the same allied intelligence architecture that US AI models draw on to generate targets and fuse operational pictures. If Australian data feeds into CJADC2, then Australia is functionally inside the AI-enabled kill chain regardless of its diplomatic position on any given operation. The drone that struck Camp Baird did not audit Australia’s policy settings. Adversaries do not distinguish between the nation that launched the strike and the nation whose sensors helped build the picture. This does not argue for disengagement from the alliance. It argues for a rigorous framework of risk-aware employment. No commander can audit the internal logic of a frontier neural network in real time. That is not possible even for the engineers who built it. But commanders can, and must, understand the system’s operational boundaries: what it has been trained on, where it hallucinates, how it performs under adversarial conditions, and when professional judgment demands overriding its output. Feeding sovereign sensor data into an algorithm whose failure modes you have not examined is an abdication of command responsibility. Sovereignty in the AI era is not about comprehending every parameter. It is about knowing when to trust the machine and when to trust the soldier.
What this demands
Colonel Tom McDermott's 2025 Cove article ‘Hello Niner+, Take Over’ explored whether a battlegroup commander might one day team with an AI assistant. That question is now behind us. The United States has already deployed AI in command roles during active operations. The question for the ADF is not whether to engage with this technology, but whether we will do so fast enough to remain a credible coalition partner and maintain sovereign command authority in an AI-enabled battlespace.
Three things must change
Change 1: AI literacy must be treated as a core professional competency. Not a 2030 aspiration. Not a COVE+ elective. A professional obligation on par with weapons handling, physical fitness, or the capacity to write a military appreciation. Every officer, NCO, and soldier deploying into a coalition environment needs to understand, at a minimum, what AI can and cannot do, how to evaluate AI-generated outputs, and when to override them. In practice, this means a platoon commander using AI-generated intelligence summaries on exercise should be trained to identify when the output conflicts with ground truth, just as they are trained to identify when a map does not match the terrain. It means an S2 receiving an AI-fused threat assessment in a coalition headquarters must know the right questions to ask about the data sources, the model’s confidence level, and its known failure modes. The profession of arms has always demanded that its members master the tools of their era. AI is the defining tool of this one.
Change 2: AUKUS Pillar II AI and autonomy programs must accelerate from experimentation to operational deployment. The Maritime Big Play and Talisman Sabre exercises are valuable. They are not sufficient. Australia needs classified AI capability on operational networks, interoperable with CJADC2, before the next coalition operation, not after it. The Google sovereign cloud infrastructure and the AIR 6500 AI architecture provide the technical foundation. What is missing is the institutional urgency to field these capabilities at the speed the strategic environment now demands.
Change 3: Every ADF member must understand that they are already operating in an AI-enabled battlespace. The 80+ Australians at Camp Baird on the night of 28 February were inside the blast radius of an AI-accelerated campaign. They did not need to use AI themselves. The technology shaped their operating environment regardless. Australia’s deepening integration into Epic Fury’s operational envelope confirms this. The Government deployed an E-7A Wedgetail early warning aircraft to the Gulf to secure allied airspace. The Prime Minister confirmed that Royal Australian Navy submariners were serving aboard a United States Navy nuclear submarine during combat engagements in the theatre.[12] Concurrently, the ADF has accelerated validation of its own autonomous systems: the Autonomous Tactical Light Armour System Collaborative Combat Vehicle has been successfully validated, and the third generation of the MQ-28A Ghost Bat is now fully weaponised and operational.[13] These are encouraging signals. They are also confirmation that the ADF recognises it must adapt to the algorithmic warfare doctrines that Epic Fury has validated. The next coalition operation, whether in the littoral approaches of the Indo-Pacific or another theatre entirely, will almost certainly involve AI-enabled command and intelligence systems. The ADF’s own systems may or may not be AI-enabled by then. The adversary’s systems, and our primary ally’s systems, will be.
The sub-90-second intelligence cycle demonstrated in Venezuela, and the theatre-scale algorithmic acceleration that struck 15,000 targets in 13 days during Operation Epic Fury, represent the slowest this technology will ever be.
Frontier AI capability is doubling approximately every 90 days. The question for the ADF is not whether to adapt. It is whether we can adapt at the speed the technology demands. The profession of arms has never had the luxury of waiting for change to become comfortable, and this change is coming at machine speed. What did you do today to be ready?
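The compounding implication of that doubling rate is worth making explicit. Taking the 90-day figure at face value (it is a projection, not a measured constant), a year of institutional delay is not a linear gap:

```python
# Compounding implication of a 90-day doubling time. The doubling figure
# is the essay's stated projection; the arithmetic simply follows from it.

doubling_days = 90
days_per_year = 365

# Capability multiplier accrued over one year of standing still:
multiplier = 2 ** (days_per_year / doubling_days)
print(f"~{multiplier:.0f}x in one year")  # ~17x
```

Under that assumption, every year of delay widens the gap by an order of magnitude, which is why 2030 and 2040 targets read as admissions rather than plans.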
End Notes
[1] DebugLies, 'Capturing Emerging Technology Benefits Without Catastrophic Risks,' 5 March 2026. Defence analysts project 'sub-90-second decision loops' for AI-enabled multidomain precision warfare. Neither the Pentagon nor CENTCOM has publicly validated a precise 90-second metric as a measured statistical reality for Epic Fury.
[2] Dr Tehilla Shwartz Altshuler, 'AI-first' warfare: America's algorithmic edge in Operation Epic Fury – opinion, Jerusalem Post, 3 March 2026.
[3] Homeland Security Today, 'Algorithmic Warfare in the Iran Conflict: Operation Epic Fury and Dawn of the AI Battlefield,' March 2026.
[4] Gulf News, '5,500 targets inside Iran, including 60+ ships, eliminated: US commander,' 11 March 2026.
[5] Pentagon Briefing, Secretary of War Pete Hegseth and Chairman of the Joint Chiefs General Dan Caine, 16 March 2026. Confirmed 15,000 targets struck within 13 days.
[6] Congressional Research Service, 'Pentagon-Anthropic Dispute over Autonomous Weapon Systems: Potential Issues for Congress,' March 2026; PBS NewsHour, 'Trump orders federal agencies to stop using Anthropic tech over AI safety dispute,' February 2026.
[7] Defense One, 'Pentagon's war on Anthropic based on dubious legal thinking and ideology, not real risk, sources say,' March 2026.
[8] CBS News, 'Trump order cutting ties with Anthropic likely coming later this week,' March 2026; Military Times, 'Pentagon says it is labeling Anthropic a supply chain risk effective immediately,' 6 March 2026.
[9] Defence Security Asia, 'Operation Epic Fury: Chinese AI Exposes F-22, THAAD With Satellite Intelligence,' March 2026.
[10] Human Rights Watch, 'US/Israel: Investigate Iran School Attack as War Crime,' March 2026.
[11] International Committee of the Red Cross, 'Statement on escalating military conflict in Iran and the Middle East,' March 2026.
[12] Defense News, 'Australia deploys early-warning aircraft to the Middle East amid Iran attacks,' 11 March 2026.
[13] Australian Defender Digital, March 2026. ATLAS CCV validation and MQ-28A Ghost Bat third generation weaponisation confirmed.