
US Air Force Experiment Reveals AI’s 400-Fold Speed in Generating Military Attack Plans, But Highlights Critical Limitations

Washington, D.C. – September 29, 2025 – A groundbreaking U.S. Air Force experiment has demonstrated that artificial intelligence can produce detailed military attack plans at speeds 400 times faster than human operators, underscoring the transformative potential of AI in modern warfare while exposing persistent flaws that demand human oversight. Conducted as part of the second Decision Advantage Sprint for Human-Machine Teaming (DASH-2), the trial revealed AI’s ability to generate 10 viable Courses of Action (COAs) in mere seconds, compared to the 16 minutes required by human teams to develop just three. However, many AI-generated plans contained subtle errors, such as mismatched sensors for adverse weather conditions, emphasizing that full autonomy remains unfeasible in high-stakes combat scenarios.

The DASH-2 exercise, hosted by the 805th Combat Training Squadron at its unclassified facility in downtown Las Vegas in late July, tasked participants with devising COAs for striking specified targets using predefined aircraft and weaponry mixes. Human staff, relying on traditional methods, averaged one COA every 5.3 minutes, while AI systems churned out 1.25 plans per second—a dramatic leap from the inaugural DASH-1 wargame earlier in 2025, where AI accelerated planning by a factor of seven without increased error rates. Brigadier General Robert Claude, the Space Force’s representative on the Air Force’s Advanced Battle Management System (ABMS) Cross-Functional Team, shared these findings at the Air Force Association’s annual Air, Space, and Cyber Conference, noting the results’ eye-opening implications for joint operations.
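The headline 400-fold figure follows directly from the two rates quoted above. A back-of-the-envelope check, using only the article’s own numbers (illustrative arithmetic, not official Air Force methodology):

```python
# Reported rates from the DASH-2 exercise.
human_minutes_per_coa = 5.3        # human staff: one COA every 5.3 minutes
ai_coas_per_second = 1.25          # AI systems: 1.25 plans per second

# Convert the human rate to COAs per second for a like-for-like comparison.
human_coas_per_second = 1 / (human_minutes_per_coa * 60)

speedup = ai_coas_per_second / human_coas_per_second
print(f"Speedup: {speedup:.1f}x")  # roughly 400x, consistent with the headline
```

The ratio works out to just under 400, so the widely quoted "400 times faster" is a rounded version of this comparison.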

This rapid prototyping, completed in a two-week coding sprint involving six unnamed software vendors, integrated AI microservices to ingest battlefield data and rank effectors—such as missiles or drones—for optimal target engagement. The simulations incorporated multi-domain assets, including Army Terminal High Altitude Area Defense batteries, Navy guided-missile destroyers, and cyber tools, challenging participants to navigate complex, cross-service kill chains. Preliminary analyses indicated AI not only boosted output volume by 30 times but also enabled simultaneous execution of multiple kill chains, providing commanders with a broader array of options under time pressure. Yet the technology’s haste came at a cost: the errors were not obvious blunders, such as assigning tanks to aerial sorties, but insidious flaws that could undermine mission success in real-world fog-of-war conditions.

These outcomes align with broader U.S. military efforts to embed AI within Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) frameworks. Over the past decade, AI has enhanced data fusion, tasking, collection, processing, exploitation, and dissemination (TCPED) cycles, yielding more reliable intelligence analyses and predictive warnings. The DASH series, part of the Department of Defense’s Combined Joint All-Domain Command and Control (CJADC2) initiative—rebranded under the current administration as a cornerstone of the “Department of War”—aims to modernize decision-making by breaking down command-and-control processes into 52 discrete warfighter choices, each ripe for AI augmentation. Colonel Jonathan Zall, ABMS Capability Integration chief, described the exercise as proof that human-machine teaming has transitioned from concept to capability, fusing operator expertise with algorithmic velocity to forge “decision advantage” in contested environments.

The experiment’s tempered optimism echoes global debates on AI’s battlefield role. At the 2025 Paris AI Summit, stakeholders from defense industries, military structures, and human rights organizations grappled with ethical integration, concluding that fully autonomous lethal systems remain prohibited due to AI’s inability to reliably distinguish combatants from civilians. Experts stress verification loops where humans validate AI outputs, particularly in targeting, to mitigate risks like algorithmic bias or “hallucinations.” Colonel John Ohlund, ABMS director, highlighted upcoming DASH-3 at Nellis Air Force Base’s Shadow Operations Center, which will probe AI’s capacity to quantify risks, opportunities, and resource trade-offs in COA generation.

As peer adversaries like China and Russia accelerate AI militarization, the U.S. Air Force’s iterative sprints underscore a pragmatic path: Leverage AI for speed and scale, but anchor it with human judgment to ensure viability. This hybrid model could redefine operational tempo, yet it demands rigorous data integration and extended development cycles beyond exploratory prototypes. With DASH-3 underway, the service edges closer to operationalizing these tools, potentially reshaping joint and coalition warfare by 2030.

For further details on the DASH series, refer to official Air Force releases and Breaking Defense coverage.

LabNews Media LLC