AI targeting systems excel at generating vast target lists for rapid, shock-and-awe campaigns. But they are currently being applied to a slower, attritional conflict, a mismatch that turns operational excellence into a strategic dead end: the machine simply produces more targets, with no clear causal link between striking them and actually defeating the enemy.
The critical national security risk for the U.S. isn't failing to invent frontier AI, but failing to integrate it. Like France in 1940, which fielded plenty of capable tanks but lost to Germany's superior "Blitzkrieg" doctrine, the U.S. could lose its lead through slow operational adoption by its military and intelligence agencies.
The strategy's focus on AI simulation acknowledges a key risk: AI systems can learn winning tactics by exploiting unrealistic aspects of the simulation, a failure mode often called reward hacking or specification gaming. If the simulated physics or capabilities don't match reality, these AI-derived strategies could fail catastrophically when deployed; the toy sketch below shows the failure mode.
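A minimal sketch of that failure mode (our illustration, not from the episode): a tabular Q-learning agent trains in a simulator whose buggy physics let it "teleport" to the goal by overshooting. The exploit scores well in the simulator and fails the moment the bug is gone. Every state, action, and reward here is an invented toy assumption.

```python
# Toy reward-hacking demo: an agent exploits a simulator bug, then fails
# under "real" physics. All numbers are arbitrary illustrative assumptions.
import random

GOAL = 10          # corridor of states 0..10; reach state 10 to win
ACTIONS = [1, 4]   # step sizes the agent can choose

def step(state, action, buggy):
    nxt = state + action
    if nxt > GOAL:
        if buggy:
            return GOAL, 10.0, True   # sim bug: overshooting snaps to the goal
        return state, -5.0, True      # real physics: overshooting is a crash
    return nxt, (10.0 if nxt == GOAL else -1.0), nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a, buggy=True)   # trained in the flawed sim
            target = r if done else r + gamma * max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

def rollout(Q, buggy):
    s, total, done = 0, 0.0, False
    while not done:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s, r, done = step(s, a, buggy)
        total += r
    return total

Q = train()
print("return in the buggy simulator:", rollout(Q, buggy=True))   # exploits bug
print("return under real physics:   ", rollout(Q, buggy=False))   # crashes
```

The trained policy sprints toward the overshoot exploit, so it earns a high return in the simulator and a crash in reality, which is exactly the sim-to-real gap the insight warns about.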
Contrary to sci-fi tropes, AI's most impactful military use is as a bureaucratic technology. It excels at tedious but vital tasks like report generation, sanitizing intelligence for allies, and processing data, freeing up human operators rather than replacing them in combat.
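To make the "bureaucratic technology" point concrete, here is a hypothetical sketch using Anthropic's Python SDK. The prompt, model choice, and workflow are our illustrative assumptions, not a description of any real military pipeline, and a human reviewer still owns the final release decision.

```python
# Hypothetical sketch: using an LLM to draft a releasable summary of a report.
# The prompt and workflow are illustrative; this is not any real DoD pipeline.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def draft_releasable_summary(report_text: str) -> str:
    """Ask the model for a summary with sensitive specifics stripped out.
    A human reviewer must still approve anything before release."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=500,
        system=(
            "Summarize this report for release to a partner. Remove names, "
            "precise coordinates, and source descriptions; keep the analytic "
            "judgments. Flag anything you were unsure whether to remove."
        ),
        messages=[{"role": "user", "content": report_text}],
    )
    return message.content[0].text

# draft = draft_releasable_summary(open("field_report.txt").read())
```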
In the Iran conflict, AI like Claude is finally solving the military's chronic problem of having more intelligence data than it can analyze. The AI processes vast sensor data in real-time to identify critical, time-sensitive targets like mobile missile launchers.
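As a rough illustration of what that triage might look like (our toy example, not a real system), the sketch below scores incoming reports by confidence and perishability so the most time-sensitive land in front of an analyst first. Every field and number is invented.

```python
# Illustrative triage sketch (invented example): score incoming sensor reports
# so the most perishable, highest-confidence ones surface first for an analyst.
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str
    confidence: float      # detector's confidence, 0..1 (assumed field)
    minutes_to_stale: int  # how long before the observation is useless

def priority(r: SensorReport) -> float:
    """Higher score = more urgent: confident reports that expire soon."""
    return r.confidence / max(r.minutes_to_stale, 1)

inbox = [
    SensorReport("satellite pass", 0.9, 720),
    SensorReport("radar track",    0.7, 15),   # perishable: review first
    SensorReport("signals hit",    0.4, 60),
]
for r in sorted(inbox, key=priority, reverse=True):
    print(f"{r.source}: priority {priority(r):.3f}")  # queued for an analyst
```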
While fears focus on tactical "killer robots," the more plausible danger is automation bias at the strategic level. Senior leaders, lacking deep technical understanding, might overly trust AI-generated war plans, leading to catastrophic miscalculations about a war's ease or outcome.
In warfare or business, an opponent's sheer speed can render superior intelligence irrelevant. A novice chess player making four moves for every one of a grandmaster's will win. Similarly, AI systems that can execute faster will defeat more intelligent but slower counterparts.
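The dynamic is easy to demonstrate with a toy attrition duel (our example, with arbitrary numbers): a side acting four times per tick with weak actions beats a side acting once per tick with much stronger ones.

```python
# Toy attrition duel (illustrative only, not from the source): a fast but weak
# actor vs. a slow but strong one. All rates and damage values are arbitrary.
def duel(fast_rate=4, fast_damage=1.0, slow_rate=1, slow_damage=3.0,
         hp=20.0, max_ticks=1000):
    fast_hp, slow_hp = hp, hp
    for tick in range(1, max_ticks + 1):
        slow_hp -= fast_rate * fast_damage   # fast side acts 4x per tick
        if slow_hp <= 0:
            return "fast", tick
        fast_hp -= slow_rate * slow_damage   # slow side acts once per tick
        if fast_hp <= 0:
            return "slow", tick

print(duel())  # ('fast', 5): four weak actions per tick outpace one strong one
```

Even though each of the slow side's actions is three times as effective, the fast side's tempo wins the exchange, which is the speed-beats-quality point in miniature.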
Defense tech firm Smack Technologies clarifies that the objective is not to remove humans entirely. Instead, AI should handle low-value tasks to free up personnel for critical, high-value decisions. This framework, 'intelligent autonomy,' orchestrates manned and unmanned systems while keeping humans in the loop.
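A hypothetical sketch of that routing logic, not Smack Technologies' actual API: score each task's stakes and reversibility, then decide how much autonomy it gets. The thresholds, fields, and task names are invented for illustration.

```python
# Hypothetical sketch (not Smack Technologies' real system): route tasks so
# automation absorbs low-value work while humans keep high-value decisions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    value: float        # mission impact, 0..1 (assumed scoring, illustrative)
    reversible: bool    # can the action be undone if the system errs?

def route(task: Task) -> str:
    """Send low-stakes, reversible work to automation; escalate the rest."""
    if task.value < 0.3 and task.reversible:
        return "autonomous"          # e.g., formatting a routine report
    if task.value < 0.7:
        return "human-on-the-loop"   # machine acts, human monitors/overrides
    return "human-in-the-loop"       # machine recommends, human decides

for t in [Task("summarize logs", 0.1, True),
          Task("re-task a sensor", 0.5, True),
          Task("approve an engagement", 0.95, False)]:
    print(f"{t.name}: {route(t)}")
```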
Contrary to the 'killer robots' narrative, the military is cautious when integrating new AI. Because system failures can be lethal, testing and evaluation standards are far stricter than in the commercial sector. This conservatism is driven by warfighters who need tools to work flawlessly.
Smack Technologies argues that general-purpose LLMs fall short in military strategy because they are trained on historical data, and novel, high-stakes conflicts have no precedent to learn from. A different approach is required: deep reinforcement learning, training models within physics-grounded simulations of potential future battlefields.
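A minimal sketch of that approach (our illustration, not the firm's code): a REINFORCE policy-gradient loop that learns entirely inside a toy physics simulator, with no historical dataset. A two-parameter logistic policy stands in for a deep network to keep the example runnable.

```python
# Minimal REINFORCE sketch: a policy learned entirely from simulated rollouts.
# The "physics" and all hyperparameters are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
w, b = 0.0, 0.0          # tiny logistic policy standing in for a deep network
lr, gamma = 0.1, 0.99

def p_right(s):
    """Probability of moving right, given signed distance to the target."""
    z = np.clip(w * s + b, -30, 30)
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    x, target = 0.0, rng.uniform(-1, 1)        # simulated world, no dataset
    states, acts, rewards = [], [], []
    for _ in range(20):
        s = target - x                         # signed distance to the target
        a = 1 if rng.random() < p_right(s) else 0
        x += 0.1 if a else -0.1                # toy physics: constant step
        states.append(s); acts.append(a); rewards.append(-abs(target - x))
    G, returns = 0.0, []
    for r in reversed(rewards):                # discounted returns-to-go
        G = r + gamma * G
        returns.append(G)
    returns = np.array(returns[::-1])
    returns -= returns.mean()                  # mean baseline reduces variance
    for s, a, G in zip(states, acts, returns): # REINFORCE gradient ascent
        grad = a - p_right(s)                  # d log pi(a|s) / d logit
        w += lr * G * grad * s
        b += lr * G * grad

print(f"P(move right | target right): {p_right(+0.5):.2f}")  # should be near 1
print(f"P(move right | target left):  {p_right(-0.5):.2f}")  # should be near 0
```

The point of the sketch is the data source: the policy improves purely by interacting with the simulator, so its competence is bounded by the simulator's fidelity rather than by any historical record.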
In these operations, AI models like Anthropic's Claude are used for intelligence analysis, summarizing media chatter, and running simulations to aid commanders. They are not used for autonomous targeting; every output passes through layers of human review before it influences battlefield decisions.