The debate around AI in warfare often overlooks that significant autonomy already exists. Systems like the Phalanx close-in weapon system and "fire-and-forget" missiles, which operate without human supervision once activated or launched, have been standard for decades, representing a baseline of existing automation.
The military's primary incentive is to field weapons that are effective and reliable, because soldiers' lives depend on them. This inherent conservatism acts as a strong filter against deploying unproven or unpredictable AI systems, making militaries slower, not faster, to adopt bleeding-edge technology in life-or-death situations.
The requirement for human responsibility in the use of force is not a new concept created for AI. It is governed by long-standing international humanitarian law and existing military policies. These foundational legal structures apply to all weapons, from bows to AI-enabled drones, ensuring a commander is always accountable.
Intense Russian signal jamming in Ukraine renders remotely piloted drones ineffective in the terminal phase of an attack. This has created a tactical necessity for drones that can autonomously complete an engagement after losing their data link, accelerating the development of practical on-board AI for target engagement.
While fears focus on tactical "killer robots," the more plausible danger is automation bias at the strategic level. Senior leaders, lacking deep technical understanding, might overly trust AI-generated war plans, leading to catastrophic miscalculations about a war's ease or outcome.
Typically, defense contractors promise futuristic capabilities and deliver less. In a notable reversal, AI company Anthropic proactively told the Pentagon its technology was not ready for certain military applications. This rare instance of a vendor managing down expectations highlights a new dynamic in government contracting.
A key distinction for AI companies is between cloud and edge-deployed models. Since autonomous weapons require on-device processing (edge) to function without a data link, providing only cloud-based APIs creates a technical barrier, allowing companies to support non-lethal functions while avoiding use in weapon systems.
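The control point this distinction creates can be sketched in code. The following is a minimal illustration, not any vendor's real API: every name here (`CloudModel`, `EdgeModel`, `infer`, the `link_up` flag) is hypothetical, chosen only to show why a cloud-only model fails without a data link while an edge-deployed one does not.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these classes and methods are illustrative,
# not a real provider's API.

@dataclass
class CloudModel:
    """Inference runs on the provider's servers; every call needs a live link."""
    endpoint: str

    def infer(self, prompt: str, link_up: bool) -> str:
        if not link_up:
            # No data link means no inference: a cloud-only model cannot
            # serve as the terminal-guidance brain of an autonomous weapon.
            raise ConnectionError("cloud model unreachable without a data link")
        return f"response from {self.endpoint}"

@dataclass
class EdgeModel:
    """Weights ship with the device; inference works even when jammed."""
    weights_path: str

    def infer(self, prompt: str, link_up: bool) -> str:
        # Runs locally regardless of connectivity, which is exactly why
        # shipping weights to the edge removes the provider's control point.
        return f"on-device response using {self.weights_path}"
```

The design consequence: a provider that exposes only `CloudModel`-style access retains a technical veto (it can refuse or revoke calls), whereas distributing `EdgeModel`-style weights cedes that control, which is why offering only cloud APIs functions as a barrier against weaponized use.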
