Anthropic's objection to autonomous weapons is practical rather than philosophical: its controversial stance is that its LLM, Claude, is not yet reliable enough for such high-stakes tasks. The company is willing to work with the Pentagon to improve it, making the conflict a technical disagreement, not a moral one.
The military's AI use is overwhelmingly focused on non-lethal applications like logistics and processing intelligence data. The 'pointy end' of autonomous weapons represents just one small category within a much broader AI strategy that mirrors corporate use cases.
To prevent a scenario where 'the algorithm did it,' the U.S. military relies on the legal principle of 'human responsibility for the use of force.' This ensures a specific commander is always accountable for deploying any weapon, autonomous or not, sidestepping the accountability gap that worries AI ethicists.
Ukraine is pioneering 'last mile autonomy' not as a strategic push for automation, but as a tactical necessity. When Russia jams the data link to a drone, the system can autonomously complete the final leg of its attack on a pre-identified target, countering electronic warfare.
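The fallback logic described above can be sketched in a few lines. This is a hypothetical illustration, not any fielded system's control software; the names (`Target`, `guidance_mode`, `confirmed_by_operator`) and the three modes are assumptions introduced here to make the decision structure concrete.

```python
from dataclasses import dataclass

@dataclass
class Target:
    lat: float
    lon: float
    confirmed_by_operator: bool  # confirmed before any link loss

def guidance_mode(link_ok: bool, target: Target) -> str:
    """Choose a guidance mode for the final approach (illustrative only).

    With an operator link, the human steers. If the link is jammed,
    fall back to onboard terminal guidance, but only toward a target
    the operator confirmed while the link was still up; otherwise abort.
    """
    if link_ok:
        return "operator-in-the-loop"
    if target.confirmed_by_operator:
        return "autonomous-terminal-guidance"
    return "abort"
```

The key design point is that autonomy here is a degraded-mode fallback: the machine never selects a new target on its own, it only finishes an attack a human already authorized.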
The public fear of 'killer robots' overlooks history. Systems like the U.S. Navy's Phalanx CIWS, used since the 1980s by dozens of countries, can autonomously select and engage incoming threats. The current debate is about the sophistication of the algorithms, not the concept itself.
Debates over systems like Israel's 'Lavender' often focus on the AI. However, the more critical issue may be the human-defined 'rules of engagement'—specifically, what level of algorithmic accuracy (e.g., a system reported to be right 55% of the time) leadership deems acceptable to authorize a strike. This is a policy problem, not just a technology one.
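The point that the threshold is a command decision rather than a model property can be shown in a minimal sketch. The function name and parameters below are hypothetical, invented for illustration; the 55% figure echoes the example in the text.

```python
def authorize_strike(model_confidence: float, policy_threshold: float) -> bool:
    """Gate an algorithmic recommendation behind a human-set threshold.

    The model outputs the same confidence either way; whether that output
    is actionable depends entirely on the threshold leadership chose.
    """
    return model_confidence >= policy_threshold

# The identical 55%-confident recommendation is approved under a
# permissive policy and rejected under a stricter one.
permissive = authorize_strike(0.55, policy_threshold=0.50)  # True
strict = authorize_strike(0.55, policy_threshold=0.90)      # False
```

One line of code carries the entire moral weight of the system, and that line encodes a policy choice, not an engineering one.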
While China's official doctrine on responsible military AI appears similar to that of the U.S., the real concern stems from its political structure. An autocratic regime's incentive to centralize power by removing human decision-makers could lead it to deploy unsafe AI systems, regardless of official policy.
