Contrary to common fears, the Pentagon is not using generative AI to autonomously identify targets. Its primary application is in synthesizing intelligence, summarizing reports, and generating memos—acting as an efficiency tool for human analysts, not a weaponized chatbot.
Instead of automating decisions, the Pentagon's AI strategy focuses on synthesizing vast amounts of data—assets, weather, potential reactions—to expand a human operator's situational awareness, enabling them to make better, more informed choices.
The Under Secretary of War personally debunks the popular theory that a spike in late-night pizza deliveries to the Pentagon signals imminent military action. He says he does not even know how to get a pizza delivered into the building, and argues that the indicator is easily manipulated and should not be taken seriously.
Anthropic was deemed a supply chain risk not because of a simple contract dispute, but because the Pentagon feared the company's internal values could be encoded into its models. This could lead to unpredictable "refusals" or "hallucinations" in critical military systems developed by contractors using their AI.
The Pentagon counters the idea that slow, manual processes add valuable friction to warfare decisions: in its view, AI preserves the critical checks and balances (rules of engagement, approval chains) and removes only the inefficient friction of "hunting and pecking" for data, leading to faster, better-informed decisions.
Beyond offensive capabilities, the military sees AI as a tool for harm reduction. An LLM trained on visual data could act as a final check, flagging potential targets that show signs of civilian presence—like a playground outside a building—thereby augmenting human decision-making to prevent tragic errors.
The US military is less concerned about its own AI going rogue than about adversaries like China, whose leadership distrusts its own generals due to graft or incompetence, fully automating military decision-making to eliminate human risk and thereby creating a dangerous strategic imbalance.
To combat inefficiency, the Pentagon is moving away from paying contractors for time and materials ('cost-plus'). The new model emphasizes business-oriented, fixed-price contracts where companies are paid upon successful, on-time delivery of a working product, introducing more risk and profit incentive for vendors.
According to the Under Secretary, the foundational mistake that led to the Anthropic conflict was a previous administration's decision to rely on one AI provider. This created a monopolistic scenario, giving the vendor outsized leverage. The current strategy mandates a multi-vendor ecosystem to ensure competition and balance power.
The key takeaway from conflicts in Ukraine and Iran is the severe cost imbalance created by drones. Cheap, disposable drones can threaten multi-million dollar assets, forcing a strategic shift toward developing low-cost, mass-produced "attritable weapons" to level the economic playing field.
