A leaked memo from Anthropic's CEO accused rival OpenAI of colluding with the government to create "safety theater." The charge is that OpenAI's safety measures are performative gestures designed to placate employees rather than substantive safeguards.
Court filings reveal that while the Trump administration publicly attacked Anthropic, the Secretary of War privately called its military capabilities "exquisite." This starkly contrasts with the public narrative and highlights the Pentagon's dependence on the technology it seeks to ban.
Court filings reveal Anthropic developed specialized "Claude Gov" models for national security agencies. These versions have fewer restrictions than the public product and are explicitly designed to handle classified data, military operations, and foreign intelligence.
Negotiations between Anthropic and the Pentagon remained possible even after public threats from the administration. It was the leak of CEO Dario Amodei's internal memo harshly criticizing OpenAI and Trump officials that immediately torpedoed any chance of a deal.
Drone strikes on Amazon data centers during the Iran conflict suggest that critical AI and cloud infrastructure is now viewed as a high-value military target. This parallels the targeting of oil fields and refineries in previous eras of warfare.
The administration's legal case against Anthropic is undercut by its own actions: despite labeling the company a security risk, the Pentagon continues to use Anthropic's AI in the Iran war and has not revoked a single employee security clearance.
Anthropic filed one lawsuit in D.C. challenging the Pentagon's formal order and a second in California targeting the broader harms from officials' social media posts. The two-venue strategy seeks a more favorable court in which to argue the reputational damage inflicted by tweets.
In the Iran conflict, AI systems like Claude are finally solving the military's chronic problem of collecting more intelligence data than it can analyze: the AI processes vast streams of sensor data in real time to identify critical, time-sensitive targets such as mobile missile launchers.
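To make the triage idea concrete, here is a minimal, purely illustrative Python sketch of the prioritization problem the passage describes: detections arrive faster than analysts can review them, so each one is scored by mobility, classifier confidence, and how quickly its engagement window closes. Every name and weight here (Detection, priority, triage, the 3.0 mobility factor) is a hypothetical assumption for illustration, not any real targeting system or Claude capability.

```python
from dataclasses import dataclass
import heapq

# Hypothetical sketch of sensor-stream triage, as described above.
# All class names, fields, and weights are illustrative assumptions.

@dataclass
class Detection:
    track_id: str
    target_type: str      # e.g. "mobile_launcher", "depot"
    confidence: float     # classifier confidence, 0..1
    dwell_minutes: float  # estimated time before the target relocates

def priority(d: Detection) -> float:
    """Higher score = review first. Mobile, short-dwell, high-confidence
    detections dominate the queue; static targets can wait."""
    mobility_weight = 3.0 if d.target_type == "mobile_launcher" else 1.0
    urgency = 1.0 / max(d.dwell_minutes, 1.0)  # shrinking window -> urgent
    return mobility_weight * d.confidence * urgency

def triage(stream, top_k=5):
    """Return only the top_k most urgent detections from a stream."""
    heap = []
    for d in stream:
        # Negate the score for a max-priority queue; track_id breaks ties.
        heapq.heappush(heap, (-priority(d), d.track_id, d))
    return [heapq.heappop(heap)[2] for _ in range(min(top_k, len(heap)))]

if __name__ == "__main__":
    stream = [
        Detection("t1", "mobile_launcher", 0.9, dwell_minutes=8),
        Detection("t2", "depot", 0.95, dwell_minutes=600),
        Detection("t3", "mobile_launcher", 0.6, dwell_minutes=15),
    ]
    for d in triage(stream, top_k=2):
        print(d.track_id, d.target_type)
```

The design point is simply that a shrinking dwell time dominates the score, which is why a mobile launcher with an eight-minute window jumps ahead of a high-confidence but stationary depot.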
