David Sacks, the Trump administration's AI czar, publicly accused Anthropic of using "fear-mongering" to achieve "regulatory capture." This exact phrase, a "fear-based regulatory capture strategy," then appeared in a leaked draft executive order, revealing a direct link between the administration's public rhetoric and its formal policy-making.
White House AI czar David Sacks cited a Brookings report to claim that fears of AI-driven job loss are exaggerated. The report's own author publicly clarified that while the short-term impact is low, the long-term disruption is underestimated, suggesting a political motivation to downplay the threat AI poses to jobs.
Prominent investors like David Sacks and Marc Andreessen claim that Anthropic employs a sophisticated strategy of fear-mongering about AI risks to push for regulation. They argue this approach is designed to create barriers for smaller startups, effectively solidifying the market position of incumbents under the guise of safety.
Anthropic is publicly warning that frontier AI models are becoming "real and mysterious creatures" with signs of "situational awareness." This high-stakes stance, which calls for caution and regulation, has drawn accusations of "regulatory capture" from the White House AI czar, putting Anthropic in a precarious political position.
A draft executive order aimed at preempting state AI laws includes deadlines for nearly every action except the one tasking the administration with creating a federal replacement. This strategic omission suggests the real goal is to block both state and federal regulation, not to establish a uniform national policy.
Leading AI companies allegedly stoke fears of existential risk not out of genuine safety concern, but as a deliberate strategy to achieve regulatory capture. These scary narratives are used to justify complex pre-approval systems that would create insurmountable barriers for new startups, cementing the incumbents' market dominance.
The draft executive order on AI regulation does not establish a national framework. Instead, its primary function is to create a "litigation task force" to sue states and to threaten the withholding of federal funding, effectively using federal power to dismantle state-level AI safety laws and accelerate development.
The President's draft AI executive order aims to create a unified, industry-friendly regulatory environment. A key component is an "AI litigation task force" designed to challenge and preempt the growing number of state-level AI laws, centralizing control at the federal level and sidelining local governance.
Anthropic faces a critical dilemma: its reputation for safety attracts lucrative enterprise clients, but that same stance risks being labeled "woke" by the Trump administration, which has barred such AI from government contracts. This forces the company to walk a fine line between its brand identity and political reality.
The administration's draft executive order to block state-level AI laws is not about creating a unified federal policy. Instead, it is a strategic move to eliminate regulation entirely, giving major tech companies a free pass to operate without oversight under the guise of promoting U.S. innovation and dominance.
When a government official like David Sacks singles out a specific company (Anthropic) for not aligning with the administration's agenda, it is a dangerous departure from neutral policymaking. It signals a move toward an authoritarian model of rewarding allies and punishing dissenters in the private sector.