The narrative of AI doom isn't just organic panic. It's being leveraged by established players actively seeking "regulatory capture"—aiming to create a cartel that chokes off startup innovation before it can begin.

Related Insights

Prominent investors like David Sacks and Marc Andreessen claim that Anthropic employs a sophisticated strategy of fear-mongering about AI risks to encourage regulations. They argue this approach aims to create barriers for smaller startups, effectively solidifying the market position of incumbents under the guise of safety.

David Sacks, the Trump administration's AI czar, publicly accused Anthropic of using "fear-mongering" to achieve "regulatory capture." That exact phrase, "fear-based regulatory capture strategy," then appeared in a leaked draft executive order, revealing a direct link between the administration's public rhetoric and its formal policy-making.

The political left requires a central catastrophe narrative to justify its agenda of economic regulation and information control. As the "climate doomerism" narrative loses potency, "AI doomerism" is emerging as its successor—a new, powerful rationale for centralizing power over the tech industry.

Critics frame the rhetoric around AI's existential risks as a competitive tactic: some labs use these narratives to scare off investors, regulators, and potential competitors, effectively "pulling up the ladder" to cement their market lead under the guise of safety.

Leading AI companies allegedly stoke fears of existential risk not for safety, but as a deliberate strategy to achieve regulatory capture. By promoting scary narratives, they advocate for complex pre-approval systems that would create insurmountable barriers for new startups, cementing their own market dominance.

Silicon Valley's economic engine is "permissionless innovation"—the freedom to build without prior government approval. Proposed AI regulations requiring pre-approval for new models would dismantle this foundation, favoring large incumbents with lobbying power and stifling the startup ecosystem.

Laws like California's SB 243, which allows lawsuits for "emotional harm" from chatbots, create an impossible compliance maze for startups. This fragmented regulation, while well-intentioned, benefits incumbents who can afford massive legal teams, thus stifling innovation and competition from smaller players.

The fear of killer AI is misplaced. The more pressing danger is that a few large companies will use regulation to create a cartel, stifling innovation and competition—a historical pattern seen in major US industries like defense and banking.

Both Sam Altman and Satya Nadella warn that a patchwork of state-level AI regulations, like Colorado's AI Act, is unmanageable. While behemoths like Microsoft and OpenAI can afford compliance, they argue this approach will crush smaller startups, creating an insurmountable barrier to entry and innovation in the US.

The push for AI regulation combines two groups: "Baptists" who genuinely fear its societal impact and call for controls, and "Bootleggers" (incumbent corporations) who cynically use that moral panic to push for regulations that create a government-protected, highly profitable cartel for themselves.

Incumbents Exploit Public AI Fear to Push for Anti-Competitive Regulation | RiffOn