Large AI labs cynically use existential risk arguments, originally from 'effective altruist' communities, to lobby for regulations that stifle competition. This strategy aims to create monopolies by targeting open-source models and international rivals like China.
The exaggerated fear of AI annihilation, though dismissed by many practitioners, has shaped US policy. The resulting risk-averse climate discourages domestic open-source model releases, creating a vacuum that more permissive nations are filling and fostering a strategic dependency on their models.
While mitigating catastrophic AI risks is critical, the argument for safety can be used to justify placing powerful AI exclusively in the hands of a few actors. This centralization, intended to prevent misuse, simultaneously creates the monopolistic conditions for the Intelligence Curse to take hold.
The narrative of AI doom isn't just organic panic. It's being leveraged by established players who are actively seeking "regulatory capture." Their aim is a cartel that chokes off startup innovation from the outset.
Prominent investors like David Sacks and Marc Andreessen claim that Anthropic employs a sophisticated fear-mongering strategy around AI risks to push for regulation. They argue this approach is designed to create barriers for smaller startups, effectively solidifying incumbents' market position under the guise of safety.
The political left requires a central catastrophe narrative to justify its agenda of economic regulation and information control. As the "climate doomerism" narrative loses potency, "AI doomerism" is emerging as its successor—a new, powerful rationale for centralizing power over the tech industry.
Critics frame the rhetoric around AI's existential risks as a competitive tactic: some labs have used these narratives to scare off investors, regulators, and potential competitors, effectively 'pulling up the ladder' to cement their market lead under the guise of safety.
Leading AI companies allegedly stoke fears of existential risk not for safety, but as a deliberate strategy to achieve regulatory capture. By promoting scary narratives, they advocate for complex pre-approval systems that would create insurmountable barriers for new startups, cementing their own market dominance.
The fear of killer AI is misplaced. The more pressing danger is that a few large companies will use regulation to create a cartel, stifling innovation and competition—a historical pattern seen in major US industries like defense and banking.
The push for AI regulation unites two groups: "Baptists" who genuinely fear its societal impact and call for controls, and "Bootleggers" (incumbent corporations) who cynically use that moral panic to push for regulations that create a government-protected, highly profitable cartel for themselves.
Jensen Huang suggests that established AI players promoting "end-of-the-world" scenarios to governments may be attempting regulatory capture. These fear-based narratives could lead to regulations that stifle startups and protect the incumbents' market position.