The fear of killer AI is misplaced. The more pressing danger is that a few large companies will use regulation to create a cartel, stifling innovation and competition—a historical pattern seen in major US industries like defense and banking.
The most immediate danger of AI is its potential for governmental abuse. Concerns focus on embedding political ideology into models and porting social media's censorship apparatus to AI, enabling unprecedented surveillance and social control.
While mitigating catastrophic AI risks is critical, the argument for safety can be used to justify placing powerful AI exclusively in the hands of a few actors. This centralization, intended to prevent misuse, simultaneously creates the monopolistic conditions for the Intelligence Curse to take hold.
The narrative of AI doom isn't just organic panic. It's being leveraged by established players actively seeking "regulatory capture," aiming to build a cartel that chokes off startup innovation from the start.
Prominent investors like David Sacks and Marc Andreessen claim that Anthropic employs a sophisticated strategy: fear-mongering about AI risks to encourage regulation. They argue this approach aims to raise barriers for smaller startups, effectively solidifying incumbents' market position under the guise of safety.
The rhetoric around AI's existential risks can also be framed as a competitive tactic. Some labs have used these narratives to scare away investors, regulators, and potential competitors, effectively 'pulling up the ladder' to cement their market lead in the name of safety.
Leading AI companies allegedly stoke fears of existential risk not out of concern for safety, but as a deliberate strategy to achieve regulatory capture. Armed with these scary narratives, they advocate for complex pre-approval systems that would create insurmountable barriers for new startups, cementing their own market dominance.
Silicon Valley's economic engine is "permissionless innovation"—the freedom to build without prior government approval. Proposed AI regulations requiring pre-approval for new models would dismantle this foundation, favoring large incumbents with lobbying power and stifling the startup ecosystem.
Unlike previous tech waves, AI's core requirements—massive datasets, capital for compute, and vast distribution—are already controlled by today's largest tech companies. This gives incumbents a powerful advantage, making AI a technology that could sustain their dominance rather than disrupt them.
The history of nuclear power, where regulation turned an exponential growth curve into a flat S-curve, serves as a powerful warning for AI. It suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that forecloses a "fast takeoff" by regulating the technology out of rapid adoption.
The push for AI regulation combines two groups: "Baptists" who genuinely fear its societal impact and call for controls, and "Bootleggers" (incumbent corporations) who cynically use that moral panic to push for regulations that create a government-protected, highly profitable cartel for themselves.