The push for AI regulation combines two groups: "Baptists" who genuinely fear its societal impact and call for controls, and "Bootleggers" (incumbent corporations) who cynically use that moral panic to push for regulations that create a government-protected, highly profitable cartel for themselves.
The political landscape for AI is not a simple binary. Policy expert Dean Ball identifies three key factions: AI safety advocates, a pro-AI industry camp, and an emerging "truly anti-AI" group. The decisive factor will be which direction the moderate "consumer protection" and "kids safety" advocates lean.
AR Rahman believes AI tools that can replace human jobs are a destructive force that must be regulated. He compares the technology to firearms, arguing that just as there are rules for gun ownership, there should be rules preventing the deployment of AI that renders entire skill sets worthless.
The narrative of AI doom isn't purely organic panic; it is being leveraged by established players actively seeking "regulatory capture." Their aim is a cartel that chokes off startup innovation from the outset.
Prominent investors like David Sacks and Marc Andreessen claim that Anthropic employs a sophisticated strategy of fear-mongering about AI risks to encourage regulations. They argue this approach aims to create barriers for smaller startups, effectively solidifying the market position of incumbents under the guise of safety.
The rhetoric around AI's existential risks is framed as a competitive tactic. Some labs use these narratives to scare off investors, regulators, and potential competitors, effectively 'pulling up the ladder' to cement their market lead under the guise of safety.
Leading AI companies allegedly stoke fears of existential risk not for safety, but as a deliberate strategy to achieve regulatory capture. By promoting scary narratives, they advocate for complex pre-approval systems that would create insurmountable barriers for new startups, cementing their own market dominance.
When AI companies minimize existential risk, they mirror historical examples like the tobacco and leaded-gasoline industries, where immense, long-term public harm was knowingly inflicted for comparatively small corporate gains, enabled by powerful self-deception and rationalization.
Silicon Valley's economic engine is "permissionless innovation"—the freedom to build without prior government approval. Proposed AI regulations requiring pre-approval for new models would dismantle this foundation, favoring large incumbents with lobbying power and stifling the startup ecosystem.
The fear of killer AI is misplaced. The more pressing danger is that a few large companies will use regulation to create a cartel, stifling innovation and competition—a historical pattern seen in major US industries like defense and banking.
Advocating for a single national AI policy is often a strategic move by tech lobbyists and friendly politicians to preempt and invalidate stricter regulations emerging at the state level. Under the guise of creating a unified standard, this approach effectively ensures the resulting policy is weak or non-existent, allowing the industry to operate with minimal oversight.