The political left requires a central catastrophe narrative to justify its agenda of economic regulation and information control. As the "climate doomerism" narrative loses potency, "AI doomerism" is emerging as its successor—a new, powerful rationale for centralizing power over the tech industry.

Related Insights

The most pressing danger from AI isn't a hypothetical superintelligence but its use as a tool for societal control. The immediate risk is an Orwellian future where AI censors information, rewrites history for political agendas, and enables mass surveillance—a threat far more tangible than science fiction scenarios.

The most immediate danger of AI is its potential for governmental abuse. Concerns focus on embedding political ideology into models and porting social media's censorship apparatus to AI, enabling unprecedented surveillance and social control.

By framing competition with China as an existential threat, tech leaders create urgency and justification for government intervention like subsidies or favorable trade policies. This transforms a commercial request for financial support into a matter of national security, making it more compelling for policymakers.

The two dominant negative narratives about AI are mutually exclusive: that it's an overhyped bubble, and that it's on the verge of creating a dangerous superintelligence. If AI is a bubble, it isn't that powerful; if it's genuinely that powerful, the economic activity around it is justified. This contradiction exposes the ideological roots of the doomer movement.

Unlike previous technologies like the internet or smartphones, which enjoyed years of positive perception before scrutiny, the AI industry immediately faced a PR crisis of its own making. Leaders' early and persistent "AI will kill everyone" narratives, often deployed to attract capital, have framed the public conversation around fear from day one.

The negative public discourse around AI may be heavily influenced by a few tech billionaires funding a "Doomer Industrial Complex." Through organizations like the Future of Life Institute, they finance journalism fellowships and academic grants that consistently produce critical AI coverage, distorting the public debate.

The rhetoric around AI's existential risks is framed as a competitive tactic. Some labs allegedly used these narratives to spook regulators and deter potential competitors and their investors, effectively 'pulling up the ladder' to cement their market lead under the guise of safety.

AI is experiencing a political backlash from day one, unlike social media's long "honeymoon" period. This is largely self-inflicted, as industry leaders like Sam Altman have used apocalyptic, "it might kill everyone" rhetoric as a marketing tool, creating widespread fear before the benefits are fully realized.

Leading AI companies allegedly stoke fears of existential risk not for safety, but as a deliberate strategy to achieve regulatory capture. By promoting scary narratives, they advocate for complex pre-approval systems that would create insurmountable barriers for new startups, cementing their own market dominance.

Influencers from opposite ends of the political spectrum are finding common ground in their warnings about AI's potential to destroy jobs and creative fields. This unusual consensus suggests AI is becoming a powerful, non-traditional wedge issue that could reshape political alliances and public discourse.