The growing, bipartisan backlash against AI could lead to a future where, as with nuclear power, the technology is regulated out of widespread use due to public fear. This historical parallel warns that societal adoption is not inevitable: public fear can halt even the most powerful technological advancements, preventing their full economic benefits from being realized.

Related Insights

When founders make glib comments about AI likely ending the world, even in jest, they create genuine fear and opposition among the public. The humor backfires: people facing job automation and rising energy costs start asking why society is pursuing this technology at all, fueling calls to halt progress.

Widespread fear of AI is not a new phenomenon but a recurring pattern of human behavior toward disruptive technology. Just as people once believed electricity would bring demons into their homes, society initially demonizes profound technological shifts before eventually embracing their benefits.

AI is experiencing a political backlash from day one, unlike social media's long "honeymoon" period. This is largely self-inflicted, as industry leaders like Sam Altman have used apocalyptic, "it might kill everyone" rhetoric as a marketing tool, creating widespread fear before the benefits are fully realized.

An initially moderate pessimistic stance on new technology often escalates into advocacy for draconian policies. The regulatory clampdown that effectively halted new civilian nuclear construction in the 1970s is a prime example of a fear-based decision with catastrophic long-term consequences, including the strengthening of geopolitical rivals.

AI leaders' messaging about world-ending risks, while effective for fundraising, creates public fear. To gain mainstream acceptance, the industry needs a Steve Jobs-like figure to shift the narrative from AI as an autonomous, job-killing force to AI as a tool that empowers human potential.

AI leaders often use dystopian language about job loss and world-ending scenarios (“summoning the demon”). While effective for fundraising from investors who are "long demon," this messaging is driving a public backlash by framing AI as an existential threat rather than an empowering tool for humanity.

AI's contribution to US economic growth is immense, by one estimate accounting for roughly 60% of recent GDP growth through direct spending and indirect wealth effects. Yet unlike past tech booms that inspired optimism, public sentiment is largely fearful: most citizens want regulation, driven by job-security concerns, creating a unique tension.

By openly discussing AI-driven unemployment, tech leaders have made their industry the default scapegoat. If unemployment rises for any reason, even a normal recession, AI will be blamed, triggering severe political and social backlash because leaders have effectively "confessed to the crime" ahead of time.

The history of nuclear power, where regulation bent an exponential growth curve into a flat S-curve, serves as a powerful warning for AI. It suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that stifles a "fast takeoff," effectively regulating the technology out of rapid adoption.

The moment an industry organizes in protest against an AI technology, it signals that the technology has crossed a critical threshold of quality. The fear and backlash are a direct result of the technology no longer being a gimmick, but a viable threat to the status quo.