We scan new podcasts and send you the top 5 insights daily.
Every major innovation, from the bicycle (which spawned fears of "bicycle face") to the internet, has been met with a "moral panic": a widespread fear that it will ruin society. Recognizing this as a historical pattern allows innovators to anticipate and navigate the inevitable backlash against their work.
The public conversation about AI focuses on job loss, which generates immense fear. This unaddressed fear leads to political polarization and antisocial behavior, or "social ripples." These emotional reactions pose a greater societal threat than the technological disruption itself.
The perceived speed of technological displacement is more critical than the change itself. A 20-year horizon allows industries and individuals to adapt, learn, and integrate new tools. A rapid 2-year horizon, however, creates widespread fear and unrest because it outpaces society's ability to adjust.
Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.
Unlike previous technologies like the internet or smartphones, which enjoyed years of positive perception before scrutiny, the AI industry immediately faced a PR crisis of its own making. Leaders' early and persistent "AI will kill everyone" narratives, often deployed to attract capital, have framed the public conversation around fear from day one.
Widespread fear of AI is not a new phenomenon but a recurring pattern of human behavior toward disruptive technology. Just as people once believed electricity would bring demons into their homes, society initially demonizes profound technological shifts before eventually embracing their benefits.
The growing, bipartisan backlash against AI could lead to a future where, like nuclear power, the technology is regulated out of widespread use due to public fear. This historical parallel warns that societal adoption is not inevitable: public fear can halt even the most powerful technological advancements, preventing their full economic benefits from being realized.
The dot-com era, despite bubble fears, was characterized by widespread public optimism. In stark contrast, the current AI boom is met with significant anxiety, with over 30% of Americans fearing AI could end humanity. This level of dread marks a fundamental shift in public sentiment toward new technology.
Societal fears, or "moral panics," are cyclical. While the targets change (from witchcraft to 5G wireless), the underlying tactic of exploiting fears around child safety and innocence remains consistent throughout history.
Kara Swisher observes a historical pattern where it takes about 25 years for society and regulators to catch up to a disruptive technology. She believes we are at that inflection point for the internet and social media, where widespread public frustration finally creates the political will for meaningful regulation.
The moment an industry organizes in protest against an AI technology, it signals that the technology has crossed a critical threshold of quality. The fear and backlash are a direct result of the technology no longer being a gimmick, but a viable threat to the status quo.