Every major communication technology has sparked a societal instinct to "control it before it controls you." Fears about AI and disinformation are not new; they echo the historical panic over heresy spread by the printing press. This reframes the current regulatory push as a predictable human reaction to disruptive innovation.
The most pressing danger from AI isn't a hypothetical superintelligence but its use as a tool for societal control. The immediate risk is an Orwellian future where AI censors information, rewrites history for political agendas, and enables mass surveillance—a threat far more tangible than science fiction scenarios.
Every major innovation, from the bicycle ("bicycle face") to the internet, has been met with a "moral panic"—a widespread fear that it will ruin society. Recognizing this as a historical pattern allows innovators to anticipate and navigate the inevitable backlash against their work.
The narrative that AI could be catastrophic ("summoning the demon") is used strategically. It creates a sense of danger that justifies why a small, elite group must maintain tight control over the technology, thereby warding off both regulation and competition.
The political anxiety around AI stems from leaders' recent experience with social media, which acted as an "authority destroyer." Social media eroded the credibility of established institutions and public narrative control. Leaders now view AI through this lens, fearing a repeat of this power shift.
Widespread fear of AI is not a new phenomenon but a recurring pattern of human behavior toward disruptive technology. Just as people once believed electricity would bring demons into their homes, society initially demonizes profound technological shifts before eventually embracing their benefits.
The growing, bipartisan backlash against AI could lead to a future where, like nuclear power, the technology is regulated out of widespread use due to public fear. This historical parallel warns that societal adoption is not inevitable: public fear can halt even the most powerful technological advancements, preventing their full economic benefits from being realized.
The Catholic Church banned Gutenberg's printing press for over 100 years, fearing the loss of control that widespread literacy would bring. Gutenberg himself died without seeing its impact. This historical precedent shows that powerful institutions have always resisted technologies that democratize information and power.
Fears of AI power consolidating among a few giants like Google and Nvidia mirror past concerns about companies like Cisco controlling the internet. History shows that all transformative technologies eventually commoditize and diffuse, moving from centralized control to broad, democratized access at the edge.
New technology can ignite violent conflict by making ideological differences concrete and non-negotiable. The printing press did this with religion, leading to one of Europe's bloodiest wars. AI could do the same by forcing humanity to confront divisive questions like transhumanism and the definition of humanity, potentially leading to similar strife.
The push for AI regulation combines two groups: "Baptists" who genuinely fear its societal impact and call for controls, and "Bootleggers" (incumbent corporations) who cynically use that moral panic to push for regulations that create a government-protected, highly profitable cartel for themselves.