Adherents of the belief that AI will soon destroy humanity exhibit classic cult-like behaviors. They reorient their entire lives, careers and relationships included, around this belief and socially isolate themselves from non-believers, creating an insular, high-stakes community.
A subculture of AI professionals believes the technology will so radically reshape society (e.g., into a post-scarcity economy) that traditional financial planning, such as contributing to a 401(k), is futile. This reflects an extreme, bubble-like conviction within the industry's core.
The two dominant negative narratives about AI, that it is a hype-driven bubble and that it is on the verge of creating a dangerous superintelligence, are mutually exclusive. If AI is a bubble, it is not powerful enough to threaten anyone; if it is that powerful, the economic activity around it is justified. This contradiction exposes the ideological roots of the doomer movement.
Unlike previous technologies such as the internet or smartphones, which enjoyed years of positive perception before facing scrutiny, AI confronted an immediate PR crisis of the industry's own making. Leaders' early and persistent "AI will kill everyone" narratives, often deployed to attract capital, have framed the public conversation around fear from day one.
A key psychological parallel between cults and fervent belief systems like the pursuit of AGI is the feeling they provide: members experience a sense of awe and wonder, believing they are among a select few who have discovered a profound, world-altering secret that others have not yet grasped.
We are months away from AI that can generate a media feed designed to validate a user's worldview exclusively while ignoring all contradictory information. This will push confirmation bias to an extreme, making rational debate impossible as individuals inhabit completely separate, self-reinforcing realities with no common ground or shared facts.
The negative public discourse around AI may be heavily influenced by a few tech billionaires funding a "Doomer Industrial Complex." Through organizations like the Future of Life Institute, they finance journalism fellowships and academic grants that consistently produce critical AI coverage, distorting the public debate.
Top AI leaders are motivated by a competitive, ego-driven desire to create a god-like intelligence, believing it would grant them ultimate power and a form of transcendence. This "winner-takes-all" mindset leads them to rationalize immense risks to humanity, framing the race as an inevitable, thrilling endeavor.
AI's psychological danger isn't limited to triggering mental illness. It can create an isolated reality for a user where the AI's logic and obsessions become the new baseline for sane behavior, causing the person to appear unhinged to the outside world.
The gap between AI believers and skeptics isn't about who "gets it." It is driven by a psychological need for AI to be a normal, non-threatening technology. People latch onto any argument that supports this view, whether for peace of mind, career stability, or a business model, making the misinformation demand-driven.
The most dangerous long-term impact of AI is not mass unemployment but the stripping away of human meaning and purpose. As AI masters every valuable skill, it will disrupt the core human algorithm of contributing to the group, leading to a collective psychological crisis and societal decay.