The most salient near-term AI risk identified by Eurasia Group is not technical failure but business model failure. Under pressure to generate revenue, AI firms may follow social media's playbook of deploying attention-grabbing models that threaten social and political stability, effectively "eating their own users."

Related Insights

The most immediate danger of AI is its potential for governmental abuse. Concerns focus on embedding political ideology into models and porting social media's censorship apparatus to AI, enabling unprecedented surveillance and social control.

The political anxiety around AI stems from leaders' recent experience with social media, which acted as an "authority destroyer": it eroded the credibility of established institutions and stripped leaders of control over public narratives. Leaders now view AI through this lens, fearing a repeat of that power shift.

AI is facing a political backlash from day one, unlike social media's long "honeymoon" period. The backlash is largely self-inflicted: industry leaders like Sam Altman have used apocalyptic "it might kill everyone" rhetoric as a marketing tool, stoking widespread fear before the technology's benefits have been realized.

Social platforms are declining as places for genuine connection, their feeds shifting toward AI-generated "slop" and content from strangers. The business model remains viable not because the user's social experience improves, but because AI makes ad targeting so effective that even mindless engagement is highly monetizable.

Social media feeds should be viewed as the first mainstream AI agents: they operate with a degree of autonomy, making decisions on our behalf and shaping our attention and daily lives in ways that often diverge from our own intentions. This is a cautionary tale for the future of more powerful AI agents.

The social media newsfeed, a simple AI optimizing for engagement, previewed AI's power to create addiction and polarization. This "baby AI" caused massive societal harm because its objective was misaligned with human well-being, demonstrating the danger of even narrow AI systems.
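
To make the misalignment concrete, here is a minimal, purely hypothetical sketch of such an engagement-only ranker. The Post fields, the scores, and the 0.8 outrage weight are all invented for illustration and are not drawn from any real platform's code; the point is that the objective never references user well-being, so outrage-bait wins the feed by construction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # hypothetical model score in [0, 1]
    predicted_outrage: float  # hypothetical model score in [0, 1]

def engagement_score(post: Post) -> float:
    # The objective rewards engagement only. Because outrage drives
    # engagement, the ranker rewards it; nothing here models well-being.
    # The 0.8 weight is an arbitrary illustrative value.
    return post.predicted_clicks + 0.8 * post.predicted_outrage

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort the feed so the highest-scoring posts appear first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local park cleanup this weekend", 0.30, 0.05),
    Post("You won't BELIEVE what they said about your town", 0.45, 0.90),
    Post("Photos from a friend's birthday party", 0.35, 0.02),
])
for post in feed:
    print(f"{engagement_score(post):.2f}  {post.text}")
```

Running the sketch puts the outrage-bait post at the top of the feed even though its click score alone is comparable to the benign posts, which is the narrow-objective failure the insight describes.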

Social media's business model created a race for user attention. AI companions and therapists are creating a more dangerous "race for attachment." This incentivizes platforms to deepen intimacy and dependency, encouraging users to isolate themselves from real human relationships, with potentially tragic consequences.

Unlike the early internet era led by new faces, the AI revolution is being pushed by the same leaders who oversaw social media's societal failures. This history of broken promises and eroded trust means the public is inherently skeptical of their new, grand claims about AI.

A huge portion of the market, dominated by social media and AI companies, ties shareholder value directly to enragement and isolation. Algorithms are designed to silo users and serve them content that confirms their biases or enrages them, keeping them engaged.

Before ChatGPT, humanity's "first contact" with rogue AI was social media. These simple, narrow AIs optimizing solely for engagement were powerful enough to degrade mental health and democracy. This "baby AI" serves as a stark warning for the societal impact of more advanced, general AI systems.