The online world, particularly platforms like the former Twitter, is not a true reflection of the real world. A small percentage of users, many of whom are bots, generate the vast majority of content. This creates a distorted and often overly negative perception of public sentiment that does not represent the majority view.

Related Insights

Social media algorithms amplify negativity by optimizing for "revealed preference" (what you click on, e.g., car crashes). AI models, however, operate on aspirational choice (what you explicitly ask for). This fundamental difference means AI can reflect a more complex and wholesome version of humanity.
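
The contrast above can be made concrete with a toy ranking sketch. This is an illustrative assumption, not any platform's actual algorithm: one ranker sorts by observed clicks (revealed preference), the other by what users explicitly ask for (aspirational choice), and the same items surface in opposite orders.

```python
# Toy illustration (assumed, not any platform's real ranking logic) of the gap
# between revealed preference (observed clicks) and aspirational choice
# (explicit requests). All numbers are made up for the example.
items = [
    {"title": "Car crash compilation", "clicks": 950, "explicit_requests": 50},
    {"title": "Calculus lecture",      "clicks": 120, "explicit_requests": 800},
]

# An engagement-optimizing feed ranks by what users actually click on.
by_revealed = sorted(items, key=lambda i: i["clicks"], reverse=True)

# A request-driven system ranks by what users explicitly say they want.
by_aspirational = sorted(items, key=lambda i: i["explicit_requests"], reverse=True)

print([i["title"] for i in by_revealed])      # the crash video rises to the top
print([i["title"] for i in by_aspirational])  # the lecture rises to the top
```

The point of the sketch is that nothing about the users changed between the two rankings; only the objective did.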

Many foreign-based social media accounts promoting extremist views aren't state-sponsored propaganda. Instead, they are run by individuals in developing nations who have discovered that inflammatory content is the easiest way to gain followers and monetize their accounts. This reframes the issue from purely geopolitical influence to include economic opportunism.

Public discourse, especially online, is dominated by a "loud, dark minority" because anger and negativity are inherently louder than contentment. This creates a skewed perception of reality. The "quiet, happy majority" must actively share authentic happiness—not material flexes—to rebalance the narrative.

The line between irony and sincerity online has dissolved, creating a culture of "kayfabe"—maintaining a fictional persona. It's difficult to tell if polarizing figures are genuine or playing a character, and their audience often engages without caring about the distinction, prioritizing the meta-narrative over reality.

Extremist figures are not organic phenomena but are actively amplified by social media algorithms that prioritize incendiary content for engagement. This process elevates noxious ideas far beyond their natural reach, effectively manufacturing influence for profit and normalizing extremism.

A/B testing on platforms like YouTube reveals a clear trend: the more incendiary and negative the language in titles and headlines, the more clicks they generate. This profit incentive drives the proliferation of outrage-based content, with inflammatory headlines reportedly up 140%.
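
A headline A/B test of the kind described reduces to comparing click-through rates between two variants. The sketch below uses hypothetical impression and click counts (chosen so the inflammatory variant's CTR comes out 140% higher, mirroring the kind of lift described; these are not data from the source) and a standard two-proportion z-statistic to check that the difference is not noise.

```python
import math

# Hypothetical A/B test results: impressions and clicks for a neutral vs. an
# inflammatory headline. Illustrative numbers only, not real platform data.
variants = {
    "neutral":      {"impressions": 10_000, "clicks": 300},
    "inflammatory": {"impressions": 10_000, "clicks": 720},
}

def ctr(v):
    """Click-through rate: clicks divided by impressions."""
    return v["clicks"] / v["impressions"]

def two_proportion_z(a, b):
    """Two-proportion z-statistic for the difference in CTR between variants."""
    n_a, n_b = a["impressions"], b["impressions"]
    p_pool = (a["clicks"] + b["clicks"]) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (ctr(b) - ctr(a)) / se

lift = (ctr(variants["inflammatory"]) - ctr(variants["neutral"])) / ctr(variants["neutral"])
z = two_proportion_z(variants["neutral"], variants["inflammatory"])
print(f"CTR lift: {lift:.0%}, z = {z:.1f}")
```

With a lift this large over tens of thousands of impressions, the z-statistic is far past any conventional significance threshold, which is exactly why the incentive to run such tests and ship the angrier headline is so strong.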

Most people (88%) agree on fundamental values but remain silent for fear of ostracization. This allows the most extreme 5% of voices to dominate 90% of public discourse, creating a false impression of widespread disagreement and polarization that does not actually exist.

Before ChatGPT, humanity's "first contact" with rogue AI was social media. These simple, narrow AIs optimizing solely for engagement were powerful enough to degrade mental health and democracy. This "baby AI" serves as a stark warning for the societal impact of more advanced, general AI systems.

Social influence has become even more concentrated in the hands of a few. While the "super spreader" phenomenon has always existed for ideas and diseases alike, modern technology dramatically enhances its power by increasing super spreaders' reach and, crucially, making them easier for others to identify and target.