A/B testing on platforms like YouTube reveals a clear trend: the more incendiary and negative the language in a title or headline, the more clicks it generates. This profit incentive drives the proliferation of outrage-based content, with the prevalence of inflammatory headlines reportedly up 140%.
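A minimal sketch of such a title test makes the mechanism concrete. The traffic numbers below are invented for illustration, and the two-proportion z-test is a standard way to compare click-through rates, not any platform's internal tooling:

```python
# Hypothetical title A/B test: two variants shown to random slices of traffic,
# compared with a two-proportion z-test on click-through rate (CTR).
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Return (ctr_a, ctr_b, z, two-sided p-value) for variant A vs. variant B."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Invented numbers: a neutral title vs. an inflammatory rewrite of the same video.
ctr_neutral, ctr_outrage, z, p = two_proportion_z(
    clicks_a=420, views_a=30_000,   # neutral title
    clicks_b=560, views_b=30_000,   # inflammatory title
)
print(f"neutral CTR {ctr_neutral:.2%} vs. outrage CTR {ctr_outrage:.2%} (z={z:.2f}, p={p:.4f})")
```

Run across thousands of titles, results like this hypothetical one are what create the systematic pull toward outrage.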
Social media algorithms amplify negativity by optimizing for "revealed preference" (what you click on, e.g., car crashes). AI models, however, operate on aspirational choice (what you explicitly ask for). This fundamental difference means AI can reflect a more complex and wholesome version of humanity.
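The distinction can be shown with a toy contrast, using assumed items and click probabilities rather than any real ranking system: a feed acting on revealed preference sorts by predicted clicks, while an assistant acting on aspirational choice serves what was explicitly requested.

```python
# Toy illustration (assumed data): revealed preference vs. aspirational choice.
items = [
    {"title": "Ten-car pileup caught on camera",  "p_click": 0.09, "topic": "disaster"},
    {"title": "Beginner's guide to learning piano", "p_click": 0.02, "topic": "learning"},
    {"title": "Celebrity feud erupts online",       "p_click": 0.07, "topic": "gossip"},
]

# Revealed preference: rank purely by what people tend to click.
feed = sorted(items, key=lambda x: x["p_click"], reverse=True)

# Aspirational choice: serve only what the user explicitly asked for.
request_topic = "learning"   # "help me learn a new skill"
answer = [x for x in items if x["topic"] == request_topic]

print(feed[0]["title"])    # the car crash wins the engagement-ranked feed
print(answer[0]["title"])  # the piano guide answers the explicit request
```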
The problem with social media isn't free speech itself, but algorithms that elevate misinformation for engagement. A targeted solution is to remove Section 230 liability protection *only* for content that platforms algorithmically boost, holding them accountable for their editorial choices without engaging in broad censorship.
Netflix's "Nobody Wants This" faces criticism for excessive, unnatural product placement, a form of "enshittification." Yet it remains the platform's #1 streamed show. This suggests that in the current attention economy, even negative buzz or a compromised user experience can successfully drive top-line engagement metrics.
Data analysis of roughly 105,000 headlines reveals a direct financial incentive for negativity in media. Each negative word added to an average-length headline increases its click-through rate by more than two percent, creating an economic model that systematically rewards outrage.
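The lift is relative, not absolute; a quick sketch with an assumed baseline click-through rate shows what it means at scale:

```python
# Quick arithmetic sketch; the baseline CTR here is an assumed, illustrative figure.
baseline_ctr = 0.015      # hypothetical 1.5% CTR for a neutral, average-length headline
relative_lift = 0.02      # the >2% relative lift per added negative word cited above

ctr_with_negative_word = baseline_ctr * (1 + relative_lift)
extra_clicks = (ctr_with_negative_word - baseline_ctr) * 1_000_000
print(f"CTR {baseline_ctr:.2%} -> {ctr_with_negative_word:.3%}, "
      f"about {extra_clicks:.0f} extra clicks per million impressions")
```

A few hundred extra clicks per negative word, per million impressions, multiplied across every headline a publisher runs, is the economic pull the analysis describes.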
Outrage-driven news follows a predictable six-step cycle: a fringe story appears, one side reacts, the story gets amplified, the other side counter-reacts, and so on. This banal loop captures attention but distracts from more significant societal problems.
Oxford naming "rage bait" its word of the year signifies that intentionally provoking anger for online engagement is no longer a fringe tactic but a recognized, mainstream strategy. This reflects a maturation of the attention economy, where emotional manipulation has become a codified tool for content creators and digital marketers.
Extremist figures are not organic phenomena but are actively amplified by social media algorithms that prioritize incendiary content for engagement. This process elevates noxious ideas far beyond their natural reach, effectively manufacturing influence for profit and normalizing extremism.
The 20th-century broadcast economy monetized aspiration and sex appeal to sell products. Today's algorithm-driven digital economy has discovered that rage is a far more potent and profitable tool for capturing attention and maximizing engagement.
Pinterest reframed its AI goal from maximizing view time based on instinctual reactions (System 1) to promoting content based on deliberate user actions like saves (System 2). This resulted in self-help and DIY content surfacing over enraging material, making users feel better after using the platform.
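A minimal sketch of that kind of objective change, with assumed scores and weights rather than Pinterest's actual ranking formula, shows how reweighting deliberate signals flips what surfaces:

```python
# Sketch of the objective shift described above: rank by predicted deliberate actions
# (saves, a System 2 signal) instead of predicted view time (a System 1 impulse signal).
# All probabilities and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    p_long_view: float   # predicted probability of a long, passive view
    p_save: float        # predicted probability the user deliberately saves the pin

def engagement_score(c: Candidate) -> float:
    # Old-style objective: whatever holds eyeballs wins.
    return c.p_long_view

def intent_score(c: Candidate) -> float:
    # Reframed objective: weight deliberate actions far above passive viewing.
    return 5.0 * c.p_save + 0.5 * c.p_long_view

pins = [
    Candidate("Outrageous celebrity meltdown", p_long_view=0.30, p_save=0.01),
    Candidate("Weekend DIY bookshelf plans",   p_long_view=0.12, p_save=0.08),
]

print(max(pins, key=engagement_score).title)  # the meltdown wins on view time
print(max(pins, key=intent_score).title)      # the DIY project wins on deliberate intent
```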
Labs are incentivized to climb leaderboards like LM Arena, which reward flashy, engaging, but often inaccurate responses. This focus on "dopamine instead of truth" produces models optimized for tabloid-style appeal rather than for advancing humanity by solving hard problems.