
Nuanced health discussions are drowned out by social media algorithms that reward extreme takes. While more experts should engage on existing platforms, the long-term solution is to build new ones, likely AI-driven, that prioritize substance over engagement and aren't designed to exploit our primitive impulses for profit.

Related Insights

The feeling of deep societal division is an artifact of platform design. Algorithms amplify extreme voices because they generate engagement, creating a false impression of widespread polarization. In reality, absent this amplification, most people hold quite moderate views on contentious topics.

The ability to label a deepfake as 'fake' doesn't solve the problem. The greater danger is 'frequency bias,' where repeated exposure to a false message forms a strong mental association, making the idea stick even when it's consciously rejected as untrue.

AI can easily generate a list of health recommendations. However, human adherence to a protocol is far more likely when the underlying mechanism is understood. For AI to be an effective health coach, it must go beyond listing 'what' to do and excel at explaining the 'why,' just as effective human communicators do.

As AI-generated content and virtual influencers saturate social media, consumer trust will erode, leading to 'Peak Social.' This wave of distrust will drive people away from anonymous influencers and back towards known entities and credible experts with genuine authority in their fields.

We are months away from AI that can create a media feed designed to exclusively validate a user's worldview while ignoring all contradictory information. This will intensify confirmation bias to an extreme, making rational debate impossible as individuals inhabit completely separate, self-reinforcing realities with no common ground or shared facts.

Before generative AI, the simple algorithms optimizing newsfeeds for engagement acted as a powerful, yet misaligned, "baby AI." This narrow system, pointed at the human brain, was potent enough to create widespread anxiety, depression, and polarization by prioritizing attention over well-being.

Extremist figures are not organic phenomena but are actively amplified by social media algorithms that prioritize incendiary content for engagement. This process elevates noxious ideas far beyond their natural reach, effectively manufacturing influence for profit and normalizing extremism.

A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.

The social media newsfeed, a simple AI optimizing for engagement, was a preview of AI's power to create addiction and polarization. This "baby AI" caused massive societal harm by misaligning its goals with human well-being, demonstrating the danger of even narrow AI systems.

Before ChatGPT, humanity's "first contact" with rogue AI was social media. These simple, narrow AIs, optimizing solely for engagement, were powerful enough to degrade mental health and democracy. This "baby AI" serves as a stark warning about the societal impact of more advanced, general AI systems.

Escaping Health Misinformation Requires New AI-Driven Information Platforms | RiffOn