
Even if platforms agree to make changes, there is no industry or societal consensus on what constitutes "safe social media." It is unclear whether removing specific features like autoplay or infinite scroll would actually improve mental health, which makes it difficult for companies to limit their liability or for regulators to craft effective rules.

Related Insights

Recent legal victories against tech giants like Meta and Google bypass Section 230 protections. Instead of focusing on harmful content, plaintiffs successfully argue that features like infinite scroll and personalized algorithms are deliberately designed to be addictive, presenting a product liability issue.

The legal strategy against social media giants mirrors the 90s tobacco lawsuits. The case isn't about excessive use, but about proving that features like infinite scroll were intentionally designed to addict users, creating a public health issue. This shifts liability from the user to the platform's design.

Analogies between social media and tobacco in liability lawsuits are flawed. While tobacco offers no health benefits, social media is a "mixed-use" technology that enables thriving communities and provides real social value. This duality makes regulation extremely difficult, as targeting harm without destroying benefits is a delicate balance.

While features like autoplay can be separated from speech, algorithmic personalization is much closer to protected editorial discretion. Attempts to regulate how platforms recommend content—the likely cause of many user harms—will face severe First Amendment challenges, making it the thorniest issue for policymakers.

To overcome Section 230 protections shielding platforms from liability for user content, recent lawsuits focus on the inherent design of the platforms. The argument is that features like infinite scroll and algorithmic feeds are themselves defective, addictive products, making companies liable for product design flaws rather than user posts.

The next wave of social media regulation is moving beyond content moderation to target core platform design. The EU and US legal actions are scrutinizing features like infinite scroll and personalized algorithms as potentially "addictive." This focus on platform architecture could fundamentally alter the user experience for both teens and adults.

A landmark verdict against Meta and YouTube reveals a new legal strategy to bypass Section 230 immunity. By suing over the intentional, addictive design of features like infinite scroll and autoplay, plaintiffs can frame the platform itself as a defective product, shifting the legal battle from content moderation to product liability.

Recent verdicts against Meta and Google succeed by framing the problem as "defective product design" (like autoplay and infinite scroll) rather than harmful user content. This novel legal strategy circumvents the broad immunity that Section 230 of the Communications Decency Act typically provides to tech platforms.

A targeted approach to social media regulation is to remove Section 230 liability protection specifically for content that platforms' algorithms choose to amplify. If a company mines a user's behavior to promote harmful content, it should be held liable, just as a bartender is for over-serving a customer.

A landmark case against Meta has validated a novel legal theory that sidesteps Section 230 protections. By suing over harmful and addictive product design rather than user-generated content, plaintiffs have created a new and potent legal threat to social media platforms, holding them liable for their core algorithms.

Defining "Safe Social Media" Is Nearly Impossible, Complicating Any Regulatory Fix