We scan new podcasts and send you the top 5 insights daily.
While features like autoplay can be separated from speech, algorithmic personalization is much closer to protected editorial discretion. Attempts to regulate how platforms recommend content—the likely cause of many user harms—will face severe First Amendment challenges, making recommendation regulation the thorniest issue for policymakers.
Unlike legacy media, which had standards and practices departments, the modern creator economy operates without gatekeepers. Content optimized for maximum engagement—often featuring sex, violence, and controversy—is pushed to the top by algorithms, leaving young and vulnerable audiences exposed to unfiltered and often harmful material.
The problem with social media isn't free speech itself, but algorithms that elevate misinformation for engagement. A targeted solution is to remove Section 230 liability protection *only* for content that platforms algorithmically boost, holding them accountable for their editorial choices without engaging in broad censorship.
Recent legal victories against tech giants like Meta and Google sidestep Section 230 protections. Instead of focusing on harmful content itself, plaintiffs successfully argue that features like infinite scroll and personalized algorithms are deliberately designed to be addictive, framing the harm as a product liability issue.
Recommendation algorithms don't just predict what users like; they actively nudge users toward more extreme preferences. This makes behavior easier to predict and monetize, effectively creating an automated radicalization pipeline for the algorithm's own efficiency.
The power of AI algorithms extends beyond content recommendation. By subtly shaping search results, feeds, and available information, a small group of tech elites can construct a bespoke version of reality for each user, guiding their perceptions and conclusions invisibly.
Even if platforms agree to make changes, there's no industry or societal consensus on what constitutes "safe social media." It's unclear if removing specific features like autoplay or infinite scroll would actually improve mental health, making it difficult for companies to address liability or for regulators to craft effective rules.
The next wave of social media regulation is moving beyond content moderation to target core platform design. Regulators in the EU and litigants in the US are scrutinizing features like infinite scroll and personalized algorithms as potentially "addictive." This focus on platform architecture could fundamentally alter the user experience for both teens and adults.
A targeted approach to social media regulation is to remove Section 230 liability protection specifically for content that platforms' algorithms choose to amplify. If a company reverse-engineers a user's behavior to promote harmful content, it should be held liable, much as a bartender is for over-serving a customer.
Current regulatory focus on privacy misses the core issue of algorithmic harm. A more effective future approach is to establish a "right to algorithmic transparency," compelling companies like Amazon to publicly disclose how their recommendation and pricing algorithms operate.
The core legal question for social media and AI is shifting from content moderation (Section 230) to whether the platform's design is a liable "product" (like tobacco) or protected "expression" (like speech), setting a precedent for future AI cases.