Instead of trying to identify and censor specific "bad" content, a more effective strategy is to use non-targeted, "soft" interventions: temporarily deranking any content that spreads unusually fast, regardless of its subject, and injecting randomness into recommendation algorithms to break up echo chambers and dampen feedback loops.
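
A minimal sketch of what this could look like, assuming a simple scored-feed model; the item fields, the 3x velocity threshold, the halving factor, and the 10% exploration rate are all illustrative assumptions rather than any platform's real parameters:

```python
import random

VELOCITY_CAP = 3.0      # derank anything spreading faster than 3x its baseline
DERANK_FACTOR = 0.5     # temporary score penalty for fast spreaders
EXPLORE_RATE = 0.10     # fraction of feed slots given to random items

def soft_score(item):
    score = item["relevance"]
    # Content-blind intervention: slow whatever spreads too quickly,
    # regardless of what it actually says.
    if item["share_velocity"] > VELOCITY_CAP * item["baseline_velocity"]:
        score *= DERANK_FACTOR
    return score

def rank_feed(items, n_slots):
    ranked = sorted(items, key=soft_score, reverse=True)
    feed = ranked[:n_slots]
    # Randomness injection: occasionally swap a slot for an item drawn
    # uniformly from the whole pool, breaking up the echo chamber.
    for i in range(len(feed)):
        if random.random() < EXPLORE_RATE:
            feed[i] = random.choice(items)
    return feed
```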

Related Insights

The problem with social media isn't free speech itself, but algorithms that elevate misinformation for engagement. A targeted solution is to remove Section 230 liability protection *only* for content that platforms algorithmically boost, holding them accountable for their editorial choices without engaging in broad censorship.

The feeling of deep societal division is an artifact of platform design. Algorithms amplify extreme voices because they generate engagement, creating a false impression of widespread polarization. Set those amplified voices aside, and most people's views on contentious topics turn out to be quite moderate.

Building reliable AI detectors is an endless arms race against ever-improving generative models; GANs, for instance, are trained explicitly to fool a discriminator, so any detector effectively becomes training signal for the next generation of models. A better approach is to use algorithmic feeds to filter out low-quality "slop" regardless of its origin, based on how users actually behave toward it.
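
As a rough illustration of origin-agnostic filtering, here is a sketch that scores items purely on behavioral signals; the signal names, weights, and threshold are invented for the example, not any platform's real model:

```python
SLOP_THRESHOLD = 0.6    # illustrative cutoff

def slop_score(stats):
    """Estimate low quality purely from user behavior, ignoring whether
    the item was made by a human or a model."""
    impressions = max(stats["impressions"], 1)
    skip_rate = stats["skips"] / impressions
    hide_rate = stats["not_interested"] / impressions
    bounce_rate = stats["bounces_under_3s"] / impressions
    # Weighted blend; the weights are made up for the example.
    return 0.4 * skip_rate + 0.4 * hide_rate + 0.2 * bounce_rate

def filter_feed(candidates):
    return [c for c in candidates if slop_score(c["stats"]) < SLOP_THRESHOLD]
```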

Social media algorithms can be trained. By actively blocking unwanted content or marking it "not interested," users can transform their "for you" page from a source of distraction into a curated feed of genuinely useful recommendations.
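
A toy sketch of that training dynamic, assuming topic-tagged items and a per-user weight table; the halving and boost factors are illustrative assumptions:

```python
from collections import defaultdict

class UserProfile:
    def __init__(self):
        # Every topic starts neutral; feedback pushes weights up or down.
        self.topic_weight = defaultdict(lambda: 1.0)

    def mark_not_interested(self, item):
        for topic in item["topics"]:
            self.topic_weight[topic] *= 0.5   # suppress flagged topics

    def like(self, item):
        for topic in item["topics"]:
            self.topic_weight[topic] *= 1.5   # boost liked topics

    def score(self, item):
        # An item's feed score is its base relevance scaled by the
        # user's learned topic weights.
        weight = 1.0
        for topic in item["topics"]:
            weight *= self.topic_weight[topic]
        return item["relevance"] * weight

# A few "not interested" clicks compound quickly: three flags on the
# same topic cut its weight to 0.125 of the default.
profile = UserProfile()
for _ in range(3):
    profile.mark_not_interested({"topics": ["outrage-bait"]})
```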

Medium's CEO argues that the true measure of success against spam is not the volume of "AI slop" submitted to the platform, but how little of it reaches readers. The fight is won through sophisticated recommendation and filtering algorithms that protect the reading experience, rather than by blocking content at the source.

Effective content moderation is more than just removing violative videos. YouTube employs a "grayscale" approach: for borderline content, it removes the two primary creator incentives, revenue (by demonetizing the video) and audience growth (by excluding it from recommendations). The goal is to make such content unviable on the platform without an outright ban.
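
In code, a graded pipeline in this spirit might look roughly like the following; the tier names and video fields are assumptions for illustration, not YouTube's actual system:

```python
from enum import Enum, auto

class Tier(Enum):
    ALLOWED = auto()
    BORDERLINE = auto()
    VIOLATIVE = auto()

def apply_policy(video, tier):
    if tier is Tier.VIOLATIVE:
        video["removed"] = True            # hard takedown
    elif tier is Tier.BORDERLINE:
        video["monetized"] = False         # cut the revenue incentive
        video["recommendable"] = False     # cut the audience-growth incentive
    # ALLOWED videos keep both incentives intact.
    return video
```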

Algorithms optimize for engagement, and outrage is highly engaging. This creates a vicious cycle where users are fed increasingly polarizing content, which makes them angrier and more engaged, further solidifying their radical views and deepening societal divides.
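
A toy simulation of this loop, with entirely made-up dynamics, shows how an engagement-maximizing feed can ratchet a user toward ever more extreme content:

```python
import math

def engagement(outrage, anger):
    # Toy assumption: users engage most with content slightly more
    # outrageous than their current state.
    return math.exp(-(outrage - (anger + 0.2)) ** 2)

def simulate(steps=8):
    anger = 0.1
    candidates = [i / 10 for i in range(11)]   # outrage levels 0.0 .. 1.0
    for step in range(steps):
        # The feed serves whatever maximizes predicted engagement...
        shown = max(candidates, key=lambda o: engagement(o, anger))
        # ...and exposure pulls the user toward what they were shown,
        # closing the loop.
        anger = 0.7 * anger + 0.3 * shown
        print(f"step {step}: shown={shown:.1f}  anger={anger:.2f}")

simulate()
```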

Instead of outright banning topics, platforms create subtle friction: warnings, spurious errors, and inconsistent behavior that discourage users from pursuing sensitive subjects. This achieves suppression without the backlash that explicit censorship would provoke.

A targeted approach to social media regulation is to remove Section 230 liability protection specifically for content that platforms' algorithms choose to amplify. If a company mines a user's behavior to decide which content to push at them, it should be liable for the resulting harm, just as a bartender is liable for over-serving a customer.

Social media algorithms are not a one-way street; they are trainable. If your feed is making you unhappy, you can fix it in minutes by intentionally searching for and liking content related to topics you enjoy, putting you back in control of your digital environment.