Tyler Cowen's experience actively moderating his "Marginal Revolution" blog has made him more tolerant of large tech platforms removing content. Having seen firsthand that curation is necessary for good discourse, he views platform moderation not as censorship but as a private owner's prerogative to maintain quality.
YouTube's CEO justifies the platform's stricter past policies by citing the extreme uncertainty of early 2020 (e.g., conspiracy theories linking 5G towers to COVID-19). He implies moderation is not static but flexible, adapting to societal context: today's more open policies reflect the world's changed understanding, suggesting a temporal rather than ideological approach.
YouTube's content rules can change from week to week without warning. A sudden demonetization or age restriction can cripple an episode's reach after it's published, highlighting the significant platform risk creators face when distribution is controlled by a third party with opaque policies.
Elon Musk explains that shadow banning isn't outright deletion but reduced visibility. He compares it to the joke that the best place to hide a dead body is the second page of Google search results: the content still exists, but it's pushed so far down that it's effectively invisible.
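To make the mechanism concrete, here is a minimal Python sketch of downranking in a hypothetical score-based feed ranker. The `Post` structure, the `VISIBILITY` table, and the multiplier values are all invented for illustration; no real platform's ranking system is this simple.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float  # base relevance score from upstream ranking

# Hypothetical per-author visibility multipliers; 1.0 means normal reach.
VISIBILITY = {
    "flagged_account": 0.01,  # heavily downranked, but never deleted
}

def ranked_feed(posts: list[Post]) -> list[Post]:
    """Sort posts by score after applying visibility multipliers.

    Downranked posts are never removed from the result: they still
    exist, but they sink so far down the feed that few users ever see
    them, like a result on the second page of a search.
    """
    def score(p: Post) -> float:
        return p.engagement_score * VISIBILITY.get(p.author, 1.0)
    return sorted(posts, key=score, reverse=True)
```

The design point is that deletion and invisibility are different levers: the multiplier leaves the content retrievable while quietly removing its distribution.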
The problem with social media isn't free speech itself, but algorithms that amplify misinformation because it drives engagement. A targeted solution is to remove Section 230 liability protection *only* for content that platforms algorithmically boost, holding them accountable for their editorial choices without resorting to broad censorship.
Social media algorithms can be trained by their users. By actively blocking unwanted content or marking it "not interested," users can transform their "for you" page from a source of distraction into a valuable, curated feed of recommendations.
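As an illustration of that feedback loop, here is a toy Python model of a topic-weighted recommender. The class, the signal names, and the 0.2/1.2 weight adjustments are assumptions made up for this sketch, not any platform's real algorithm.

```python
from collections import defaultdict

class FeedModel:
    def __init__(self):
        # Per-topic weights; 1.0 is neutral.
        self.topic_weight = defaultdict(lambda: 1.0)

    def mark_not_interested(self, topic: str) -> None:
        # Strong negative signal: sharply downweight the topic.
        self.topic_weight[topic] *= 0.2

    def engage(self, topic: str) -> None:
        # Positive signal (likes, watch time): raise the topic's weight.
        self.topic_weight[topic] *= 1.2

    def rank(self, candidates: list[tuple[str, float]]) -> list[str]:
        # candidates are (topic, base_score) pairs from upstream retrieval.
        ordered = sorted(candidates,
                         key=lambda c: c[1] * self.topic_weight[c[0]],
                         reverse=True)
        return [topic for topic, _ in ordered]

model = FeedModel()
model.mark_not_interested("rage_bait")
model.engage("woodworking")
print(model.rank([("rage_bait", 0.9), ("woodworking", 0.5)]))
# -> ['woodworking', 'rage_bait']
```

Note that the "not interested" signal outweighs raw engagement here: even a post with a higher base score sinks once its topic has been downweighted, which is why a few deliberate clicks can reshape a feed.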
Medium's CEO argues the true measure of success against spam is not the volume of "AI slop" received, but how little of it reaches end-users. The fight is won through sophisticated recommendation and filtering algorithms that protect the reader experience, rather than by blocking content at the source.
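A hedged Python sketch of that distinction: `is_probably_slop`, the quality threshold, and the pipeline shape are placeholders, but they show the gap between measuring slop submitted and measuring slop that actually reaches readers.

```python
import random

def is_probably_slop(post: dict) -> bool:
    # Placeholder for a real spam/quality classifier.
    return post["quality_score"] < 0.3

def recommend(posts: list[dict], k: int = 10) -> list[dict]:
    # Filter at recommendation time instead of blocking at the source:
    # slop may be accepted into the system, but it is never surfaced.
    readable = [p for p in posts if not is_probably_slop(p)]
    return sorted(readable, key=lambda p: p["quality_score"], reverse=True)[:k]

posts = [{"quality_score": random.random()} for _ in range(1000)]
shown = recommend(posts)

# The two metrics diverge: roughly 30% of submissions are slop here,
# yet 0% of what readers see is, and the latter is the number that
# matters under this framing.
submitted = sum(map(is_probably_slop, posts)) / len(posts)
reached = sum(map(is_probably_slop, shown)) / max(len(shown), 1)
print(f"slop submitted: {submitted:.0%}, slop reaching readers: {reached:.0%}")
```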
A novel framework rates tech giants' content policies on the movie-rating scale: Apple is PG (no adult content on iOS), Microsoft is G (professional focus), Google is PG-13 (YouTube content), and Amazon is NC-17 (Kindle erotica). The analogy clarifies their distinct brand positions on sensitive content.
Threads' goal of being a more civil platform has successfully differentiated it from the "hyper-polarized" X. However, this moderation comes at a cost: it lacks the high-conflict conversations that drive news cycles and cultural relevance, which still happen on its more chaotic rivals.
Avoid building your primary content presence on platforms like Medium or Quora. These platforms inevitably shift focus from serving users to serving advertisers and their own bottom line, ultimately degrading reach and control for creators. Use them as spokes, but always own your central content hub.
While both the Biden administration's pressure on YouTube and Trump's threats against ABC are anti-free speech, the former is more insidious. Surreptitious, behind-the-scenes censorship is harder to identify and fight publicly, making it a greater threat to open discourse than loud, transparent attacks that can be openly condemned.