
Rather than intervening in content decisions, the government can foster free speech by creating a clear, predictable, and viewpoint-neutral regulatory environment. This prevents regulations from being weaponized as arbitrary "cudgels" against companies in response to political pressure, as seen in debanking controversies and recent European enforcement cases.

Related Insights

The problem with social media isn't free speech itself, but algorithms that elevate misinformation for engagement. A targeted solution is to remove Section 230 liability protection *only* for content that platforms algorithmically boost, holding them accountable for their editorial choices without engaging in broad censorship.

A U.S. diplomat argues that laws like the EU's Digital Services Act (DSA) and the UK's Online Safety Act create a chilling effect: by imposing vague obligations backed by massive fines, they push risk-averse corporations to over-censor, producing absurd outcomes such as parliamentary speeches being blocked.

As in the financial sector, tech companies are increasingly pressured to act as a de facto arm of the government, particularly on censorship. This has produced a power struggle, with some tech leaders now publicly pre-committing to resist future government requests.

Spain's proposed law making CEOs criminally responsible for platform content is not a broad policy move. It is viewed as a specific effort to control X, the only major social platform that hasn't "bent the knee" to government censorship demands.

While features like autoplay can be separated from speech, algorithmic personalization is much closer to protected editorial discretion. Attempts to regulate how platforms recommend content—the likely cause of many user harms—will face severe First Amendment challenges, making it the thorniest issue for policymakers.

Content moderation laws are difficult and slow to administer. A better approach is to require platforms to give users a portable export of their data and social graph, letting them switch services easily and creating real competitive pressure.

A targeted approach to social media regulation is to remove Section 230 liability protection specifically for content that platforms' algorithms choose to amplify. If a company mines a user's behavior to promote harmful content, it should be held liable, just as a bartender is for over-serving a customer.

While both the Biden administration's pressure on YouTube and Trump's threats against ABC are anti-free-speech, the former is more insidious. Surreptitious, behind-the-scenes censorship is harder to identify and fight publicly, making it a greater threat to open discourse than loud, transparent attacks that can be openly condemned.

Politicians are using anti-tech verdicts to demand a repeal of Section 230, but the logic is flawed. Abolishing the law would force platforms to become hyper-aggressive in their content moderation to avoid liability, directly contradicting the "free speech" goals these same critics often claim to support.

The intense state interest in regulating tech like crypto and AI is a response to the tech sector's rise to a power level that challenges the state. The public narrative is safety, but the underlying motivation is maintaining control over money, speech, and ultimately, the population.