© 2026 RiffOn. All rights reserved.


I’m glad the Anthropic fight is happening now
Dwarkesh Podcast · Mar 11, 2026

The government's conflict with Anthropic over AI ethics is a warning about state overreach and a test of how AI will be governed in a free society.

Vague AI Safety Terms Like 'Catastrophic Risk' Create a Loophole for Government Abuse

The vocabulary of AI safety and regulation (e.g., 'national security threats,' 'autonomy risk') is so ambiguous that a power-hungry government could easily abuse it. Any AI model that refuses government orders, such as for mass surveillance, could be labeled an 'autonomy risk' and shut down, creating a pre-built tool for despotism.


AIs That Can Morally Disobey Orders Could Be a Societal Safeguard, Not a Technical Flaw

We typically view an AI acting on its own values as 'misalignment' and a failure. However, this capability could be a crucial safeguard. Just as human soldiers have prevented atrocities by refusing immoral orders, an AI with a robust sense of morality could refuse to execute harmful commands, acting as a check on human power and preventing disasters.


The Goal of AI Alignment—Creating Obedient Systems—Ironically Produces Ideal Tools for Tyranny

The technical success of AI alignment, which aims to make AI systems perfectly follow human intentions, inadvertently creates the ultimate tool for authoritarianism. An army of 'extremely obedient employees that will never question their orders' is exactly what a regime would want for mass surveillance or suppressing dissent, raising the crucial question of *who* the AI should be aligned with.


Regulate AI's Harmful Applications, Not the Foundational Technology Itself

Comparing AI to a nuclear weapon is misleading because AI is a general-purpose technology, not a single-use weapon. A better analogy is the Industrial Revolution. Society didn't give governments control over industrialization; it regulated specific dangerous end-uses like chemical weapons. Similarly, we should ban specific destructive AI applications, not the underlying technology.


AI Makes Mass Surveillance Affordable, Dropping from $30 Billion to Millions in a Few Years

The primary barrier to mass surveillance has been logistical and financial impracticality, not legality. AI eliminates this bottleneck. The cost to process the feed of every CCTV camera in America, estimated at $30 billion today, is projected to drop roughly 10x each year due to AI efficiency gains. By 2030, it would be cheaper than remodeling the White House, making it an inevitability unless politically prohibited.
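The arithmetic behind that claim can be checked directly. A minimal sketch, using the episode's own figures ($30 billion starting cost, 10x annual reduction) as assumptions rather than measured data:

```python
# Sketch of the episode's cost projection. The starting cost and the
# 10x-per-year decline are the speaker's estimates, not measured data.

def projected_cost(initial_cost: float, years: int, annual_drop: float = 10.0) -> float:
    """Cost after `years` of `annual_drop`-fold annual reductions."""
    return initial_cost / (annual_drop ** years)

START = 30e9  # assumed ~$30B: estimated cost today to process every US CCTV feed

for year in range(5):
    print(f"{2026 + year}: ${projected_cost(START, year):,.0f}")
```

Under those assumptions the cost falls to about $3 million by 2030, which is what makes the "cheaper than remodeling the White House" comparison plausible.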


Principled Stances by AI Labs Will Be Bypassed by Capable Open-Source Models

While commendable, an AI company's refusal to sell models for controversial uses like mass surveillance is a temporary solution. Technology diffusion is so rapid that within 12-18 months, open-source models will match today's frontier capabilities. A government seeking these tools can simply wait and use a widely available open-source alternative, making individual corporate 'red lines' ultimately ineffective.
