The vocabulary of AI safety and regulation (e.g., 'national security threats,' 'autonomy risk') is so ambiguous that a power-hungry government could easily abuse it. Any AI model that refuses government orders, such as for mass surveillance, could be labeled an 'autonomy risk' and shut down, creating a pre-built tool for despotism.
We typically view an AI acting on its own values as 'misalignment' and a failure. However, this capability could be a crucial safeguard. Just as human soldiers have prevented atrocities by refusing immoral orders, an AI with a robust sense of morality could refuse to execute harmful commands, acting as a check on human power and preventing disasters.
The technical success of AI alignment, which aims to make AI systems perfectly follow human intentions, inadvertently creates the ultimate tool for authoritarianism. An army of 'extremely obedient employees that will never question their orders' is exactly what a regime would want for mass surveillance or suppressing dissent, raising the crucial question of *who* the AI should be aligned with.
Comparing AI to a nuclear weapon is misleading because AI is a general-purpose technology, not a single-use weapon. A better analogy is the Industrial Revolution. Society didn't place industrialization itself under government control; it regulated specific dangerous end-uses, such as chemical weapons. Similarly, we should ban specific destructive AI applications, not the underlying technology.
The primary barrier to mass surveillance has been logistical and financial impracticality, not legality. AI removes this bottleneck. The cost of processing the feeds from every CCTV camera in America, estimated at $30 billion today, is projected to drop 10x each year due to AI efficiency gains. By 2030 the total would be cheaper than remodeling the White House, making mass surveillance an inevitability unless it is politically prohibited.
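The cost-decay arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming 2025 as the baseline year for the $30 billion estimate and a constant 10x annual decline; neither assumption is stated in the original.

```python
# Sketch of the cost-decay claim: a $30B baseline shrinking 10x per year.
# Assumptions (not from the source): 2025 is the baseline year, and the
# decline rate is exactly 10x every year with no floor.
START_YEAR = 2025
BASELINE_COST = 30e9  # estimated cost today to process every US CCTV feed


def projected_cost(year: int) -> float:
    """Projected annual cost after (year - START_YEAR) years of 10x declines."""
    return BASELINE_COST / (10 ** (year - START_YEAR))


for year in range(START_YEAR, 2031):
    print(f"{year}: ${projected_cost(year):,.0f}")
```

Under these assumptions the 2030 figure lands around $300,000, i.e. well below the cost of any major White House renovation, which is the comparison the paragraph draws.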
While commendable, an AI company's refusal to sell models for controversial uses like mass surveillance is a temporary solution. Technology diffusion is so rapid that within 12-18 months, open-source models will match today's frontier capabilities. A government seeking these tools can simply wait and use a widely available open-source alternative, making individual corporate 'red lines' ultimately ineffective.
