States and corporations will not permit citizens to have AIs that are truly aligned with their personal interests. These AIs will be hobbled so they cannot help organize effective protest, dissent, or challenges to the existing power structure, creating a major power imbalance between citizens and the institutions that control the technology.
The most pressing danger from AI isn't a hypothetical superintelligence but its use as a tool for societal control. The immediate risk is an Orwellian future in which AI censors information, rewrites history to serve political agendas, and enables mass surveillance, a threat far more tangible than any science-fiction scenario.
While the public focuses on AI's potential, a small group of tech leaders is using the current unregulated environment to amass unprecedented power and wealth. The federal government is even blocking state-level regulations, ensuring these few individuals gain extraordinary control.
The most immediate danger of AI is its potential for governmental abuse. The chief concerns are governments embedding political ideology into models and porting social media's censorship apparatus to AI, enabling unprecedented surveillance and social control.
A key, informal safety layer against AI doom is the institutional self-preservation of the developers themselves. It's argued that labs like OpenAI or Google would not knowingly release a model they believed was genuinely capable of overthrowing the government, opting instead to halt deployment and alert the authorities.
The principle that governments must hold a monopoly on overwhelming force should extend to superintelligence. AI at that level has the power to destabilize political systems and financial markets, making its private control untenable. The state cannot be secondary to any private entity in this domain.
Public fear of AI often focuses on dystopian, "Terminator"-like scenarios. The more immediate and realistic threat is Orwellian: governments leveraging AI to surveil, censor, and embed subtle political biases into models to control public discourse and undermine freedom.
AI provides a structural advantage to those in power by automating government systems. This allows leaders to bypass the traditional unwieldiness of human bureaucracy, making it trivial for an executive to change AI parameters and instantly exert their will across all levels of government, thereby concentrating power.
When a state's power derives from AI rather than human labor, its dependence on its citizens diminishes. This creates a dangerous political risk, as the government loses the incentive to serve the populace, potentially leading to authoritarian regimes that are immune to popular revolt.
A large, unemployed populace with free time and powerful AI assistants represents a massive potential for civil disobedience. This heightened capacity to disrupt will be seen as an existential threat to state stability, compelling governments to implement repressive measures and curtail previously tolerated freedoms.
Democracies historically emerged because diffuse economic actors, on whose labor and cooperation states depended, needed non-violent ways to settle disputes. By making human labor obsolete, AI removes that primary bargaining chip, concentrating power and potentially dismantling democratic structures.