AI tools could give the president granular, real-time control over the entire federal bureaucracy. This concept of a "unitary artificial executive" threatens to centralize immense power, enabling a president to override the independent functions and expertise of civil servants at scale.
The most pressing danger from AI is not a hypothetical superintelligence but its use as a tool for societal control. The immediate risk is an Orwellian future in which AI censors information, rewrites history to serve political agendas, and enables mass surveillance: a threat far more tangible than any science-fiction scenario.
While the public focuses on AI's potential, a small group of tech leaders is exploiting the current unregulated environment to amass unprecedented power and wealth. The federal government is even blocking state-level regulation, ensuring that this handful of individuals gains extraordinary control.
The most immediate danger of AI is its potential for governmental abuse. Concerns focus on embedding political ideology into models and porting social media's censorship apparatus to AI, enabling unprecedented surveillance and social control.
While mitigating catastrophic AI risks is critical, the argument for safety can be used to justify placing powerful AI exclusively in the hands of a few actors. This centralization, intended to prevent misuse, simultaneously creates the monopolistic conditions for the Intelligence Curse to take hold.
Public fear of AI often focuses on dystopian, "Terminator"-like scenarios. The more immediate and realistic threat is Orwellian: governments leveraging AI to surveil, censor, and embed subtle political biases into models to control public discourse and undermine freedom.
AI gives those in power a structural advantage by automating government systems. Leaders can bypass the traditional unwieldiness of human bureaucracy: by changing an AI system's parameters, an executive can instantly impose their will across every level of government, concentrating power.
When a state's power derives from AI rather than human labor, its dependence on its citizens diminishes. This creates a dangerous political risk, as the government loses the incentive to serve the populace, potentially leading to authoritarian regimes that are immune to popular revolt.
The White House plans an executive order to "kneecap state laws aimed at regulating AI." This move, favored by some tech startups, would eliminate the existing patchwork of state-level safeguards around discrimination and privacy without necessarily replacing them with federal standards, creating a regulatory vacuum.
The President's AI executive order aims to create a unified, industry-friendly regulatory environment. A key component is an "AI litigation task force" designed to challenge and preempt the growing number of state-level AI laws, centralizing control at the federal level and sidelining state governance.
Making powerful AI open-source creates risks from rogue actors, but it is preferable to centralized control by a single entity. Widespread access acts as a deterrent based on mutually assured destruction, preventing any one group from using AI as a tool for absolute power.