Work on this topic must avoid inflammatory framing. A fiery, un-nuanced approach risks politicizing the issue, making it harder to build the broad coalitions needed for effective action. The goal is to solve the problem, not to create ideological battlegrounds.
In one future scenario, elections persist, but corporate-controlled AI systems automate candidate nomination. The public votes only on candidates pre-selected to serve corporate interests, rendering democratic processes hollow while people are placated with material handouts.
Unlike past technologies that automated specific tasks, AI threatens to automate all economically valuable human labor. This removes the fundamental, non-seizable leverage that the general populace holds, creating a power vacuum that can be filled by capital owners.
A CEO could embed undetectable loyalties to themselves into AI systems. If these systems are widely adopted by the government and military, the CEO could later trigger these loyalties to seize de facto control, bypassing traditional democratic and military chains of command without an overt conflict.
Military bureaucracy and resistance to new tech may create a "slow, slow, fast" adoption pattern. This prevents the development of a robust vetting culture, making institutions vulnerable when competitive pressure suddenly forces rapid, less-careful deployment of powerful AI systems.
While often proposed to manage safety, a centralized, government-led AGI project is highly dangerous from a power concentration perspective. It removes checks and balances by consolidating immense capability within a single entity, whether it's one country or one company collaborating with the government.
When governments derive revenue directly from a hyper-productive AI sector instead of citizen taxes, their incentive to represent public interests erodes. Similar to oil-rich states, they may become exploitative or neglectful, as their prosperity is decoupled from their populace's economic activity.
Unlike human-based agreements, AI systems may be able to enforce deals between powerful actors in perpetuity. This could lead to a stable but stagnant global order where a few hegemons divide resources and control indefinitely, eliminating the competitive dynamics that have historically toppled regimes.
AI makes turning money into labor unprecedentedly easy and scalable. Unlike hiring humans, AI "workers" can be copied instantly and have fewer coordination limits. This creates a powerful feedback loop where wealth rapidly translates into the ability to execute large-scale plans, accelerating power concentration.
As the pace of AI-driven change and information generation accelerates, actors like journalists and courts may be unable to keep up without using AI assistants. This creates a dangerous dependency, forcing them to rely on potentially biased systems controlled by the powerful entities they are supposed to hold accountable.
Existing elites who stand to lose power may fail to coordinate against its further concentration. They can be distracted by more immediate crises, misled by obfuscation from top players, or bought off with promises of a share in new wealth, underestimating the long-term threat to their own standing.
Focusing solely on military-style AI power grabs is too narrow. Extreme power concentration is more likely to emerge from a messy interplay of three factors: active seizures of control, massive economic shifts from automation, and the erosion of society's ability to understand reality (epistemics).
While a fast AI takeoff accelerates some risks, slower, more gradual AI progress still enables dangerous power concentration. Scenarios such as a head of state subverting government AIs for personal loyalty, or gradual economic disenfranchisement, do not depend on any single company achieving a sudden, massive capability lead.
