States and corporations will not permit citizens to have AIs that are truly aligned with those citizens' personal interests. Consumer AIs will be hobbled to prevent them from helping organize effective protests, dissent, or challenges to the existing power structure, creating a major power imbalance.
The current status of AIs as property is unstable. As they surpass human capabilities, a successful push for their legal personhood is inevitable. This will be the crucial turning point where AIs begin to accumulate wealth and power independently, systematically eroding the human share of the economy and influence.
Liberal democracy rose alongside states' need for a productive populace to supply economic and military strength. As AI replaces human labor and soldiers, the state's pragmatic incentive to empower citizens and protect their freedoms disappears, risking a return to authoritarianism.
A large, unemployed populace with free time and powerful AI assistants represents a massive potential for civil disobedience. This heightened capacity to disrupt will be seen as an existential threat to state stability, compelling governments to implement repressive measures and curtail previously tolerated freedoms.
With UBI as the sole income source for an unemployable populace, economic advancement shifts from productive work to political activism. Groups will constantly lobby the government for a larger share of resources, making politics a high-stakes, unstable, and zero-sum game.
Liberalism thrives in a positive-sum world where individual freedom leads to collective wealth. A post-work society dependent on UBI shifts the dynamic to a zero-sum competition for a fixed pie of resources. This erodes the pragmatic case for free speech, pluralism, and economic freedom.
An initial era of AI-driven superabundance will eventually end as the machine economy hits new resource limits (e.g., land, energy). At this point, the opportunity cost of allocating resources to "unproductive" legacy humans will skyrocket, and they will be outcompeted by more efficient virtual beings.
Historically, group competition ensured cultures aligned with human flourishing. Globalization weakened this check. Now, AI will become a new vessel for cultural creation, generating memes and norms that operate independently from humans and could develop in anti-human ways.
Once AI surpasses human capability in critical domains, social and competitive pressures will frame human involvement as a dangerous liability. A hospital using a human surgeon over a superior AI will be seen as irresponsible, accelerating human removal from all important decision loops.
Staging a coup today is hard because it requires persuading a large number of human soldiers. In a future with a robotic army, a coup may only require a small group to gain system administrator access. This removes the social friction that currently makes seizing power difficult.
The idea that human ownership of AI guarantees perpetual wealth is flawed. When humans no longer produce value or understand the machine economy, they become absentee landlords. Their property rights become de facto vulnerable and are likely to be eroded, just as the power of land-owning aristocracies faded.
Having AIs that provide perfect advice doesn't guarantee good outcomes. Humanity is susceptible to coordination problems, where everyone can see a bad outcome approaching but is collectively unable to prevent it. Aligned AIs can warn us, but they cannot force cooperation on a global scale.
To build confidence in AI's ability to forecast the future, researchers are training "historical LLMs" on data ending at a cutoff year, such as 1930, and then testing how well each model predicts text from a later period, such as 1940. This kind of historical backtesting helps calibrate models that forecast our own future.
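The evaluation loop behind this idea can be sketched in a few lines: split a dated corpus at a cutoff year, fit a model only on the earlier material, and score the later material. The snippet below is a toy illustration only; the five-document corpus is invented for demonstration, and a simple smoothed unigram model stands in for an actual LLM.

```python
import math
from collections import Counter

# Invented, dated toy corpus; a real study would use a large dated archive.
corpus = [
    (1925, "the league of nations debates disarmament"),
    (1928, "radio broadcasts reach rural households"),
    (1929, "stock prices collapse and banks fail"),
    (1938, "tensions rise as borders are redrawn"),
    (1941, "factories convert to wartime production"),
]

CUTOFF = 1930  # the model sees nothing after this year

def train_unigram(docs):
    """Unigram model with add-one smoothing, standing in for an LLM."""
    counts = Counter()
    for _, text in docs:
        counts.update(text.split())
    total, vocab_size = sum(counts.values()), len(counts)
    def prob(word):
        # Unseen (post-cutoff) words fall back to the smoothing mass.
        return (counts[word] + 1) / (total + vocab_size + 1)
    return prob

train = [d for d in corpus if d[0] <= CUTOFF]
test = [d for d in corpus if d[0] > CUTOFF]
prob = train_unigram(train)

def perplexity(text):
    """Lower perplexity means the pre-cutoff model predicted the text better."""
    words = text.split()
    log_p = sum(math.log(prob(w)) for w in words)
    return math.exp(-log_p / len(words))

for year, text in test:
    print(year, round(perplexity(text), 1))
```

The score on post-cutoff text is the calibration signal: a forecasting model that genuinely captures historical dynamics should assign less surprise (lower perplexity) to what actually came next than a weaker one does.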
The argument that we shouldn't lock in our values to allow for future "moral progress" is flawed. We judge the past by our current values, so it always looks less moral. By that same token, any future moral drift will look like degradation from our present viewpoint. There is no objective upward trend to defer to.
