Previous technological revolutions automated physical labor while enhancing human thinking. AI, by contrast, aims to replicate and surpass human cognitive abilities, a categorical shift that threatens the core of human economic value.
To avoid a future where a few companies control AI and hold society hostage, the underlying intelligence layer must be commoditized. This prevents "landlords" of proprietary models from extracting rent and ensures broader access and competition.
Pensioners receive benefits because they spent decades working, contributing to the system, and accumulating political bargaining power. A society of "forever pensioners" who never had that economic leverage would be at the mercy of the ruling elite's whims.
The primary bottleneck for advancing AI is high-quality, tacit data—skills and local insights that are hard to digitize. Individuals can retain economic value by guarding this information and using it to train personalized AI tools that work for them, not their employers.
Just as oil wealth allows elites in some countries to ignore their populations, control over AI could empower a new elite to maintain power without cultivating human productivity, leading to societal decay and loss of democratic legitimacy.
While mitigating catastrophic AI risks is critical, the argument for safety can be used to justify placing powerful AI exclusively in the hands of a few actors. This centralization, intended to prevent misuse, simultaneously creates the monopolistic conditions for the Intelligence Curse to take hold.
Previously, data privacy concerns were abstract for most people; the worst consequence was poorly targeted ads. Now, giving AI companies unfettered access to your professional data hands them exactly the material needed to train models that will automate your job.
Democracies historically emerged when diffuse economic actors needed non-violent ways to settle disputes. By making human labor obsolete, AI removes the primary bargaining chip individuals have, concentrating power and potentially dismantling democratic structures.
The "pyramid replacement" theory posits that AI will first make junior analyst and other entry-level positions obsolete. As AI becomes more agentic, it will climb the corporate ladder, systematically replacing roles from the base of the pyramid upwards.
Large companies will increasingly use AI to automate rote tasks and shrink payrolls. The safest career path is no longer a stable corporate job but rather becoming an "n of 1" expert who is irreplaceable or pursuing a high-risk entrepreneurial venture before the window of opportunity closes.
Even if jobs like judges are legally protected from direct AI replacement, they can be de facto automated. If every judge uses the same AI model for decision support, the outcome is systemic homogenization of judgment, creating a centralized point of failure without any formal automation.
