We scan new podcasts and send you the top 5 insights daily.
Major AI companies are described as modern 'empires' that operate by claiming resources not their own (data, IP), exploiting a global workforce, controlling knowledge production, and justifying their dominance with a 'good vs. evil' narrative.
Focusing solely on military-style AI power grabs is too narrow. Extreme power concentration is more likely to emerge from a messy interplay of three factors: active seizures of control, massive economic shifts from automation, and the erosion of society's ability to understand reality (epistemics).
While public attention fixates on speculative AI futures, a small group of tech leaders is using the current unregulated environment to amass unprecedented power and wealth. The federal government is even blocking state-level regulations, ensuring these few individuals gain extraordinary control.
The narrative that AI could be catastrophic ('summoning the demon') is used strategically. It creates a sense of danger that justifies why a small, elite group must maintain tight control over the technology, thereby warding off both regulation and competition.
Just as oil wealth allows elites in some countries to ignore their populations, control over AI could empower a new elite to maintain power without cultivating human productivity, leading to societal decay and loss of democratic legitimacy.
The relationship between governments and AI labs is analogous to that between European powers and chartered firms like the British East India Company, which wielded immense, semi-sovereign power: a private company that raised its own army and conquered much of India. The parallel highlights how today's private tech firms shape new frontiers with similarly opaque power.
Large AI labs cynically use existential risk arguments, originally from 'effective altruist' communities, to lobby for regulations that stifle competition. This strategy aims to create monopolies by targeting open-source models and international rivals like China.
Unlike previous tech waves, AI's core requirements—massive datasets, capital for compute, and vast distribution—are already controlled by today's largest tech companies. This gives incumbents a powerful advantage, making AI a technology that could sustain their dominance rather than disrupt them.
The concept of data colonialism—extracting value from a population's data—is no longer limited to the Global South. It now applies to creative professionals in Western countries whose writing, music, and art are scraped without consent to build generative AI systems, concentrating wealth and power in the hands of a few tech firms.
Meredith Whittaker argues the biggest AI threat is not a sci-fi apocalypse, but the consolidation of power. AI's core requirements—massive data, computing infrastructure, and distribution channels—are controlled by a handful of established tech giants, further entrenching their dominance.
By employing or bankrolling a majority of AI researchers, large tech firms dictate the research agenda. They also censor or fire researchers whose work exposes the harms and limitations of their commercial models, as happened to Dr. Timnit Gebru at Google.