
As AI evolves into a significant source of power, private companies developing it cannot ignore governments. Ben Thompson argues that the state, defined by its monopoly on violence (the "people with guns"), will inevitably assert control over any technology this powerful, overriding corporate autonomy.

Related Insights

The dispute highlights a core tension for democracies: how to compete with an authoritarian state like China, which can command its AI labs without debate. The pressure to maintain a military edge may force the U.S. to adopt more coercive policies toward its own private tech companies, compromising the free-market principles it aims to defend.

The principle that governments must hold a monopoly on overwhelming force should extend to superintelligence. AI at that level has the power to disorient political systems and financial markets, making its private control untenable. The state cannot be secondary to any private entity in this domain.

Ben Thompson argues that AI companies like Anthropic cannot operate in a vacuum of ideals. The fundamental reality is that laws and property rights are enforced by the state's monopoly on violence. As AI becomes a significant source of power, the government will inevitably assert control over it, making any private company's defiance a direct challenge to the state's authority.

The standoff between Anthropic and the Pentagon marks the moment abstract discussions about AI ethics became a concrete geopolitical conflict. The power to define the ethical boundaries of AI is now synonymous with the power to shape societal norms and military doctrine, making it a contested and critical arena of national power.

The conflict between Anthropic and the government is not a simple policy dispute but the beginning of a larger societal shift. Thompson posits that as AI becomes a true source of power, it forces us to re-examine fundamental questions about governance, rights, and authority that have been considered settled for centuries. The nature of who holds power and how it is wielded is back on the table.

The relationship between governments and AI labs is analogous to that between European powers and chartered firms like the British East India Company, which wielded immense, semi-sovereign power: the Company raised its own army and conquered much of India. Today's private tech firms similarly shape new frontiers with opaque, quasi-sovereign power.

The core conflict is not a simple contract dispute but a fundamental question of governance. Should unelected tech executives set moral boundaries on military technology, or should democratically elected leaders have full control over its lawful use? This highlights the challenge of integrating powerful, privately developed AI into state functions.

By threatening to force Anthropic to remove its military use restrictions, the Pentagon is acting against the free-market principles that fostered U.S. tech dominance. This overreach, with the government dictating how a private company runs its business and sets its policies, resembles the behavior of state-controlled economies.

Ben Thompson argues that if AI is as powerful as its creators claim, they must anticipate a forceful government response. Private companies unilaterally setting restrictions on dual-use technology will be seen as an intolerable challenge to state power, leading to direct conflict.

The intense state interest in regulating tech like crypto and AI is a response to the tech sector's rise to a power level that challenges the state. The public narrative is safety, but the underlying motivation is maintaining control over money, speech, and ultimately, the population.