The common analogy between regulating AI and nuclear weapons is flawed. Nuclear development requires physically trackable, interceptable materials and facilities like enrichment plants. In contrast, AI models are software and weights, which are diffuse and far more difficult to monitor and control, presenting a fundamentally different and harder regulatory challenge.
Thompson highlights a critical tension for OpenAI. By agreeing to work with the Pentagon, OpenAI aligns with the broader American public's expectations but clashes with the anti-authoritarian ethos of its core talent base in San Francisco. This creates a difficult internal and recruitment dynamic that Anthropic, whose stance is popular in the tech community, largely avoids.
The conflict between Anthropic and the government is not a simple policy dispute but the beginning of a larger societal shift. Thompson posits that as AI becomes a true source of power, it forces us to re-examine fundamental questions about governance, rights, and authority that have been considered settled for centuries. The nature of who holds power and how it is wielded is back on the table.
The government's stated concern about Anthropic being a 'supply chain risk' is not merely a procurement issue. Thompson interprets it as a strategic move to punish the company. The underlying goal is to prevent any entity that won't be 'subservient' to the state from building an independent power base, especially one derived from a technology as potent as AI.
Ben Thompson argues that AI companies like Anthropic cannot operate in a vacuum of ideals. The fundamental reality is that laws and property rights are enforced by the state's monopoly on violence. As AI becomes a significant source of power, the government will inevitably assert control over it, making any private company's defiance a direct challenge to the state's authority.
The debate over Anthropic's stance is presented as a choice: trust CEO Dario Amodei's judgment or the messy democratic process. Thompson notes that while frustration with politics is understandable, explicitly preferring an unelected, unaccountable executive to make weighty national decisions is a conscious move away from democratic principles, a fraught implication many supporters overlook.
Counterintuitively, Thompson argues against cutting China off from Taiwan's semiconductor manufacturing (TSMC). A China that depends on Taiwan's fabs has a strong incentive not to act aggressively toward the island. Creating a situation where the U.S. relies on Taiwan while China does not increases the risk of conflict, because disabling that key U.S. asset could then become China's optimal move.
The massive upfront CapEx for AI models is only viable when serving the entire market, not just government contracts. Thompson cites Intel's early decision to design for the large consumer market rather than exclusively for the military, which accelerated its capabilities far beyond what government-funded projects could achieve. This economic reality ensures private companies will remain at the forefront of AI development.
