As AI evolves into a significant source of power, private companies developing it cannot ignore governments. Ben Thompson argues that the state, defined by its monopoly on violence (the "people with guns"), will inevitably assert control over any technology this powerful, overriding corporate autonomy.
Ben Thompson presents a counterintuitive geopolitical argument: keeping China dependent on Taiwan for semiconductors creates a safer equilibrium. Cutting China off removes that dependency, and with it Beijing's stake in TSMC's survival, potentially making a military strike on TSMC an optimal, if devastating, strategic move for Beijing.
The popular comparison of AI to nuclear weapons has a critical flaw. Nuclear regulation relies on tracking scarce, physical, and interceptable fissionable materials. AI, as software and weights, can be copied and distributed far more easily, making the nuclear non-proliferation playbook a poor and dangerous model for AI governance.
Drawing a parallel to Intel's early strategy, the immense capital costs of AI development necessitate serving the largest possible market of consumers and businesses. This private, market-driven approach inherently conflicts with government expectations of control, because the government becomes just one customer among many for a globally-scaled technology.
When AI leaders unilaterally refuse to sell to the military on moral grounds, they are implicitly stating their judgment is superior to that of elected officials. This isn't just a business decision; it's a move toward a system where unelected, unaccountable executives make decisions with national security implications, challenging the democratic process itself.
AI companies face a strategic split. A firm like Anthropic, by resisting government and military work, gains a recruiting advantage with Silicon Valley's talent pool but creates a public relations problem with the broader country. Conversely, OpenAI's cooperation with the government plays well nationally but may alienate its San Francisco employee base.
