The core issue isn't about specific contract terms but about a fundamental conflict over whether a private tech company can dictate national security policy to a sovereign government, especially concerning technologies with world-altering potential akin to nuclear weapons.
CEO Dario Amodei reportedly gives employees 'The Making of the Atomic Bomb,' suggesting he views powerful AI as analogous to nuclear technology. This implies he anticipated an inevitable confrontation with the government that could lead to nationalization, not just a simple commercial partnership.
The Department of War's aggressive actions against Anthropic stemmed from information asymmetry. Knowing war was imminent, the government viewed Anthropic's contractual debates and unresponsiveness not as principled stands but as critical unreliability and supply chain risk in a moment of crisis.
Netflix's bid for Warner Bros was a masterstroke that drove up the price, forcing competitor Paramount into a highly leveraged acquisition with a difficult integration. Netflix not only weakened two rivals but also collected a $2.8 billion breakup fee in the process.
Seemingly reasonable terms like 'no autonomous lethal weapons' are impossible for a private company to enforce. They require moral and legal judgments about warfare—like defining a civilian or collateral damage—that are the exclusive and complex domain of a sovereign government, not a tech vendor.
Andreessen recounts meetings where officials detailed a plan to control AI by limiting it to 'two or three big companies working closely with the government.' This strategy involves protecting these giants from startup competition and even classifying the underlying math to centralize power.
Block's 40% layoffs may be more indicative of a necessary correction after years of over-hiring and inefficiency than of pure AI displacement. The anecdote of employees with 'no tasks' suggests the company was bloated, and AI provides a forward-looking justification for rightsizing.
