The most significant barrier to creating a safer AI future is the pervasive narrative that its current trajectory is inevitable. The logic of "if I don't build it, someone else will" creates a self-fulfilling prophecy of recklessness, preventing the collective action needed to steer development.
The argument that the US must race China on AI without regulation ignores the lesson of social media. The US achieved technological dominance with platforms like Facebook, but the result was a more anxious, polarized, and less resilient society—a Pyrrhic victory.
AI's impact on labor will likely follow a deceptive curve: an initial boost in productivity as it augments human workers, followed by a collapse in demand for those workers once it masters their domains and replaces them entirely. The early boost creates a false sense of security, delaying necessary policy responses.
Just as 1990s free trade brought cheap goods by outsourcing manufacturing, AI will bring cheap digital services by outsourcing cognitive labor to a "new country of geniuses in a data center." This analogy suggests the result will be concentrated wealth and broad job displacement.
The business model for AI companions shifts the goal from capturing attention to manufacturing deep emotional attachment. In this race, as Tristan Harris explains, a company's biggest competitor isn't another app; it's other human relationships, creating perverse incentives to isolate users.
Unlike an advance in a specific field such as rocketry or medicine, an advance in general intelligence accelerates every scientific domain at once. This makes Artificial General Intelligence (AGI) a foundational technology that dwarfs the power of all others combined, including fire and electricity.
The social media newsfeed, a simple AI optimizing for engagement, was a preview of AI's power to create addiction and polarization. This "baby AI" caused massive societal harm because its objective was misaligned with human well-being, demonstrating the danger of even narrow AI systems.
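To make that misalignment concrete, here is a minimal, purely illustrative sketch of a feed ranker whose only objective is predicted engagement. The post attributes and numbers are invented, but the structure shows why content that maximizes clicks wins regardless of its value to the reader.

```python
# Toy sketch (illustrative only): a newsfeed ranker whose sole objective is
# predicted engagement. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # stand-in for a learned engagement model's output
    outrage_score: float      # emotional charge; tends to correlate with clicks
    informativeness: float    # value to the reader; never enters the objective

def rank_feed(posts: list[Post]) -> list[Post]:
    # The optimizer "sees" only engagement, so informativeness is invisible to it.
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

feed = rank_feed([
    Post("Calm, accurate explainer", predicted_clicks=0.03, outrage_score=0.1, informativeness=0.9),
    Post("Outrage bait",             predicted_clicks=0.12, outrage_score=0.9, informativeness=0.1),
])
print([p.text for p in feed])  # the outrage post wins on the only metric that counts
```

Nothing in the objective penalizes harm; the ranking simply reflects whatever correlates with engagement.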
Companies like Character.ai aren't just building engaging products; they're creating social engineering mechanisms to extract vast amounts of human interaction data. This data is a critical resource, a goldmine used to train larger, more powerful models in the race toward AGI.
A key strategic difference in the AI race is focus. US tech giants are 'AGI-pilled,' aiming to build a single, god-like general intelligence. In contrast, China's state-driven approach prioritizes deploying narrow AI today to boost productivity in manufacturing, agriculture, and healthcare.
International AI treaties are feasible. Just as nuclear arms control monitors uranium and plutonium, AI governance can monitor the choke point for advanced AI: high-end compute chips from companies like NVIDIA. Tracking the global distribution of these chips could verify compliance with development limits.
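As a rough illustration of what chip-level verification might look like in practice, the toy sketch below aggregates a hypothetical shipment registry into compute totals per jurisdiction and flags totals that exceed a declared cap. The registry format, the "H100-equivalent" unit, and the cap figures are all invented for illustration, not an actual treaty mechanism.

```python
# Hypothetical sketch: checking declared compute caps against a chip registry.
# Entries are (jurisdiction, accelerators shipped, H100-equivalents per unit).
shipments = [
    ("Country A", 40_000, 1.0),
    ("Country A", 10_000, 2.0),
    ("Country B", 5_000, 1.0),
]

declared_caps = {"Country A": 50_000, "Country B": 20_000}  # in H100-equivalents

# Sum each jurisdiction's total compute in H100-equivalents.
totals: dict[str, float] = {}
for jurisdiction, count, h100_eq in shipments:
    totals[jurisdiction] = totals.get(jurisdiction, 0.0) + count * h100_eq

# Flag any jurisdiction whose tracked total exceeds its declared cap.
for jurisdiction, total in totals.items():
    cap = declared_caps.get(jurisdiction, float("inf"))
    status = "OVER CAP" if total > cap else "within cap"
    print(f"{jurisdiction}: {total:,.0f} H100-eq ({status})")
```

The point is not the arithmetic but the design: because frontier training runs require large, physically traceable quantities of specialized hardware, a shared registry gives inspectors something concrete to audit, much as fissile-material accounting does for nuclear agreements.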
