The race to build AGI, framed with "religious zealotry," puts hyperscalers in a prisoner's dilemma where none can slow down. This narrative justifies abandoning prior "net zero by 2030" commitments in favor of immediate, power-intensive buildouts using fossil fuels, under the belief that the eventual "machine God" will solve the resulting climate problems.
Game theory ensures that AI development won't stop. For competing nations like the US and China, the risk of falling behind outweighs the collective risk of developing the technology. This dynamic makes the AI race an unstoppable force, mirroring the Cold War nuclear arms race and rendering calls for a pause futile.
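The prisoner's dilemma structure above can be made concrete with a toy payoff matrix. All payoff values below are hypothetical, chosen only to illustrate why "race" ends up as the dominant strategy:

```python
# Illustrative prisoner's dilemma for the AI race (hypothetical payoffs).
# Each player chooses "pause" or "race"; tuples are (row, column) utilities.
payoffs = {
    ("pause", "pause"): (3, 3),   # both slow down: shared safety benefit
    ("pause", "race"):  (0, 5),   # the pauser falls catastrophically behind
    ("race",  "pause"): (5, 0),   # the racer captures the lead
    ("race",  "race"):  (1, 1),   # both race: mutual risk, no relative gain
}

def best_response(opponent_move):
    """Return the row player's payoff-maximizing reply to the opponent."""
    return max(["pause", "race"],
               key=lambda mine: payoffs[(mine, opponent_move)][0])

# "race" beats "pause" no matter what the rival does, even though
# (race, race) leaves both players worse off than (pause, pause).
print(best_response("pause"))  # -> race
print(best_response("race"))   # -> race
```

With these payoffs, racing is a dominant strategy for both sides, which is exactly why unilateral calls for a pause fail.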
The idea that AI development is a winner-take-all race to AGI is a compelling story that simplifies complex realities. This narrative is strategically useful as it creates a pretext for aggressive, "do whatever it takes" behavior, sidestepping the messier nature of real-world conflict.
Many top AI CEOs openly acknowledge the extinction-level risks of their work, with some putting the odds as high as 25%. Yet they feel powerless to stop the race: if a CEO paused for safety, investors would simply replace them with someone willing to push forward, creating a systemic trap in which everyone sees the danger but no one can afford to hit the brakes.
Major tech companies view the AI race as a life-or-death struggle. This "existential crisis" mindset explains their willingness to spend astronomical sums on infrastructure, prioritizing survival over short-term profitability. Their spending is a defensive moat-building exercise, not just a rational pursuit of new revenue.
For years, the tech industry criticized Bitcoin's energy use. Now, the massive energy needs of AI training have forced Silicon Valley to prioritize energy abundance over purely "green" initiatives. Companies like Meta are building huge natural gas-powered data centers, a major ideological shift.
Top AI leaders are motivated by a competitive, ego-driven desire to create a god-like intelligence, believing it grants them ultimate power and a form of transcendence. This "winner-takes-all" mindset leads them to rationalize immense risks to humanity, framing it as an inevitable, thrilling endeavor.
A fundamental tension on OpenAI's board was a safety catch-22: some advocated slowing down, while others argued that excessive caution would let a less scrupulous competitor reach AGI first, creating an even greater risk for humanity. This paradox fueled internal conflict and justified a rapid development pace.
Regardless of potential dangers, AI will be developed relentlessly. Game theory dictates that any nation or company that pauses or slows down will be at a catastrophic disadvantage to competitors who don't. This competitive pressure ensures the technology will advance without brakes.
Companies are spending unsustainable amounts on AI compute, not because the ROI is clear, but as a form of Pascal's Wager. The potential reward of leading in AGI is seen as infinite, while the cost of not participating is catastrophic, justifying massive, otherwise irrational expenditures.
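The Pascal's Wager logic above reduces to a simple expected-value comparison. The probability and dollar figures below are entirely hypothetical, chosen only to show how a small chance at an enormous prize swamps any finite spend:

```python
# Illustrative expected-value framing of the AI "Pascal's Wager"
# (all probabilities and dollar figures are hypothetical).
P_AGI = 0.10          # assumed chance the bet on AGI pays off
REWARD = 10_000e9     # "near-infinite" upside if it does (~$10T)
CAPEX = 100e9         # annual compute spend (~$100B)

ev_participate = P_AGI * REWARD - CAPEX   # 0.10 * $10T - $100B = $900B
ev_sit_out = 0.0                          # sitting out forfeits the upside

# Even a modest probability of a vast prize dwarfs the expenditure,
# which is how "otherwise irrational" spending gets justified.
print(ev_participate > ev_sit_out)  # -> True
```

As the assumed reward grows toward infinity, no finite capex changes the conclusion, which is what makes the wager framing so resistant to ordinary ROI analysis.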
Current AI models suffer from negative unit economics, where costs rise with usage. To justify immense spending despite this, builders pivot from business ROI to "faith-based" arguments about AGI, framing it as an invaluable call option on the future.
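"Negative unit economics" means the marginal cost of serving a query exceeds the revenue it brings in, so losses scale with adoption. A quick sketch with assumed per-query figures (both numbers are hypothetical) shows why growth deepens the hole rather than filling it:

```python
# Negative unit economics, sketched with assumed per-query figures.
price_per_query = 0.002   # hypothetical revenue per API call ($)
cost_per_query = 0.005    # hypothetical inference cost per call ($)

def monthly_margin(queries):
    """Gross margin: with cost above price, more usage means bigger losses."""
    return queries * (price_per_query - cost_per_query)

for q in (1_000_000, 10_000_000):
    print(q, round(monthly_margin(q), 2))
# A tenfold increase in usage produces a tenfold increase in losses.
```

This is the inverse of classic software economics, where near-zero marginal cost makes scale the path to profit, and it is why the justification shifts from unit-level ROI to the AGI "call option" argument.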