The two dominant negative narratives about AI are mutually exclusive: that it is an overhyped bubble, and that it is on the verge of creating a dangerous superintelligence. If AI is a bubble, it cannot be that powerful; if it is that powerful, the enormous economic activity around it is justified. The contradiction exposes the ideological roots of the doomer movement.

Related Insights

Blinder asserts that while AI is a genuine technological revolution, historical parallels (autos, PCs) show such transformations are always accompanied by speculative bubbles. He argues it would be contrary to history if this were not the case, suggesting a major market correction and corporate shakeout are inevitable.

The massive capital expenditure in AI is largely confined to the "superintelligence quest" camp, which bets on godlike AI transforming the economy. Companies focused on applying current AI to create immediate economic value are not necessarily in a bubble.

Unlike previous technologies like the internet or smartphones, which enjoyed years of positive perception before scrutiny, the AI industry immediately faced a PR crisis of its own making. Leaders' early and persistent "AI will kill everyone" narratives, often to attract capital, have framed the public conversation around fear from day one.

The negative public discourse around AI may be heavily influenced by a few tech billionaires funding a "Doomer Industrial Complex." Through organizations like the Future of Life Institute, they finance journalism fellowships and academic grants that consistently produce critical AI coverage, distorting the public debate.

The rhetoric around AI's existential risks also functions as a competitive tactic. Some labs used these narratives to scare off investors, regulators, and potential competitors, effectively "pulling up the ladder" to cement their market lead under the guise of safety.

For current AI valuations to be realized, AI must deliver unprecedented efficiency, likely causing mass job displacement. This would disrupt the consumer economy that supports these companies, creating a fundamental contradiction where the condition for success undermines the system itself.

The most immediate systemic risk from AI may not be mass unemployment but an unsustainable financial market bubble. Sky-high valuations of AI-related companies pose a more significant short-term threat to economic stability than the still-developing impact of AI on the job market.

A genuine technological wave, like AI, creates rapid wealth, which inherently attracts speculators. Bubble-like behavior is therefore a predictable side effect of a real revolution, not proof that the underlying technology is fake; the two arrive as a pair.

The discourse around AGI is caught in a paradox. Either it is already emerging, in which case it is less a cataclysmic event than an incremental software improvement, or it remains a perpetually receding future goal. This captures the tension between the hype of superhuman intelligence and the mundane reality of software development.

The continuous narrative that AGI is "right around the corner" is no longer just about technological optimism. It has become a financial necessity to justify over a trillion dollars in expended or committed capital, preventing a catastrophic collapse of investment in the AI sector.