The AI landscape splits into three camps: (1) frontier labs on a "superintelligence quest," absorbing most of the capital; (2) fundamental researchers who believe the current approach is flawed; and (3) pragmatists building value with today's "good enough" AI.

Related Insights

With industry dominating large-scale compute, academia's function is no longer to train the biggest models. Instead, its value lies in pursuing unconventional, high-risk research in areas like new algorithms, architectures, and theoretical underpinnings that commercial labs, focused on scaling, might overlook.

The US AI strategy is dominated by a race to build a foundational "god in a box" Artificial General Intelligence (AGI). In contrast, China's state-directed approach currently prioritizes practical, narrow AI applications in manufacturing, agriculture, and healthcare to drive immediate economic productivity.

To balance AI hype with reality, leaders should create two distinct teams. One focuses on generating measurable ROI this quarter using current AI capabilities. A separate "tiger team" incubates high-risk, experimental projects at startup speed, hedging against long-term disruption.

Instead of a single "AGI" event, AI progress is better understood in three stages. We are in the "powerful tools" era. Next come "powerful agents" that act autonomously. The final stage, "autonomous organizations" that outcompete human-led ones, is much further off because of capability "spikiness": models remain superhuman at some tasks yet unreliable at others.

Companies like DeepMind, Meta, and SSI are using increasingly futuristic job titles like "Post-AGI Research" and "Safe Superintelligence Researcher." This isn't just semantics; it's a branding strategy to attract elite talent by framing their work as being on the absolute cutting edge, creating distinct sub-genres within the AI research community.

The massive capital expenditure in AI is largely confined to the "superintelligence quest" camp, which bets on godlike AI transforming the economy. Companies focused on applying current AI to create immediate economic value are not necessarily in a bubble.

OpenAI's CEO believes the term "AGI" is ill-defined and its milestone may have passed without fanfare. He proposes focusing on "superintelligence" instead, defining it as an AI that can outperform the best human at complex roles like CEO or president, creating a clearer, more impactful threshold.

The true commercial impact of AI will likely come from small, specialized "micro models" solving boring, high-volume business tasks. Though highly valuable in aggregate, these models are cheap to run, so the inference demand they generate cannot economically justify the current massive capital expenditure on AGI-focused data centers.

Ilya Sutskever argues that the AI industry's "age of scaling" (2020-2025) is insufficient for achieving superintelligence. He posits that the next leap requires a return to the "age of research" to discover new paradigms, as simply making existing models 100x larger won't be enough for a breakthrough.