Jensen Huang's analogy frames AI not as a single technology but a full stack: energy, chips, infrastructure, models, and applications. This powerful mental model clarifies the distinct roles and investment opportunities at each layer of the AI economy, from utility companies to consumer-facing software.
Jensen Huang argues the "AI bubble" framing is too narrow. The real trend is a permanent shift from general-purpose to accelerated computing, driven by the end of Moore's Law. This shift powers not just chatbots, but multi-billion dollar AI applications in automotive, digital biology, and financial services.
The growth of AI is constrained not by chip design but by inputs like energy and High Bandwidth Memory (HBM). This shifts power to component suppliers and energy providers, who can gain leverage, demand equity stakes, and shape the entire AI ecosystem, much as a central bank controls the money supply.
India is building its AI ecosystem across five distinct layers: energy, infrastructure, compute, model development, and deployment. This "full-stack" approach treats energy as the critical base layer, recognizing that massive compute needs require a robust and scalable power supply, which is a key national advantage.
Huang reframes massive AI spending not as a bubble but as essential infrastructure buildout. He describes a five-layer stack (energy, chips, cloud, models, applications), arguing that large investments are necessary to build the entire foundation required to unlock economic benefits at the application layer.
While immense value is being *created* for end-users via applications like ChatGPT, that value is primarily *accruing* to companies with deep moats in the infrastructure layer—namely hardware providers like NVIDIA and hyperscalers. The long-term defensibility of model-makers remains an open question.
Historical tech cycles like the cloud and mobile demonstrate a consistent pattern: the application layer ultimately generates 5 to 10 times the value of the underlying infrastructure capital expenditure. With trillions being invested in AI infrastructure, future value creation at the application layer will be astronomically larger.
When a new technology stack like AI emerges, the infrastructure layer (chips, networking) inflects first and has the most identifiable winners. Sacerdote argues the application and model layers are riskier and less predictable, similar to the early, chaotic days of internet search engines before Google's dominance.
In 2026, the AI investment narrative will expand from foundational model creators to companies building applications and services. It will also encompass sectors enabling AI growth, such as energy generation and data centers, offering a wider range of investment opportunities beyond the initial tech giants.
Despite massive investment in chips (NVIDIA) and models (OpenAI), it is not yet clear where long-term value will concentrate. The entire stack is in flux. Models could be commoditized by open source, chips could face historical commoditization cycles, and new AI-native apps could capture the most value. We are only in the early innings of a 30-year shift.
Value in the AI stack will concentrate at the infrastructure layer (e.g., chips) and the horizontal application layer. The "middle layer" of vertical SaaS companies, whose value is primarily encoded business logic, is at risk of being commoditized by powerful, general AI agents.