Unlike previous technology waves, AI's core inputs—massive datasets, capital for compute, and vast distribution—are already controlled by today's largest tech companies. This gives incumbents a powerful advantage, making AI a technology more likely to entrench their dominance than to disrupt it.
NVIDIA's investments in startups that then buy its chips are not a sign of a bubble but a rational competitive strategy. With Google bundling its TPUs into deals with labs like Anthropic, NVIDIA must fund its own customer ecosystem to avoid being locked out of key accounts.
AI's compute-heavy workloads make traditional 80%+ SaaS gross margins unattainable. Companies should embrace lower margins as evidence of user adoption and value delivery. The strategy mirrors the on-premises-to-cloud transition, which initially compressed margins but ultimately drove massive growth for companies like Microsoft.
The competitive landscape for AI chips is not a crowded field but a contest between two primary forces: NVIDIA's integrated system (hardware, software, networking) and Google's TPU. Other players, such as AMD and Broadcom, function collectively as a secondary challenger offering an open alternative.
AI enables a fundamental shift in business models, away from selling access (per seat) or usage (per token) and toward selling results. Customer support AI, for example, will be priced per resolved ticket. This outcome-based model will become the standard as AI gets better at completing specific, measurable tasks.
The current AI infrastructure build-out is avoiding the dot-com bubble's waste. In 2000, 97% of telecom fiber sat unused ('dark'). Today, GPUs are fully utilized, and the largest investors (big tech) are earning positive returns on their capital, indicating real demand and value creation.
