After being left out of the AI narrative in previous quarters, Amazon delivered strong earnings propelled by its cloud and AI business. A key indicator was the 150% quarterly growth of its custom Trainium 2 chip, showing it is competing effectively with other hyperscalers' custom silicon, such as Google's TPU.

Related Insights

Despite intense competition, Amazon's core principle of being 'customer obsessed' means AWS would likely offer Google's TPU chips if key customers demand them. This prioritizes customer retention over platform exclusivity in the AI chip wars.

The strongest evidence that corporate AI spending is generating real ROI is that major tech companies are not merely re-ordering NVIDIA's chips but accelerating those orders quarter over quarter. This sustained, growing demand from repeat customers points to the AI boom being durable rather than a one-time buildout.

While custom silicon is important, Amazon's core competitive edge is its flawless execution in building and powering data centers at massive scale. Competitors face delays, making Amazon's reliability and available power a critical asset for power-constrained AI companies.

While AI models and coding agents scale to $100M+ revenues quickly, the truly exponential growth is in the hardware ecosystem. Companies in optical interconnects, cooling, and power are scaling from zero to billions in revenue in under two years, driven by massive demand from hyperscalers building AI infrastructure.

Google successfully trained its top model, Gemini 3 Pro, on its own TPUs, proving a viable alternative to NVIDIA's chips. However, because Google doesn't sell these TPUs, NVIDIA retains its monopoly pricing power over every other company in the market.

While AWS's Trainium chip lags Nvidia's general-purpose GPUs in raw performance, its success with the startup Descartes in real-time video highlights a viable strategy: win by becoming the best-in-class solution for specific, high-value workloads rather than competing head-on.

Beyond capital, Amazon's deal with OpenAI includes a crucial stipulation: OpenAI must use Amazon's proprietary Trainium AI chips. This forces adoption by a leading AI firm, providing a powerful proof point for Trainium as a viable competitor to Nvidia's market-dominant chips and creating a captive customer for Amazon's hardware.

AWS CEO Matt Garman's emphasis on "customer choice," combined with Jeff Bezos's philosophy of being customer-obsessed rather than competitor-obsessed, suggests AWS might offer Google's TPUs in their data centers if customers demand them, despite the direct competition.

The deal isn't just about cloud credits; it's a strategic play to onboard OpenAI as a major customer for Amazon's proprietary Trainium AI chips. This helps Amazon compete with Nvidia by subsidizing a top AI lab to adopt and validate its hardware.

While competitors like OpenAI must buy GPUs from NVIDIA, Google trains its frontier AI models (like Gemini) on its own custom Tensor Processing Units (TPUs). This vertical integration gives Google a significant, often overlooked, strategic advantage in cost, efficiency, and long-term innovation in the AI race.