The promise of widespread enterprise AI is held back by a fundamental problem: many companies still run on legacy, on-premises systems from the 80s and 90s. This "digital transformation" bottleneck has to be cleared first; AI adoption can't begin until the prerequisite move to modern cloud infrastructure is complete.
Major cloud providers invest billions in AI labs like Anthropic and OpenAI, which then commit to spending those billions back on the providers' cloud services. This circular flow significantly inflates revenue backlogs, raising the question of whether the growth is sustainable or a symptom of an AI bubble.
Unlike in previous tech waves, which were driven by system integrators, large companies are rejecting the model of outsourcing their AI strategy. According to Tessera Labs' CEO, CIOs now demand to own their AI platforms and build in-house expertise. The goal is to gain direct leverage and control over their AI journey, not to rent it from consultants.
AMD's success isn't just about stealing market share from competitors. The rise of 'agentic inference' in AI is massively expanding the total addressable market for data center CPUs. Rather than a zero-sum share grab, the new demand creates greenfield growth opportunities for all major players.
Demand for AI processing power so vastly outstrips supply that it creates a "compute deficit." This forces major AI players to adopt any viable chip solution they can find, including from AMD. It's not about being better than NVIDIA; it's about being available, which ensures a market for second- and third-tier suppliers.
In court testimony, OpenAI's Greg Brockman revealed a key source of friction with Elon Musk: a perceived lack of AI intuition. Brockman cited an instance where Musk dismissed an early ChatGPT demo as "stupid," an episode that, he said, eroded the OpenAI team's faith in Musk's technical judgment on AI and contributed to their eventual split.
