To avoid losing their allocated GPUs, some AI researchers are "gaming the system" by running repetitive, useless tasks to create the illusion of high utilization. This behavior stems from intense internal competition for scarce computing resources, leading to inefficient practices designed to protect individual access to hardware.
Google Cloud's growth is dramatically outpacing rivals, fueled by a 400% year-over-year increase in its backlog. The key is its vertically integrated model: selling the entire AI stack, from custom TPU infrastructure to Gemini applications. This full-stack approach is resonating strongly with enterprise customers.
A major paradox exists in AI development: companies are desperate for scarce GPUs, yet often fail to use them efficiently. Even well-funded labs like xAI report model FLOPs utilization (MFU) as low as 11%, far below the ~40% practical target, due to inconsistent workloads and data transfer bottlenecks.
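For context, model FLOPs utilization compares the floating-point work a training run actually performs against the hardware's theoretical peak. A minimal sketch in Python, using the common ~6 × parameters estimate of training FLOPs per token for a dense transformer; the model size, throughput, and per-GPU peak below are illustrative assumptions, not figures from the article:

```python
def mfu(params: float, tokens_per_sec: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Model FLOPs utilization: achieved training FLOPs/s over hardware peak.

    Uses the rough approximation of ~6 * params FLOPs per training token
    for a dense transformer (forward + backward pass combined).
    """
    achieved = 6 * params * tokens_per_sec
    peak = num_gpus * peak_flops_per_gpu
    return achieved / peak

# Illustrative numbers only: a 70B-parameter model on 1,024 GPUs,
# each with an assumed 989 TFLOPS nominal peak.
print(f"MFU: {mfu(70e9, 265_000, 1024, 989e12):.1%}")
```

Low MFU like the 11% cited above typically means the GPUs spend much of their time stalled on data movement or idle between inconsistent workloads rather than doing useful math.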
Despite having the fastest-growing ad business, Meta's stock fell after it raised its CapEx forecast to $145B for AI without a clear monetization plan. This contrasts sharply with competitors like Google and Microsoft, which demonstrate clear returns on their AI investments, leaving Meta with a relatively weaker story for investors.
Despite strong AI revenue, Microsoft's data shows enterprise AI adoption remains early. Most M365 Copilot usage is confined to pilots, software development, and customer support. Widespread, daily adoption among general knowledge workers for productivity tasks has not yet materialized, indicating a gap between hype and reality.
While Starlink's customer base quadrupled, its average revenue per user (ARPU) fell from $99 to $81 over two years. This reflects a strategic shift from a niche, high-end service to a mass-market competitor, with aggressive price cuts that undercut analysts' early, highly optimistic financial models.
In his lawsuit against OpenAI, Elon Musk's credibility as an AI safety champion was undermined during cross-examination. He was reportedly unfamiliar with basic industry safety practices like "system cards" and with OpenAI's own safety protocols, revealing a significant gap between his public pronouncements and his technical knowledge.
