
Once a haven for startups struggling to get GPUs, NeoClouds like CoreWeave have shifted their strategy. They now prioritize serving the largest customers, mirroring the behavior of AWS and Azure and leaving startups with fewer alternative compute options than in 2023.

Related Insights

A new category of "NeoCloud" or "AI-native cloud" is rising, focused specifically on AI training and inference. Unlike general-purpose clouds like AWS, these platforms are GPU-first, catering to massive AI workloads and addressing both the GPU scarcity at hyperscalers and the distinct workload patterns that general-purpose clouds were not designed to serve.

Emerging cloud providers ("NeoClouds") are sticking exclusively with NVIDIA, despite alternatives from AMD. The perceived performance risk is too high: customers demand state-of-the-art inference speed, and providers can't risk a multi-billion dollar investment on a non-NVIDIA stack that might offer lower throughput.

Specialized AI cloud providers like CoreWeave face a unique business reality where customer demand is robust and assured for the near future. Their primary business challenge and gating factor is not sales or marketing, but their ability to secure the physical supply of high-demand GPUs and other AI chips to service that demand.

To service its massive debt for GPU purchases, CoreWeave locks customers into multi-year contracts. This secures revenue to cover debt payments but means CoreWeave misses out on the higher margins available from rising spot market prices for GPU compute—a calculated trade-off between stability and profitability.

The primary bear case for specialized neoclouds like CoreWeave isn't just competition from AWS or Google. A more fundamental risk is a breakthrough in GPU efficiency that commoditizes deployment, diminishing the value of the neoclouds' core competency in complex, optimized racking and setup.

Large tech companies are buying up compute from smaller cloud providers not for immediate need, but as a defensive strategy. By hoarding scarce GPU capacity, they prevent competitors from accessing critical resources, effectively cornering the market and stifling innovation from rivals.

CoreWeave argues that large tech companies aren't just using them to de-risk massive capital outlays. Instead, they are buying a superior, purpose-built product. CoreWeave’s infrastructure is optimized from the ground up for parallelized AI workloads, a fundamental shift from traditional cloud architecture.

To combat the GPU shortage, top VC firms are bundling their portfolio companies' compute needs. They negotiate with cloud providers on behalf of their startups, acting as a single large customer to get better pricing and access, a novel role for investors.

A new category of cloud providers, "NeoClouds," are built specifically for high-performance GPU workloads. Unlike traditional clouds like AWS, which were retrofitted from a CPU-centric architecture, NeoClouds offer superior performance for AI tasks by design and through direct collaboration with hardware vendors like NVIDIA.

Newer AI cloud providers gain a performance advantage by building their infrastructure entirely on NVIDIA's integrated ecosystem, including specialized networking. Incumbent clouds often must patch their legacy, CPU-centric systems, creating inefficiencies that NeoClouds, unburdened by technical debt, can avoid.