A liquid futures market for GPU compute would create price transparency, threatening the business models of hyperscale cloud providers. These giants benefit from opaque, bundled pricing and from control over supply. They will naturally resist the standardization and transparency that an open futures market would bring.
Instead of bearing the full cost and risk of building new AI data centers, large cloud providers like Microsoft use CoreWeave for 'overflow' compute. This allows them to meet surges in customer demand without committing capital to assets that depreciate quickly and may become competitors' infrastructure in the long run.
When power (watts) is the primary constraint for data centers, the total cost of compute becomes secondary. The crucial metric is performance-per-watt. This gives a massive pricing advantage to the most efficient chipmakers, as customers will pay steep premiums for hardware that maximizes output from a fixed power budget.
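A toy calculation makes the power-constrained economics concrete. All figures here are hypothetical, not vendor specs: the point is only that when the megawatt is the binding constraint, a chip that costs far more per unit of performance can still win.

```python
# Toy model of power-constrained economics: a facility's output is capped by
# its power budget, so throughput scales with performance-per-watt, not with
# chip count or chip price. All figures are hypothetical.

POWER_BUDGET_W = 1_000_000      # 1 MW facility
REVENUE_PER_PERF_YEAR = 8.0     # $ earned per perf-unit per year (assumed)
YEARS = 3

def net_profit(perf_per_watt: float, price_per_perf: float) -> float:
    """Facility profit over the period: revenue from hosted throughput minus chip capex."""
    throughput = perf_per_watt * POWER_BUDGET_W
    revenue = throughput * REVENUE_PER_PERF_YEAR * YEARS
    capex = throughput * price_per_perf
    return revenue - capex

# The efficient chip costs ~1.7x more per unit of performance, yet still wins,
# because it doubles what the same megawatt can earn.
efficient = net_profit(perf_per_watt=2.0, price_per_perf=10.0)
cheaper = net_profit(perf_per_watt=1.0, price_per_perf=6.0)
print(efficient > cheaper)  # True
```

The cheaper chip's capex advantage is swamped by the revenue the power budget leaves on the table, which is why the efficient chipmaker captures the pricing power.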
Large tech companies are buying up compute from smaller cloud providers not for immediate need, but as a defensive strategy. By hoarding scarce GPU capacity, they prevent competitors from accessing critical resources, effectively cornering the market and stifling innovation from rivals.
Previous attempts at tech-hardware futures, such as DRAM, failed because prices moved in only one predictable direction: down. In contrast, the market for GPU compute will experience cycles of high demand and excess supply. This two-way volatility creates genuine hedging needs, making a futures market viable and necessary.
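A minimal sketch of the two-sided hedge such a market would enable. A GPU-hour futures contract is assumed here, not an existing instrument, and all prices are illustrative:

```python
# Hypothetical hedge: a cloud provider locks in $2.00/GPU-hour on future
# capacity by selling futures. Whichever way spot moves, the futures P&L
# offsets the physical exposure. All prices are illustrative.

def short_hedge_revenue(hours: float, locked: float, spot_at_expiry: float) -> float:
    """Provider's effective revenue on the hedged hours: spot sales plus futures P&L."""
    spot_revenue = hours * spot_at_expiry
    futures_pnl = hours * (locked - spot_at_expiry)  # gains when spot falls
    return spot_revenue + futures_pnl

# Glut scenario: spot falls to $1.40 -- the futures gain fills the gap.
print(round(short_hedge_revenue(10_000, 2.00, 1.40), 2))  # 20000.0
# Squeeze scenario: spot rises to $2.60 -- upside forgone, but revenue is locked.
print(round(short_hedge_revenue(10_000, 2.00, 2.60), 2))  # 20000.0
```

A buyer of compute runs the mirror-image trade, which is exactly the two-way demand that one-directional DRAM pricing never generated.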
NVIDIA agreed to purchase any of CoreWeave's unsold cloud capacity. This unusual backstop, while helping CoreWeave secure debt financing, makes it difficult for investors to gauge real, organic market demand for its services, potentially hiding early signs of a market slowdown.
OpenAI's aggressive partnerships for compute are designed to achieve "escape velocity." By locking up supply and talent, they are creating a capital barrier so high (~$150B in CapEx by 2030) that it becomes nearly impossible for any entity besides the largest hyperscalers to compete at scale.
NVIDIA's vendor financing isn't a sign of bubble dynamics but a calculated strategy to build a controlled ecosystem, similar to Standard Oil. By funding partners who use its chips, NVIDIA prevents them from becoming competitors and counters the full-stack ambitions of rivals like Google, ensuring its central role in the AI supply chain.
The massive global investment required for AI will drive demand for GPUs so high that the annual market spend will exceed that of crude oil. This scale necessitates a dedicated futures market to allow participants, especially new cloud providers, to hedge price risk and lower their cost of capital.
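A back-of-the-envelope illustration of how hedged revenue can lower a new cloud provider's cost of capital. The advance rates and interest rates below are assumptions for illustration, not market data:

```python
# Toy model: lenders advance more debt, at a lower rate, against hedged
# (futures-locked) revenue than against spot-exposed revenue, so the blended
# cost of a dollar of GPU capex falls. All rates and ratios are assumptions.

def blended_cost_of_capital(debt_share: float, debt_rate: float,
                            equity_rate: float = 0.20) -> float:
    """Weighted cost per dollar of capex: debt share at debt_rate, remainder at equity_rate."""
    return debt_share * debt_rate + (1 - debt_share) * equity_rate

merchant = blended_cost_of_capital(debt_share=0.50, debt_rate=0.14)  # spot-exposed
hedged = blended_cost_of_capital(debt_share=0.80, debt_rate=0.09)    # futures-locked
print(round(merchant, 3), round(hedged, 3))  # 0.17 0.112
```

In this sketch, locking in revenue cuts the provider's blended capital cost by roughly a third, which is the mechanism by which a futures market would subsidize new entrants.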
As the current low-cost producer of AI tokens via its custom TPUs, Google's rational strategy is to operate at low or even negative margins. This "sucks the economic oxygen out of the AI ecosystem," making it difficult for capital-dependent competitors to justify their high costs and raise new funding rounds.
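The "oxygen" argument reduces to simple unit economics, sketched here with hypothetical cost-per-token figures (neither company's real costs are public):

```python
# Hypothetical unit economics: the low-cost producer sets a market price below
# rivals' cost, keeping a thin positive margin while forcing rivals to sell
# every token at a loss. All numbers are illustrative.

def gross_margin(price: float, cost: float) -> float:
    """Gross margin fraction at a given market price."""
    return (price - cost) / price

MARKET_PRICE = 0.30  # $ per million tokens, set by the low-cost producer

low_cost_producer = gross_margin(MARKET_PRICE, cost=0.20)  # e.g. custom silicon
capital_dependent = gross_margin(MARKET_PRICE, cost=0.50)  # e.g. rented GPU compute

print(round(low_cost_producer, 3))  # 0.333 -- thin but positive
print(round(capital_dependent, 3))  # -0.667 -- loses money on every token
```

The rival's choice is to match the price and burn cash or hold price and lose share, and neither story supports a new funding round.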
Accusations that hyperscalers "cook the books" by extending GPU depreciation misunderstand hardware lifecycles. Older chips remain at full utilization for less demanding tasks. High operational costs (power, cooling) provide a natural economic incentive to retire genuinely unprofitable hardware, invalidating claims of artificial earnings boosts.
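The retirement logic above can be stated as a marginal-cost test, sketched here with hypothetical figures. Depreciation is a sunk cost; only operating cost versus revenue decides whether a chip keeps running:

```python
# A chip stays in service while its marginal revenue covers its marginal
# operating cost (power, cooling); book depreciation is sunk and irrelevant
# to the decision. All figures are hypothetical.

def keep_running(revenue_per_hour: float, opex_per_hour: float) -> bool:
    """Retire hardware only when it no longer covers its own operating cost."""
    return revenue_per_hour > opex_per_hour

# An older GPU repriced downward for less demanding inference work:
print(keep_running(revenue_per_hour=0.90, opex_per_hour=0.55))  # True: keep serving
# The same GPU once demand for it falls below its power and cooling bill:
print(keep_running(revenue_per_hour=0.40, opex_per_hour=0.55))  # False: retire it
```

This is why high operating costs act as a self-enforcing audit: genuinely obsolete hardware retires itself economically, whatever the depreciation schedule says.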