For an exponentially growing business, linear forecasting fails. Anthropic plans for a wide range of outcomes—the "cone of uncertainty"—to make disciplined, long-term compute purchasing decisions, aiming for the top end while managing risk.
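The cone-of-uncertainty idea can be sketched as a handful of growth scenarios whose spread widens with the forecast horizon. The multipliers and base revenue below are purely illustrative assumptions, not Anthropic's actual planning numbers.

```python
# Illustrative cone-of-uncertainty revenue forecast (all figures hypothetical).
def cone(base_revenue, growth_scenarios, years):
    """Project revenue under several assumed annual growth multiples."""
    return {
        name: [base_revenue * g**y for y in range(years + 1)]
        for name, g in growth_scenarios.items()
    }

# Hypothetical annual growth multiples for bear/base/bull cases.
scenarios = {"bear": 2.0, "base": 4.0, "bull": 10.0}
forecast = cone(1.0, scenarios, years=3)  # start from $1B revenue

for name, path in forecast.items():
    print(name, [round(v, 1) for v in path])
```

The point of the exercise: at year 3 the bull and bear paths differ by more than 100x, which is why a single point estimate is useless for long-term compute commitments and a planned range ("aim for the top end, survive the bottom") is used instead.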
Companies like Anthropic and OpenAI could grow revenue even faster if they had access to unlimited power and data centers. Their financial performance is a function of supply-side bottlenecks, which makes traditional demand-based forecasting less relevant for now.
Krishna Rao, Anthropic's CFO, describes compute as the company's "lifeblood." The decision of how much to procure is paramount: over-purchasing leads to bankruptcy, while under-purchasing means falling behind the frontier and failing customers. This frames compute not as a cost of goods sold (COGS) but as the company's core strategic asset.
Dario Amodei highlights the extreme financial risk in scaling AI. If Anthropic were to purchase compute assuming a continued 10x revenue growth, a delay of just one year in market adoption would be "ruinous." This risk forces a more conservative compute scaling strategy than their optimistic technical timelines might suggest.
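A toy cash-flow calculation makes the asymmetry concrete. All numbers here are hypothetical assumptions (a $1B starting revenue, a 60% compute-cost ratio), not figures from the discussion; only the 10x growth assumption comes from the summary above.

```python
# Toy model: commit compute spend sized for 10x revenue growth, then compare
# the outcome if demand arrives on time vs. one year late.
# All dollar figures are hypothetical.
current_revenue = 1.0     # $B/year starting revenue (assumed)
planned_growth = 10.0     # plan assumes revenue 10x's next year (from summary)
compute_cost_ratio = 0.6  # compute spend as a fraction of expected revenue (assumed)

# Spend is committed up front against the *expected* revenue.
committed_spend = current_revenue * planned_growth * compute_cost_ratio  # 6.0

on_time_margin = current_revenue * planned_growth - committed_spend  # demand arrives
delayed_margin = current_revenue - committed_spend                   # adoption slips a year

print(f"on-time: {on_time_margin:+.1f} $B, delayed: {delayed_margin:+.1f} $B")
```

Under these assumptions, a one-year slip turns a +$4B year into a -$5B year on the same committed spend, which is why the compute plan is sized more conservatively than the technical optimism alone would dictate.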
Anthropic's growth to a $30 billion annualized run rate in just over a year is unprecedented. It added $11 billion in run rate in March 2025 alone—the equivalent of Databricks and Palantir combined. This suggests the Total Addressable Market (TAM) for enterprise intelligence is effectively unbounded.
The AI industry's exponential growth in capability is predictable, but the rate at which businesses adopt these tools is not. This diffusion problem is the biggest uncertainty and financial risk for AI labs, which risk bankruptcy if they miscalculate demand against their massive compute investments.
For AI-first products, future value is exponentially greater (e.g., 1000x in 2 years). Anthropic's growth team therefore inverts the typical 70/30 split between incremental optimization and big bets, favoring larger swings that unlock new markets, because small optimizations cannot capture the massive value created by model improvements.
The traditional software paradigm of treating compute as a variable cost doesn't fit Anthropic. They view their entire compute "envelope" as a fungible resource allocated between immediate revenue (inference), future R&D (model development), and internal efficiency. The key metric is the robust return on the total spend.
Dario Amodei reveals a peculiar dynamic: profitability at a frontier AI lab is not a sign of mature business strategy. Instead, it's often the result of underestimating future demand when making massive, long-term compute purchases. Overestimating demand, conversely, leads to financial losses but more available research capacity.
Rapid revenue growth at AI labs like Anthropic creates an urgent need for massive amounts of inference compute. For instance, Anthropic's projected $60 billion revenue increase implies a need for an additional 4 gigawatts of inference capacity within 10 months, separate from R&D training fleets.
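Taking the summary's figures at face value, the implied economics can be worked out directly. The revenue-per-gigawatt ratio and monthly build rate below are simple derived numbers, not stated plans.

```python
# Back-of-envelope from the figures in the summary above.
revenue_increase_b = 60.0   # $B projected revenue increase (from summary)
extra_capacity_gw = 4.0     # additional GW of inference capacity (from summary)
months = 10                 # timeframe (from summary)

# Implied revenue density: dollars of revenue served per GW of inference.
revenue_per_gw_b = revenue_increase_b / extra_capacity_gw      # 15.0 $B per GW

# Implied build-out pace if capacity were added evenly over the period.
build_rate_mw_per_month = extra_capacity_gw * 1000 / months    # 400 MW/month

print(f"{revenue_per_gw_b:.0f} $B/GW, {build_rate_mw_per_month:.0f} MW/month")
```

An implied build-out of roughly 400 MW of inference capacity per month, separate from the R&D training fleet, illustrates why inference demand, not training, can dominate the procurement problem.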
Investors in the AI space care less about current revenue figures than about trajectory. A super-linear (accelerating) growth curve like Anthropic's is valued more highly than a larger business growing linearly. This indicates that future potential and the velocity of market capture are the key valuation metrics.