Anthropic mitigates supply chain risk and optimizes cost by investing heavily in the ability to use NVIDIA, Google, and Amazon chips interchangeably across model development, internal use, and serving customers. This orchestration layer is a key competitive advantage.
For an exponentially growing business, linear forecasting fails. Anthropic plans for a wide range of outcomes—the "cone of uncertainty"—to make disciplined, long-term compute purchasing decisions, aiming for the top end while managing risk.
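The gap between linear and exponential forecasting can be made concrete with a back-of-the-envelope sketch. All numbers below are illustrative, not Anthropic's actual figures; the growth multiples are hypothetical choices to show the shape of the "cone."

```python
# Hypothetical sketch: linear extrapolation vs. a "cone of uncertainty"
# for an exponentially growing business. All figures are illustrative.

def linear_forecast(current: float, recent_growth: float, years: int) -> float:
    """Naive forecast: add the most recent absolute growth each year."""
    return current + recent_growth * years

def exponential_forecast(current: float, annual_multiple: float, years: int) -> float:
    """Compound the current figure by a constant annual multiple."""
    return current * annual_multiple ** years

revenue = 1.0  # e.g. $1B ARR today (hypothetical)
linear = linear_forecast(revenue, recent_growth=0.5, years=3)

# The "cone": plan across a range of plausible growth multiples
# rather than committing to a single point estimate.
cone = {m: exponential_forecast(revenue, m, years=3) for m in (1.5, 2.0, 3.0)}

print(linear)  # 2.5
print(cone)    # {1.5: 3.375, 2.0: 8.0, 3.0: 27.0}
```

Even over three years, the top of the cone is an order of magnitude above the linear extrapolation, which is why compute purchases sized off a straight line systematically undershoot.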
In its compute allocation meetings, Anthropic sets a non-negotiable floor for model development compute. This ensures they stay at the AI frontier, reflecting a belief that the long-term returns on intelligence outweigh short-term revenue opportunities.
Initially driven by their mission, Anthropic's investments in safety, interpretability, and alignment have become a commercial asset. For enterprises running their most sensitive workloads on AI, this demonstrated commitment to responsible development builds the trust necessary to win large deals.
While mainly a horizontal platform, Anthropic strategically builds vertical applications. This isn't to compete with their ecosystem, but to build ahead of current model capabilities and demonstrate to the market what will be possible on their platform in the near future, accelerating adoption.
Krishna Rao, Anthropic's CFO, describes compute as the company's "lifeblood." The decision of how much to procure is paramount, as over-purchasing leads to bankruptcy and under-purchasing means falling behind the frontier and failing customers. This frames compute not as a line item in COGS but as the core strategic asset.
The idea of AI improving itself is already a reality at Anthropic. Over 90% of their internal code, including code for the Claude Code tool itself, is written by AI. This internal use of their own frontier models is a key driver of their accelerating development pace.
Anthropic's hiring philosophy prioritizes "talent density" over "talent mass." They believe a concentrated group of top AI researchers, amplified by their own frontier models, can outperform much larger teams, making elite talent and powerful models a winning combination.
Counterintuitively, Anthropic lowered the price of its premium Opus model because it was underutilized. This move triggered the Jevons paradox: the lower price made Opus more accessible, and consumption increased by a far greater multiple than the price decrease, unlocking significant value for customers.
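The Jevons-paradox arithmetic is easy to sketch. The prices, the usage multiple, and the baseline volume below are all hypothetical, chosen only to illustrate the dynamic, not Anthropic's actual pricing or figures.

```python
# Hypothetical numbers illustrating the Jevons-paradox dynamic:
# a price cut where consumption rises by a larger multiple.

old_price_per_mtok = 15.0  # $/million tokens before the cut (hypothetical)
new_price_per_mtok = 5.0   # $/million tokens after the cut (hypothetical)
price_multiple = old_price_per_mtok / new_price_per_mtok  # 3.0x cheaper

usage_multiple = 10.0  # assumed demand response, larger than the price cut
old_tokens = 100.0     # million tokens per month (hypothetical baseline)
new_tokens = old_tokens * usage_multiple

old_revenue = old_tokens * old_price_per_mtok  # 1500.0
new_revenue = new_tokens * new_price_per_mtok  # 5000.0

# Consumption grew by a larger multiple than the price fell, so revenue
# rose even as the per-token price dropped.
print(old_revenue, new_revenue)  # 1500.0 5000.0
```

The move only pays off when the usage multiple exceeds the price multiple; if demand had merely tripled with the 3x cut, revenue would have been flat.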
The traditional software paradigm of treating compute as a variable cost doesn't fit Anthropic. They view their entire compute "envelope" as a fungible resource allocated between immediate revenue (inference), future R&D (model development), and internal efficiency. The key metric is the robust return on the total spend.
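The envelope framing, combined with the non-negotiable model-development floor described earlier, can be sketched as a simple allocation rule. The function name, the split categories, and every number are hypothetical illustrations of the idea, not a description of Anthropic's actual process.

```python
# Minimal sketch of the "compute envelope" framing: one fungible budget
# split across inference, model development, and internal use, with a
# non-negotiable floor for model development. All numbers hypothetical.

def allocate(envelope: float, rd_floor: float, inference_demand: float) -> dict:
    """Allocate a total compute budget, guaranteeing the R&D floor first."""
    rd = rd_floor  # model development is funded before anything else
    inference = min(inference_demand, envelope - rd)  # serve demand from the rest
    internal = envelope - rd - inference  # remainder goes to internal efficiency
    return {"model_development": rd, "inference": inference, "internal": internal}

budget = allocate(envelope=100.0, rd_floor=40.0, inference_demand=50.0)
print(budget)  # {'model_development': 40.0, 'inference': 50.0, 'internal': 10.0}
```

The point of the structure is that the three buckets draw on one pool, so the question is never "what does inference cost?" but "what is the return on the whole envelope?"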
The common analogy of new models being like faster but less fuel-efficient sports cars is wrong. Anthropic finds that each new model generation brings a step-function improvement in both capability and token processing efficiency, benefiting both customers and internal R&D.
Anthropic's internal teams, like finance, are power users of their own AI. They built over 70 custom skills for Claude to automate reporting. This intense "dogfooding" serves as a practical R&D lab, with internal use cases directly inspiring new commercial products like their 'Coworker' agent.
![Krishna Rao - Anthropic's CFO on Compute, Scaling to $30B ARR, and the Returns to Frontier Intelligence - [Invest Like the Best, EP.471]](https://megaphone.imgix.net/podcasts/fdfca472-4e43-11f1-bf1a-cf5adcc9aeb5/image/b1ed0c54d8c57aae09fe68adc7d1fb32.jpg?ixlib=rails-4.3.1&max-w=3000&max-h=3000&fit=crop&auto=format,compress)