AI companies may be profitable on their most recent model, but they are "borrowing against the future": the cost of training their next-generation models makes them unprofitable today. The business model depends on perpetually raising ever-larger funding rounds, a dependency that creates systemic market risk.

Related Insights

Contrary to the cash-burning narrative, major AI labs are likely highly profitable on inference itself: serving requests costs far less than the revenue it generates. Their massive reported losses stem from huge capital expenditures on training runs and R&D. This financial structure is more akin to an industrial manufacturer than a traditional software company, with high upfront costs and profitable unit economics.
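As a minimal sketch of that structure, with entirely hypothetical figures (none of these numbers come from any lab's actual financials), positive inference margins and a deep net loss can sit on the same income statement:

```python
# Hypothetical, illustrative figures only -- not any lab's actual financials.
inference_revenue      = 4_000_000_000   # $/year earned serving models
inference_compute_cost = 1_500_000_000   # $/year of GPU time to serve requests

training_capex = 3_000_000_000           # $ spent training the next-generation model
research_opex  = 1_500_000_000           # $ of ongoing R&D and staff costs

gross_profit = inference_revenue - inference_compute_cost
gross_margin = gross_profit / inference_revenue
net_income   = gross_profit - training_capex - research_opex

print(f"Inference gross margin: {gross_margin:.0%}")   # healthy (~62% here)
print(f"Net income: ${net_income / 1e9:.1f}B")          # deeply negative ($-2.0B here)
```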

Unlike in traditional SaaS, achieving product-market fit is not enough for survival in AI. The high and variable costs of model inference mean that as usage grows, companies can scale directly into unprofitability. This makes cost-efficient infrastructure a critical moat and survival strategy, not just an optimization.
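A sketch of those unit economics under assumed numbers (the price, per-token cost, and usage level are all made up for illustration): if a flat-rate plan is priced below what a heavy user's inference actually costs, growth only widens the loss.

```python
# Hypothetical unit economics for a flat-rate AI subscription (illustrative only).
price_per_user     = 20.0        # $/month subscription price (assumed)
cost_per_1k_tokens = 0.01        # $ blended inference cost per 1k tokens (assumed)
tokens_per_user    = 3_000_000   # tokens a heavy user consumes per month (assumed)

cost_per_user   = tokens_per_user / 1_000 * cost_per_1k_tokens   # $30/month
margin_per_user = price_per_user - cost_per_user                 # -$10/month

for users in (10_000, 100_000, 1_000_000):
    print(f"{users:>9,} users -> monthly profit ${users * margin_per_user:,.0f}")
# Each user loses $10/month under these assumptions, so scaling usage
# scales the monthly loss from -$100k to -$10M.
```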

An AI lab's P&L contains two distinct businesses. The first is training models: a large upfront investment that creates a depreciating asset. The second is the 'inference factory': a manufacturing-like operation that serves the trained model at positive margins. This duality explains the massive losses despite high revenue.
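One way to see the two businesses side by side is to amortize training spend like a capital asset. A minimal sketch, with every figure assumed for illustration: the "factory" serving the current model is profitable on its own, while concurrent depreciation and next-generation training spend push the combined result negative.

```python
# One month of a hypothetical lab's P&L, split into its two "businesses"
# (every figure below is an assumption for illustration).

# Business 1: the inference factory serving the current model.
inference_revenue = 150_000_000   # $/month
serving_cost      = 60_000_000    # $/month of inference compute
factory_profit    = inference_revenue - serving_cost           # +$90M: profitable on its own

# Business 2: training -- a capital project that builds a depreciating asset.
current_model_cost = 1_000_000_000                              # cost to train the current model
useful_life_months = 18                                         # months before it is superseded (assumed)
depreciation       = current_model_cost / useful_life_months    # ~$56M/month written off
next_model_spend   = 250_000_000                                # $/month burned training the successor

company_result = factory_profit - depreciation - next_model_spend
print(f"Factory profit: ${factory_profit / 1e6:.0f}M/mo; company result: ${company_result / 1e6:.0f}M/mo")
```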

AI companies operate under the assumption that LLM prices will trend towards zero. This strategic bet means they intentionally de-prioritize heavy investment in cost optimization today, focusing instead on capturing the market and building features, confident that future, cheaper models will solve their margin problems for them.

Markets can forgive a one-time bad investment. The critical danger for companies heavily investing in AI infrastructure is not the initial cash burn, but creating ongoing liabilities and operational costs. This financial "drag" could permanently lower future profitability, creating a structural problem that can't be easily unwound or written off.

The paradoxical financial state of AI labs: individual models can generate healthy gross margins from inference, but the parent company operates at a loss. This is due to the massive, exponentially increasing R&D costs required to train the next, more powerful model.
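A toy model of this treadmill (the growth rates and multiples are assumptions, not estimates of any real lab): as long as each generation's training cost grows faster than the gross profit the previous generation returns, company-level cash flow stays negative and the gap widens every cycle.

```python
# Toy generational treadmill: each model earns a solid return on its own
# training cost, but the next model costs a multiple of the last one.
# All numbers are assumptions for illustration.
model_cost   = 300_000_000   # training cost of the current generation ($)
cost_growth  = 3.0           # each generation costs ~3x the previous (assumed)
return_ratio = 2.0           # each model returns ~2x its training cost in gross profit (assumed)

for gen in range(1, 5):
    gross_profit    = model_cost * return_ratio    # earned by generation `gen` over its life
    next_model_cost = model_cost * cost_growth     # spent concurrently on generation `gen + 1`
    net             = gross_profit - next_model_cost
    print(f"gen {gen}: earns ${gross_profit / 1e6:,.0f}M, "
          f"next model costs ${next_model_cost / 1e6:,.0f}M, net ${net / 1e6:,.0f}M")
    model_cost = next_model_cost
# Whenever cost_growth > return_ratio, the loss grows each generation
# even though every individual model is comfortably gross-margin positive.
```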

The AI boom's sustainability is questionable due to the disparity between capital spent on computing and actual AI-generated revenue. OpenAI's plan to spend $1.4 trillion while earning ~$20 billion annually highlights a model dependent on future payoffs, making it vulnerable to shifts in investor sentiment.
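Back-of-envelope arithmetic on the two figures cited above gives a sense of the scale of the bet (the $1.4 trillion and ~$20 billion come from the text; the ten-year horizon below is an arbitrary assumption):

```python
# Scale of the gap between planned spend and current revenue (figures from the
# text above; the 10-year horizon is an arbitrary assumption).
planned_spend  = 1.4e12   # ~$1.4 trillion in planned compute spending
annual_revenue = 20e9     # ~$20 billion in current annual revenue

print(f"Planned spend = {planned_spend / annual_revenue:.0f} years of current revenue")

# Growth rate needed for cumulative revenue over 10 years to match the spend.
years, g = 10, 1.0
while sum(annual_revenue * g**t for t in range(1, years + 1)) < planned_spend:
    g += 0.01
print(f"Covering it in {years} years implies ~{g - 1:.0%} revenue growth per year")
```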

Current AI spending appears bubble-like, but it is not propping up unprofitable operations. Inference is already profitable. The immense cash burn is a deliberate, forward-looking investment in developing future, more powerful models, not a sign of a failing business model. This reframes the financial risk: the bet is on whether those future models pay off, not on whether today's operations are viable.

Despite billions in funding, large AI models face a difficult path to profitability. The value of immense training investments is undercut by competitors who build comparable models for a fraction of the cost and, more critically, by the ability of others to closely replicate an existing model's capabilities, for example by distilling its outputs, eroding any competitive moat.

Many AI startups prioritize growth, leading to unsustainable gross margins (below 15%) due to high compute costs. This is a ticking time bomb. Eventually, these companies must undertake a costly, time-consuming re-architecture to optimize for cost and build a viable business.
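A quick calculation shows what that re-architecture has to achieve (the 15% figure is from the text; the 70% target is an assumed, typical SaaS gross-margin benchmark):

```python
# How far serving costs must fall to move from a 15% gross margin to a
# SaaS-like one (15% is from the text; the 70% target is an assumption).
revenue      = 100.0                  # normalize revenue to 100
current_cogs = revenue * (1 - 0.15)   # 85 of every 100 goes to compute/serving
target_cogs  = revenue * (1 - 0.70)   # 30 of every 100 at a 70% margin

required_cost_reduction = 1 - target_cogs / current_cogs
print(f"Serving costs must fall by {required_cost_reduction:.0%} to reach a 70% gross margin")
```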
