AI companies like OpenAI are losing money on their popular subscription plans. The computational (inference) cost of serving a user, especially a power user, often exceeds the subscription fee. This subsidized model is propped up by venture capital and is not sustainable long-term.

Related Insights

Contrary to the belief that its huge user base is a key asset, ChatGPT's free tier is described as a massive liability. The cost of running millions of GPUs for non-paying users is enormous, and monetization attempts like ads risk driving users to competitors in a market with low switching costs.

As AI's utility and computational cost rise, a flat-rate "unlimited" plan becomes nonsensical. OpenAI signals that future pricing must align with the variable, and often immense, value and cost that power users generate, much like an electricity bill.

Many AI coding agents are unprofitable because their business model is broken. They charge a fixed subscription fee but pay variable, per-token costs for model inference. This means their most engaged power users, who should be their best customers, are actually their biggest cost centers, leading to negative gross margins.
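The mismatch above is simple arithmetic: flat revenue minus variable cost. A minimal sketch, using purely illustrative numbers (the $20 fee, $10-per-million-token rate, and usage levels are assumptions, not figures from the source):

```python
# Unit economics of a flat-rate AI coding agent.
# All numbers are illustrative assumptions.

def gross_margin(subscription_fee, tokens_used, cost_per_million_tokens):
    """Monthly gross margin for one user: flat fee in, per-token inference cost out."""
    inference_cost = tokens_used / 1_000_000 * cost_per_million_tokens
    return subscription_fee - inference_cost

FEE = 20.0         # assumed $20/month flat subscription
COST_PER_M = 10.0  # assumed blended $10 per million tokens

light_user = gross_margin(FEE, tokens_used=500_000, cost_per_million_tokens=COST_PER_M)
power_user = gross_margin(FEE, tokens_used=10_000_000, cost_per_million_tokens=COST_PER_M)

print(light_user)  # 15.0  -> profitable
print(power_user)  # -80.0 -> the most engaged user is the biggest loss
```

The sign flip is the whole story: revenue is capped at the fee while cost grows linearly with usage, so the heaviest users drive gross margin negative.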

Even with optimistic HSBC projections for massive revenue growth by 2030, OpenAI faces a $207 billion funding shortfall to cover its data center and compute commitments. This staggering number indicates that its current business model is not viable at scale and will require either renegotiating massive contracts or finding an entirely new monetization strategy.

Unlike traditional SaaS, achieving product-market fit in AI is not enough for survival. The high and variable costs of model inference mean that as usage grows, companies can scale directly into unprofitability. This makes developing cost-efficient infrastructure a critical moat and survival strategy, not just an optimization.

The current subsidized AI subscription model is unsustainable. The inevitable shift to pay-per-token pricing will expose the true cost of inference. For tasks like coding, where AI can "hallucinate" and burn tokens in loops, this creates unpredictable and potentially exorbitant costs, akin to gambling.
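The "gambling" comparison comes from the cost distribution, not the average. A short simulation sketch, where the loop probability, token counts, and per-token rate are all assumed for illustration:

```python
# Why pay-per-token coding costs are unpredictable: a task that normally
# finishes quickly can loop on a hallucinated fix and burn an order of
# magnitude more tokens. All figures are illustrative assumptions.
import random

random.seed(42)

COST_PER_M = 10.0        # assumed $ per million tokens
NORMAL_TOKENS = 50_000   # typical task
LOOP_TOKENS = 2_000_000  # runaway retry loop
LOOP_PROB = 0.05         # assumed 5% of tasks go off the rails

def task_cost():
    tokens = LOOP_TOKENS if random.random() < LOOP_PROB else NORMAL_TOKENS
    return tokens / 1_000_000 * COST_PER_M

costs = sorted(task_cost() for _ in range(1_000))
print(f"median task: ${costs[500]:.2f}")
print(f"worst task:  ${costs[-1]:.2f}")
```

The median task is cheap, but the tail is 40x the median; a monthly bill dominated by a few runaway loops is exactly the unpredictability the insight describes.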

The narrative of "off the charts" AI demand is misleading. Major AI providers like OpenAI are "burning tens of billions of dollars," indicating they are not charging the true cost for their services. A realistic picture of demand will only emerge once they are forced to price for profitability, which could significantly cool the market.

Software has long commanded premium valuations due to near-zero marginal distribution costs. AI breaks this model. The significant, variable cost of inference means expenses scale with usage, fundamentally altering software's economic profile and forcing valuations down toward those of traditional industries.

Mature B2B SaaS companies, after achieving profitability, now face a new crisis: funding expensive AI agents to stay competitive. They must spend millions on inference to match venture-backed startups, creating a dilemma that could lead to their demise despite having a solid underlying business.

The traditional SaaS model—high R&D/sales costs, low COGS—is being inverted. AI makes building software cheap but running it expensive due to high inference costs (COGS). This threatens profitability, as companies now face high customer acquisition costs AND high costs of goods sold.
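The inversion shows up directly in the gross-margin formula. A sketch with assumed, illustrative cost structures (the specific percentages are not from the source):

```python
# Contrast between the classic SaaS margin profile and an inference-heavy
# AI product. Revenue and COGS figures are illustrative assumptions.

def gross_margin_pct(revenue, cogs):
    """Gross margin as a percentage of revenue."""
    return (revenue - cogs) / revenue * 100

# Traditional SaaS: serving one more user costs almost nothing.
saas = gross_margin_pct(revenue=100.0, cogs=15.0)

# AI-native product: inference cost accrues on every user action.
ai_app = gross_margin_pct(revenue=100.0, cogs=70.0)

print(saas)   # 85.0 -> the margin profile premium valuations were built on
print(ai_app) # 30.0 -> closer to a traditional industry
```

Under these assumptions the AI product still has the high customer-acquisition and R&D costs of SaaS, but with a fraction of the gross margin left to pay for them, which is the squeeze the insight describes.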