Current unprofitability in some AI applications, such as subsidized tokens for coding assistants, is a deliberate strategy. Much like Uber's early city-by-city expansion, AI labs are subsidizing usage to rapidly gain market share, gather data, and build a powerful flywheel effect that will serve as a long-term competitive moat.
Pre-reasoning AI models were static assets that depreciated quickly. The advent of reasoning allows models to learn from user interactions, re-establishing the classic internet flywheel: more usage generates data that improves the product, which attracts more users. This creates a powerful, compounding advantage for the leading labs.
Contrary to the narrative of burning cash, major AI labs are likely highly profitable on the marginal cost of inference. Their massive reported losses stem from huge capital expenditures on training runs and R&D. This financial structure is more akin to an industrial manufacturer than a traditional software company, with high upfront costs and profitable unit economics.
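The distinction above is between unit economics and overall profitability: inference can carry a healthy gross margin while fixed training and R&D spending swamps it. A toy model makes the structure concrete. Every figure below is a hypothetical assumption for illustration, not a reported financial from any lab.

```python
# Toy "industrial manufacturer" P&L: profitable marginal inference,
# large net loss driven by fixed capital expenditure.
# All numbers are invented assumptions, not real financials.

PRICE_PER_M_TOKENS = 2.00   # revenue per million tokens served ($, assumed)
COST_PER_M_TOKENS = 0.50    # marginal serving cost per million tokens ($, assumed)
TOKENS_SERVED = 1e15        # tokens served per year (assumed)
ANNUAL_CAPEX = 10e9         # training runs + R&D per year ($, assumed)

units = TOKENS_SERVED / 1e6                                # million-token units sold
gross_profit = units * (PRICE_PER_M_TOKENS - COST_PER_M_TOKENS)
net_income = gross_profit - ANNUAL_CAPEX
gross_margin = 1 - COST_PER_M_TOKENS / PRICE_PER_M_TOKENS

print(f"gross margin on inference: {gross_margin:.0%}")    # 75%
print(f"gross profit: ${gross_profit / 1e9:.1f}B")         # $1.5B
print(f"net income:   ${net_income / 1e9:.1f}B")           # $-8.5B
```

Under these assumed inputs, each token served is sold at four times its marginal cost, yet the lab still reports a multi-billion dollar loss, which is exactly the high-fixed-cost, healthy-unit-margin profile of an industrial manufacturer.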
While OpenAI's projected multi-billion dollar losses dwarf the historical capital burns of past tech giants, the playbook mirrors Uber's: spend aggressively to secure market dominance. If OpenAI becomes the definitive "front door to AI," the enormous upfront investment can be justified as the necessary cost of securing a generational monopoly position.
Unprofitably priced AI products mirror Uber's early strategy. By subsidizing the service, vendors integrate into workflows and create dependency. Once users rely on the tool (e.g., a law firm that has replaced an associate with it), prices can be raised dramatically to reflect the massive value created, ultimately achieving profitability.
AI companies operate under the assumption that LLM prices will trend towards zero. This strategic bet means they intentionally de-prioritize heavy investment in cost optimization today, focusing instead on capturing the market and building features, confident that future, cheaper models will solve their margin problems for them.
As the current low-cost producer of AI tokens via its custom TPUs, Google's rational strategy is to operate at low or even negative margins. This "sucks the economic oxygen out of the AI ecosystem," making it difficult for capital-dependent competitors to justify their high costs and raise new funding rounds.
In the AI era, token consumption is the new R&D burn rate. Like Uber spending on subsidies, startups should aggressively spend on powerful models to accelerate development, viewing it as a competitive advantage rather than a cost to be minimized.
In rapidly evolving AI markets, founders should prioritize user acquisition and market share over achieving positive unit economics. The core assumption is that underlying model costs will decrease exponentially, making current negative margins an acceptable short-term trade-off for long-term growth.
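The bet described above can be sketched numerically: a startup resells model output below its current input cost, counting on exponential declines in that input cost to flip the margin. The prices and decline rate below are hypothetical assumptions, not observed market data.

```python
# Sketch of the "costs trend to zero" bet: negative unit margin today,
# positive within a year if the assumed cost decline holds.
# All figures are hypothetical.

PRICE_CHARGED = 1.00        # what the startup charges per million tokens ($, assumed)
COST_TODAY = 3.00           # what it pays its model provider today ($, assumed)
ANNUAL_COST_DECLINE = 0.90  # assumed 90% cost reduction per year (10x cheaper)

cost = COST_TODAY
for year in range(5):
    margin = PRICE_CHARGED - cost
    print(f"year {year}: cost ${cost:.4f}/M tokens, margin ${margin:+.4f}")
    cost *= (1 - ANNUAL_COST_DECLINE)
```

At a 10x annual decline, the year-zero margin of -$2.00 per million tokens turns positive by year one; the entire strategy hinges on that assumed decline rate actually materializing.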
Major AI players treat the market as a zero-sum, "winner-take-all" game. This triggers a prisoner's dilemma where each firm is incentivized to offer subsidized, unlimited-use pricing to gain market share, leading to a race to the bottom that destroys profitability for the entire sector and squeezes out smaller players.
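The prisoner's dilemma structure above can be made explicit with a small payoff matrix. The payoff values are invented for illustration; the point is the shape of the game: subsidizing dominates regardless of what the rival does, yet mutual subsidy leaves both firms worse off than mutual restraint.

```python
# A minimal payoff matrix for the subsidized-pricing dilemma.
# Payoffs are hypothetical profit units chosen to exhibit the
# prisoner's dilemma structure, not estimates of real outcomes.

ACTIONS = ("price_sustainably", "subsidize")

# PAYOFF[(my_action, rival_action)] = my profit
PAYOFF = {
    ("price_sustainably", "price_sustainably"): 3,  # healthy margins for both
    ("price_sustainably", "subsidize"): 0,          # rival takes the market
    ("subsidize", "price_sustainably"): 5,          # I take the market
    ("subsidize", "subsidize"): 1,                  # race to the bottom
}

def best_response(rival_action):
    """Return the action that maximizes my payoff against the rival's."""
    return max(ACTIONS, key=lambda a: PAYOFF[(a, rival_action)])

# Subsidizing is a dominant strategy: best response to either rival action.
for rival in ACTIONS:
    print(f"vs {rival}: {best_response(rival)}")

# Yet the (subsidize, subsidize) equilibrium pays less than mutual restraint.
print(PAYOFF[("subsidize", "subsidize")],
      "<", PAYOFF[("price_sustainably", "price_sustainably")])
```

Both firms rationally choose to subsidize and land on the (1, 1) outcome instead of the (3, 3) one, which is the race to the bottom the paragraph describes.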