Analysts distinguish between initial revenue from training large language models (LLMs) and more sustainable, long-term revenue from 'inference'—the actual use of AI applications by end-market companies. The latter, like a bank using an AI chatbot, signals true market adoption and is considered the more valuable, 'sticky' revenue base.
The hosts challenge the conventional accounting of AI training runs as R&D (OpEx). They propose viewing a trained model as a capital asset (CapEx) with a multi-year lifespan, capable of generating revenue like a profitable mini-company. This reframing is critical for valuation, as a company could have a long tail of profitable legacy models serving niche user bases.
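A minimal sketch of the reframing, using entirely hypothetical numbers (the training cost, useful life, and revenue figures below are assumptions for illustration, not figures from the discussion): expensing a training run hits earnings all at once, while capitalizing it spreads the cost over the model's useful life.

```python
# Hypothetical numbers: treat a trained model as a capital asset and
# amortize its one-time training cost over an assumed multi-year life.
TRAINING_COST = 100.0            # one-time training spend, $M (assumed)
USEFUL_LIFE_YEARS = 4            # assumed lifespan of the model
ANNUAL_INFERENCE_REVENUE = 60.0  # revenue from serving the model, $M (assumed)
ANNUAL_SERVING_COST = 20.0       # inference compute cost, $M (assumed)

# Straight-line amortization under the CapEx view.
annual_amortization = TRAINING_COST / USEFUL_LIFE_YEARS

# OpEx view: the entire training cost hits year-one earnings.
opex_year1_profit = ANNUAL_INFERENCE_REVENUE - ANNUAL_SERVING_COST - TRAINING_COST

# CapEx view: each year carries only its share of the training cost,
# so the model looks like a profitable mini-company every year it serves.
capex_annual_profit = ANNUAL_INFERENCE_REVENUE - ANNUAL_SERVING_COST - annual_amortization

print(f"OpEx view, year-1 profit:  {opex_year1_profit:+.1f} $M")
print(f"CapEx view, annual profit: {capex_annual_profit:+.1f} $M")
```

With these assumed inputs the same model swings from a large year-one loss under OpEx treatment to a steady annual profit under CapEx treatment, which is why the accounting choice matters for valuation.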
While the market seeks revenue from novel AI products, the first significant financial impact has come from using AI to enhance existing digital advertising engines. This has driven unexpected growth for companies like Meta and Google, proving AI's immediate value beyond generative applications.
Early-stage AI startups should resist spending heavily on fine-tuning foundation models. With base models improving so rapidly, the defensible value lies in building the application layer, workflow integrations, and enterprise-grade software that makes the AI useful, allowing the startup to ride the wave of general model improvement.
Lin warns that much of today's AI revenue is 'experimental,' where customers test solutions without long-term commitment. He calls annualizing this pilot revenue 'a joke.' He advises founders to prioritize slower, high-quality, high-retention revenue over fast, low-quality growth that will eventually churn.
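The arithmetic behind the warning can be sketched with hypothetical figures (the pilot revenue and churn rate below are assumptions for illustration): annualizing a month of pilot revenue multiplies it by twelve, but if pilots churn and are not replaced, realized revenue falls far short of that headline number.

```python
# Hypothetical illustration of why annualizing pilot revenue misleads.
monthly_pilot_revenue = 1.0  # $M/month of experimental pilot deals (assumed)
monthly_churn = 0.20         # share of pilot revenue churning each month (assumed)

# The headline "annualized run rate": one month times twelve.
naive_arr = monthly_pilot_revenue * 12

# Realized 12-month revenue if pilots churn at 20%/month with no replacement.
realized = sum(monthly_pilot_revenue * (1 - monthly_churn) ** m for m in range(12))

print(f"Headline ARR:            {naive_arr:.1f} $M")
print(f"Realized 12-month revenue: {realized:.2f} $M")
```

Under these assumptions the headline run rate overstates realized revenue by more than 2.5x, which is the gap between fast, low-quality growth and the slower, high-retention revenue Lin recommends.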
For consumer products like ChatGPT, models are already good enough for common queries. However, for complex enterprise tasks like coding, performance is far from solved. This gives model providers a durable path to sustained revenue growth through continued quality improvements aimed at professionals.
Software has long commanded premium valuations due to near-zero marginal distribution costs. AI breaks this model. The significant, variable cost of inference means expenses scale with usage, fundamentally altering software's economic profile and forcing valuations down toward those of traditional industries.
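A back-of-the-envelope unit-economics sketch makes the contrast concrete (all prices, usage levels, and per-query costs below are assumed for illustration): traditional software keeps nearly the entire subscription as gross profit, while AI products pay a variable inference cost for every query served.

```python
# Hypothetical unit economics: classic software vs AI with per-query costs.
price_per_user_month = 30.0      # subscription price, $ (assumed)
queries_per_user_month = 1_000   # usage per subscriber (assumed)
inference_cost_per_query = 0.01  # variable compute cost, $ (assumed)

# Traditional software: near-zero marginal cost to serve one more user.
saas_marginal_cost = 0.0
# AI product: marginal cost scales linearly with usage.
ai_marginal_cost = queries_per_user_month * inference_cost_per_query

saas_gross_margin = (price_per_user_month - saas_marginal_cost) / price_per_user_month
ai_gross_margin = (price_per_user_month - ai_marginal_cost) / price_per_user_month

print(f"SaaS gross margin: {saas_gross_margin:.0%}")
print(f"AI gross margin:   {ai_gross_margin:.0%}")
```

Note that heavier usage widens the gap: doubling queries per user leaves the SaaS margin untouched but cuts the AI margin further, which is the mechanism pulling AI software valuations toward those of usage-cost industries.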
The most durable AI applications are those that directly amplify their customers' revenue streams rather than merely offering efficiency gains. For businesses with non-hourly billing models, like contingency-based law firms, AI that helps them win more cases is far more valuable and defensible than AI that just saves time.
AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.
CoreWeave, a major AI infrastructure provider, reports its compute workload is shifting from roughly two-thirds training toward nearly half inference. This indicates the AI industry is moving beyond model creation to real-world application and monetization, a crucial sign of enterprise adoption and market maturity.
Unlike the dot-com era where valuations far outpaced a small, slow user base, the current AI shift is driven by products with immediate, massive adoption and revenue. The technology is delivering value today, not just promising it for the future, which fundamentally changes the financial dynamics.