We scan new podcasts and send you the top 5 insights daily.
Sam Yagan notes that while the internet made publishing free, AI introduces a marginal cost for every user interaction via token fees. This creates a cost of goods sold (COGS) for consumer tech companies for the first time, forcing founders to reconsider unit economics in a way previous generations didn't have to.
AI products with a Product-Led Growth motion face a fundamental flaw in their unit economics. Customers expect predictable SaaS-like pricing (e.g., $20/month), but the company's costs are usage-based. This creates an inverse relationship where higher user engagement leads directly to lower or negative margins.
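The margin inversion above can be made concrete with a small sketch. The numbers here are illustrative assumptions, not figures from the episode: a flat $20/month plan against a blended inference cost of $0.01 per 1K tokens.

```python
# Hypothetical inputs -- illustrative only, not real pricing or cost data.
PRICE_PER_MONTH = 20.00       # flat, SaaS-style subscription price
COST_PER_1K_TOKENS = 0.01     # assumed blended inference cost

def gross_margin(tokens_per_month: int) -> float:
    """Gross margin for one user at a given monthly token usage."""
    inference_cost = tokens_per_month / 1000 * COST_PER_1K_TOKENS
    return (PRICE_PER_MONTH - inference_cost) / PRICE_PER_MONTH

# Engagement drives cost, so margin falls -- and eventually flips negative.
for tokens in (100_000, 1_000_000, 5_000_000):
    print(f"{tokens:>9,} tokens/month -> gross margin {gross_margin(tokens):+.0%}")
```

At 100K tokens the user is 95% gross margin; at 5M tokens the same $20 subscriber costs $50 to serve, a -150% margin. That is the inverse relationship between engagement and profitability in one function.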
For a true AI-native product, extremely high margins might indicate it isn't using enough AI, as inference has real costs. Founders should price for adoption, believing model costs will fall, and plan to build strong margins later through sophisticated, usage-based pricing tiers rather than optimizing prematurely.
Many AI coding agents are unprofitable because their business model is broken. They charge a fixed subscription fee but pay variable, per-token costs for model inference. This means their most engaged power users, who should be their best customers, are actually their biggest cost centers, leading to negative gross margins.
Current AI pricing models, which pass on expensive LLM costs to users, are temporary. As LLM costs inevitably collapse and become commoditized, the winning companies will be those who have already evolved their monetization to be based on the value their product delivers.
The dominant per-user-per-month SaaS business model is becoming obsolete for AI-native companies. The new standard is consumption or outcome-based pricing. Customers will pay for the specific task an AI completes or the value it generates, not for a seat license, fundamentally changing how software is sold.
Unlike traditional SaaS, achieving product-market fit in AI doesn't guarantee a viable business. The high cost of goods sold (COGS) from model inference can exceed revenue, causing companies to lose more money as they scale. This forces a focus on economical model deployment from day one.
Software has long commanded premium valuations due to near-zero marginal distribution costs. AI breaks this model. The significant, variable cost of inference means expenses scale with usage, fundamentally altering software's economic profile and forcing valuations down toward those of traditional industries.
New AI companies reframe their P&L by viewing inference costs not as a COGS liability but as a sales and marketing investment. By building the best possible agent, the product itself becomes the primary driver of growth, allowing them to operate with lean go-to-market teams.
The primary short-term risk for the AI sector isn't capital expenditure but the high cost of token generation. For AI applications to become ubiquitous, the unit economics must improve. If running a single query remains prohibitively expensive for businesses, widespread, sustainable adoption will be impossible, threatening the entire investment thesis.
The traditional SaaS model—high R&D/sales costs, low COGS—is being inverted. AI makes building software cheap but running it expensive due to high inference costs (COGS). This threatens profitability, as companies now face high customer acquisition costs and a high cost of goods sold at the same time.
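The inversion can be sketched as two toy P&Ls per $100 of revenue. Every number below is a hypothetical assumption chosen to illustrate the shape of the shift, not a real benchmark:

```python
# Hypothetical P&L per $100 of revenue -- illustrative assumptions only.
traditional_saas = {"revenue": 100, "cogs": 10, "sales_marketing": 40, "r_and_d": 25}
ai_native        = {"revenue": 100, "cogs": 45, "sales_marketing": 40, "r_and_d": 10}

def operating_margin(pnl: dict) -> float:
    """Operating margin after COGS, go-to-market, and R&D."""
    costs = pnl["cogs"] + pnl["sales_marketing"] + pnl["r_and_d"]
    return (pnl["revenue"] - costs) / pnl["revenue"]

print(f"traditional SaaS: {operating_margin(traditional_saas):.0%}")
print(f"AI-native:        {operating_margin(ai_native):.0%}")
```

With these assumed numbers, cheaper building (lower R&D) is swamped by expensive running (inference COGS jumping from 10% to 45% of revenue), so the operating margin collapses from 25% to 5% even though go-to-market spend is unchanged.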