Many AI coding agents are unprofitable because their business model is broken. They charge a fixed subscription fee but pay variable, per-token costs for model inference. This means their most engaged power users, who should be their best customers, are actually their biggest cost centers, leading to negative gross margins.
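The broken unit economics can be seen in a minimal sketch. All figures below (subscription price, blended token cost, usage levels) are hypothetical, chosen only to illustrate how a flat fee turns heavy usage into negative gross margin:

```python
# Illustrative only: a flat monthly subscription against variable
# per-token inference costs. None of these numbers are from the source.
SUBSCRIPTION = 20.00    # fixed monthly revenue per user ($)
COST_PER_MTOK = 3.00    # assumed blended inference cost per million tokens ($)

def gross_margin(tokens_per_month: float) -> float:
    """Gross margin for one subscriber at a given monthly token usage."""
    inference_cost = tokens_per_month / 1_000_000 * COST_PER_MTOK
    return (SUBSCRIPTION - inference_cost) / SUBSCRIPTION

print(f"casual user  (2M tokens): {gross_margin(2_000_000):.0%}")
print(f"power user  (20M tokens): {gross_margin(20_000_000):.0%}")
```

At these assumed rates the casual user is comfortably profitable while the power user costs several times the subscription price, which is the inversion the summary describes.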
Data businesses have high fixed costs to create an asset, not variable per-customer costs. This model shows poor initial gross margins but scales exceptionally well as revenue grows against fixed COGS. Investors often misunderstand this, penalizing data companies for a fundamentally powerful economic model.
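The fixed-COGS dynamic can be made concrete with made-up numbers. Assuming a data asset that costs a flat $2M per year to build and maintain, gross margin swings from deeply negative to excellent as revenue grows:

```python
# Illustrative fixed-COGS scaling: the asset cost does not grow with
# customer count, so margin improves purely with revenue. Hypothetical figures.
FIXED_COGS = 2_000_000  # assumed annual cost to build and maintain the data asset ($)

def gross_margin(revenue: float) -> float:
    """Gross margin when cost of goods sold is fixed, not per-customer."""
    return (revenue - FIXED_COGS) / revenue

for revenue in (1_000_000, 5_000_000, 20_000_000):
    print(f"${revenue:>10,} revenue -> {gross_margin(revenue):.0%} gross margin")
```

Early on the business looks broken (margin below zero), which is exactly the stage at which investors tend to penalize it; at scale the same cost base yields SaaS-like margins.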
For a true AI-native product, extremely high margins might indicate it isn't using enough AI, as inference has real costs. Founders should price for adoption, believing model costs will fall, and plan to build strong margins later through sophisticated, usage-based pricing tiers rather than optimizing prematurely.
Historically, a developer's primary cost was salary. Now, constant use of powerful AI coding assistants adds a new, variable infrastructure expense for LLM tokens. This changes the economic model of software development: token spend can add several dollars per engineer per hour.
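A back-of-the-envelope calculation shows how tokens become a per-engineer line item. The usage rate and price below are assumptions for illustration, not sourced figures:

```python
# Hypothetical: an engineer running an agentic assistant for a full workday.
tokens_per_hour = 500_000   # assumed heavy agentic usage (input + output)
cost_per_mtok   = 5.00      # assumed blended $ per million tokens
hours_per_month = 160       # roughly full-time

hourly_cost  = tokens_per_hour / 1_000_000 * cost_per_mtok
monthly_cost = hourly_cost * hours_per_month
print(f"${hourly_cost:.2f}/hour -> ${monthly_cost:.0f}/month per engineer")
```

Even at these modest assumed rates the cost lands in the hundreds of dollars per engineer per month, a material new entry in the cost of producing software.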
Standard SaaS pricing fails for agentic products because high usage becomes a cost center. Avoid the trap of profiting from non-use. Instead, implement a hybrid model with a fixed base and usage-based overages, or, ideally, tie pricing directly to measurable outcomes generated by the AI.
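The hybrid model described above can be sketched as a billing function: a fixed base fee that includes a usage allowance, then metered overage so revenue scales with inference cost. Tier values are hypothetical:

```python
# Sketch of a hybrid base-plus-overage plan. All tier values are invented.
BASE_FEE = 50.00        # $/month, includes the allowance below
INCLUDED = 10_000_000   # tokens included in the base fee
OVERAGE  = 4.00         # $ per additional million tokens

def monthly_bill(tokens_used: int) -> float:
    """Fixed base fee plus metered charges beyond the included allowance."""
    extra_tokens = max(0, tokens_used - INCLUDED)
    return BASE_FEE + extra_tokens / 1_000_000 * OVERAGE

print(monthly_bill(8_000_000))    # within the allowance: base fee only
print(monthly_bill(25_000_000))   # 15M tokens of overage billed on top
```

Under this structure a power user generates more revenue rather than pure loss, removing the perverse incentive to profit from non-use.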
Unlike high-margin SaaS, AI agents operate on thin 30-40% gross margins. This financial reality makes traditional seat-based pricing obsolete. To build a viable business, companies must build new pricing and metering systems that capture more of the value agents create while keeping inference costs under control, pursuing profitability from day one.
Unlike SaaS where marginal costs are near-zero, AI companies face high inference costs. Abuse of free trials or refunds by non-paying users ("friendly fraud") directly threatens unit economics, forcing some founders to choke growth by disabling trials altogether to survive.
Perplexity achieves profitability on its paid subscribers, countering the narrative of unsustainable AI compute costs. Critically, the cost of servicing free users is categorized as a research and development expense, as their queries are used to train and improve the system. This accounting strategy presents a clearer path to sustainable unit economics for AI services.
Contrary to traditional software evaluation, Andreessen Horowitz now questions AI companies that present high, SaaS-like gross margins. This often indicates a critical flaw: customers are not engaging with the costly, core AI features. Low margins, in this context, can be a positive signal of genuine product usage and value delivery.
Traditional SaaS metrics like 80%+ gross margins are misleading for AI companies. High inference costs lower margins, but if the absolute gross profit per customer is multiples higher than a SaaS equivalent, it's a superior business. The focus should shift from margin percentages to absolute gross profit dollars and multiples.
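The shift from percentages to dollars is easy to show with invented numbers. Assume a SaaS product at $100/customer/month on 80% margins versus an AI product at $1,000/customer/month on 40% margins:

```python
# Hypothetical comparison: margin percentage vs absolute gross profit.
saas_revenue, saas_margin = 100.0, 0.80     # $/customer/month, gross margin
ai_revenue,   ai_margin   = 1_000.0, 0.40

saas_gross_profit = saas_revenue * saas_margin   # $80 per customer
ai_gross_profit   = ai_revenue * ai_margin       # $400 per customer

print(f"AI gross profit is {ai_gross_profit / saas_gross_profit:.0f}x "
      f"the SaaS equivalent despite half the margin percentage")
```

Judged on margin percentage the AI business looks worse; judged on gross profit dollars per customer, at these assumed prices, it is several times better.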