
FTV Capital's managing partner believes current high AI usage might be a "false positive" driven by subsidized, low-cost experimentation with multiple LLMs. As prices rise and the market matures, users will likely consolidate to fewer paid services, revealing that initial adoption metrics might not translate into sustainable long-term demand.

Related Insights

Unlike traditional B2B markets where only ~5% of customers are buying at any time, the AI boom has pushed nearly 100% of companies to seek solutions at once. This temporary gold rush warps perception of market size, creating a risk of over-investment similar to the COVID-era software bubble.

Data from Ramp indicates enterprise AI adoption has stalled at 45%, meaning 55% of businesses are not paying for AI. This suggests that simply making models smarter isn't driving growth. The next adoption wave requires AI to become more practically useful and demonstrate clear business value, rather than just offering incremental intelligence gains.

The narrative of "off the charts" AI demand is misleading. Major AI providers like OpenAI are "burning tens of billions of dollars," indicating they are not charging the true cost for their services. A realistic picture of demand will only emerge once they are forced to price for profitability, which could significantly cool the market.

Analysts distinguish between initial revenue from training large language models (LLMs) and more sustainable, long-term revenue from "inference" — the actual use of AI applications by end-market companies. The latter, like a bank using an AI chatbot, signals true market adoption and is considered the more valuable, "sticky" revenue base.

Contrary to the assumption that technology always gets cheaper over time, today's low cost of building on AI is artificial. The current phase is characterized by abundant venture capital and intense competition among AI tool providers, which subsidizes costs for developers. As the market consolidates, those costs will rise.

The primary short-term risk for the AI sector isn't capital expenditure but the high cost of token generation. For AI applications to become ubiquitous, the unit economics must improve. If running a single query remains prohibitively expensive for businesses, widespread, sustainable adoption will be impossible, threatening the entire investment thesis.
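The unit-economics argument above can be made concrete with a back-of-the-envelope sketch. All token counts, prices, and revenue figures below are hypothetical placeholders for illustration, not numbers from the episode:

```python
# Back-of-the-envelope unit economics for a single AI query.
# Every number here is a hypothetical assumption for illustration.

def query_cost(input_tokens: int, output_tokens: int,
               price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of one query, given per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m \
         + (output_tokens / 1e6) * price_out_per_m

# Hypothetical workload: 2,000 input tokens and 500 output tokens per query,
# at assumed prices of $3.00 / $15.00 per million input / output tokens.
cost = query_cost(2_000, 500, price_in_per_m=3.00, price_out_per_m=15.00)

# Viability hinges on revenue per query exceeding this cost.
revenue_per_query = 0.02  # hypothetical
margin = revenue_per_query - cost
print(f"cost per query: ${cost:.4f}, margin: ${margin:.4f}")
```

If token prices rise once subsidies end, the margin in a sketch like this can flip negative, which is exactly the threat to widespread adoption the insight describes.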

The narrative of insatiable AI compute demand is partially a bubble. It's fueled by inefficient early models ("token maxing") and a culture where tech executives brag about their AI spending as a status symbol, a behavior not seen with traditional cloud costs. This suggests demand could normalize.

Ramp's AI index shows paid AI adoption among businesses has stalled. This indicates the initial wave of adoption driven by model capability leaps has passed. Future growth will depend less on raw model improvements and more on clear, high-ROI use cases for the mainstream market.

The current affordability of AI tokens is not sustainable; it's propped up by venture capital funding AI companies operating at a loss. Businesses should treat this as a temporary window for aggressive learning and experimentation before prices inevitably rise to reflect true operational costs.

Unlike previous tech cycles where early revenue was a strong signal, the current AI hype creates significant "experimental demand." Companies will try, pay for, and even renew products that don't fully work. Investors must look beyond revenue to assess true product-market fit.