
While Shopify uses AI internally to limit headcount, its external AI assistant for merchants is creating new LLM costs. These "token costs" are now partially offsetting the efficiency gains on the customer-support side, revealing a hidden tension in AI-driven business models.

Related Insights

The team managing Composio's AI pipeline for building tool integrations spends more on LLM tokens than on salaries for its engineers. This signals a new economic reality for AI-native companies where compute is a larger operational cost than labor.

To function effectively, AI agents need their own accounts for tools like Slack, Notion, and Google Docs. This means companies will pay for seats as if they were human employees, potentially doubling their SaaS budget instead of reducing it.

Many AI coding agents are unprofitable because their business model is broken. They charge a fixed subscription fee but pay variable, per-token costs for model inference. This means their most engaged power users, who should be their best customers, are actually their biggest cost centers, leading to negative gross margins.
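The margin squeeze described above is simple arithmetic. The sketch below uses entirely hypothetical prices and usage figures (not any real vendor's rates) to show how a flat subscription fee turns the heaviest users into net losses:

```python
# Hypothetical unit economics for a flat-fee AI coding assistant.
# All figures are illustrative assumptions, not real vendor data.

SUBSCRIPTION_PRICE = 20.00       # $/month, flat fee per subscriber
COST_PER_MILLION_TOKENS = 10.00  # $ blended inference cost (assumed)

def gross_margin(tokens_used_millions: float) -> float:
    """Monthly gross margin for one subscriber at a given usage level."""
    inference_cost = tokens_used_millions * COST_PER_MILLION_TOKENS
    return SUBSCRIPTION_PRICE - inference_cost

light_user = gross_margin(0.5)  # 0.5M tokens: revenue comfortably covers cost
power_user = gross_margin(5.0)  # 5M tokens: inference cost exceeds the fee

print(f"light user: ${light_user:.2f}, power user: ${power_user:.2f}")
```

Under these assumed numbers the light user yields a $15 margin while the power user loses $30 a month, which is exactly the inversion the insight describes: engagement, normally the best predictor of customer value, becomes the best predictor of cost.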

Historically, a developer's primary cost was salary. Now, constant use of powerful AI coding assistants creates a new, variable infrastructure expense for LLM tokens. This changes the economic model of software development, potentially adding several dollars per hour in token costs for each engineer.

Analysis of Shopify's internal AI usage reveals a significant trend: the top percentile of users are increasing their token consumption much faster than others. The CTO finds this skew "not ideal," fearing it could lead to extreme imbalances in resource utilization.

A paradox: the cost of a fixed level of AI capability (e.g., GPT-4-class performance) has dropped 100-1000x, yet overall enterprise spend keeps rising. Applications now use frontier models with massive context windows and multi-step agentic workflows, and those multipliers on token usage more than cancel out the per-token price declines.
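The paradox falls out of the multipliers. The figures below are assumptions chosen only to make the arithmetic concrete (not measured prices or token counts): even a 100x drop in per-token price is overwhelmed when context size and agent steps multiply usage by 1000x.

```python
# Illustrative sketch of the "cheaper tokens, bigger bills" paradox.
# Every number here is an assumption for the sake of arithmetic.

old_price_per_mtok = 30.00  # $ per million tokens, GPT-4-era pricing
new_price_per_mtok = 0.30   # 100x cheaper for the same capability tier

old_tokens_per_task = 0.002  # millions of tokens: one short prompt + reply
context_multiplier = 50      # large context re-sent with each call
agent_steps = 20             # calls per task in an agentic loop

new_tokens_per_task = old_tokens_per_task * context_multiplier * agent_steps

old_cost = old_tokens_per_task * old_price_per_mtok
new_cost = new_tokens_per_task * new_price_per_mtok

print(f"per-task cost: old ${old_cost:.2f} -> new ${new_cost:.2f}")
```

With these assumed multipliers, the per-task bill rises roughly 10x even though each token is 100x cheaper, mirroring the enterprise-spend trend in the insight.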

The high operational cost of using proprietary LLMs creates 'token junkies' who burn through cash rapidly. This intense cost pressure is a primary driver for power users to adopt cheaper, local, open-source models they can run on their own hardware, creating a distinct market segment.

Heavy use of AI agents and API calls is generating significant costs, with some agents costing $100,000 annually. This creates a new financial reality where companies must budget for 'tokens' per employee, potentially making the AI's cost more than the human's salary.

The traditional SaaS model—high R&D/sales costs, low COGS—is being inverted. AI makes building software cheap but running it expensive due to high inference costs (COGS). This threatens profitability, as companies now face high customer acquisition costs AND high costs of goods sold.
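The inversion shows up directly in gross margin. The percentages below are illustrative assumptions, not industry benchmarks, but they capture the structural difference between hosting-light SaaS and inference-heavy AI applications:

```python
# Hypothetical cost structures: traditional SaaS vs AI-native SaaS.
# Revenue and COGS figures are illustrative assumptions.

def gross_margin_pct(revenue: float, cogs: float) -> float:
    """Gross margin as a percentage of revenue."""
    return 100 * (revenue - cogs) / revenue

# Traditional SaaS: hosting is a small slice of revenue
saas_margin = gross_margin_pct(revenue=100.0, cogs=15.0)

# AI-native app: model inference consumes a large share of revenue
ai_margin = gross_margin_pct(revenue=100.0, cogs=60.0)

print(f"traditional SaaS: {saas_margin:.0f}%, AI-native: {ai_margin:.0f}%")
```

An 85% gross margin business and a 40% gross margin business can look identical on top-line revenue while supporting very different spending on sales and R&D, which is why high customer acquisition costs plus high COGS is the squeeze the insight warns about.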

Despite fears of high AI usage bills, the actual token costs for running multiple customer-facing AI applications can be trivial. SaaStr's entire suite of AI tools, including its AI VP of CS, runs on a total budget of less than $200 per month for all API usage.