It's counterintuitive, but using a more expensive, more intelligent model like Opus 4.5 can be cheaper per task than a smaller model. Because the smarter model solves the problem in fewer interactions, it uses fewer tokens overall, offsetting its higher per-token price.
When evaluating AI agents, the total cost of task completion is what matters. A model with a higher per-token cost can be more economical if it resolves a user's query in fewer turns than a cheaper, less capable model. This makes "number of turns" a primary efficiency metric.
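A back-of-the-envelope sketch makes the arithmetic concrete. Every number below (turn counts, tokens per turn, prices per million tokens) is invented for illustration, not a quote from any provider:

```python
# Hypothetical comparison: total cost to finish one task with a pricier
# "smart" model vs. a cheaper model that needs more turns. All numbers
# are illustrative; substitute your provider's real prices.

def task_cost(turns, tokens_per_turn, price_per_mtok):
    """Total dollars to complete one task."""
    return turns * tokens_per_turn * price_per_mtok / 1_000_000

# A pricier model that solves the task in 2 turns...
smart = task_cost(turns=2, tokens_per_turn=8_000, price_per_mtok=15.00)

# ...vs. a cheaper model that flails through 10 turns of retries.
cheap = task_cost(turns=10, tokens_per_turn=10_000, price_per_mtok=3.00)

print(f"smart model: ${smart:.2f}")   # smart model: $0.24
print(f"cheap model: ${cheap:.2f}")   # cheap model: $0.30
```

Under these assumptions the model with a 5x higher per-token price still wins on total cost, which is why turn count, not token price, is the metric to watch.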
Classifying a model as "reasoning" based on whether it emits a chain-of-thought step is no longer useful. With massive differences in token efficiency, a so-called "reasoning" model can be faster and cheaper than a "non-reasoning" one on a given task. The focus is shifting to a continuous spectrum of capability versus overall cost.
A paradox exists where the cost for a fixed level of AI capability (e.g., GPT-4 level) has dropped 100-1000x. However, overall enterprise spend is increasing because applications now use frontier models with massive contexts and multi-step agentic workflows, creating huge multipliers on token usage that drive up total costs.
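A rough sketch shows how those multipliers overwhelm the per-token price drop. All figures here are hypothetical assumptions chosen only to make the compounding visible:

```python
# Hypothetical illustration of the spend paradox. Every number is invented.

# 2023: one GPT-4-class completion.
tokens_2023 = 2_000            # short prompt + answer
price_2023 = 30.00             # $/MTok at GPT-4's launch (illustrative)
cost_2023 = tokens_2023 * price_2023 / 1e6

# Today: GPT-4-level capability is ~100x cheaper per token...
price_gpt4_level_now = 0.30    # $/MTok (illustrative)

# ...but workloads moved to frontier models with large usage multipliers.
price_frontier = 10.00         # frontier $/MTok (illustrative)
context_mult = 50              # 100k-token contexts vs. 2k prompts
turn_mult = 20                 # multi-step agentic loop vs. one call

tokens_now = tokens_2023 * context_mult * turn_mult
cost_now = tokens_now * price_frontier / 1e6

print(f"2023 task: ${cost_2023:.3f}")   # 2023 task: $0.060
print(f"agentic task: ${cost_now:.2f}") # agentic task: $20.00
```

Under these assumptions, per-task spend rises by over 300x even while the price of fixed-level capability falls 100x.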
The binary distinction between "reasoning" and "non-reasoning" models is becoming obsolete. The more critical metric is now "token efficiency"—a model's ability to use more tokens only when a task's difficulty requires it. This dynamic token usage is a key differentiator for cost and performance.
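One way to picture dynamic token usage is an explicit thinking budget scaled to estimated difficulty, similar in spirit to the reasoning-effort or token-budget knobs some providers expose. The difficulty scores, budget formula, and numbers below are all hypothetical:

```python
# Minimal sketch of "token efficiency" as dynamic budgeting: spend
# reasoning tokens in proportion to estimated task difficulty rather
# than a flat budget. The scoring and budgets are hypothetical.

def thinking_budget(difficulty: float, max_tokens: int = 32_000) -> int:
    """Scale the reasoning-token budget with estimated difficulty in [0, 1]."""
    floor = 256  # trivial tasks get almost no "thinking" tokens
    return int(floor + difficulty * (max_tokens - floor))

for task, difficulty in [("format a date", 0.02),
                         ("fix a flaky integration test", 0.50),
                         ("diagnose a concurrency bug", 0.95)]:
    print(f"{task!r}: budget {thinking_budget(difficulty)} tokens")
```

A token-efficient model effectively does this internally: easy queries get near-instant, near-free answers, while the token spend concentrates on the tasks that need it.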
Even for complex, multi-hour tasks requiring millions of tokens, current AI agents are at least an order of magnitude cheaper than paying a human with relevant expertise. This significant cost advantage suggests that economic viability will not be a near-term bottleneck for deploying AI on increasingly sophisticated tasks.
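The order-of-magnitude claim is easy to check with rough numbers. The token count, blended price, hours, and hourly rate below are all illustrative assumptions:

```python
# Order-of-magnitude comparison: a long agentic session vs. a human
# expert on the same multi-hour task. All figures are assumptions.

tokens_used = 5_000_000        # a multi-hour agentic session
price_per_mtok = 15.00         # blended $/MTok for a frontier model
agent_cost = tokens_used * price_per_mtok / 1e6   # $75.00

hours = 6
hourly_rate = 150.00           # senior specialist rate (illustrative)
human_cost = hours * hourly_rate                  # $900.00

print(f"agent: ${agent_cost:.0f}, human: ${human_cost:.0f}, "
      f"ratio: {human_cost / agent_cost:.0f}x")   # ratio: 12x
```

Even with a generous token budget for the agent, the gap stays above 10x under these assumptions, which is why cost is unlikely to be the binding constraint.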
Tasklet's CEO points to pricing as the ultimate proof of an LLM's value. Even though GPT-4o is cheaper, Anthropic's Sonnet commands a higher price, which suggests customers pay a premium for its superior performance on multi-turn agentic tasks, a kind of value that benchmarks don't fully capture.
While the cost to achieve a fixed capability level (e.g., GPT-4 at launch) has dropped over 100x, overall enterprise spending is increasing. This paradox is explained by powerful multipliers: demand for frontier models, longer reasoning chains, and multi-step agentic workflows that consume orders of magnitude more tokens.
To optimize AI costs in development, use powerful, expensive models for creative and strategic tasks like architecture and research. Once a solid plan is established, delegate the step-by-step code execution to less powerful, more affordable models that excel at following instructions.
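A minimal sketch of that split, with an expensive "planner" producing the plan and a cheaper "executor" carrying out each step. `call_model` and both model names are hypothetical stand-ins for your provider's API, not real identifiers:

```python
# Plan with an expensive model, execute with a cheap one.
# `call_model` is a hypothetical placeholder for a real LLM client.

from typing import List

def call_model(model: str, prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def build_plan(task: str) -> List[str]:
    # Spend on intelligence where it matters: architecture and strategy.
    plan = call_model("expensive-planner-model",
                      f"Break this task into numbered implementation steps:\n{task}")
    return [line for line in plan.splitlines() if line.strip()]

def execute_plan(steps: List[str]) -> List[str]:
    # Step-by-step instruction following goes to the cheaper model.
    return [call_model("cheap-executor-model", f"Implement this step:\n{step}")
            for step in steps]

# Usage (once call_model is wired to a real client):
# results = execute_plan(build_plan("Add rate limiting to the public API"))
```

The design choice is that planning errors are expensive to recover from while execution errors are cheap to retry, so the intelligence premium is paid exactly once, up front.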
In complex, multi-step tasks, overall cost is determined by tokens per turn and the total number of turns. A more intelligent, expensive model can be cheaper overall if it solves a problem in two turns, while a cheaper model might take ten turns, accumulating higher total costs. Future benchmarks must measure this turn efficiency.
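A benchmark that measures turn efficiency could look something like the harness below. The `task` and `model` interfaces (`initial_state`, `is_solved`, `agent_step`) are hypothetical hooks, sketched only to show what gets counted:

```python
# Sketch of a turn-efficiency benchmark: run the agent loop to
# completion, then report turns taken and total dollars, not just
# per-token price. The task/model interfaces are hypothetical.

def run_task(task, model, price_per_mtok, max_turns=25):
    total_tokens, turns = 0, 0
    state = task.initial_state()
    while turns < max_turns and not task.is_solved(state):
        state, tokens = model.agent_step(state)  # one model/tool round-trip
        total_tokens += tokens
        turns += 1
    return {
        "solved": task.is_solved(state),
        "turns": turns,
        "cost_usd": total_tokens * price_per_mtok / 1e6,
    }
```

Scoring on `turns` and `cost_usd` together, rather than on price alone, is what lets a two-turn expensive model beat a ten-turn cheap one on the leaderboard.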
While the cost for GPT-4 level intelligence has dropped over 100x, total enterprise AI spend is rising. This is driven by multipliers: using larger frontier models for harder tasks, reasoning-heavy workflows that consume more tokens, and complex, multi-turn agentic systems.