We scan new podcasts and send you the top 5 insights daily.
When companies measure AI adoption by counting tokens consumed, they create a perverse incentive: employees and their teams build agents that perform pointless tasks simply to inflate the metric, producing fake productivity and low-quality artifacts.
By ranking engineers on AI token consumption, Meta is running into Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." Employees reportedly build bots that needlessly burn tokens for status, showing how gamifying a proxy metric can backfire and decouple from actual business impact.
Companies like Meta are pushing a new practice called "tokenmaxxing," in which developers are encouraged to spend heavily on AI coding-assistant tokens. Leaderboards gamify the spending in the name of accelerating output, but it is unclear whether the metric reflects genuine productivity or is merely a vanity number.
In the current "capability exploration" phase, companies incentivize developers to use as many AI tokens as possible. Heavy usage serves as a visible, if inefficient, signal of AI adoption to management, prioritizing quantity over quality.
A trend called "tokenmaxxing" is emerging in Silicon Valley, where companies like Meta use leaderboards to track employee AI token usage. This reflects a corporate bet that higher token consumption correlates with increased productivity, turning AI usage into a new, albeit gameable, performance metric for engineers.
Gamifying AI token consumption via internal leaderboards, as seen at Meta, creates perverse incentives. Employees may burn tokens to climb the ranks rather than to solve real business problems. This "tokenmaxxing" promotes conspicuous consumption of compute, a vanity metric that masks true productivity and ROI.
Some large companies are incentivizing employees to use the maximum amount of AI tokens, even ranking them on usage. This seemingly inefficient strategy is a deliberate investment to accelerate adoption. The goal is to retrain employee thinking to be "AI native" before optimizing for cost and efficiency.
According to Goodhart's Law, when a measure becomes a target, it ceases to be a good measure. If you incentivize employees on AI-driven metrics like 'emails sent,' they will optimize for the number, not quality, corrupting the data and giving false signals of productivity.
The push for "tokenmaxxing" to drive AI adoption has unintended consequences: Uber burned through its entire 2026 AI budget in four months, driven by coding agents. The episode reveals the hidden financial risks and operational challenges of scaling agentic AI within large organizations without proper controls.
An employee who uses AI to finish 8 hours of work in 4 benefits personally by gaining free time, but the company (the principal) sees no productivity gain unless that employee produces more. This principal-agent misalignment is the core challenge of translating individual AI efficiency into corporate-level growth.
At companies like Meta, a new practice called "tokenmaxxing" is being used as a productivity measure: engineers compete on leaderboards to consume the most AI tokens. Promoted by leaders at Nvidia and Meta, the metric is criticized as easily gamed and a poor reflection of true productivity.