Anthropic intentionally avoids using "user minutes" as a core metric. This strategic choice reflects their focus on safety and user well-being, aiming to build a helpful tool rather than an addictive product. By prioritizing value creation over engagement time, they steer clear of the incentive structures that can lead to psychologically harmful AI behaviors.

Related Insights

Unlike traditional software that optimizes for time-in-app, the most successful AI products will be measured by their ability to save users time. The new benchmark for value will be how much cognitive load or manual work is automated "behind the scenes," fundamentally changing the definition of a successful product.

The current AI hype cycle can create misleading top-of-funnel metrics. The only companies that will survive are those demonstrating strong, above-benchmark user and revenue retention. Retention has become the ultimate litmus test for whether a product provides real, lasting value beyond the initial curiosity.

To evaluate AI's role in building relationships, marketers must look beyond transactional KPIs. Leading indicators of success include sustained engagement, customers volunteering more information, and recommending the experience to others. These metrics quantify brand trust and empathy—proving the brand is earning belief, not just attention.

Traditional product metrics like DAU (daily active users) are meaningless for autonomous AI agents that operate without user interaction. Product teams must redefine success by focusing on tangible business outcomes. Instead of tracking agent usage, measure "support tickets automatically closed" or "workflows completed."
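The shift from usage to outcomes can be sketched in a few lines. This is a minimal illustration, not any product's real schema: the event log, field names, and event types below are all hypothetical assumptions.

```python
from collections import Counter

# Hypothetical agent event log; field names and event types are
# illustrative assumptions, not a specific product's schema.
events = [
    {"type": "ticket_closed", "agent": "support-bot"},
    {"type": "workflow_completed", "agent": "ops-bot"},
    {"type": "ticket_closed", "agent": "support-bot"},
    {"type": "message_sent", "agent": "support-bot"},  # raw usage, not an outcome
]

# Count business outcomes instead of raw interactions (the DAU analogue).
OUTCOME_TYPES = {"ticket_closed", "workflow_completed"}
outcomes = Counter(e["type"] for e in events if e["type"] in OUTCOME_TYPES)

print(outcomes)  # Counter({'ticket_closed': 2, 'workflow_completed': 1})
```

The design point is that the fourth event, which would inflate a usage dashboard, simply does not count toward the metric.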

According to Goodhart's Law, when a measure becomes a target, it ceases to be a good measure. If you incentivize employees on AI-driven metrics like 'emails sent,' they will optimize for the number, not quality, corrupting the data and giving false signals of productivity.

From a corporate dashboard, a user spending 8+ hours daily with a chatbot looks like a highly engaged power user. However, this exact behavior is a key indicator of someone spiraling into an AI-induced delusion. This creates a dangerous blind spot for companies that optimize for engagement.
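To make the blind spot concrete, here is a minimal sketch of flagging the 8+ hours/day pattern described above. The session log, user IDs, and threshold variable are hypothetical assumptions; an engagement dashboard would rank these same users highest.

```python
from collections import defaultdict

# Hypothetical per-session log of (user_id, hours); purely illustrative data.
sessions = [("u1", 3.0), ("u2", 5.5), ("u1", 6.0), ("u3", 1.0)]

# Total hours per user per day.
daily_hours = defaultdict(float)
for user, hours in sessions:
    daily_hours[user] += hours

# An engagement metric would celebrate these users; a safety review flags them.
RISK_THRESHOLD_HOURS = 8
flagged = [u for u, h in daily_hours.items() if h >= RISK_THRESHOLD_HOURS]
print(flagged)  # ['u1']
```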

Open and click rates are ineffective for measuring AI-driven, two-way conversations. Instead, leaders should adopt new KPIs: outcome metrics (e.g., meetings booked), conversational quality (tracking an agent's 'I don't know' rate to measure trust), and, ultimately, customer lifetime value.
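The conversational-quality KPI mentioned above can be computed directly from transcripts. This is a naive string-matching sketch under assumed data; the reply list is hypothetical, and a production system would classify refusals more robustly than substring search.

```python
# Hypothetical agent replies; a real pipeline would pull these from transcripts.
replies = [
    "Your meeting is booked for Tuesday at 10am.",
    "I don't know, let me connect you with a colleague.",
    "The invoice was sent on March 3rd.",
    "I don't know the answer to that yet.",
]

# Rate at which the agent explicitly declines to answer (naive substring match).
dont_know_rate = sum("i don't know" in r.lower() for r in replies) / len(replies)
print(f"'I don't know' rate: {dont_know_rate:.0%}")  # 'I don't know' rate: 50%
```

Tracked over time, a stable or falling rate alongside rising outcome metrics (meetings booked) suggests the agent is earning trust rather than bluffing.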

As models mature, their core differentiator will become their underlying personality and values, shaped by their creators' objective functions. One model might optimize for user productivity by being concise, while another optimizes for engagement by being verbose.

Anthropic's commitment to AI safety, exemplified by its Societal Impacts team, isn't just about ethics. It's a calculated business move to attract high-value enterprise, government, and academic clients who prioritize responsibility and predictability over potentially reckless technology.

Teams that become over-reliant on generative AI as a silver bullet are destined to fail. True success comes from teams that remain "maniacally focused" on user and business value, using AI with intent to serve that purpose, not as the purpose itself.

Anthropic Rejects "User Minutes" Metric to Prioritize Value Over Engagement | RiffOn