We scan new podcasts and send you the top 5 insights daily.
Uber found that rule-based AI agents failed because their internal policy documentation was incomplete and designed for human interpretation. Their new approach scraps the rules and instead provides the AI with desired outcomes (e.g., "keep this customer happy"), letting the model determine the best action.
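The contrast between the two approaches can be sketched in code. This is an illustrative sketch only (both functions and their prompt wording are hypothetical, not Uber's actual implementation): a rule-based prompt must enumerate every policy, while an outcome-based prompt just states the goal and leaves the action to the model.

```python
# Hypothetical sketch: rule-based vs. outcome-based prompting for a
# customer-service model. Function names and wording are illustrative.

def rule_based_prompt(policy_rules: list[str], ticket: str) -> str:
    """Encodes every policy rule explicitly; brittle when the policy
    documentation is incomplete or written for human interpretation."""
    rules = "\n".join(f"- {r}" for r in policy_rules)
    return f"Follow these rules exactly:\n{rules}\n\nTicket: {ticket}"

def outcome_based_prompt(desired_outcome: str, ticket: str) -> str:
    """States the desired outcome and lets the model choose the action."""
    return (
        f"Desired outcome: {desired_outcome}\n"
        f"Decide the best action to achieve it.\n\n"
        f"Ticket: {ticket}"
    )

prompt = outcome_based_prompt(
    "keep this customer happy",
    "Rider was charged for a detour they did not request.",
)
```

The outcome-based version stays valid even when the underlying policy docs are incomplete, because the model is optimizing for the goal rather than replaying the rulebook.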
AI's best use is not replacing agents but empowering them. By analyzing a customer's history and sentiment, AI can provide real-time guidance like "slow down" or "acknowledge past frustration." This fosters genuine, empathetic interactions at scale, moving beyond the limitations of static, impersonal scripts.
Rather than programming AI agents with a company's formal policies, a more powerful approach is to let them observe thousands of actual 'decision traces.' This allows the AI to discover the organization's emergent, de facto rules—how work *actually* gets done—creating a more accurate and effective world model for automation.
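One minimal way to picture "learning from decision traces" is frequency mining: for each situation, find the action human agents actually took most often. This is a toy sketch under that assumption (the trace format and function name are hypothetical), not a description of any production system.

```python
from collections import Counter, defaultdict

# Hypothetical sketch: infer de facto rules from (situation, action)
# pairs observed in real decision traces, rather than from formal policy.
def mine_defacto_rules(traces: list[tuple[str, str]]) -> dict[str, str]:
    """Return the most common action per situation -- the organization's
    emergent, de facto rule for how work actually gets done."""
    by_situation: dict[str, Counter] = defaultdict(Counter)
    for situation, action in traces:
        by_situation[situation][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_situation.items()}

traces = [
    ("late_delivery", "refund"),
    ("late_delivery", "refund"),
    ("late_delivery", "coupon"),
    ("wrong_item", "resend"),
]
rules = mine_defacto_rules(traces)
# rules["late_delivery"] is "refund", even if the official policy says "coupon"
```

A real system would learn far richer policies than a frequency table, but the principle is the same: the model of "how we work" comes from observed behavior, not from the handbook.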
With infinitely scalable AI agents, cost and time per interaction are no longer primary constraints. Companies should abandon classic efficiency metrics like Average Handle Time and instead measure success by outcomes, such as percentage of tasks completed and improvements in Customer Satisfaction (CSAT).
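The shift from efficiency to outcome metrics is easy to make concrete. A minimal sketch, assuming a 1-to-5 satisfaction scale and treating "satisfied" as a rating of 4 or above (a common CSAT convention, but an assumption here):

```python
# Hypothetical sketch: outcome metrics to replace Average Handle Time.

def task_completion_rate(tasks: list[bool]) -> float:
    """Fraction of tasks fully resolved (True = completed)."""
    return sum(tasks) / len(tasks)

def csat(scores: list[int], threshold: int = 4) -> float:
    """CSAT: share of ratings at or above the satisfaction threshold
    on an assumed 1-5 scale."""
    return sum(s >= threshold for s in scores) / len(scores)

completion = task_completion_rate([True, True, False, True])  # 0.75
satisfaction = csat([5, 4, 3, 5])                             # 0.75
```

Neither metric references time or cost per interaction, which is the point: when agents scale cheaply, the only question left is whether the outcome was achieved.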
Traditional customer service waits for a problem to occur and then tries to solve it. Agentic AI is moving this function 'upstream' into the digital experience itself. By anticipating and addressing issues within the user journey before they become problems, companies can prevent customer friction entirely.
An unexpected benefit of setting up an AI system is that it forces you to review customer interaction playbooks. Companies often discover their official scripts and processes are outdated, leading to crucial updates that improve both the AI's performance and the human team's effectiveness.
A world where AI agents perfectly follow policies would be brittle and frustrating. Human systems work because they have an implicit assumption of discretionary non-compliance. People value, and will pay for, the possibility that a human can bend the rules for them in a messy situation.
An attempt to use AI to assist human customer service agents backfired: agents mistrusted the AI's recommendations and ended up doing the work twice, once to check the suggestion and again to resolve the issue themselves. The solution was to give AI full control over low-stakes issues, allowing it to learn and improve without creating inefficiency for human counterparts.
An attempt to use AI to assist human customer service agents backfired: agents mistrusted the AI's recommendations and ended up doing the work twice, once to check the suggestion and again to resolve the issue themselves. The solution was to give AI full control over low-stakes issues, allowing it to learn and improve without creating inefficiency for human counterparts.
Prioritize using AI to support human agents internally. A co-pilot model equips agents with instant, accurate information, enabling them to resolve complex issues faster and provide a more natural, less-scripted customer experience.
Counterintuitively, Uber's AI customer service systems produced better results when given general guidance like "treat your customers well" instead of a rigid, rules-based framework. This suggests that for complex, human-centric tasks, empowering models with common-sense objectives is more effective than micromanagement.
Unlike traditional automation that follows simple rules (e.g., match competitor price), AI agents optimize for a business goal. They synthesize data from siloed systems like inventory and finance, simulate potential outcomes, and then recommend the best course of action.
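The synthesize-simulate-recommend loop can be sketched in a few lines. Everything here is hypothetical (the toy profit model, the action list, the numbers): the point is only that the agent scores candidate actions against a business goal using data drawn from separate systems, rather than firing a fixed rule.

```python
# Hypothetical sketch: choose the action that maximizes a business goal
# (expected profit), combining inventory and finance data from siloed systems.

def simulate_profit(action: dict, inventory: int, unit_margin: float) -> float:
    """Toy outcome model: units sold (capped by inventory) times the
    per-unit margin after the action's discount."""
    units = min(action["expected_demand"], inventory)
    return units * (unit_margin - action["discount"])

def best_action(actions: list[dict], inventory: int, unit_margin: float) -> dict:
    """Simulate each candidate and recommend the highest-scoring one."""
    return max(actions, key=lambda a: simulate_profit(a, inventory, unit_margin))

candidates = [
    {"name": "hold_price", "discount": 0.0, "expected_demand": 80},
    {"name": "match_competitor", "discount": 2.0, "expected_demand": 130},
    {"name": "deep_discount", "discount": 5.0, "expected_demand": 200},
]
choice = best_action(candidates, inventory=150, unit_margin=6.0)
```

A rule-based system would always "match competitor price"; here that action wins only because the simulated outcome says so, and a different inventory level or margin could flip the recommendation.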