A world where AI agents perfectly follow policies would be brittle and frustrating. Human systems work because they implicitly allow for discretionary non-compliance. People value, and will pay for, the possibility that a human can bend the rules for them in a messy situation.
In a Washington D.C. study, citizens expressed a desire for personal AI agents to help them navigate complex regulations and paperwork. This reveals a key user need: people want AI as a personal advocate against systemic complexity, not just as a tool for institutional optimization.
Formalizing policies does not have to mean creating rigid systems; it makes rules transparent and debatable. It also allows for explicit exceptions, where the final "axiom" in a logical system can simply be "go talk to a human," as in the sketch below. This preserves necessary flexibility and discretion while making the process auditable and clear.
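To make that final "axiom" concrete, here is a minimal sketch of what a formalized policy with a built-in human escape hatch might look like. The refund scenario, rule names, and thresholds are hypothetical illustrations, not anything described in the episode; the point is that every rule is explicit and inspectable, and the last rule always routes to a person.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical refund policy: each rule is an explicit, auditable predicate
# paired with a decision. Thresholds are invented for illustration only.

@dataclass
class Request:
    amount: float
    days_since_purchase: int
    is_fraud_flagged: bool

@dataclass
class Rule:
    name: str
    applies: Callable[[Request], bool]
    decision: str

POLICY = [
    Rule("fraud_hold", lambda r: r.is_fraud_flagged, "deny"),
    Rule("small_recent_refund", lambda r: r.amount <= 50 and r.days_since_purchase <= 30, "approve"),
    Rule("standard_window", lambda r: r.days_since_purchase <= 90, "approve"),
    # The final "axiom": anything the formal rules don't cover goes to a person.
    Rule("escalate_to_human", lambda r: True, "route to a human agent"),
]

def decide(request: Request) -> tuple[str, str]:
    """Return (rule_name, decision) for the first matching rule,
    so every outcome is traceable to an explicit, debatable rule."""
    for rule in POLICY:
        if rule.applies(request):
            return rule.name, rule.decision
    # Unreachable because the last rule always matches; kept for clarity.
    return "escalate_to_human", "route to a human agent"

if __name__ == "__main__":
    print(decide(Request(amount=400.0, days_since_purchase=200, is_fraud_flagged=False)))
    # -> ('escalate_to_human', 'route to a human agent')
```

The catch-all rule is the design point: discretion becomes an explicit, auditable part of the system rather than an off-the-books workaround.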
The true value of human interaction in customer service lies in understanding nuance. A person can empathize with a user's underlying frustration or goal—the "story" behind the problem—which is often different from the stated issue. This ability to serve the person, not just the ticket, is a key differentiator that automated systems miss.
Premium loyalty programs, like airline status tiers, are a monetized system for accessing favorable human judgment and exceptions to standard rules. This provides a powerful market-based argument that pure, rigid AI automation will have a value ceiling because people pay to escape it.
The assumption that efficiency is the ultimate market driver is a mistake. Markets exist to serve human wants. If customers reject hyper-efficient AI systems in favor of more human, flexible experiences, then consumer preference—not raw efficiency—will shape AI's economic role.
Today's AI systems exhibit "jagged intelligence"—strong performance on many tasks but inconsistent reliability on others. This prevents full job replacement because being 95% effective is insufficient when the remaining 5% involves crucial edge cases, judgment, and discretion that still require human oversight.
The most significant enterprise challenges for AI are the "unstated constraints"—institutional knowledge, compliance nuances, and stakeholder dynamics not documented anywhere. The human operator who can identify and translate this implicit context for AI agents becomes indispensable.
Counterintuitively, Uber's AI customer service systems produced better results when given general guidance like "treat your customers well" instead of a rigid, rules-based framework. This suggests that for complex, human-centric tasks, empowering models with common-sense objectives is more effective than micromanagement.
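As a loose illustration of the contrast (not Uber's actual system, whose internals aren't described in the episode), here is how the two instruction styles might look when prompting a support agent; the sketch assumes the OpenAI Python SDK as a stand-in, and the prompts and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Style 1: micromanaged, rules-based instructions (hypothetical wording).
RULES_PROMPT = (
    "You are a support agent. Follow these rules exactly: "
    "1) Never offer a refund over $5. 2) Always ask for the order ID first. "
    "3) Use template T-14 for late deliveries. 4) Escalate anything else."
)

# Style 2: general guidance with a common-sense objective (hypothetical wording).
GUIDANCE_PROMPT = (
    "You are a support agent. Treat your customers well: understand what "
    "actually went wrong, resolve it fairly, and escalate to a human when "
    "you are unsure."
)

def reply(system_prompt: str, customer_message: str) -> str:
    """Send the customer's message to the model under the given instruction style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    message = "My driver never showed up and I was charged anyway."
    print(reply(GUIDANCE_PROMPT, message))
```

The claim from the episode is that the second style, which states the objective and trusts the model's judgment, tends to outperform the exhaustive rulebook on messy, human-centric requests.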
Customers are so accustomed to the perfect accuracy of deterministic, pre-AI software that they reject AI solutions if they aren't 100% flawless. They would rather do the entire task manually than accept an AI assistant that is 90% correct, a mindset that serial entrepreneur Elias Torres finds dangerous for businesses.
Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.