An attempt to use AI to assist human customer service agents backfired: agents mistrusted the AI's recommendations and ended up doing the work twice. The solution was to give the AI full control over low-stakes issues, allowing it to learn and improve without creating duplicated effort for its human counterparts.

Related Insights

Use a two-axis framework to determine if a human-in-the-loop is needed. If the AI is highly competent and the task is low-stakes (e.g., internal competitor tracking), full autonomy is fine. For high-stakes tasks (e.g., customer emails), human review is essential, even if the AI is good.
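The two-axis framework can be sketched as a simple decision rule. This is an illustrative sketch only: the function name, the 0-to-1 scales, and the cutoff values are hypothetical, not from the source.

```python
# Illustrative sketch of the two-axis framework (competence x stakes).
# The thresholds and return labels below are hypothetical.
def oversight_policy(ai_competence: float, task_stakes: float) -> str:
    """Decide how much human oversight an AI task needs.

    ai_competence: 0.0-1.0, e.g. measured accuracy on this task type.
    task_stakes:   0.0-1.0, cost of an error (customer-facing = high).
    """
    HIGH = 0.8  # hypothetical cutoff for illustration
    if task_stakes >= HIGH:
        # High-stakes work (e.g. customer emails) always gets human
        # review, even when the model is strong.
        return "human_review_required"
    if ai_competence >= HIGH:
        # Competent model on low-stakes work (e.g. internal
        # competitor tracking): full autonomy is fine.
        return "full_autonomy"
    # Weak model on low-stakes work: let it run, but sample outputs.
    return "autonomy_with_spot_checks"

print(oversight_policy(0.95, 0.9))  # customer emails
print(oversight_policy(0.95, 0.2))  # competitor tracking
```

The key property is that the stakes axis is checked first: high competence never overrides the need for human review on high-stakes tasks.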

Effective enterprise AI deployment involves running human and AI workflows in parallel. When the AI fails, it generates a data point for fine-tuning. When the human fails, it becomes a training moment for the employee. This "tandem system" creates a continuous feedback loop for both the model and the workforce.
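The tandem feedback loop described above can be sketched in a few lines. Everything here is an assumed illustration: the class and function names and the dictionary shapes are invented for the sketch, and a real system would compare against reviewed outcomes rather than a single known-good answer.

```python
# Hypothetical sketch of the "tandem system": human and AI resolve the
# same ticket in parallel, and each side's failure feeds its own
# improvement loop (fine-tuning data for the model, coaching for the agent).
from dataclasses import dataclass, field

@dataclass
class TandemLog:
    fine_tune_examples: list = field(default_factory=list)  # for the model
    coaching_moments: list = field(default_factory=list)    # for the agent

def record_outcome(log: TandemLog, ticket: str,
                   ai_answer: str, human_answer: str,
                   correct_answer: str) -> None:
    """Compare parallel AI and human resolutions against the reviewed one."""
    if ai_answer != correct_answer:
        # AI failure becomes a labeled example for fine-tuning.
        log.fine_tune_examples.append(
            {"ticket": ticket, "label": correct_answer})
    if human_answer != correct_answer:
        # Human failure becomes a training moment for the employee.
        log.coaching_moments.append(
            {"ticket": ticket, "expected": correct_answer})

log = TandemLog()
record_outcome(log, "refund request",
               ai_answer="deny", human_answer="approve",
               correct_answer="approve")
# Here the AI was wrong, so one fine-tuning example is logged
# and no coaching note is created.
```

The point of the sketch is the symmetry: both workflows run on the same input, and every disagreement with the reviewed outcome improves exactly one side of the system.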

Beyond automating 80% of customer inquiries with AI, Sea leverages these tools as trainers for its human agents. They created an AI "customer service trainer" to improve the performance and consistency of their human support team, creating a powerful symbiotic system rather than simply replacing people.

Instead of replacing humans, AI should handle repetitive, routine tasks. This frees human agents to focus on complex issues requiring empathy, listening, and critical thinking. This partnership, termed "Tandem Care," enhances both efficiency and the quality of the customer experience by combining the best of both worlds.

Frame AI agent development like training an intern. Initially, they need clear instructions, access to tools, and your specific systems. They won't be perfect at first, but with iterative feedback and training ('progress over perfection'), they can evolve to handle complex tasks autonomously.

Superhuman designs its AI to avoid "agent laziness," where the AI asks the user for clarification on simple tasks (e.g., "Which time slot do you prefer?"). A truly helpful agent should operate like a human executive assistant, making reasonable decisions autonomously to save the user time.

Companies aren't using AI to cut staff but to handle routine tasks, allowing agents to manage complex, emotional issues. This transforms the agent's role from transactional support to high-value relationship management, requiring more empathy and problem-solving skills, not less.

To mitigate risks like AI hallucinations and high operational costs, enterprises should first deploy new AI tools internally to support human agents. This "agent-assist" model allows for monitoring, testing, and refinement in a controlled environment before exposing the technology directly to customers.

Prioritize using AI to support human agents internally. A co-pilot model equips agents with instant, accurate information, enabling them to resolve complex issues faster and provide a more natural, less-scripted customer experience.

Counterintuitively, Uber's AI customer service systems produced better results when given general guidance like "treat your customers well" instead of a rigid, rules-based framework. This suggests that for complex, human-centric tasks, empowering models with common-sense objectives is more effective than micromanagement.

Uber's AI Customer Service Agents Failed Until They Were Given Full Autonomy | RiffOn