Counterintuitively, Uber's AI customer service systems produced better results when given general guidance like "treat your customers well" instead of a rigid, rules-based framework. This suggests that for complex, human-centric tasks, empowering models with common-sense objectives is more effective than micromanagement.
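As a rough illustration of the contrast, the sketch below steers the same model with each style. The prompts, the `gpt-4o` model name, and the OpenAI-style client are assumptions for the example, not Uber's actual setup.

```python
from openai import OpenAI

# Two ways to steer the same support model: an exhaustive rulebook versus
# a single common-sense principle. Prompts and model name are illustrative.

RULES_PROMPT = """You are a support agent. Follow these rules exactly:
1. Apologize at most once per conversation.
2. Offer a refund only if the order is more than 30 days late.
3. End every message with the ticket number and a survey link."""

PRINCIPLE_PROMPT = ("You are a support agent. Treat your customers well: "
                    "be honest, fix the actual problem, escalate when unsure.")

def support_reply(customer_message: str, rigid: bool = False) -> str:
    """Generate a reply under either steering style."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": RULES_PROMPT if rigid else PRINCIPLE_PROMPT},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content
```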

Related Insights

Contrary to the vision of free-wheeling autonomous agents, most business automation relies on strict Standard Operating Procedures (SOPs). Products like OpenAI's Agent Builder succeed by providing deterministic, node-based workflows that enforce business logic; that reliability is often more valuable than pure autonomy.
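A toy version of that idea, with hypothetical node names and routing rules rather than Agent Builder's actual API: the graph itself encodes the SOP, so no step ever chooses its own path.

```python
from dataclasses import dataclass
from typing import Callable

# A toy node-based workflow: routing is fixed by business logic, not left
# to an agent's judgment. Node names and rules are illustrative.

@dataclass
class Ticket:
    text: str
    resolution: str = ""

def classify(t: Ticket) -> str:
    # Deterministic routing key; a real system might use a classifier here,
    # but the set of allowed outcomes stays fixed.
    return "billing" if "charge" in t.text.lower() else "general"

def handle_billing(t: Ticket) -> None:
    t.resolution = "billing team SOP applied"

def handle_general(t: Ticket) -> None:
    t.resolution = "general support SOP applied"

HANDLERS: dict[str, Callable[[Ticket], None]] = {
    "billing": handle_billing,
    "general": handle_general,
}

def run(t: Ticket) -> Ticket:
    HANDLERS[classify(t)](t)  # classify -> handler: the edge is hard-wired
    return t

print(run(Ticket("I was charged twice")).resolution)
```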

An AI tool that coaches call center agents on conversational dynamics (when to listen, show excitement, or pause) dramatically reduces customer conflict. This shows that managing the rhythm of an interaction is often more effective for de-escalation than focusing solely on the words in a script.

Frame AI agent development as training an intern. Initially, agents need clear instructions, access to tools, and exposure to your specific systems. They won't be perfect at first, but with iterative feedback and training ('progress over perfection'), they can evolve to handle complex tasks autonomously.

Users get frustrated when AI doesn't meet expectations. The correct mental model is to treat AI as a junior teammate that needs explicit instructions, defined tools, and context provided incrementally. This approach, which Claude Skills facilitate, prevents context overload and leads to better outcomes.
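A minimal sketch of that incremental provisioning: a cheap index of one-line skill descriptions is always visible, while full instructions load only when a task needs them. The `Skill` structure and the substring matching are simplifying assumptions, not the actual Claude Skills mechanism.

```python
from dataclasses import dataclass

# Progressive disclosure: the model first sees only short descriptions;
# a skill's full instructions enter the context only on demand.

@dataclass
class Skill:
    name: str
    description: str   # always visible: a few tokens
    instructions: str  # loaded on demand: potentially thousands of tokens

SKILLS = [
    Skill("refunds", "Handle refund eligibility and approvals",
          "...full multi-page refund SOP..."),
    Skill("shipping", "Answer shipping and tracking questions",
          "...full shipping playbook..."),
]

def build_context(task: str) -> str:
    # Step 1: expose only the cheap index of descriptions.
    index = "\n".join(f"- {s.name}: {s.description}" for s in SKILLS)
    # Step 2: pull in full instructions only for skills the task mentions,
    # keeping the junior teammate from being overwhelmed up front.
    relevant = [s.instructions for s in SKILLS if s.name in task.lower()]
    return index + ("\n\n" + "\n\n".join(relevant) if relevant else "")

print(build_context("Customer wants a refunds status update"))
```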

Humans mistakenly believe they are giving AIs goals. In reality, they are providing a 'description of a goal' (e.g., a text prompt). The AI must then infer the actual goal from this lossy, ambiguous description. Many alignment failures are not malicious disobedience but simple incompetence at this critical inference step.

An attempt to use AI to assist human customer service agents backfired: agents mistrusted the AI's recommendations and ended up doing the work twice. The solution was to give the AI full control over low-stakes issues, allowing it to learn and improve without creating inefficiency for its human counterparts.
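The routing logic this implies is simple enough to sketch; the categories and the refund threshold below are illustrative assumptions, not the actual policy.

```python
# Stakes-based routing: low-stakes tickets go to the AI end to end (so it
# can learn without humans re-checking its work); everything else goes
# straight to a human, with no AI suggestion attached to double-check.

LOW_STAKES = {"password_reset", "order_status", "address_change"}
REFUND_LIMIT = 25.0  # dollars; anything above goes to a human

def route(ticket: dict) -> str:
    if ticket["category"] in LOW_STAKES:
        return "ai_autonomous"
    if ticket["category"] == "refund" and ticket["amount"] <= REFUND_LIMIT:
        return "ai_autonomous"
    return "human_queue"

assert route({"category": "order_status", "amount": 0}) == "ai_autonomous"
assert route({"category": "refund", "amount": 120.0}) == "human_queue"
```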

Prioritize using AI to support human agents internally. A co-pilot model equips agents with instant, accurate information, enabling them to resolve complex issues faster and provide a more natural, less-scripted customer experience.
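Concretely, a co-pilot surfaces knowledge to the agent rather than speaking to the customer. In the sketch below, the keyword-overlap scorer is a stand-in assumption for a real retrieval model, and the knowledge base entries are invented.

```python
# A minimal co-pilot: the AI suggests relevant knowledge-base snippets to
# the human agent mid-conversation; the agent stays in control of the call.

KNOWLEDGE_BASE = {
    "refund policy": "Refunds allowed within 30 days with receipt.",
    "warranty claims": "Warranty covers manufacturing defects for 1 year.",
    "shipping delays": "Offer expedited reshipment if delayed over 5 days.",
}

def suggest(agent_notes: str, top_k: int = 2) -> list[str]:
    """Rank KB entries by keyword overlap with the agent's notes."""
    words = set(agent_notes.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda kv: -len(words & set(kv[0].split())),
    )
    return [text for _, text in scored[:top_k]]

print(suggest("customer asking about refund policy and shipping"))
```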

Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to 'care'. This 'organic alignment' emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.

Effective AI policies focus on establishing principles for human conduct rather than just creating technical guardrails. The central question isn't what the tool can do, but how humans should responsibly use it to benefit employees, customers, and the community.

Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.