
The initial robot umpire system, which called the rulebook's 'textbook' strike zone, felt wrong to players and fans. To improve acceptance, Major League Baseball reprogrammed the system to match the slightly larger, human-defined strike zone everyone was accustomed to, prioritizing feel over objective precision.

Related Insights

Instead of protecting umpires from anger, MLB's robot system publicly highlights their every mistake on a giant scoreboard. This has turned umpire errors into viral moments of public humiliation, putting individuals under a microscope and increasing vitriol, the opposite of the technology's intended effect.

An analyst argues fans watch sports not for perfect fairness, but for human elements like drama, dialogue, and quirks. This is a lesson for product design: optimizing for pure efficiency can strip a product of the very 'inefficiencies' and imperfections that make it engaging and beloved by users.

Contrary to fears that automation would make baseball sterile, the robot umpire 'challenge system' has introduced new dramatic pauses. When a player challenges a call, the entire stadium collectively looks to the scoreboard for the robot's verdict, creating a suspenseful, shared experience that enhances fan engagement.

Instead of replacing human umpires entirely, MLB introduced robot umpires as a challenge system. This human-in-the-loop approach keeps the traditional feel of the game intact while still leveraging technology for accuracy. It's a savvy change management strategy that allows players and fans to adapt gradually to a disruptive innovation.

A world where AI agents perfectly follow policies would be brittle and frustrating. Human systems work because they have an implicit assumption of discretionary non-compliance. People value, and will pay for, the possibility that a human can bend the rules for them in a messy situation.

During testing of a full robot umpire system, players were less likely to argue with a call. Knowing a machine made the decision, one furious batter stopped himself from yelling at the human umpire. This shows how automation can de-escalate conflict by shifting blame from a person to an impartial system.

Customers have a double standard for mistakes. They accept that humans err, but expect AI-driven systems to be 100% accurate from the start. This creates a significant challenge for product managers in setting realistic expectations for new AI features.

While professional engineers focus on craft and quality, the average user is satisfied if an AI tool produces a functional result, regardless of its underlying elegance or efficiency. This tendency to accept "good enough" output threatens to devalue the meticulous work of skilled developers.

A Medallia report reveals a critical insight: customers are less tolerant of mistakes made by AI than by humans. This psychological bias means brands must prioritize accuracy and defensibility in their AI tools, as the reputational damage from a "dumb bot" is greater than from a human agent's mistake.

Customers are so accustomed to the perfect accuracy of deterministic, pre-AI software that they reject AI solutions if they aren't 100% flawless. They would rather do the entire task manually than accept an AI assistant that is 90% correct, a mindset that serial entrepreneur Elias Torres finds dangerous for businesses.