Waiting for AI solutions to mature is risky. Bret Taylor warns that savvy competitors can use the technology to gain structural advantages that compound over time. Moving with urgency is therefore both a defense against being left behind and a response to consumer behavior already shifting under tools like ChatGPT.
OpenAI's Chairman advises against waiting for perfect AI. Instead, companies should treat AI like human staff—fallible but manageable. The key is implementing robust technical and procedural controls to detect and remediate inevitable errors, turning an unsolvable "science problem" into a solvable "engineering problem."
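To make the "engineering problem" framing concrete, here is a minimal sketch built around a hypothetical refund workflow: the agent's output is treated as a fallible proposal, checked against a business rule, and remediated (here, by routing to a human) when it would break policy. The function names and the refund limit are illustrative, not drawn from any real system.

```python
MAX_REFUND = 100.00  # hypothetical business rule: the agent may not exceed this alone

def agent_reply(message: str) -> dict:
    """Stand-in for a call to an AI agent; returns a proposed action and reply."""
    return {"action": "refund", "amount": 250.00,
            "reply": "I've issued a $250 refund."}

def violates_policy(proposal: dict) -> bool:
    """Procedural control: catch proposals the agent is not allowed to make alone."""
    return proposal["action"] == "refund" and proposal["amount"] > MAX_REFUND

def handle(message: str) -> str:
    proposal = agent_reply(message)
    if violates_policy(proposal):
        # Detect and remediate: never ship the bad action, hand off instead.
        return "Let me connect you with a teammate who can approve this refund."
    return proposal["reply"]

print(handle("My order arrived broken and I want my money back."))
```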
Instead of relying solely on human oversight, Bret Taylor advocates a layered "defense in depth" approach to AI safety. This involves specialized "supervisor" AI models monitoring a primary agent's decisions in real time, followed by more intensive AI analysis after the conversation to flag anomalies for efficient human review.
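A rough sketch of that layering is below, with the real-time supervisor and post-conversation auditor reduced to stub rules for brevity. In practice each layer would be backed by its own model, and a flagged conversation would be queued for human review rather than printed.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    transcript: list[str] = field(default_factory=list)
    flagged: bool = False  # set when the post-hoc audit wants human eyes on it

def primary_agent(user_msg: str) -> str:
    """Layer 0: the agent that drafts replies to the customer (stubbed)."""
    return f"Draft reply to: {user_msg!r}"

def realtime_supervisor(draft: str) -> bool:
    """Layer 1: a fast check on every draft before it is sent (stub rule)."""
    return "guarantee" not in draft.lower()  # e.g. block overpromising language

def post_conversation_audit(convo: Conversation) -> bool:
    """Layer 2: a slower, more thorough pass once the conversation ends."""
    return any("refund" in turn.lower() for turn in convo.transcript)

def run_turn(convo: Conversation, user_msg: str) -> str:
    draft = primary_agent(user_msg)
    if not realtime_supervisor(draft):
        draft = "I'd like to confirm that with a teammate before promising anything."
    convo.transcript.append(draft)
    return draft

convo = Conversation()
print(run_turn(convo, "Can you guarantee delivery by Friday?"))
convo.flagged = post_conversation_audit(convo)  # flagged conversations go to human review
print("Needs human review:", convo.flagged)
```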
Contrary to fears of customer backlash, data from Bret Taylor's company Sierra shows that AI agents identifying themselves as AI—and even admitting they can make mistakes—builds trust. This transparency, combined with AI's patience and consistency, often results in customer satisfaction scores that are higher than those for previous human interactions.
Bret Taylor's firm, Sierra, is pioneering an "outcomes-based pricing" model for its AI agents. Instead of charging for software usage, Sierra charges clients only when the AI successfully resolves a customer's problem without escalating to a human. This aligns vendor incentives with tangible business results like problem resolution and customer satisfaction.
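Under that model, billing reduces to counting outcomes rather than seats or API calls. A toy illustration follows, with a made-up per-resolution rate and conversation records; nothing here reflects Sierra's actual pricing.

```python
PRICE_PER_RESOLUTION = 2.00  # hypothetical rate agreed with the client

conversations = [
    {"id": "c1", "resolved": True,  "escalated_to_human": False},  # billable
    {"id": "c2", "resolved": True,  "escalated_to_human": True},   # not billable
    {"id": "c3", "resolved": False, "escalated_to_human": False},  # not billable
]

# Only conversations the AI resolved on its own count toward the invoice.
billable = [c for c in conversations
            if c["resolved"] and not c["escalated_to_human"]]

invoice_total = len(billable) * PRICE_PER_RESOLUTION
print(f"Billable resolutions: {len(billable)}, invoice: ${invoice_total:.2f}")
```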
Because AI tools are so new, there are no established external "experts" to hire. OpenAI's Chairman argues that the people best positioned to lead AI adoption are existing employees: their deep domain knowledge, combined with a willingness to learn the new technology, makes them more valuable than any outside hire. A call center manager, for example, can become an "AI Architect."
Instead of using AI to generate strategic documents, which he believes short-circuits his own thinking process, Bret Taylor uses it as a critical partner. He writes his own strategy notes and then prompts ChatGPT to critique them and find flaws. This leverages AI's analytical power without sacrificing the deliberative process of writing.
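A minimal sketch of that "critic, not author" workflow using the official OpenAI Python SDK; the model name, file path, and reviewer prompt are assumptions for illustration. The point is that the model is asked only to find flaws in a draft the human has already written, not to write it.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = open("strategy_note.md").read()  # the human-written draft (hypothetical path)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you use
    messages=[
        {"role": "system",
         "content": "You are a skeptical reviewer. Do not rewrite the document. "
                    "List the weakest arguments, missing evidence, and unstated assumptions."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```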
