For an AI optimizing physical infrastructure like buildings, customer adoption hinges on explainability. Product leader John Boothroyd's team had to build visual representations of how the AI made its decisions in order to win customer trust, which underscores that transparency is essential for automated systems with real-world consequences.
Explicit transparency toward users matters most for nondeterministic systems like LLMs, where even their creators don't always know why a particular output was generated. Unlike a simple rules engine with predictable outcomes, AI's "black box" nature requires giving users more context to build trust.
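A minimal sketch of that contrast, in Python. The refund rules and the canned "LLM" responses are invented for illustration; the point is that a rules engine's output can be explained by naming the rule that fired, while a sampled generator needs extra context surfaced alongside its answer.

```python
import random

# Deterministic rules engine: the same input always yields the same output,
# so the "why" is simply the rule that fired (illustrative policy, not real).
def refund_decision(order_age_days: int, item_damaged: bool) -> str:
    if item_damaged:
        return "approve (rule: damaged items are always refundable)"
    if order_age_days <= 30:
        return "approve (rule: within the 30-day window)"
    return "deny (rule: outside the 30-day window)"

# Stand-in for an LLM sampling with temperature > 0: repeated calls on the
# same input can produce different answers, so the system has to surface
# extra context (cited policy, reasoning, confidence) for users to trust it.
def llm_like_answer(prompt: str) -> str:
    candidates = [
        "Refund approved under the 30-day policy.",
        "Refund approved because the item arrived damaged.",
        "A store credit may be a better fit than a refund.",
    ]
    return random.choice(candidates)

print(refund_decision(12, item_damaged=False))   # identical on every run
print(llm_like_answer("Can I get a refund?"))    # may differ run to run
```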
Customers are hesitant to trust a black-box AI with critical operations. The winning business model is to sell a complete outcome or service, using AI internally for a massive efficiency advantage while keeping humans in the loop for quality and trust.
To trust an agentic AI, users need to see its work, just as a manager would with a new intern. Design patterns like "stream of thought" (showing the AI's reasoning as it works) or "planning mode" (presenting an action plan before executing it) make the AI's logic legible and give users a chance to intervene, building crucial trust.
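A rough sketch of the "planning mode" pattern, with hypothetical step text and print statements standing in for real tool calls: the agent proposes discrete steps, the user reviews them, and nothing runs without approval.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str
    steps: list[str] = field(default_factory=list)
    approved: bool = False

def propose_plan(goal: str) -> Plan:
    # In a real agent these steps would come from the model; hard-coded here.
    return Plan(goal=goal, steps=[
        "Look up the customer's recent invoices",
        "Draft a reply explaining the duplicate charge",
        "Issue the refund only after the reply is approved",
    ])

def review(plan: Plan) -> None:
    # Show the full plan before anything happens, so the user can intervene.
    print(f"Goal: {plan.goal}")
    for i, step in enumerate(plan.steps, 1):
        print(f"  {i}. {step}")
    plan.approved = input("Execute this plan? [y/N] ").strip().lower() == "y"

def execute(plan: Plan) -> None:
    if not plan.approved:
        print("Plan rejected; nothing was executed.")
        return
    for step in plan.steps:
        print(f"executing: {step}")  # real tool calls would go here

plan = propose_plan("Resolve the customer's billing complaint")
review(plan)
execute(plan)
```

A "stream of thought" variant is the same idea applied continuously: intermediate reasoning is streamed to the UI as the agent works, rather than collected into a single upfront plan.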
As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
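Mechanistic interpretability covers a family of techniques for looking inside a model rather than only testing its outputs. One small, related example is a linear probe that checks whether a concept can be read out of internal activations (a common starting point, though full mechanistic work goes further into weights and circuits); the toy network and data below are synthetic and exist only to show the shape of the method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "model": a random two-layer net; we only care about its hidden activations.
X = rng.normal(size=(1000, 8))
W1 = rng.normal(size=(8, 16))
hidden = np.tanh(X @ W1)            # internal activations to inspect

# Concept we suspect the model encodes: "is input feature 0 positive?"
concept = (X[:, 0] > 0).astype(int)

# Linear probe: if a simple classifier can recover the concept from the
# activations, the concept is at least linearly represented inside the model.
probe = LogisticRegression(max_iter=1000).fit(hidden, concept)
print(f"probe accuracy: {probe.score(hidden, concept):.2f}")
```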
Instead of opaque "black box" algorithms, MDT uses decision trees that allow its team to see and understand the logic behind every trade. This transparency is crucial for validating the model's decisions and identifying when a factor's effectiveness is decaying over time.
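A generic illustration of why trees are legible (not MDT's actual model or data): with scikit-learn, the complete decision logic of a fitted tree can be printed and audited, and re-reading those rules as markets change is one simple way to notice a factor losing influence.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic "factor" data standing in for trading signals.
momentum = rng.normal(size=500)
value = rng.normal(size=500)
X = np.column_stack([momentum, value])
y = ((0.7 * momentum + 0.3 * value) > 0).astype(int)   # toy buy/hold label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black-box model, every split the tree uses can be read directly:
print(export_text(tree, feature_names=["momentum", "value"]))
```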
Stitch Fix found that providing context for its AI suggestions, especially for items outside a user's comfort zone, acts as an "amplifier." This transparency builds customer trust in the algorithm and leads to stronger, more valuable feedback signals, which in turn improves future personalization.
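One hypothetical way to wire up that pattern: attach the rationale to each suggestion and record feedback against the rationale, not just the item, so the signal feeds back into personalization. The schema below is illustrative and is not Stitch Fix's.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    item: str
    rationale: str               # the "why" shown to the user
    outside_comfort_zone: bool   # stretch items get extra framing

def render(s: Suggestion) -> str:
    prefix = "A step outside your usual style: " if s.outside_comfort_zone else ""
    return f"{s.item}. {prefix}{s.rationale}"

s = Suggestion(
    item="Mustard wide-leg trousers",
    rationale="They pair with the two blazers you kept last month.",
    outside_comfort_zone=True,
)
print(render(s))

# Logging feedback next to the rationale that was shown turns a simple
# keep/return decision into a richer signal for the next recommendation.
feedback = {"item": s.item, "rationale_shown": s.rationale, "kept": False}
```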
Contrary to fears of customer backlash, data from Bret Taylor's company Sierra shows that AI agents identifying themselves as AI—and even admitting they can make mistakes—builds trust. This transparency, combined with AI's patience and consistency, often results in customer satisfaction scores that are higher than those for previous human interactions.
Users distrust "talk to your data" tools they don't understand. Stripe's Sigma product overcomes this by generating a natural language explanation alongside every answer. It details assumptions made, like the specific dates used for "Black Friday," allowing non-technical users to verify the logic.
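A hypothetical shape for that kind of response (not Stripe's actual API or schema): the generated query, the result, and the assumptions travel together, so a non-technical reader can check how the question was interpreted.

```python
from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    sql: str                 # the query that was actually run
    summary: str             # plain-language answer
    assumptions: list[str]   # interpretation choices, spelled out for review

def black_friday_revenue() -> ExplainedAnswer:
    return ExplainedAnswer(
        sql=(
            "SELECT SUM(amount) FROM charges "
            "WHERE created >= '2024-11-29' AND created < '2024-11-30'"
        ),
        summary="Total successful charge volume on Black Friday (placeholder).",
        assumptions=[
            '"Black Friday" was interpreted as 2024-11-29 (US calendar).',
            "Only successful charges were counted; refunds were excluded.",
        ],
    )

answer = black_friday_revenue()
print(answer.summary)
for a in answer.assumptions:
    print(" -", a)
```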
For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.
Unlike many AI tools that hide the model's reasoning, Spiral displays it by default. This intentional design choice frames the AI as a "writing partner," helping users understand its perspective, spot misunderstandings, and collaborate more effectively, which builds trust in the process.