
Beyond enhancing the user experience, FanDuel uses AI to build trust and promote responsible gaming. Sophisticated models analyze user behavior for abnormal patterns, triggering "real-time check-ins" to ensure customer well-being, which is crucial for long-term sustainability.
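The kind of behavioral monitoring described here can be sketched as a simple per-user anomaly check: compare a user's latest session against their own historical baseline and trigger a check-in only on a sharp deviation. This is a minimal illustration, not FanDuel's actual model; the metric, threshold, and function names are all hypothetical.

```python
from statistics import mean, stdev

def flag_for_checkin(session_history, latest, threshold=3.0):
    """Flag a user for a real-time check-in when their latest session
    deviates sharply from their own historical baseline.

    session_history: past per-session wager totals (hypothetical metric).
    latest: the current session's wager total.
    """
    baseline = mean(session_history)
    spread = stdev(session_history)
    if spread == 0:
        return latest != baseline
    z_score = (latest - baseline) / spread
    # Only unusually *high* activity triggers a well-being check-in.
    return z_score > threshold

# A user who normally wagers ~$50 per session suddenly wagers $400:
history = [45, 52, 48, 55, 50, 47, 53]
print(flag_for_checkin(history, 400))  # → True
```

Real systems would use far richer features (session length, deposit frequency, time of day), but the core idea is the same: the baseline is personal, not global.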

Related Insights

Counter to the typical use case, DraftKings applies AI defensively. The technology analyzes user communications across multiple touchpoints—like customer service and marketing—to detect patterns of problem gambling and flag them for review, promoting responsible platform use.

To build trust, users need Awareness (know when AI is active), Agency (have control over it), and Assurance (confidence in its outputs). This framework, from a former Google DeepMind PM, provides a clear model for designing trustworthy AI experiences by mimicking human trust signals.

Building loyalty with AI isn't about the technology, but the trust it engenders. Consumers, especially younger generations, will abandon AI after one bad experience. Providing a transparent and easy option to connect with a human is critical for adoption and preventing long-term brand damage.

AI can analyze a customer's support history to predict their behavior. For instance, if a customer consistently calls about shipping delays, an AI agent can proactively contact them with an update before they reach out, transforming a reactive, negative interaction into a positive customer experience.
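At its simplest, the trigger for that proactive outreach is a pattern check over a customer's ticket history. The sketch below uses a plain frequency count; the topic labels, threshold, and function name are illustrative assumptions, not a real product's API.

```python
from collections import Counter

def should_reach_out(ticket_topics, topic, min_repeats=3):
    """Decide whether to proactively contact a customer who has
    repeatedly raised the same issue, before they contact support again.

    ticket_topics: topic labels of the customer's past support tickets
                   (hypothetical schema).
    """
    counts = Counter(ticket_topics)
    return counts[topic] >= min_repeats

tickets = ["shipping delay", "billing", "shipping delay", "shipping delay"]
if should_reach_out(tickets, "shipping delay"):
    print("Send a proactive shipping update before the next complaint.")
```

In practice the topic labels would themselves come from a classifier over free-text tickets; the counting logic is the easy part.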

The evolution of fraud prevention is shifting from a static view of "who the customer is" to a real-time understanding of "what this customer is trying to do right now." This focus on intent allows brands to adapt dynamically, either stopping abuse or creating loyalty.

While foundation models carry systemic risk, AI applications make "thicker promises" to enterprises, like guaranteeing specific outcomes in customer support. This specificity creates more immediate and tangible business risks (e.g., brand disasters, financial errors), making the application layer the primary area where trust and insurance are needed now.

For an AI optimizing physical infrastructure like buildings, customer adoption hinges on explainability. Product leader John Boothroyd's team had to create visual representations showing how the AI made decisions to gain trust, underscoring that transparency is essential for automated systems with real-world consequences.

To make its AI agents robust enough for production, Sierra runs thousands of simulated conversations before every release. These "AI testing AI" scenarios model everything from angry customers to background noise and different languages, allowing flaws to be found internally before customers experience them.
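The shape of such a pre-release simulation harness can be sketched in a few lines: run the agent against scripted scenarios, each with a pass/fail predicate, and collect failures before customers ever see them. Sierra's actual tooling is not public; the agent, scenario format, and names below are all hypothetical.

```python
def run_simulations(agent, scenarios):
    """Run an agent against simulated conversations and collect failures.

    scenarios: (name, user_turns, passes) tuples, where `passes` is a
    predicate the agent's final reply must satisfy (hypothetical format).
    """
    failures = []
    for name, turns, passes in scenarios:
        reply = ""
        for turn in turns:
            reply = agent(turn)
        if not passes(reply):
            failures.append(name)
    return failures

# A toy agent that only issues refunds when the word "refund" appears:
def toy_agent(message):
    return "Refund issued." if "refund" in message.lower() else "How can I help?"

scenarios = [
    ("angry customer", ["THIS IS BROKEN. REFUND ME NOW!"],
     lambda r: "refund" in r.lower()),
    ("non-English request", ["Quiero un reembolso"],  # Spanish for "refund"
     lambda r: "refund" in r.lower()),
]
print(run_simulations(toy_agent, scenarios))  # → ['non-English request']
```

The point of "AI testing AI" is scale: with an LLM generating the user turns, the same loop can cover thousands of personas, languages, and edge cases per release.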

To navigate regulatory hurdles and build user trust, Robinhood deliberately sequenced its AI rollout. It started by providing curated, factual information (e.g., 'why did a stock move?') before attempting to offer personalized advice or recommendations, which have a much higher legal and ethical bar.

Instead of using AI to score consumers, Experian applies it to governance. AI systems monitor financial models for 'drift'—when outcomes deviate from predictions—and alert human overseers to the specific variables causing the issue, ensuring fairness and regulatory compliance.
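A minimal version of that drift-and-alert loop compares the live distribution of each input variable against the baseline the model was validated on, and surfaces only the variables that moved. This is a sketch using a simple mean-shift check, not Experian's method; the schema, tolerance, and names are assumptions.

```python
from statistics import mean

def drift_report(baseline, live, tolerance=0.25):
    """Flag input variables whose live values have drifted from the
    validation baseline, so human overseers know what to review.

    baseline/live: {variable: [observed values]} (hypothetical schema).
    Returns {variable: shift in the mean} for drifted variables only.
    """
    drifted = {}
    for var, base_vals in baseline.items():
        base_mean = mean(base_vals)
        live_mean = mean(live.get(var, base_vals))
        if base_mean and abs(live_mean - base_mean) / abs(base_mean) > tolerance:
            drifted[var] = round(live_mean - base_mean, 2)
    return drifted

baseline = {"income": [40_000, 60_000, 50_000], "age": [30, 40, 50]}
live = {"income": [80_000, 90_000, 85_000], "age": [31, 39, 50]}
print(drift_report(baseline, live))  # income has drifted; age has not
```

Reporting *which* variables drifted, rather than a single model-level alarm, is what lets a human overseer act on the alert.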