Counter to the typical use case, DraftKings applies AI defensively. The technology analyzes user communications across multiple touchpoints—like customer service and marketing—to detect patterns of problem gambling and flag them for review, promoting responsible platform use.
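A minimal sketch of what this kind of flagging could look like, assuming a hypothetical Message record and a hand-written keyword list (DraftKings' actual models, features, and thresholds are not public):

```python
from dataclasses import dataclass

# Hypothetical record of a single user communication (support chat, email reply, etc.).
@dataclass
class Message:
    user_id: str
    channel: str   # e.g. "support_chat", "email"
    text: str

# Illustrative phrases associated with problem-gambling risk; a production system
# would use trained classifiers rather than a fixed keyword list.
RISK_PHRASES = {
    "chasing losses": 3,
    "can't stop": 3,
    "borrowed money": 2,
    "raise my deposit limit": 2,
    "lost everything": 3,
}

def flag_for_review(messages: list[Message], threshold: int = 4) -> dict[str, int]:
    """Aggregate a simple risk score per user across touchpoints and
    return users whose score meets the review threshold."""
    scores: dict[str, int] = {}
    for msg in messages:
        text = msg.text.lower()
        for phrase, weight in RISK_PHRASES.items():
            if phrase in text:
                scores[msg.user_id] = scores.get(msg.user_id, 0) + weight
    return {user: score for user, score in scores.items() if score >= threshold}

# Example: two messages from the same user, across two channels, trip the threshold.
flagged = flag_for_review([
    Message("u1", "support_chat", "I keep chasing losses, can you raise my deposit limit?"),
    Message("u1", "email", "I borrowed money from a friend to keep playing."),
])
print(flagged)  # {'u1': 7} -> routed to a responsible-gaming reviewer
```

Note that the output is a review queue for humans, consistent with flagging accounts for review rather than taking automated action against them.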

Related Insights

Prediction markets serve a dual purpose. Beyond being a product, they are a strategic wedge to enter massive, untapped markets like California and Texas. Because they operate under a different regulatory framework, they provide a foothold where traditional sports betting is banned.

AI's primary value in pre-buy research isn't just accelerating diligence on promising ideas. It's about rapidly surfacing deal-breakers—like misaligned management incentives or existential risks—allowing analysts to discard flawed theses much earlier in the process and focus their deep research time more effectively.

Instead of reacting with louder marketing messages, AI systems proactively identify early behavioral warning signs of disengagement. This allows for timely, relevant interventions at the moments that actually matter, fundamentally shifting retention strategy from reactive messaging to behavioral intervention.
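A sketch of the underlying idea, assuming hypothetical per-user session counts as the behavioral signal; the trigger compares a user's recent week against their own baseline instead of waiting for them to go silent:

```python
import statistics

def disengagement_signal(daily_sessions: list[int],
                         baseline_days: int = 21,
                         recent_days: int = 7,
                         drop_ratio: float = 0.5) -> bool:
    """Return True when recent activity falls below a fraction of the user's
    own baseline, a behavioral warning sign that precedes churn rather than
    reacting to it."""
    if len(daily_sessions) < baseline_days + recent_days:
        return False  # not enough history to compare against
    baseline = statistics.mean(daily_sessions[-(baseline_days + recent_days):-recent_days])
    recent = statistics.mean(daily_sessions[-recent_days:])
    return baseline > 0 and recent < drop_ratio * baseline

# Example: a steady user whose sessions collapse in the final week.
history = [4] * 21 + [1, 1, 0, 1, 0, 0, 1]
if disengagement_signal(history):
    print("trigger a timely, relevant intervention rather than a louder marketing blast")
```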

A key operational use of AI at Affirm is regulatory compliance. The company deploys models to automatically scan thousands of merchant websites and ads, flagging inaccurate or misleading claims about its financing products, since Affirm itself bears legal responsibility for how those products are advertised.
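A minimal illustration of the scanning step, with made-up compliance rules and merchant text; Affirm's real claim checks and model-based review are not public:

```python
import re

# Illustrative compliance rules: (claim pattern, required disclosure pattern, reason).
RULES = [
    # Advertising "no interest" or "0% APR" without a qualifying disclosure.
    (re.compile(r"\b(0% apr|no interest)\b", re.I),
     re.compile(r"subject to eligibility|rates from", re.I),
     "financing claim missing required disclosure"),
    # Claiming approval is guaranteed, which financing ads generally cannot do.
    (re.compile(r"\bguaranteed approval\b", re.I), None,
     "prohibited 'guaranteed approval' claim"),
]

def scan_page(merchant: str, page_text: str) -> list[str]:
    """Flag potentially non-compliant financing claims on a merchant page."""
    findings = []
    for claim, required_disclosure, reason in RULES:
        if claim.search(page_text):
            if required_disclosure is None or not required_disclosure.search(page_text):
                findings.append(f"{merchant}: {reason}")
    return findings

print(scan_page("example-store.com",
                "Buy now with 0% APR financing! Guaranteed approval for everyone."))
# Flags both the missing disclosure and the prohibited approval claim.
```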

To manage immense feedback volume, Microsoft applies AI to identify high-quality, specific, and actionable comments from over 4 million annual submissions. This allows their team to bypass low-quality noise and focus resources on implementing changes that directly improve the customer experience.
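A toy version of the triage step, using hand-written heuristics where Microsoft would use trained models; the shape of the pipeline is the same: score every comment and route only the specific, actionable ones.

```python
# Markers of specificity used by this toy heuristic; a real system would rely on
# a trained classifier or an LLM rather than a fixed marker list.
ACTION_MARKERS = ("when i", "steps to", "expected", "instead", "crashes", "should")

def is_actionable(comment: str, min_words: int = 12) -> bool:
    text = comment.lower()
    specific = any(marker in text for marker in ACTION_MARKERS)
    long_enough = len(text.split()) >= min_words
    return specific and long_enough

feedback = [
    "bad app",
    "When I paste a 200-row table into the editor it crashes; expected it to "
    "truncate or warn instead of losing my work.",
    "love it!!!",
]

actionable = [c for c in feedback if is_actionable(c)]
print(len(actionable), "of", len(feedback), "comments routed to the product team")
```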

Modern sports betting platforms function as sophisticated data operations. From a customer's very first bet, their models can predict long-term value with 80-90% certainty, allowing them to instantly manage risk: limiting the players who win consistently and maximizing revenue from those who lose.
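A simplified stand-in for that first-bet assessment, using closing-line value (whether the customer beat the market's final price) as a proxy for long-term profitability; the actual models and thresholds are proprietary:

```python
from dataclasses import dataclass

@dataclass
class FirstBet:
    user_id: str
    stake: float
    odds_taken: float      # decimal odds the customer got
    closing_odds: float    # market odds when the event started

def expected_value_signal(bet: FirstBet) -> str:
    """Toy stand-in for a lifetime-value model: customers who beat the closing
    line tend to be long-term winners (costly to the book); customers who take
    worse-than-closing prices tend to be long-term losers (revenue)."""
    closing_line_value = bet.odds_taken / bet.closing_odds - 1
    if closing_line_value > 0.03:          # beat the close by more than 3%
        return "limit"                     # restrict stakes, withhold promos
    if closing_line_value < -0.03:
        return "maximize"                  # full promos, higher limits
    return "monitor"

print(expected_value_signal(FirstBet("u42", stake=50.0, odds_taken=2.10, closing_odds=1.95)))
# 'limit': the book can act on this signal from the very first bet
```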

From a corporate dashboard, a user spending 8+ hours daily with a chatbot looks like a highly engaged power user. However, this exact behavior is a key indicator of someone spiraling into an AI-induced delusion. This creates a dangerous blind spot for companies that optimize for engagement.

To navigate regulatory hurdles and build user trust, Robinhood deliberately sequenced its AI rollout. It started by providing curated, factual information (e.g., 'why did a stock move?') before attempting to offer personalized advice or recommendations, which have a much higher legal and ethical bar.

Purely rule-based and purely model-based systems each have blind spots, so Stripe combines the two. For instance, a transaction with a CVC mismatch (a rule) is blocked only if its model-generated risk score is also elevated, preventing rejection of good customers who make simple mistakes.
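A minimal sketch of that decision, with an assumed risk-score threshold of 0.7 chosen purely for illustration:

```python
def should_block(cvc_mismatch: bool, model_risk_score: float,
                 risk_threshold: float = 0.7) -> bool:
    """Combine a hard rule with a model score: a CVC mismatch alone is not
    enough to block; the transaction is rejected only when the model also
    considers it risky, sparing legitimate customers who mistyped a digit."""
    return cvc_mismatch and model_risk_score >= risk_threshold

# A good customer fat-fingers their CVC on an otherwise low-risk transaction.
print(should_block(cvc_mismatch=True, model_risk_score=0.12))  # False -> approved
# The same rule fires on a transaction the model already finds suspicious.
print(should_block(cvc_mismatch=True, model_risk_score=0.91))  # True  -> blocked
```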

Instead of using AI to score consumers, Experian applies it to governance. AI systems monitor financial models for 'drift'—when outcomes deviate from predictions—and alert human overseers to the specific variables causing the issue, ensuring fairness and regulatory compliance.
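One common way to implement this kind of monitoring is a per-variable drift statistic such as the Population Stability Index; the sketch below uses PSI and a made-up alert threshold, not Experian's actual tooling:

```python
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population Stability Index between a feature's baseline (training-time)
    distribution and its live distribution; higher means more drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        return [(c + 1e-6) / len(values) for c in counts]  # smooth empty bins
    base_p, live_p = proportions(baseline), proportions(live)
    return sum((l - b) * math.log(l / b) for b, l in zip(base_p, live_p))

def drift_report(baseline_data: dict[str, list[float]],
                 live_data: dict[str, list[float]],
                 alert_at: float = 0.2) -> list[str]:
    """Name the specific variables whose drift exceeds the alert threshold,
    so a human overseer knows exactly where to look."""
    return [f"{feature}: PSI={psi(baseline_data[feature], live_data[feature]):.2f}"
            for feature in baseline_data
            if psi(baseline_data[feature], live_data[feature]) >= alert_at]

# Example: 'income' has shifted upward since training; 'age' has not.
baseline = {"income": [30, 40, 50, 60, 70] * 40, "age": [25, 35, 45, 55] * 50}
live     = {"income": [60, 70, 80, 90, 100] * 40, "age": [25, 35, 45, 55] * 50}
print(drift_report(baseline, live))   # only 'income' is surfaced for human review
```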