Instead of using AI to score consumers, Experian applies it to governance. AI systems monitor financial models for 'drift'—when outcomes deviate from predictions—and alert human overseers to the specific variables causing the issue, ensuring fairness and regulatory compliance.
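One common way to implement per-variable drift monitoring is the Population Stability Index (PSI), which compares a feature's live distribution against its training baseline. The sketch below is illustrative only (the metric choice, bin count, and 0.2 alert threshold are assumptions, not Experian's actual implementation): variables whose PSI exceeds the threshold are surfaced to human overseers.

```python
import math
from collections import Counter

def psi(baseline, live, bins=10):
    """Population Stability Index between two numeric samples."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    def dist(sample):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in sample)
        n = len(sample)
        # Small floor avoids log(0) for empty bins.
        return [max(counts.get(b, 0) / n, 1e-4) for b in range(bins)]
    b, l = dist(baseline), dist(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))

def drift_alerts(baseline_features, live_features, threshold=0.2):
    """Return the specific variables whose drift exceeds the alert threshold."""
    alerts = {}
    for name in baseline_features:
        value = psi(baseline_features[name], live_features[name])
        if value > threshold:
            alerts[name] = round(value, 3)
    return alerts
```

Because the alert names the offending variable rather than just flagging the model, a human reviewer can go straight to the input causing the deviation.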

Related Insights

Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.

Fair lending laws require banks to give specific reasons for a credit denial, which is difficult with complex AI models. To navigate this, banks make the initial decision with a traditional, interpretable model. Only if that model says "no" do they apply AI to find a way to approve the applicant. Since the AI can only convert denials into approvals, and approvals require no adverse-action explanation, the regulatory disclosure hurdle never attaches to the opaque model.
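The two-stage flow can be sketched as follows. Everything here is hypothetical (the scorecard rules, thresholds, and the cash-flow signal are invented for illustration); the structural point is that denial reasons only ever come from the interpretable first stage.

```python
def scorecard_decision(applicant):
    """Interpretable scorecard: each denial reason maps to one explicit rule."""
    reasons = []
    if applicant["credit_score"] < 620:
        reasons.append("credit score below minimum")
    if applicant["debt_to_income"] > 0.45:
        reasons.append("debt-to-income ratio too high")
    return ("approve", []) if not reasons else ("deny", reasons)

def second_look(applicant):
    """Runs only after a scorecard denial; stands in for a complex ML model.
    Its reasoning never needs disclosure because it can only approve."""
    if applicant.get("cash_flow_months_positive", 0) >= 12:
        return ("approve", "strong cash-flow history")
    return ("deny", None)

def decide(applicant):
    decision, reasons = scorecard_decision(applicant)
    if decision == "approve":
        return {"decision": "approve", "stage": "scorecard"}
    alt_decision, basis = second_look(applicant)
    if alt_decision == "approve":
        return {"decision": "approve", "stage": "second_look", "basis": basis}
    # Adverse-action reasons come from the interpretable scorecard only.
    return {"decision": "deny", "stage": "scorecard", "reasons": reasons}
```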

To ensure governance and avoid redundancy, Experian centralizes AI development. This approach treats AI as a core platform capability, allowing for the reuse of models and consistent application of standards across its global operations.

In regulated industries, AI's value isn't perfect breach detection but efficiently filtering millions of calls to identify a small, ambiguous subset needing human review. This shifts the goal from flawless accuracy to dramatically improving the efficiency and focus of human compliance officers.
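A minimal version of this filtering pattern is a confidence-band triage over model risk scores. The thresholds below are hypothetical: confident negatives are auto-cleared, confident positives are auto-escalated, and only the ambiguous middle band reaches human compliance officers.

```python
def triage(calls, clear_below=0.1, escalate_above=0.9):
    """calls: iterable of (call_id, breach_probability) pairs.
    Routes each call to one of three buckets; humans see only the middle band."""
    buckets = {"auto_clear": [], "human_review": [], "auto_escalate": []}
    for call_id, p in calls:
        if p < clear_below:
            buckets["auto_clear"].append(call_id)
        elif p > escalate_above:
            buckets["auto_escalate"].append(call_id)
        else:
            buckets["human_review"].append(call_id)
    return buckets
```

The win is in the ratio: if only a few percent of calls land in the middle band, reviewer attention is concentrated on exactly the cases where judgment matters.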

A key operational use of AI at Affirm is for regulatory compliance. The company deploys models to automatically scan thousands of merchant websites and ads, flagging incorrect or misleading claims about its financing products for which Affirm itself is legally responsible.

Create AI agents that embody key executive personas to monitor operations. A 'CFO agent' could audit for cost efficiency while a 'brand agent' checks for compliance. This system surfaces strategic conflicts that require a human-in-the-loop to arbitrate, ensuring alignment.
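The arbitration mechanic can be sketched with rule-based stand-ins for the personas (a real system would back each persona with an LLM and a charter prompt; the agents and action fields below are invented). Unanimous verdicts proceed automatically; disagreements are queued for a human.

```python
def cfo_agent(action):
    """Persona focused on cost efficiency."""
    return "approve" if action["cost"] <= action["budget"] else "object"

def brand_agent(action):
    """Persona focused on brand compliance."""
    return "approve" if action["on_brand"] else "object"

def review(actions, personas):
    """Collect each persona's verdict; escalate conflicts to a human arbiter."""
    decisions, escalations = [], []
    for action in actions:
        verdicts = {name: agent(action) for name, agent in personas.items()}
        if len(set(verdicts.values())) > 1:
            escalations.append({"action": action["name"], "verdicts": verdicts})
        else:
            decisions.append({"action": action["name"],
                              "decision": next(iter(verdicts.values()))})
    return decisions, escalations
```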

For complex cases like "friendly fraud," traditional ground truth labels are often missing. Stripe uses an LLM to act as a judge, evaluating the quality of AI-generated labels for suspicious payments. This creates a proxy for ground truth, enabling faster model iteration.
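The judge pattern looks roughly like this. The `judge` function here is a toy stand-in with an invented rubric (in practice it would be an LLM call scoring each label against criteria); labels the judge rates highly are kept as proxy ground truth for evaluating the labeling model.

```python
def judge(payment, label):
    """Stand-in for an LLM judge: returns a quality score in [0, 1].
    Toy rubric: a 'friendly_fraud' label is plausible only if the payment
    was disputed by a customer with a real purchase history."""
    if label == "friendly_fraud":
        plausible = payment["disputed"] and payment["prior_legit_orders"] >= 3
    else:
        plausible = not payment["disputed"]
    return 0.9 if plausible else 0.2

def build_proxy_ground_truth(payments, labeler, accept_at=0.7):
    """Keep only generated labels the judge scores above the acceptance bar."""
    accepted = []
    for payment in payments:
        label = labeler(payment)
        if judge(payment, label) >= accept_at:
            accepted.append((payment["id"], label))
    return accepted
```

The resulting proxy set is imperfect, but it lets the team measure label quality and iterate on the labeling model without waiting for slow, contested human adjudication.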

Advanced AI tools can model an organization's internal investment beliefs and processes. This allows investment committees to use the AI to "red team" proposals by prompting it to generate a memo with a negative stance or to re-evaluate a deal based on a new assumption, like a net-zero mandate.

The primary driver for Cognizant's TriZetto AI Gateway was creating a centralized system for governance. This includes monitoring requests, ensuring adherence to responsible AI principles, providing transparency to customers, and having a 'kill switch' to turn off access instantly if needed.
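Structurally, such a gateway is a choke point in front of the model: every request passes through logging and policy state before dispatch. The interface below is a minimal sketch under assumed names (it is not TriZetto's actual API), showing how a single flag gives governance teams an instant kill switch while the audit log preserves transparency.

```python
class AIGateway:
    """All AI traffic flows through one governed entry point."""

    def __init__(self, backend):
        self.backend = backend     # callable: prompt -> response
        self.enabled = True        # kill switch state
        self.audit_log = []        # request monitoring / customer transparency

    def kill_switch(self, enabled):
        """Flip access for every consumer of the gateway at once."""
        self.enabled = enabled

    def request(self, user, prompt):
        entry = {"user": user, "prompt": prompt, "status": None}
        self.audit_log.append(entry)   # log even blocked requests
        if not self.enabled:
            entry["status"] = "blocked:kill_switch"
            raise RuntimeError("AI access disabled by kill switch")
        entry["status"] = "ok"
        return self.backend(prompt)
```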

Contrary to common belief, regulated sectors like finance and healthcare are early adopters of voice AI. This is because AI can be constrained to follow compliance scripts exactly and produces a verifiable audit trail, outperforming human agents, who are prone to error and harder to monitor.