Fair lending laws require banks to give an applicant specific reasons for a credit denial, which is difficult to do with complex AI models. One workaround: banks render the decision with a traditional, interpretable model, and if the answer is "no," they apply AI only to search for a way to approve the applicant. Because approvals carry no disclosure requirement, the opaque model never has to explain itself.
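A minimal sketch of that two-stage flow, with a toy scorecard and a stand-in for the complex model (all names, thresholds, and logic here are hypothetical):

```python
from dataclasses import dataclass

APPROVE_THRESHOLD = 680  # hypothetical cutoff

@dataclass
class Decision:
    approved: bool
    reasons: list  # adverse-action reasons, required on denials

def scorecard(applicant):
    """Stage 1: interpretable model; returns a score plus reason codes."""
    score = 600 + 2 * applicant["years_on_file"] - 5 * applicant["recent_inquiries"]
    reasons = []
    if applicant["recent_inquiries"] > 3:
        reasons.append("Too many recent credit inquiries")
    if applicant["years_on_file"] < 2:
        reasons.append("Insufficient length of credit history")
    return score, reasons

def ml_approval_probability(variant):
    """Stage 2: stand-in for a complex model scoring a modified offer."""
    return 0.97 if variant["amount"] <= 1000 else 0.60

def decide(applicant):
    score, reasons = scorecard(applicant)
    if score >= APPROVE_THRESHOLD:
        return Decision(True, [])
    # The opaque model is used only to find a path to "yes": approvals
    # do not trigger the adverse-action disclosure requirement.
    for amount in (applicant["amount"] // 2, 1000):
        if ml_approval_probability({**applicant, "amount": amount}) > 0.95:
            return Decision(True, [])
    # Denial reasons come from the interpretable scorecard.
    return Decision(False, reasons)

print(decide({"years_on_file": 1, "recent_inquiries": 5, "amount": 5000}))
```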

Related Insights

Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.
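As a loose illustration of what such an audit trail could look like, here is a hash-chained, append-only log in Python; the record structure is an assumption, not any regulator's specification:

```python
import hashlib, json, time

class AuditTrail:
    """Append-only, tamper-evident log: each record is hash-chained to
    its predecessor, so auditors can verify nothing was altered or removed."""

    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis marker

    def append(self, event: dict) -> dict:
        record = {
            "ts": time.time(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        # Hash covers the event and the previous hash, forming the chain.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

trail = AuditTrail()
trail.append({"model": "underwriter-v3", "input_id": "a1", "decision": "deny"})
```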

By eliminating outdated constraints like the six-month activity rule and incorporating time-series data and alternative inputs like rent payments, modern credit scoring models can assess millions of creditworthy individuals, such as military personnel or young people, who were previously unscorable.
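A rough sketch of the kind of alternative-data features described, assuming hypothetical inputs (monthly rent payments and month-end checking balances):

```python
def credit_features(rent_payments, balances):
    """rent_payments: list of (amount_due, amount_paid) per month.
    balances: month-end checking balances, oldest first."""
    on_time_rate = sum(paid >= due for due, paid in rent_payments) / len(rent_payments)
    # Crude time-series signal: average of recent months vs. early months.
    early = sum(balances[:3]) / 3
    recent = sum(balances[-3:]) / 3
    balance_trend = (recent - early) / max(early, 1)
    return {"rent_on_time_rate": on_time_rate, "balance_trend": balance_trend}

# A thin-file applicant a bureau score would miss: 11 on-time rent
# payments and a rising checking balance.
print(credit_features(
    rent_payments=[(1200, 1200)] * 11 + [(1200, 900)],
    balances=[400, 450, 500, 650, 700, 820],
))
```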

A key operational use of AI at Affirm is for regulatory compliance. The company deploys models to automatically scan thousands of merchant websites and ads, flagging incorrect or misleading claims about its financing products for which Affirm itself is legally responsible.
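The insight doesn't describe Affirm's implementation, but the flagging step might look something like this rule-based sketch; the patterns and reasons are invented for illustration, and a production system would more likely use ML classifiers:

```python
import re

# Illustrative patterns for claims a lender cannot let merchants make
# about its financing product (the rules here are hypothetical).
MISLEADING_PATTERNS = [
    (re.compile(r"no\s+credit\s+check", re.I), "Implies no credit check occurs"),
    (re.compile(r"guaranteed\s+approval", re.I), "Approval is never guaranteed"),
    (re.compile(r"0%\s*APR\s+for\s+everyone", re.I), "Rate depends on eligibility"),
]

def flag_claims(page_text: str):
    """Return (matched_text, reason) pairs for review by compliance staff."""
    return [
        (m.group(0), reason)
        for pattern, reason in MISLEADING_PATTERNS
        for m in pattern.finditer(page_text)
    ]

print(flag_claims("Shop now with guaranteed approval and no credit check!"))
```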

To enable agentic e-commerce while mitigating risk, major card networks are exploring how to issue credit cards directly to AI agents. These cards would have built-in limitations, such as spending caps (e.g., $200), allowing agents to execute purchases autonomously within safe financial guardrails.
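A toy sketch of the guardrail logic such a card might enforce, with a hypothetical `AgentCard` class and illustrative limits:

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    """Illustrative guardrails for a card issued to an AI agent."""
    spend_cap: float = 200.0          # hard limit per period
    allowed_merchants: set = field(default_factory=set)
    spent: float = 0.0

    def authorize(self, merchant: str, amount: float) -> bool:
        if self.allowed_merchants and merchant not in self.allowed_merchants:
            return False              # merchant not on the allowlist
        if self.spent + amount > self.spend_cap:
            return False              # would exceed the spending cap
        self.spent += amount          # record the authorized spend
        return True

card = AgentCard(spend_cap=200.0, allowed_merchants={"grocer.example"})
print(card.authorize("grocer.example", 150.0))  # True
print(card.authorize("grocer.example", 75.0))   # False: cap exceeded
```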

With many "Buy Now, Pay Later" (BNPL) services not reporting to credit bureaus, lenders face "stacking" risk, where a consumer carries invisible installment debt across several providers at once. To get a holistic view, lenders are increasingly incorporating cash flow data, such as checking account trends, into their underwriting.
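One way a lender might surface stacking from cash flow data is to look for repeated, evenly spaced debits; the heuristic below is purely illustrative:

```python
from collections import defaultdict

def detect_bnpl_stacking(transactions, min_installments=2):
    """Infer unreported installment plans from checking-account debits.
    transactions: list of (day, merchant, amount) tuples. A merchant with
    repeated equal debits roughly two weeks apart resembles a biweekly
    BNPL plan. Thresholds here are illustrative, not a real model."""
    debits = defaultdict(list)
    for day, merchant, amount in transactions:
        debits[(merchant, amount)].append(day)
    plans = []
    for (merchant, amount), days in debits.items():
        days.sort()
        gaps = [b - a for a, b in zip(days, days[1:])]
        if len(days) >= min_installments and all(12 <= g <= 16 for g in gaps):
            plans.append((merchant, amount))
    return plans

txns = [(1, "PayLater A", 25.0), (15, "PayLater A", 25.0),
        (3, "PayLater B", 40.0), (17, "PayLater B", 40.0),
        (8, "Coffee", 4.5)]
print(detect_bnpl_stacking(txns))  # two concurrent plans -> stacking risk
```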

In sectors like finance or healthcare, bypass initial regulatory hurdles by applying AI to non-sensitive, public information, for example by analyzing a company podcast. This builds momentum and demonstrates value while more complex, high-risk applications are vetted by legal and IT teams.

When developing AI for sensitive industries like government, anticipate that some customers will be skeptical. Design AI features with clear, non-AI alternatives. This allows you to sell to both "AI excited" and "AI skeptical" jurisdictions, ensuring wider market penetration.

Purely model-based and purely rule-based fraud systems each have blind spots, so Stripe combines the two for better results. For instance, a transaction with a CVC code mismatch (a rule) is blocked only if its model-generated risk score is also elevated, which prevents rejecting good customers who simply mistyped a code.
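A minimal sketch of that rule-plus-model combination; the threshold and field names are assumptions, not Stripe's actual logic:

```python
def should_block(transaction: dict, risk_score: float,
                 risk_threshold: float = 0.7) -> bool:
    """Combine a hard rule with a model score (thresholds illustrative).
    A CVC mismatch alone might be an honest typo; the transaction is
    blocked only when the model also considers it risky."""
    cvc_mismatch = not transaction.get("cvc_match", True)  # the "rule"
    return cvc_mismatch and risk_score >= risk_threshold

# Good customer who fat-fingered the CVC: not blocked.
print(should_block({"cvc_match": False}, risk_score=0.10))  # False
# CVC mismatch plus an elevated model score: blocked.
print(should_block({"cvc_match": False}, risk_score=0.85))  # True
```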

Companies like Ramp are developing financial AI agents using a tiered autonomy model akin to self-driving cars (L1-L5). By implementing robust guardrails and payment controls first, they can gradually increase an agent's decision-making power. This allows a progression from simple, supervised tasks to fully unsupervised financial operations, mirroring the evolution from highway assist to full self-driving.
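A sketch of how tiered autonomy might gate a payment, with levels and thresholds that are illustrative rather than Ramp's actual design:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Tiers loosely analogous to driving-automation levels (illustrative)."""
    SUGGEST_ONLY = 1    # agent drafts actions; a human executes
    HUMAN_APPROVES = 2  # agent executes only after explicit approval
    AUTO_SMALL = 3      # agent acts alone under a low dollar cap
    AUTO_REVIEWED = 4   # agent acts alone; humans audit after the fact
    FULL_AUTONOMY = 5   # agent acts alone within policy guardrails

def execute_payment(agent_level, amount, approved_by_human=False,
                    small_cap=50.0):
    """Gate a payment on the agent's autonomy tier (thresholds illustrative)."""
    if agent_level >= AutonomyLevel.AUTO_REVIEWED:
        return True
    if agent_level == AutonomyLevel.AUTO_SMALL and amount <= small_cap:
        return True
    if agent_level == AutonomyLevel.HUMAN_APPROVES and approved_by_human:
        return True
    return False  # SUGGEST_ONLY never executes directly

print(execute_payment(AutonomyLevel.AUTO_SMALL, 25.0))   # True
print(execute_payment(AutonomyLevel.AUTO_SMALL, 500.0))  # False: over cap
```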

Financial institutions are at a tipping point where the risk of keeping outdated legacy systems exceeds the risk of replacing them. AI-native platforms unlock significant revenue opportunities—such as processing more insurance applications—making the cost of inaction (missed revenue) too high to ignore.