We scan new podcasts and send you the top 5 insights daily.
AVS (Address Verification Service) for credit cards doesn't return a simple pass/fail. It returns a range of match statuses, because perfect address matching is impossible given data-entry variations and stale bank records. Businesses choose an acceptable risk threshold, often accepting a ZIP-code-only match, to avoid declining legitimate sales.
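A minimal sketch of how a business might encode that threshold. The codes follow the common Visa-style convention (Y, A, Z, N, U), but exact codes vary by card network and processor, and the mapping itself is an illustrative policy choice, not a standard:

```python
# Illustrative policy: map AVS result codes to an action. A ZIP-only
# match ("Z") is accepted here, reflecting the common threshold
# described above; the mapping is a business decision, not a rule of AVS.
AVS_POLICY = {
    "Y": "accept",   # street address and ZIP both match
    "A": "review",   # street address matches, ZIP does not
    "Z": "accept",   # ZIP matches, street address does not
    "N": "decline",  # neither matches
    "U": "review",   # issuer could not verify the address
}

def avs_decision(code: str) -> str:
    """Return the configured action for an AVS result code."""
    # Unknown or processor-specific codes fall back to manual review.
    return AVS_POLICY.get(code.upper(), "review")
```

A stricter merchant would simply move "Z" (or even "A") into the decline bucket; the point is that the pass/fail line is chosen, not given.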
Max Levchin claims any single data point that seems to dramatically improve underwriting accuracy is a red herring. He argues these 'magic bullets' are brittle and fail when market conditions shift. A robust risk model instead relies on aggregating small lifts from many subtle factors.
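The "many small lifts" idea can be sketched as a simple additive risk score, where each weak signal nudges the log-odds a little and no single feature dominates. The feature names and weights below are invented for illustration:

```python
import math

# Illustrative weights: each signal contributes a small log-odds nudge.
# No single feature is a "magic bullet"; the score comes from aggregation.
WEIGHTS = {
    "email_age_days_low": 0.30,
    "ip_geo_mismatch": 0.40,
    "velocity_above_norm": 0.35,
    "device_seen_before": -0.50,      # negative weights lower risk
    "billing_shipping_match": -0.40,
}
BIAS = -2.0  # baseline log-odds of fraud

def fraud_probability(signals: dict) -> float:
    """Combine active boolean signals into a fraud probability."""
    logit = BIAS + sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return 1.0 / (1.0 + math.exp(-logit))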
Fair lending laws require banks to give specific reasons for a credit denial, which is difficult with complex AI models. To navigate this, banks first score applicants with a traditional, explainable model. If that model says "no," an AI model then looks for a path to approval. Because the AI can only flip denials into approvals, every denial's stated reasons still come from the explainable model, sidestepping the regulatory disclosure hurdle.
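The two-pass flow can be sketched as below. The function names and threshold are hypothetical; the key property is that the complex model can only upgrade a denial, never cause one:

```python
# Sketch of the two-pass decisioning flow. `traditional_model` is assumed
# to return (score, reason_codes); `ai_model` returns just a score.
def decide(applicant, traditional_model, ai_model, threshold=0.5):
    score, reasons = traditional_model(applicant)
    if score >= threshold:
        return "approve", []
    # Second look: the AI model can only flip a "no" into a "yes".
    if ai_model(applicant) >= threshold:
        return "approve", []
    # Any denial carries reason codes from the explainable model only.
    return "deny", reasons
```

Because the AI never produces a denial, the bank never has to explain an AI decision to a rejected applicant.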
Binary decisions are brittle. For payments that are neither clearly safe nor clearly fraudulent, Stripe uses a "soft block." This triggers a 3DS authentication step, allowing legitimate users to proceed while stopping fraudsters, resolving ambiguity without losing revenue.
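The soft block amounts to three-way routing on a risk score instead of a binary allow/block. A minimal sketch, with invented thresholds (Stripe's actual cutoffs are not public):

```python
# Three-way routing on a model risk score. The middle band gets a 3DS
# challenge rather than a hard block; thresholds here are illustrative.
def route_payment(risk_score: float) -> str:
    if risk_score < 0.2:
        return "allow"
    if risk_score < 0.8:
        return "challenge_3ds"  # soft block: legit users authenticate through
    return "block"
```

Legitimate cardholders in the ambiguous band complete the 3DS step and convert; fraudsters, who typically can't, are stopped without a single hard-block threshold having to get every case right.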
Businesses and financial institutions intentionally accept a certain level of fraud. The friction required to eliminate it entirely would block too many legitimate transactions, ultimately costing more in lost revenue (lower conversion) than the fraud itself. It is a calculated trade-off between security and usability.
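The trade-off is simple expected-cost arithmetic: tightening rules cuts fraud losses but also blocks legitimate orders. All numbers below are illustrative, chosen only to show that a stricter policy can cost more overall than a lenient one:

```python
# Expected total cost of a fraud policy: fraud losses plus revenue lost
# to falsely blocked legitimate orders. All inputs are illustrative.
def expected_cost(fraud_rate, block_rate_legit, avg_order, n_orders,
                  chargeback_fee=15):
    fraud_loss = fraud_rate * n_orders * (avg_order + chargeback_fee)
    lost_revenue = block_rate_legit * n_orders * avg_order
    return fraud_loss + lost_revenue

# Lenient: 1% fraud slips through, 0.5% of good orders blocked.
lenient = expected_cost(0.010, 0.005, avg_order=80, n_orders=10_000)
# Strict: fraud cut to 0.2%, but 4% of good orders now blocked.
strict = expected_cost(0.002, 0.040, avg_order=80, n_orders=10_000)
```

With these inputs, the strict policy's lost conversion dwarfs the fraud it prevents, which is exactly why businesses accept a nonzero fraud rate.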
The evolution of fraud prevention is shifting from a static view of "who the customer is" to a real-time understanding of "what this customer is trying to do right now." This focus on intent allows brands to adapt dynamically, either stopping abuse or creating loyalty.
Accurately identifying legitimate customers allows brands to move beyond just stopping abuse. This data empowers CX teams to confidently offer "surprise and delight" moments, like instant refunds, turning a potential service issue into a powerful, loyalty-building experience.
For complex cases like "friendly fraud," traditional ground truth labels are often missing. Stripe uses an LLM to act as a judge, evaluating the quality of AI-generated labels for suspicious payments. This creates a proxy for ground truth, enabling faster model iteration.
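The LLM-as-judge pattern can be sketched as below. The `labeler_llm` and `judge_llm` callables stand in for any chat-completion client, and the prompts are invented; Stripe's actual prompts and models are not public:

```python
# LLM-as-judge sketch: one model proposes labels for hard-to-label
# payments, a second model grades them, and only judge-accepted labels
# become proxy ground truth for training.
def build_proxy_labels(payments, labeler_llm, judge_llm):
    accepted = []
    for p in payments:
        label = labeler_llm(f"Label this payment as fraud/not_fraud: {p}")
        verdict = judge_llm(
            f"Payment: {p}\nProposed label: {label}\n"
            "Is this label well-supported by the evidence? Answer yes/no."
        )
        if verdict.strip().lower().startswith("yes"):
            accepted.append((p, label))
    return accepted
```

The judge is not ground truth either, but filtering labels through it yields a training signal good enough to iterate on, long before chargeback outcomes arrive months later.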
A defender's key advantage is their massive dataset of legitimate activity. Machine learning excels by modeling the messy, typo-ridden chaos of real business data. Fraudsters, however sophisticated, cannot perfectly replicate this organic "noise," causing their cleaner, fabricated patterns to stand out as anomalies.
Purely model-based and purely rule-based systems each have flaws, so Stripe combines them. For instance, a transaction with a CVC mismatch (a rule) is only blocked if its model-generated risk score is also elevated, preventing rejection of good customers who simply mistyped a digit.
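The combination reduces to a conjunction: the rule fires only when the model agrees. A minimal sketch with an invented threshold:

```python
# A rule (CVC mismatch) gated by a model score: block only when both
# signals agree. The 0.6 threshold is illustrative.
def should_block(cvc_mismatch: bool, risk_score: float,
                 elevated: float = 0.6) -> bool:
    return cvc_mismatch and risk_score >= elevated
```

A mismatch on an otherwise low-risk payment is treated as a likely typo and allowed through, while the same mismatch on a high-risk payment becomes a block.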
LLMs are extremely sensitive to inconsistencies in business data across online platforms. Even minor variations in your Name, Address, and Phone (NAP) can confuse the AI, causing it to drop your business from its recommendations entirely. Strict data consistency is paramount.
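One way to audit this is to normalize NAP records before comparing them across platforms, so trivial formatting differences don't read as real inconsistencies. The normalization rules below are deliberately minimal and illustrative:

```python
import re

# Normalize a Name/Address/Phone record: lowercase and strip punctuation
# from name and address, reduce the phone to its last 10 digits.
def normalize_nap(name: str, address: str, phone: str) -> tuple:
    norm = lambda s: re.sub(r"[^a-z0-9]", "", s.lower())
    digits = re.sub(r"\D", "", phone)[-10:]
    return norm(name), norm(address), digits

def consistent(records) -> bool:
    """True if all NAP records normalize to the same tuple."""
    return len({normalize_nap(*r) for r in records}) == 1
```

Records that only a normalizer can reconcile (say, "(555) 010-2030" vs "+1 555 010 2030") are exactly the variations that may confuse an LLM reading the raw listings, so surfacing and fixing them at the source is the safer bet.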