Viewing fraud as its own form of infrastructure, with its own "APIs of evil," provides transferable lessons. By understanding how fraudulent systems are built and operated, we can better architect and secure the legitimate, critical infrastructure we all depend on.

Related Insights

Unlike human attackers, AI can ingest a company's entire API surface to find and exploit combinations of access patterns that individual, siloed development teams would never notice. This makes it a powerful tool for discovering hidden security holes that arise from a lack of cross-team coordination.
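
A minimal sketch of that cross-team blind spot, in Python: brute-force every ordered pair of endpoints and flag chains where one team's endpoint leaks an identifier that another team's endpoint accepts without an ownership check. The endpoint names and fields are hypothetical, invented purely for illustration.

```python
# Exhaustively pair every endpoint's outputs with every other endpoint's
# inputs -- trivial for a machine, impractical for siloed humans who each
# review only their own team's endpoints.
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Endpoint:
    name: str
    team: str
    emits: set             # identifiers this endpoint returns to the caller
    accepts: set           # identifiers this endpoint takes as input
    checks_ownership: bool # does it verify the caller owns the identifier?

endpoints = [
    Endpoint("GET /orders", "payments", {"warehouse_id"}, set(), True),
    Endpoint("POST /restock", "logistics", set(), {"warehouse_id"}, False),
]

for a, b in permutations(endpoints, 2):
    leaked = a.emits & b.accepts
    if leaked and a.team != b.team and not b.checks_ownership:
        print(f"possible chain: {a.name} leaks {leaked} -> {b.name} (no ownership check)")
```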

The same AI technology amplifying cyber threats can also generate highly secure, formally verified code. This presents a historic opportunity for a society-wide effort to replace vulnerable legacy software in critical infrastructure, leading to a durable reduction in cyber risk. The main challenge is creating the motivation for this massive undertaking.

A fraud operation can be brilliant at exploiting systemic weaknesses while being comically bad at faking basic evidence, like having one person forge dozens of signatures. This apparent paradox is unsurprising: it reflects a division of labor similar to that of legitimate businesses, with different skill levels applied to strategy versus execution.

Vercel is building infrastructure based on a threat model where developers cannot be trusted to handle security correctly. By extracting critical functions like authentication and data access from the application code, the platform can enforce security regardless of the quality or origin (human or AI) of the app's code.
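
A minimal sketch of that extraction pattern in Python (not Vercel's actual API): the platform layer wraps every handler with an authentication check, so even a sloppy or AI-generated handler never runs for an unauthenticated request. The secret and handler names are assumptions for illustration.

```python
import hmac, hashlib

PLATFORM_SECRET = b"platform-managed-key"  # hypothetical; held by the platform, never the app

def platform_auth(handler):
    """Platform-enforced gate: app code below this line is untrusted."""
    def wrapped(request):
        expected = hmac.new(PLATFORM_SECRET,
                            request.get("user", "").encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(request.get("token", ""), expected):
            return {"status": 401, "body": "unauthenticated"}
        return handler(request)  # app code runs only after the platform's check
    return wrapped

@platform_auth
def app_handler(request):
    # Application code -- human- or AI-written -- can be buggy here without
    # exposing unauthenticated access, because the wrapper already gated it.
    return {"status": 200, "body": f"hello {request['user']}"}

print(app_handler({"user": "alice", "token": "wrong"}))  # -> {'status': 401, ...}
```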

Corruption is less a moral failing than a predictable outcome of game theory. If a system contains an exploit, some subset of people will exploit it to the maximum. The solution is not appealing to morality but designing radically transparent systems that remove the opportunity for exploitation.
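
A toy expected-value calculation behind that game-theory claim, with purely illustrative numbers: exploiting pays whenever the gain exceeds the expected penalty, so some actors will do it regardless of moral appeals.

```python
gain = 100_000     # payoff from exploiting the loophole (assumed)
p_caught = 0.05    # detection probability in an opaque system (assumed)
penalty = 500_000  # fine if caught (assumed)

ev_exploit = gain - p_caught * penalty  # 100k - 25k = 75k > 0: exploiting is "rational"
print(f"expected value of exploiting: {ev_exploit:,.0f}")

# Radical transparency attacks p_caught, not morality: at p_caught = 0.9 the
# expected value becomes 100k - 450k = -350k, and the exploit stops paying.
```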

Institutions like the legal and tax systems assume human-level effort, making them vulnerable to denial-of-service attacks from AI. An AI can generate millions of lawsuits or tax filings, overwhelming the infrastructure. Society must redesign these foundational systems with the assumption that they will face persistent, large-scale, intelligent attacks.
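
A minimal sketch of one redesign primitive under that assumption: a per-identity token bucket, so filing capacity is budgeted per actor instead of presuming human-level effort. The FilingGate name, rates, and behavior on rejection are illustrative assumptions, not any real system's design.

```python
import time

class FilingGate:
    def __init__(self, rate_per_day: float, burst: int):
        self.rate = rate_per_day / 86_400  # tokens replenished per second
        self.burst = burst
        self.buckets = {}                  # identity -> (tokens, last_timestamp)

    def allow(self, identity: str) -> bool:
        tokens, last = self.buckets.get(identity, (self.burst, time.time()))
        now = time.time()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            return False                   # over budget: queue, fee, or human review
        self.buckets[identity] = (tokens - 1, now)
        return True

gate = FilingGate(rate_per_day=10, burst=3)
print([gate.allow("llm-farm-001") for _ in range(5)])  # [True, True, True, False, False]
```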

While many focus on AI for consumer apps or underwriting, its most significant immediate adopters have been fraudsters. AI is driving 18-20% annual growth in financial fraud by automating scams at unprecedented scale, making it the most urgent AI-related challenge for the industry.

Large-scale fraud operates like a business with a supply chain of specialized services like incorporation agents, mail services, and accountants. While some tools are generic (Excel), graphing the use of shared, specialized infrastructure can quickly unravel entire fraud networks.
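
A minimal sketch of the graphing idea in Python: union-find links shell companies that share specialized infrastructure (incorporation agents, mail drops, accountants), so one flagged entity surfaces the whole cluster. The company and service names are invented for illustration.

```python
from collections import defaultdict

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# (company, shared specialized service) observations
records = [
    ("ShellCo A", "agent:FastIncorp"), ("ShellCo B", "agent:FastIncorp"),
    ("ShellCo B", "mail:Suite 400"),   ("ShellCo C", "mail:Suite 400"),
    ("HonestCo",  "agent:BigFourLLP"),  # generic/large providers add little signal
]
for company, service in records:
    union(company, service)

clusters = defaultdict(set)
for company, _ in records:
    clusters[find(company)].add(company)

# One suspicious node pulls in the whole network (set order may vary):
print([c for c in clusters.values() if len(c) > 1])
# -> [{'ShellCo A', 'ShellCo B', 'ShellCo C'}]
```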

A defender's key advantage is their massive dataset of legitimate activity. Machine learning excels by modeling the messy, typo-ridden chaos of real business data. Fraudsters, however sophisticated, cannot perfectly replicate this organic "noise," causing their cleaner, fabricated patterns to stand out as anomalies.
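
A minimal sketch of that "organic noise" signal: real ledgers show messy, high-variance values, while fabricated ones are often too regular. Flagging low dispersion and suspiciously round figures is a crude stand-in for the ML models described above; the data and thresholds are invented for illustration.

```python
from statistics import pstdev, mean

def roundness(amounts):
    """Share of suspiciously round figures -- fabricators love round numbers."""
    return sum(a % 100 == 0 for a in amounts) / len(amounts)

def too_clean(amounts, cv_floor=0.05, round_ceiling=0.5):
    cv = pstdev(amounts) / mean(amounts)  # coefficient of variation: dispersion signal
    return cv < cv_floor or roundness(amounts) > round_ceiling

real_invoices = [1043.17, 988.40, 1210.03, 875.55, 1102.96]     # messy, organic
faked_invoices = [1000.00, 1100.00, 1000.00, 1200.00, 1100.00]  # too tidy

print(too_clean(real_invoices))   # False: organic noise looks normal
print(too_clean(faked_invoices))  # True: the fabricated pattern stands out
```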

For years, businesses have focused on protecting their sites from malicious bots. That same architecture now blocks beneficial AI agents acting on behalf of consumers. Companies must rethink their technical infrastructure to distinguish and welcome these "good bots" for agentic commerce.
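
A minimal sketch of that distinction in Python: instead of blocking anything non-human, verify a signature from a pre-registered agent vendor and route it to an agent-friendly path. The registry, header names, and signing scheme are hypothetical stand-ins, not any real agent-identification standard.

```python
import hmac, hashlib

# Assumed out-of-band registration of agent vendors and shared keys.
AGENT_REGISTRY = {"shop-agent-v1": b"key-shared-with-vendor"}

def classify(headers: dict) -> str:
    agent_id = headers.get("X-Agent-Id")
    if agent_id in AGENT_REGISTRY:
        sig = hmac.new(AGENT_REGISTRY[agent_id],
                       headers.get("X-Agent-Nonce", "").encode(),
                       hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, headers.get("X-Agent-Signature", "")):
            return "verified-agent"  # welcome: serve structured, bot-friendly responses
    if headers.get("User-Agent", "").lower().startswith("mozilla"):
        return "probable-human"
    return "unverified-bot"          # legacy path: challenge or block

nonce = "n-123"
good_sig = hmac.new(b"key-shared-with-vendor", nonce.encode(), hashlib.sha256).hexdigest()
print(classify({"X-Agent-Id": "shop-agent-v1",
                "X-Agent-Nonce": nonce,
                "X-Agent-Signature": good_sig}))  # -> verified-agent
```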