A key operational use of AI at Affirm is for regulatory compliance. The company deploys models to automatically scan thousands of merchant websites and ads, flagging incorrect or misleading claims about its financing products for which Affirm itself is legally responsible.

Related Insights

To ensure accuracy in its legal AI, LexisNexis unexpectedly hired a large number of lawyers, not just data scientists. These legal experts are crucial for reviewing AI output, identifying errors, and training the models, highlighting the essential role of human domain expertise in specialized AI.

In regulated industries, AI's value isn't perfect breach detection but efficiently filtering millions of calls to identify a small, ambiguous subset needing human review. This shifts the goal from flawless accuracy to dramatically improving the efficiency and focus of human compliance officers.
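That triage pattern can be sketched in a few lines. This is a minimal illustration, not any vendor's actual system: the `toy_score` function is a hypothetical stand-in for a trained risk model, and the band thresholds are made-up values.

```python
# Minimal sketch of confidence-band triage for compliance review:
# auto-clear low-risk calls, auto-flag obvious violations, and route
# only the ambiguous middle band to human compliance officers.

def triage(transcripts, score_fn, clear_below=0.2, flag_above=0.9):
    """Route each call transcript by its model risk score."""
    cleared, flagged, needs_review = [], [], []
    for t in transcripts:
        s = score_fn(t)
        if s < clear_below:
            cleared.append(t)       # confidently compliant
        elif s > flag_above:
            flagged.append(t)       # confidently a violation
        else:
            needs_review.append(t)  # ambiguous: send to a human
    return cleared, flagged, needs_review

# Toy scoring function standing in for a trained classifier.
def toy_score(text):
    risky = {"guarantee", "no risk", "refund"}
    hits = sum(1 for phrase in risky if phrase in text.lower())
    return min(1.0, hits / 2)

calls = [
    "Thanks for calling, have a nice day.",
    "This investment is a guarantee with no risk.",
    "We may be able to offer a refund in some cases.",
]
cleared, flagged, review = triage(calls, toy_score)
```

The efficiency gain comes from the middle band being small: of millions of calls, humans only ever see the cases the model cannot confidently resolve either way.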

Pharmaceutical advertising is the second-leading source of health information for patients. AI can “de-criminalize” it by moving from untrackable broadcast ads to programmatic, personalized, and compliant digital content, turning it into a valuable, trusted patient resource that regulators can actually monitor.

Beyond data privacy, a key ethical responsibility for marketers using AI is content integrity. This means using platforms that provide a verifiable trail for every asset, check for originality, and offer AI-assisted verification of factual accuracy. Doing so protects the brand, ensures content is original, and builds customer trust.

Generative AI tools are only as good as the content they're trained on. Lenovo intentionally delayed activating an AI search feature because they lacked confidence in their content governance. Without a system to ensure content is accurate and up-to-date, AI tools risk providing false information, which erodes seller trust.

Rather than fully replacing humans, the optimal AI model acts as a teammate: it handles the data crunching and generates recommendations, freeing teams from analysis to focus on strategic decisions and on approving the AI's proposed actions, such as halting ad spend on out-of-stock items.

Unlike simple "Ctrl+F" searches, modern language models analyze and attribute semantic meaning to legal phrases. This allows platforms to track a single legal concept (like a "J.Crew blocker") even when it's phrased a thousand different ways across complex documents, enabling true market-wide quantification for the first time.
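The difference from literal string search can be illustrated with a tiny similarity-matching sketch. This is a toy, not a real legal-tech pipeline: the `embed` function below is a crude bag-of-words stand-in for a proper sentence-embedding model, the sample clauses are invented phrasings, and the threshold is arbitrary.

```python
# Sketch: matching differently-worded clauses to one legal concept via
# vector similarity rather than exact string search ("Ctrl+F").

import math
from collections import Counter

def embed(text):
    # Stand-in embedding: word counts. A real system would use a
    # language model's sentence embeddings to capture paraphrases.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def matches_concept(clause, concept_examples, threshold=0.3):
    """Flag a clause if it is close to any known phrasing of the concept."""
    c = embed(clause)
    return any(cosine(c, embed(ex)) >= threshold for ex in concept_examples)

# Hypothetical phrasings of an IP-transfer restriction (the kind of
# concept a "J.Crew blocker" addresses).
blocker_examples = [
    "transfers of intellectual property to unrestricted subsidiaries are prohibited",
    "no material intellectual property may be transferred to an unrestricted subsidiary",
]
clause = ("the borrower shall not transfer material intellectual property "
          "to any unrestricted subsidiary")
```

Even this crude version matches the clause despite its different wording; swapping in true semantic embeddings is what lets platforms count every variant of a concept across an entire market.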

In sectors like finance or healthcare, teams can bypass initial regulatory hurdles by implementing AI on non-sensitive, public information, such as analyzing a company podcast. This builds momentum and demonstrates value while more complex, higher-risk applications are vetted by legal and IT teams.

For enterprises, scaling AI content without built-in governance is reckless. Rather than manual policing, guardrails like brand rules, compliance checks, and audit trails must be integrated from the start. The principle is "AI drafts, people approve," ensuring speed without sacrificing safety.