The judicial theory of "originalism" seeks to interpret laws based on their meaning at the time of enactment. This creates demand for AI tools that can perform large-scale historical linguistic analysis ("corpus linguistics"), effectively outsourcing a component of legal reasoning to AI.
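
To make the idea concrete, a corpus-linguistics query can be sketched in a few lines: survey how a statutory term was actually used around the time of enactment. The toy corpus, date range, and collocate scoring below are illustrative assumptions, not any vendor's actual pipeline.

```python
# Minimal sketch of a corpus-linguistics query: how was a term used
# near the time of enactment? Corpus contents are invented examples.
from collections import Counter

# Hypothetical corpus: (year, sentence) pairs from period documents.
corpus = [
    (1789, "the militia shall bear arms in defense of the state"),
    (1791, "he did bear arms against the crown"),
    (1792, "a bear was seen near the arms depot"),
]

def collocates(corpus, term, start, end, window=3):
    """Count words appearing within `window` tokens of `term`
    in documents dated between `start` and `end`."""
    counts = Counter()
    for year, text in corpus:
        if not (start <= year <= end):
            continue
        tokens = text.split()
        for i, tok in enumerate(tokens):
            if tok == term:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Which words most often surround "arms" in the founding era?
print(collocates(corpus, "arms", 1788, 1795).most_common(5))
```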

Related Insights

To ensure accuracy in its legal AI, LexisNexis hired an unexpectedly large number of lawyers, not just data scientists. These legal experts are crucial for reviewing AI output, identifying errors, and training the models, highlighting the essential role of human domain expertise in specialized AI.

Previous technology shifts like mobile or client-server were often pushed by technologists onto a hesitant market. In contrast, the current AI trend is being pulled by customers who are actively demanding AI features in their products, creating unprecedented pressure on companies to integrate them quickly.

Many laws were written before technological shifts like the smartphone or AI. Companies like Uber and OpenAI found massive opportunities by operating in legal gray areas where old regulations no longer made sense and their service provided immense consumer value.

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes a model's internal weights and activations to understand how it reasons, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
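
As a toy contrast between the two approaches, consider a linear model whose weights can be read directly. Real mechanistic interpretability targets circuits inside large transformers, so this is only an analogy; the model, feature names, and weights below are invented for illustration.

```python
# Toy contrast: black-box testing vs. reading the model's internals.
import numpy as np

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "zip_code"]   # hypothetical inputs
w = np.array([1.2, -2.0, 0.05])                   # learned weights (toy)

def model(x):
    return 1 / (1 + np.exp(-(x @ w)))             # black-box view: input -> output

# Black-box testing: probe with samples, observe outputs only.
sample = rng.normal(size=3)
print("prediction:", model(sample))

# Mechanistic view: inspect the weights themselves. Here the large
# negative weight on debt_ratio explains denials directly, and the
# near-zero weight on zip_code shows no reliance on a location proxy.
for name, weight in zip(features, w):
    print(f"{name:>10}: {weight:+.2f}")
```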

Current AI tools are empowering laypeople to generate a flood of low-quality legal filings. This "sludge" overwhelms the courts and creates more work for skilled attorneys, who must respond to the influx of meritless litigation, ironically boosting demand for the very profession AI is meant to disrupt.

The 2017 introduction of "transformers" revolutionized AI. Instead of being trained on a fixed, dictionary-style meaning for each word, models began learning the contextual relationships between words. This allowed AI to predict the next word in a sequence without needing a formal dictionary, leading to more generalist capabilities.
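
A minimal sketch of that next-word prediction, using the publicly available GPT-2 model via the Hugging Face transformers library (the legal-sounding prompt is an arbitrary example):

```python
# Next-token prediction with a pretrained transformer (GPT-2).
# The model ranks continuations purely from learned context.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The court held that the statute was", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, vocab_size)

# Probability distribution over the next token: no dictionary lookup,
# just attention over the preceding words.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  p={p.item():.3f}")
```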

The core legal battle is a referendum on "fair use" for the AI era. If AI summaries are deemed "transformative" (a new work), it's a win for AI platforms. If they're "derivative" (a repackaging), it could force widespread content licensing deals.

When pressed on whether his AI tool would help argue against birthright citizenship—a settled but politically contested issue—the LexisNexis CEO framed the product as a neutral tool. This highlights the tightrope enterprise AI providers must walk: serving all customers while avoiding being seen as politically biased.

Harvey is building agentic AI for law by modeling it on the human workflow where a senior partner delegates a high-level task to a junior associate. The associate (or AI agent) then breaks it down, researches, drafts, and seeks feedback, with the entire client matter serving as the reinforcement learning environment.
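
That delegation loop might be sketched as below. The class and method names are hypothetical stand-ins for LLM calls, not Harvey's actual API; a stub is included so the sketch runs end to end.

```python
# Sketch of the partner/associate delegation loop described above.
# StubLLM and its methods are hypothetical placeholders for model calls.
from dataclasses import dataclass

@dataclass
class Feedback:
    approved: bool
    notes: str

class StubLLM:
    """Stand-in for real model calls; each method would be an LLM prompt."""
    def decompose(self, task):
        return [f"research authority on: {task}", f"gather facts for: {task}"]
    def research(self, subtask):
        return f"[notes] {subtask}"
    def draft(self, task, notes):
        return f"MEMO re {task}\n" + "\n".join(notes)
    def review(self, memo):
        return Feedback(approved=True, notes="sufficient for a first pass")
    def revise(self, memo, feedback):
        return memo + f"\n[revised per: {feedback.notes}]"

def run_matter(task: str, llm, max_rounds: int = 3) -> str:
    """Associate-style loop: plan, research, draft, then seek partner review."""
    subtasks = llm.decompose(task)                # break the high-level task down
    notes = [llm.research(t) for t in subtasks]   # research each piece
    memo = llm.draft(task, notes)                 # produce a first work product
    for _ in range(max_rounds):                   # feedback loop ("partner review")
        feedback = llm.review(memo)
        if feedback.approved:
            break
        memo = llm.revise(memo, feedback)
    return memo

print(run_matter("motion to dismiss for lack of standing", StubLLM()))
```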

The CEO contrasts general-purpose AI with their "courtroom-grade" solution, built on a proprietary, authoritative data set of 160 billion documents. This ensures outputs are grounded in actual case law and verifiable, addressing the core weaknesses of consumer models for professional use.
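
The grounding step is essentially retrieval before generation: fetch authoritative documents first, then constrain the model to answer only from them, with citations. A minimal sketch, with an invented two-case store and naive keyword scoring standing in for a real 160-billion-document index:

```python
# Minimal retrieval-grounding sketch: answer from retrieved authority
# with citations, not from the model's free recall. The case store and
# scoring below are invented illustrations, not LexisNexis's pipeline.

CASES = {  # hypothetical case-law store: citation -> holding
    "Doe v. Roe, 123 F.3d 456 (9th Cir. 1999)":
        "a contract requires mutual assent",
    "Acme Corp. v. Widget Co., 789 F.2d 101 (2d Cir. 1986)":
        "ambiguous terms are construed against the drafter",
}

def retrieve(query: str, k: int = 1):
    """Rank cases by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CASES.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to the retrieved authorities."""
    sources = "\n".join(f"- {cite}: {holding}"
                        for cite, holding in retrieve(query))
    return (f"Answer using ONLY these authorities, citing each one:\n"
            f"{sources}\nQuestion: {query}")

print(grounded_prompt("when does a contract require mutual assent"))
```

Because every answer must trace back to a retrieved citation, outputs stay verifiable, which is the property the CEO frames as "courtroom-grade."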