The legal system, despite its structure, is fundamentally non-deterministic and influenced by human factors. Applying new, equally non-deterministic AI systems to this already unpredictable human process poses a deep philosophical challenge to the notion of law as a computable, deterministic process.
The judicial theory of "originalism" seeks to interpret laws based on their meaning at the time of enactment. This creates demand for AI tools that can perform large-scale historical linguistic analysis ("corpus linguistics"), effectively outsourcing a component of legal reasoning to AI.
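To make the "corpus linguistics" idea concrete, here is a minimal sketch of the kind of analysis such tooling performs: counting which words co-occur with a term in dated historical texts as a crude proxy for how the term was used in a given era. The mini-corpus, the term, and the windowing choice are purely illustrative assumptions, not any actual product's method.

```python
from collections import Counter

# Hypothetical mini-corpus of dated historical texts. A real analysis would
# run over millions of period documents, not a handful of strings.
corpus = [
    (1789, "the commerce among the several states shall be regulated by congress"),
    (1791, "no soldier shall be quartered in any house without the consent of the owner"),
    (1795, "commerce with foreign nations includes trade and navigation upon the seas"),
]

def collocates(term, window=5):
    """Count words appearing within `window` tokens of `term`, bucketed by decade.

    Shifts in a word's collocates over time are one signal of how its
    ordinary meaning drifted between enactment and today.
    """
    by_decade = {}
    for year, text in corpus:
        tokens = text.lower().split()
        counts = by_decade.setdefault((year // 10) * 10, Counter())
        for i, tok in enumerate(tokens):
            if tok == term:
                neighborhood = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                counts.update(neighborhood)
    return by_decade

print(collocates("commerce"))
```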
AI tools are taking over foundational research and drafting, tasks traditionally done by junior associates. This automation disrupts the legal profession's apprenticeship model, raising questions about how future senior lawyers will gain essential hands-on experience and skills.
The CEO contrasts general-purpose AI with the company's "courtroom-grade" solution, built on a proprietary, authoritative dataset of 160 billion documents. This grounds outputs in actual case law and makes them verifiable, addressing the core weaknesses of consumer models for professional use.
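In practice, "grounded and verifiable" typically means retrieval-grounded generation plus a citation check against the authoritative store. The sketch below illustrates that pattern under stated assumptions: the tiny `case_law` dictionary, the citation regex, and the `llm` callable are hypothetical stand-ins, not LexisNexis's actual pipeline.

```python
import re

# Hypothetical authoritative store: full citation -> passage text.
# A production system would index billions of documents; this stands in for it.
case_law = {
    "Marbury v. Madison, 5 U.S. 137 (1803)": "Established the principle of judicial review.",
    "Gibbons v. Ogden, 22 U.S. 1 (1824)": "Congress's commerce power extends to navigation.",
}

CITATION_RE = re.compile(r"[A-Z][\w.]+ v\. [\w.]+, \d+ U\.S\. \d+ \(\d{4}\)")

def grounded_answer(question, llm):
    """Answer only from retrieved passages, then verify every citation.

    `llm` is any callable(prompt) -> str; the prompt instructs it to cite
    only the supplied sources, and the check below rejects anything else.
    """
    sources = "\n".join(f"[{cite}] {text}" for cite, text in case_law.items())
    draft = llm(
        "Answer using ONLY the sources below, citing them in full.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    # Reject the draft if it cites anything outside the authoritative store,
    # which is the failure mode behind headline "hallucinated case" incidents.
    for cite in CITATION_RE.findall(draft):
        if cite not in case_law:
            raise ValueError(f"Unverifiable citation: {cite}")
    return draft
```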
To ensure accuracy in its legal AI, LexisNexis unexpectedly hired a large number of lawyers, not just data scientists. These legal experts are crucial for reviewing AI output, identifying errors, and training the models, highlighting the essential role of human domain expertise in specialized AI.
Rather than relying on a single LLM, LexisNexis employs a "planning agent" that decomposes a complex legal query into sub-tasks. It then assigns each sub-task (e.g., deep research, document drafting) to the specific LLM best suited for it, demonstrating a sophisticated, model-agnostic approach to enterprise AI.
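A minimal sketch of that orchestration pattern follows: a planner splits the request into typed sub-tasks, and a router dispatches each to whichever model is registered for that task type. The task types, the router entries, and the hard-coded plan are illustrative assumptions, not LexisNexis's implementation (in a real system the planning step is itself performed by an LLM).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SubTask:
    kind: str    # e.g. "research", "draft", "summarize"
    prompt: str

# Router: each task type maps to whichever model handles it best.
# The lambdas stand in for API clients of different LLM vendors.
MODEL_ROUTER: Dict[str, Callable[[str], str]] = {
    "research":  lambda p: f"[deep-research model] {p}",
    "draft":     lambda p: f"[drafting model] {p}",
    "summarize": lambda p: f"[fast summarizer] {p}",
}

def plan(query: str) -> List[SubTask]:
    """Planning agent: decompose the legal query into ordered sub-tasks."""
    return [
        SubTask("research", f"Find controlling authority for: {query}"),
        SubTask("draft", f"Draft a memo section answering: {query}"),
        SubTask("summarize", f"Summarize the memo for the client: {query}"),
    ]

def run(query: str) -> List[str]:
    """Dispatch each planned sub-task to its assigned model and collect outputs."""
    return [MODEL_ROUTER[task.kind](task.prompt) for task in plan(query)]

print("\n".join(run("Is a non-compete enforceable against a contractor in Ohio?")))
```

The design point is that the router, not the caller, owns the model choice, so individual models can be swapped per task type without changing the planning logic.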
While AI "hallucinations" grab headlines, the more systemic risk is lawyers becoming overly reliant on AI and failing to perform due diligence. The LexisNexis CEO predicts an attorney will eventually lose their license not because the AI failed, but because the human failed to properly review the work.
When pressed on whether his AI tool would help argue against birthright citizenship—a settled but politically contested issue—the LexisNexis CEO framed the product as a neutral tool. This highlights the tightrope enterprise AI providers must walk: serving all customers while avoiding being seen as politically biased.
