Tech leaders often equate the law's structured language with deterministic software code, believing it can be easily automated. They miss that our legal system requires ambiguity to function, allowing for interpretation and argument. This fundamental difference creates a collision when trying to apply a purely computational model to the law.

Related Insights

Former Michigan Chief Justice Bridget McCormack argues that the legal system's probabilistic nature, driven by human fallibility, is a core inefficiency. Greater predictability would reduce disputes by allowing businesses and individuals to plan around clear, consistently enforced rules.

The legal system, despite its structure, is fundamentally non-deterministic and influenced by human factors. Applying new, equally non-deterministic AI systems to this already unpredictable human process poses a deep philosophical challenge to the notion of law as a computable, deterministic process.

The intersection of AI and law is not a single topic but two distinct, orthogonal fields. The "law of AI" concerns policy and regulation of the technology itself. "AI and the law" studies how AI tools are transforming the cognitive practice of the legal profession.

Unlike simple "Ctrl+F" searches, modern language models analyze and attribute semantic meaning to legal phrases. This allows platforms to track a single legal concept (like a "J.Crew blocker") even when it's phrased a thousand different ways across complex documents, enabling true market-wide quantification for the first time.
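The idea can be sketched in miniature: represent each clause as a vector (an "embedding") and compare meanings with cosine similarity, so differently worded clauses about the same concept score as near-matches. The vectors and clause texts below are hand-made stand-ins for illustration only; a real platform would obtain embeddings from a language model.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: two different phrasings of an IP-transfer
# restriction land close together in vector space, while an unrelated
# interest-payment clause lands far away.
clauses = {
    "no transfer of IP to unrestricted subsidiaries": [0.90, 0.10, 0.30],
    "intellectual property may not be dropped down":  [0.85, 0.15, 0.35],
    "interest is payable quarterly in arrears":       [0.10, 0.90, 0.20],
}

# Embedding of the concept being tracked across documents.
query = [0.88, 0.12, 0.32]

# Rank clauses by semantic closeness to the concept, best match first.
ranked = sorted(clauses,
                key=lambda c: cosine_similarity(query, clauses[c]),
                reverse=True)
```

A keyword search would treat the two IP phrasings as unrelated strings; the similarity ranking groups them together, which is what lets a platform count every instance of a concept market-wide regardless of wording.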

Messy AI-generated code ("slop") can still result in a functional product, hiding imperfections from the end user. In knowledge work, a slightly "off" AI-generated contract or memo creates immediate legal or business risk, as there is no interface to abstract away the sloppiness.

A top-tier lawyer’s value mirrors that of a distinguished engineer: it's not just their network, but their ability to architect complex transactions. They can foresee subtle failure modes and understand the entire system's structure, a skill derived from experience with non-public processes and data—the valuable 'reasoning traces' AI models lack.

Our legal framework, which relies on precedent and slow, deliberate change, cannot keep up with the exponential advancement of AI. This fundamental mismatch creates a regulatory crisis where laws are instantly obsolete, suggesting the need for a new paradigm like 'lightning round legislation' to govern emerging tech.

Introducing predictive algorithms into the legal system for bail, parole, or even lawsuit viability shifts its foundation. Justice becomes a game of probabilities rather than a process based on principles. This makes it easier for guilty parties to escape: they need only make their case appear slightly unlikely to succeed, distorting justice.

AI's value in a compliance platform isn't in answering binary audit questions (e.g., "is X encrypted?"). Instead, it should automate the messy, non-deterministic work around them, like finding compliance obligations hidden in legal contracts, a task previously impossible to do at scale.

Giving AI a 'constitution' to follow isn't a panacea for alignment. As history shows with human legal systems, even well-written principles can be interpreted in unintended ways. North Korea’s liberal-on-paper constitution is a prime example of this vulnerability.