Probabilistic Thinking Degrades Justice from a Principle-Based System

Introducing predictive algorithms into the legal system for bail, parole, or even lawsuit viability shifts its foundation: justice becomes a game of probabilities rather than a process grounded in principles. That shift makes it easier for guilty parties to escape, since they need only make a case appear unlikely enough to succeed that it is never pursued, distorting justice before any court weighs the merits.

Related Insights

Even if roles like judge are legally protected from direct AI replacement, they can be automated de facto. If every judge relies on the same AI model for decision support, the outcome is systemic homogenization of judgment, creating a centralized point of failure without any formal automation.
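
To see the failure mode concretely, here is a toy simulation (all rates invented) contrasting uncorrelated judicial error with error that is perfectly correlated through one shared model:

```python
import random

random.seed(1)
CASES = 10_000
N_JUDGES = 100
ERROR_RATE = 0.05  # assumed per-decision error rate, for illustration only

# Independent judges: errors are uncorrelated, so essentially no case
# is wrongly decided by everyone at once.
unanimous_errors = 0
for _ in range(CASES):
    if all(random.random() < ERROR_RATE for _ in range(N_JUDGES)):
        unanimous_errors += 1

# Judges sharing one AI model: when the model errs, every judge errs,
# so each model error is a system-wide error.
shared_errors = sum(random.random() < ERROR_RATE for _ in range(CASES))

print(f"Cases failed by all independent judges: {unanimous_errors}")
print(f"Cases failed system-wide under a shared model: {shared_errors}")
# Same per-decision error rate, but the shared model turns individual
# mistakes into a single, centralized point of failure.
```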

When an algorithm deems someone "unemployable," that person is denied jobs, thus validating the prediction. The system generates its own accuracy by creating the reality it purports to predict, leaving no error signal to correct itself. Oxford philosopher Carissa Véliz calls this a "perfect crime" as the evidence disappears.
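
A minimal simulation sketch of that feedback loop, with invented numbers: a flag that carries no information about the person still comes out looking 100% accurate, because acting on the flag produces the very outcome it predicted.

```python
import random

random.seed(0)

# Hypothetical population: each person is either able to hold a job or not.
people = [{"able": random.random() < 0.8} for _ in range(10_000)]

# An (assumed) algorithm flags 20% of people as "unemployable",
# uncorrelated with true ability -- the flag is pure noise.
for p in people:
    p["flagged"] = random.random() < 0.2

# Employers follow the flag: flagged applicants are never hired, so a
# flagged person never gets the chance to disprove the prediction.
for p in people:
    p["employed"] = p["able"] and not p["flagged"]

flagged = [p for p in people if p["flagged"]]
accuracy = sum(not p["employed"] for p in flagged) / len(flagged)
print(f"'Unemployable' flags confirmed: {accuracy:.0%}")  # always 100%
# The flag looks perfectly accurate yet carried no information: the denial
# itself produced the outcome, so no error signal ever appears.
```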

Risk assessment tools used in courts are often trained on old data and fail to account for societal shifts in crime and policing, a problem known as "cohort bias." The result is massive overprediction of an individual's likelihood of committing a crime, and therefore harsher, unjust sentences.
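
A toy illustration of the arithmetic, with invented base rates: a model calibrated on an older cohort keeps predicting that cohort's risk even after the real-world rate has fallen.

```python
# Hypothetical base rates (invented for illustration).
old_cohort_rate = 0.30   # reoffense rate in the historical training data
new_cohort_rate = 0.15   # actual rate after shifts in crime and policing

# A model calibrated on the old cohort keeps predicting the old rate.
predicted = old_cohort_rate

overprediction = predicted / new_cohort_rate
print(f"Predicted risk: {predicted:.0%}, actual risk: {new_cohort_rate:.0%}")
print(f"Risk is overstated by a factor of {overprediction:.1f}x")
# Every individual scored by the stale model looks twice as risky as they
# are, and sentencing that keys off the score inherits the error wholesale.
```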

Former Michigan Chief Justice Bridget McCormack argues that the legal system's probabilistic nature, driven by human fallibility, is a core inefficiency. Greater predictability would reduce disputes by allowing businesses and individuals to plan around clear, consistently enforced rules.

Historically, time and cost acted as a natural defense against systems being overwhelmed. AI agents can now execute millions of tasks, like filing legal motions or making lowball offers, for nearly free, threatening to collapse systems that were never built for this scale.
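
A back-of-the-envelope sketch of that collapse in friction; every figure below is an assumption, not data:

```python
# Hypothetical cost comparison for drafting and filing legal motions.
human_hours_per_motion = 3
human_rate_per_hour = 200          # USD, assumed blended legal rate
agent_cost_per_motion = 0.05       # USD, assumed AI inference cost

n_motions = 1_000_000

human_cost = n_motions * human_hours_per_motion * human_rate_per_hour
agent_cost = n_motions * agent_cost_per_motion

print(f"Human cost for {n_motions:,} motions: ${human_cost:,.0f}")
print(f"Agent cost for {n_motions:,} motions: ${agent_cost:,.0f}")
print(f"Cost reduction: {human_cost / agent_cost:,.0f}x")
# The friction that once rate-limited filings (time and money) vanishes,
# while the court's capacity to process them does not.
```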

The legal system, despite its structure, is fundamentally non-deterministic and influenced by human factors. Applying new, equally non-deterministic AI systems to this already unpredictable human process poses a deep philosophical challenge to the notion of law as a computable, deterministic process.

While AI can inherit biases from training data, those datasets can be audited, benchmarked, and corrected. In contrast, uncovering and remedying the complex cognitive biases of a human judge is far more difficult and less systematic, making algorithmic fairness a potentially more solvable problem.
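
A minimal sketch of what "auditable" means in practice, using a hypothetical dataset and one simple fairness metric (the demographic parity gap); the specific metric matters less than the fact that it can be computed, tracked, and re-run at all:

```python
# Hypothetical training records with a protected group label.
records = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

gap = approval_rate("A") - approval_rate("B")
print(f"Group A: {approval_rate('A'):.0%}, Group B: {approval_rate('B'):.0%}")
print(f"Demographic parity gap: {gap:.0%}")
# A gap like this can be benchmarked, tracked across dataset versions, and
# corrected (reweighting, relabeling). No comparable audit exists for the
# private deliberations of a human judge.
```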

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
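
Mechanistic interpretability on real deep networks is far harder than this, but the stance it takes can be shown on a toy linear model: instead of treating the model as a black box, read its weights and decompose a decision into per-feature contributions (all weights and inputs below are invented):

```python
import numpy as np

# Toy "model": a linear scorer whose weights we can open up and read.
features = ["income", "debt", "prior_defaults"]
w = np.array([0.8, -0.5, -1.2])   # assumed learned weights
b = 0.1                           # assumed bias term

x = np.array([1.2, 0.4, 1.0])     # one hypothetical applicant, normalized

contributions = w * x             # how each feature moved the score
score = contributions.sum() + b

for name, c in zip(features, contributions):
    print(f"{name:15s} contributed {c:+.2f}")
print(f"final score: {score:+.2f} -> {'approve' if score > 0 else 'reject'}")
# Black-box testing could only report input/output pairs; reading the
# weights yields a mechanistic account of *why* the score came out negative.
```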

When a bank rejects a loan based on clear, factual criteria (e.g., insufficient funds), the applicant can take specific actions to rectify it. Rejections based on opaque predictive models are not facts but "educated guesses," which cannot be proven false, leaving applicants with no recourse and shielding institutions from accountability.
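
A contrast sketch, with hypothetical rules and thresholds: the factual rejection names a condition the applicant can check and change, while the model-based rejection asserts a probability that nothing the applicant does can falsify.

```python
def reject_on_facts(balance: float, required: float) -> str | None:
    # Factual criterion: the applicant can verify it and act on it.
    if balance < required:
        return f"Insufficient funds: balance {balance} < required {required}"
    return None

def reject_on_model(default_probability: float) -> str | None:
    # Opaque criterion: an educated guess the applicant cannot disprove.
    if default_probability > 0.3:
        return "Predicted default risk too high"
    return None

print(reject_on_facts(balance=900.0, required=1000.0))
# -> deposit 100 more and the rejection provably goes away.
print(reject_on_model(default_probability=0.42))
# -> no corrective action is specified, and no future event can prove the
#    guess wrong for an applicant who was never given the loan.
```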

Technological advancement, particularly in AI, moves faster than legal and social frameworks can adapt. This creates 'lawless spaces,' akin to the Wild West, where powerful new capabilities exist without clear rules or recourse for those negatively affected. This leaves individuals vulnerable to algorithmic decisions about jobs, loans, and more.
