
Unlike traditional software that produces identical, auditable results, AI is non-deterministic and often can't explain its reasoning. This poses a major challenge for finance, an industry where processes must be repeatable and transparent to meet regulatory and client expectations for showing work.

Related Insights

The need for explicit user transparency is most critical for nondeterministic systems like LLMs, where even creators don't always know why an output was generated. Unlike a simple rules engine with predictable outcomes, AI's "black box" nature requires giving users more context to build trust.

Traditional software relies on predictable, deterministic functions. AI agents introduce a new paradigm of "stochastic subroutines," where guarantees of correctness and explicit logic are given up. Developers must therefore design systems that achieve reliable outcomes despite the non-deterministic paths the AI might take to get there.
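One common way to build reliability on top of a stochastic subroutine is to wrap it in a deterministic contract: validate each output, retry on failure, and fall back to a safe default. The sketch below assumes a hypothetical `call_llm` function standing in for a real model call.

```python
import json
import random

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call: its output varies run to run.
    if random.random() < 0.3:
        return "sorry, I can't help with that"  # non-JSON failure mode
    return json.dumps({"sentiment": random.choice(["positive", "negative"])})

def reliable_sentiment(prompt: str, max_retries: int = 3) -> dict:
    """Treat the LLM as a stochastic subroutine: check every attempt
    against a strict output contract, retry on violation, and fall
    back to a deterministic default if all attempts fail."""
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: try again
        if result.get("sentiment") in {"positive", "negative"}:
            return result  # output satisfies the contract
    return {"sentiment": "unknown"}  # deterministic fallback
```

The caller sees a function with a fixed, checkable output shape even though the path taken inside is probabilistic.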

Leaders often misunderstand AI's probabilistic nature, treating it as a flaw that will eventually be "fixed." Drawing a parallel to chaos theory, this view holds that slight non-determinism is an intentional feature that enables creativity; the right response is building systems with guardrails and human oversight, not seeking perfect predictability.

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.

While businesses accept that employees make mistakes, their expectation for software is absolute reliability. This unforgiving standard creates a durable moat for enterprise platforms that provide deterministic outcomes, a key challenge for probabilistic AI models in critical workflows.

To guard against AI hallucinations in high-stakes decisions, advanced platforms use the LLM as an interpreter that writes code to query raw data. If the data is unavailable, the system returns an error instead of fabricating an answer, making every analysis fully auditable and grounded in verifiable data.
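The "LLM writes a query, not an answer" pattern can be sketched as follows. Here `llm_generate_query` is hypothetical (in a real system an LLM would translate the user's question into the structured query), and `RAW_DATA` holds illustrative figures; the key point is that the final number comes from executing the query, which fails loudly when the data is absent.

```python
RAW_DATA = {
    "revenue_2023": [1.2, 1.5, 1.1, 1.8],  # illustrative quarterly figures
}

class DataUnavailableError(Exception):
    """Raised instead of fabricating a number when data is missing."""

def llm_generate_query(question: str) -> dict:
    # Hypothetical stand-in: an LLM would map the question to this query.
    return {"series": "revenue_2023", "op": "sum"}

def run_query(query: dict) -> float:
    """Execute the generated query against raw data. The query itself
    is the audit trail; the answer is grounded in RAW_DATA or errors out."""
    series = RAW_DATA.get(query["series"])
    if series is None:
        raise DataUnavailableError(f"no data for {query['series']}")
    if query["op"] == "sum":
        return sum(series)
    raise ValueError(f"unsupported op: {query['op']}")

answer = run_query(llm_generate_query("What was total 2023 revenue?"))
```

Because the LLM only produces the query, an auditor can re-run the same query against the same data and reproduce the result exactly.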

A significant hurdle for using large vision models in production is their non-deterministic nature. The same model can produce different results for the same query at different times, making it difficult to build reliable, consistent downstream systems. This unpredictability is a key challenge alongside speed and cost.
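One lightweight mitigation for this drift is to fingerprint model outputs so that repeated runs of the same query can be compared cheaply before results flow downstream. A minimal sketch, assuming outputs are plain text:

```python
import hashlib

def output_fingerprint(text: str) -> str:
    """Hash a model output so repeated runs can be compared cheaply."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

def check_consistency(outputs: list[str]) -> bool:
    """True only if every run of the same query produced a
    byte-identical result; a downstream pipeline can gate on this
    and flag queries whose answers vary between runs."""
    return len({output_fingerprint(o) for o in outputs}) == 1
```

This does not make the model deterministic, but it turns silent inconsistency into a detectable, loggable event.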

Unlike other industries accustomed to deterministic software, the finance world is already familiar with non-deterministic systems through stochastic pricing models and market analysis. This cultural familiarity gives financial professionals a head start in embracing the probabilistic nature of modern AI tools.

Demanding interpretability from AI trading models is a fallacy because they operate at a superhuman level. An AI predicting a stock's price in one minute is processing data in a way no human can. Expecting a simple, human-like explanation for its decision is unreasonable, much like asking a chess engine to explain its moves in prose.

Unlike traditional software, AI products have unpredictable user inputs and LLM outputs (non-determinism). They also require balancing AI autonomy (agency) with user oversight (control). These two factors fundamentally change the product development process, requiring new approaches to design and risk management.