A key challenge of adopting ML in investing is its lack of explainability. When a traditional value strategy underperforms, you can point to a valuation bubble. An ML model can't offer a similar narrative, making it extremely difficult to manage client relationships during drawdowns because the 'why' is missing.

Related Insights

Ken Griffin is skeptical of AI's role in long-term investing. He argues that since AI models are trained on historical data, they excel at static problems. However, investing requires predicting a future that may not resemble the past—a dynamic, forward-looking task where these models inherently struggle.

The ambition to fully reverse-engineer AI models into simple, understandable components is proving unrealistic, as their internal workings are messy and complex. Interpretability's practical value lies less in hard guarantees and more in coarse-grained analysis, such as identifying when specific high-level capabilities are being used.

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.

Instead of opaque 'black box' algorithms, MDT uses decision trees that let its team see and understand the logic behind every trade. This transparency is crucial for validating the model's decisions and for spotting when a factor's effectiveness is decaying over time.
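As a rough sketch of the transparency tree models offer (this is not MDT's actual system; the factor names and data below are invented for illustration), a shallow decision tree's full trading logic can be printed as a short set of human-readable rules:

```python
# Minimal sketch: train a small decision tree on made-up factor data and
# print its rules. Feature names, thresholds, and labels are illustrative
# only, not any firm's actual model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
# Hypothetical cross-sectional factor scores for each stock
X = rng.normal(size=(n, 3))
feature_names = ["value_score", "momentum_12m", "earnings_revision"]
# Synthetic label: 'outperform' when cheap and recently upgraded, plus noise
y = ((X[:, 0] > 0.2) & (X[:, 2] > 0.0)) ^ (rng.random(n) < 0.05)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The entire decision logic is a handful of if/else splits a human can audit,
# and drifting split quality over time is one way factor decay shows up.
print(export_text(tree, feature_names=feature_names))
```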

AI can quickly find data in financial reports but can't replicate an expert's ability to see crucial connections and second-order effects. This can lull investors into a false sense of security, relying on a tool that provides information without the wisdom to interpret it correctly.

Cliff Asness argues that for machine learning to be truly additive, it must have a degree of opacity. If a human could fully intuit every step of the ML process, it would imply the discoveries could have been made with simpler methods. Surrendering the need for full explanation is necessary to harness its power.

When selecting foundation models, engineering teams often prioritize "taste" and predictable failure patterns over raw performance. A model that fails slightly more often but in a consistent, understandable way is more valuable and easier to build robust systems around than a top performer with erratic, hard-to-debug errors.

Cliff Asness explains that integrating machine learning into investment processes involves a crucial trade-off. While ML models can identify complex, non-linear patterns that outperform traditional methods, their inner workings are often uninterpretable, forcing a departure from intuitively understood strategies.
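To make that trade-off concrete, here is a toy sketch (synthetic data and invented signal names, not AQR's or anyone's actual process): a gradient-boosted model captures an interaction between two signals that a linear model cannot express, but its fitted rules are far harder to read than two coefficients.

```python
# Toy sketch of the interpretability trade-off: a linear model vs. a
# gradient-boosted ensemble on synthetic data with an interaction effect.
# The data-generating process and signal names are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 2))                        # e.g. a value signal and a momentum signal
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n)   # return driven by their interaction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
boosted = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# The linear model is fully interpretable (two coefficients) but misses the
# interaction entirely; the boosted model captures it at the cost of opacity.
print("linear  R^2:", round(r2_score(y_te, linear.predict(X_te)), 3))
print("boosted R^2:", round(r2_score(y_te, boosted.predict(X_te)), 3))
```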

Advanced AIs, like those that play StarCraft, can dominate human experts in controlled scenarios but collapse when faced with a minor surprise. This reveals a critical vulnerability: human investors can generate alpha by focusing on situations where unforeseen events or "thick tail" risks are likely, as these are the blind spots for purely algorithmic strategies.

Demanding interpretability from AI trading models is a fallacy because they operate at a superhuman level. An AI predicting a stock's price in one minute is processing data in a way no human can. Expecting a simple, human-like explanation for its decision is unreasonable, much like asking a chess engine to explain its moves in prose.