Cliff Asness argues that for machine learning to be truly additive, it must have a degree of opacity. If a human could fully intuit every step of the ML process, it would imply the discoveries could have been made with simpler methods. Surrendering the need for full explanation is necessary to harness its power.

Related Insights

Historically, investment tech focused on speed. Modern AI, like AlphaGo, offers something new: inhuman intelligence that reveals novel insights and strategies humans miss. For investors, this means moving beyond automation to using AI as a tool for generating genuine alpha through superior inference.

Attempting to interpret every learned circuit in a complex neural network is a futile effort. True understanding comes from describing the system's foundational elements: its architecture, learning rule, loss functions, and the data it was trained on. The emergent complexity is a result of this process.
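
To make that distinction concrete, here is a minimal, illustrative sketch in PyTorch (all sizes and data are hypothetical): the foundational elements a human can actually describe, the architecture, the loss function, the learning rule, and the training data, fit in a few lines, while the weights that emerge from training them are the part that resists circuit-level interpretation.

```python
# Illustrative sketch only: a model's describable "recipe" vs. its emergent weights.
import torch
import torch.nn as nn

# Architecture: a small multilayer perceptron (hypothetical sizes).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

# Loss function: standard cross-entropy for a two-class problem.
loss_fn = nn.CrossEntropyLoss()

# Learning rule: plain stochastic gradient descent.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Data: a random stand-in; in practice this is the real training set.
X = torch.randn(512, 32)
y = torch.randint(0, 2, (512,))

# Training loop: the entire generative process behind the learned weights.
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# The learned parameters are the emergent part; the recipe above is the part
# a human can actually describe.
print(sum(p.numel() for p in model.parameters()), "learned parameters")
```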

The ambition to fully reverse-engineer AI models into simple, understandable components is proving unrealistic, as their internal workings are messy and complex. The practical value of interpretability lies less in achieving hard guarantees and more in coarse-grained analysis, such as identifying when specific high-level capabilities are being used.
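
As an illustration of what that coarse-grained analysis can look like, the sketch below (scikit-learn, entirely synthetic data, not any particular lab's method) fits a simple linear probe on stand-in hidden activations to flag when a hypothetical high-level capability is being exercised, a weaker but more tractable question than tracing individual circuits.

```python
# Illustrative probe sketch on synthetic "activations"; labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden activations collected from a larger model: 2,000 examples,
# 128-dimensional. The "capability" label would in practice come from
# behavioural tests on the same inputs.
activations = rng.normal(size=(2000, 128))
capability_direction = rng.normal(size=128)
labels = (activations @ capability_direction
          + rng.normal(scale=0.5, size=2000)) > 0

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0
)

# The probe itself: a plain logistic regression on frozen activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# High held-out accuracy suggests the capability is linearly readable from this
# layer -- a coarse signal, not a mechanistic guarantee.
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```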

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.

Instead of opaque 'black box' algorithms, MDT uses decision trees that let its team see and understand the logic behind every trade. This transparency is crucial for validating the model's decisions and for spotting when a factor's effectiveness is decaying over time.
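
A minimal sketch of the transparency argument, using scikit-learn on synthetic data (the factor names and signal construction are hypothetical, not MDT's actual model): a shallow decision tree can be printed as explicit if/then rules, so every decision can be traced, and retraining on rolling windows reveals when a factor stops appearing in the splits.

```python
# Illustrative sketch: a readable tree over fake factor scores, not a real strategy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
factors = ["value", "momentum", "quality"]

# Fake cross-section of stocks: three factor scores and a buy/avoid label that
# mostly follows momentum and value (a stand-in for realized forward returns).
X = rng.normal(size=(1000, 3))
y = (0.7 * X[:, 1] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=1000)) > 0

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole model is readable: each path is an explicit rule, and refitting on
# rolling windows shows when a factor drops out of the splits (i.e. decays).
print(export_text(tree, feature_names=factors))
```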

John Jumper contends that science has always operated with partial understanding, citing early crystallography and Roman engineering. He suggests demanding perfect 'black box' clarity for AI is a peculiar and unrealistic standard not applied to other scientific tools.

A key challenge of adopting ML in investing is its lack of explainability. When a traditional value strategy underperforms, you can point to a valuation bubble. An ML model can't offer a similar narrative, making it extremely difficult to manage client relationships during drawdowns because the 'why' is missing.

For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.

Cliff Asness explains that integrating machine learning into investment processes involves a crucial trade-off. While AI models can identify complex, non-linear patterns that outperform traditional methods, their inner workings are often uninterpretable, forcing a departure from intuitively understood strategies.
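
The trade-off can be made concrete with a toy example, sketched below in scikit-learn on synthetic data (the signals and target are hypothetical): a boosted-tree model captures an interaction effect that a linear model cannot, but only the linear model's coefficients yield an intuitive narrative.

```python
# Illustrative sketch: non-linear pattern capture vs. narratability, on fake data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(4000, 2))

# Hypothetical target: returns depend on the *interaction* of two signals plus
# noise, so no single linear weight can describe the relationship.
y = X[:, 0] * X[:, 1] + rng.normal(scale=0.3, size=4000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
boosted = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# The non-linear model explains most of the out-of-sample variance; the linear
# one explains almost none -- yet only the linear one tells an intuitive story.
print(f"linear R^2:  {linear.score(X_test, y_test):.2f}")
print(f"boosted R^2: {boosted.score(X_test, y_test):.2f}")
```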

Demanding interpretability from AI trading models is a fallacy because they operate at a superhuman level. An AI predicting a stock's price in one minute is processing data in a way no human can. Expecting a simple, human-like explanation for its decision is unreasonable, much like asking a chess engine to explain its moves in prose.