Instead of opaque 'black box' algorithms, MDT uses decision trees that allow its team to see and understand the logic behind every trade. This transparency is crucial for validating the model's decisions and for identifying when a factor's effectiveness is decaying over time.
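To make the 'glass box' idea concrete, here is a minimal sketch using scikit-learn; the factor names, synthetic data, and tree depth are illustrative assumptions, not MDT's actual model. The point is that every split in a fitted tree can be printed and audited:

```python
# A minimal sketch of the 'glass box' idea: a shallow decision tree whose
# logic can be printed and read line by line. The features, synthetic data,
# and labels are illustrative, not MDT's actual factors or model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["book_to_price", "earnings_momentum", "company_age_yrs"]
X = rng.normal(size=(1000, 3))
# Toy label: 'buy' when value and momentum are both favorable.
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a neural net, every split is a human-readable question.
print(export_text(tree, feature_names=features))
```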
The 'company age' factor is not predictive on its own. MDT's decision tree model uses it to create context, asking different questions about young companies versus mature ones. For example, valuation proves to be a much more important factor for older, established businesses.
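A toy sketch of that contextual logic, with made-up thresholds and factor names (nothing here is MDT's actual rule set): age alone never triggers a trade, but it decides which question gets asked next.

```python
# Hypothetical contextual tree: 'company age' carries no signal by itself,
# but it routes each stock to a different follow-up question.
def score_stock(age_years: float, valuation_z: float, growth_z: float) -> float:
    if age_years >= 20:
        # Mature, established business: cheapness (valuation) dominates.
        return 1.0 if valuation_z < -0.5 else 0.0
    # Young company: the growth trajectory matters more than price.
    return 1.0 if growth_z > 0.5 else 0.0

print(score_stock(age_years=35, valuation_z=-1.2, growth_z=0.0))  # 1.0: cheap and mature
print(score_stock(age_years=5, valuation_z=-1.2, growth_z=0.0))   # 0.0: same cheapness, no buy
```

Note that the same valuation produces different outcomes depending on age, which is exactly what 'creating context' means here.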
The firm discovered a reversal effect in stocks that are down 70-80%. The strategy's premise was confirmed when its own traders instinctively wanted to override these trades due to negative headlines. This emotional bias, present even among professionals, is exactly the inefficiency the model exploits.
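A hedged sketch of what a screen for that effect might look like in pandas; the one-year lookback and the exact band boundaries are assumptions, since the source only gives the 70-80% figure.

```python
# Illustrative reversal screen: flag stocks trading 70-80% below their
# trailing high. Window and thresholds are assumptions for the sketch.
import pandas as pd

def reversal_candidates(prices: pd.DataFrame, lookback: int = 252) -> pd.Series:
    """prices: dates x tickers. Returns current drawdowns inside the band."""
    trailing_high = prices.rolling(lookback, min_periods=1).max()
    drawdown = prices / trailing_high - 1.0   # e.g. -0.75 means down 75%
    latest = drawdown.iloc[-1]
    return latest[(latest <= -0.70) & (latest >= -0.80)]
```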
To trust an agentic AI, users need to see its work, just as a manager would with a new intern. Design patterns like "stream of thought" (showing the AI reasoning) or "planning mode" (presenting an action plan before executing) make the AI's logic legible and give users a chance to intervene, building crucial trust.
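Here is a minimal, hypothetical sketch of the 'planning mode' pattern (the class and step format are invented for illustration): the agent surfaces its plan as a checkpoint, and nothing runs until the user signs off.

```python
# Hypothetical 'planning mode' agent: propose a plan, wait for approval,
# only then execute. The human checkpoint is the trust-building step.
from dataclasses import dataclass, field

@dataclass
class PlanningAgent:
    steps: list[str] = field(default_factory=list)

    def propose(self, task: str) -> list[str]:
        # In a real agent, an LLM would draft these steps from the task.
        self.steps = [f"Gather inputs for: {task}",
                      "Draft the output",
                      "Run checks before sending"]
        return self.steps

    def execute(self, approved: bool) -> str:
        if not approved:
            return "Plan rejected; nothing executed."  # user intervened
        return " -> ".join(f"done: {s}" for s in self.steps)

agent = PlanningAgent()
print("\n".join(agent.propose("summarize the Q3 filings")))
print(agent.execute(approved=True))
```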
WCM avoids generic AI use cases. Instead, they've built a "research partner" AI model specifically tuned to codify and diagnose their core concepts of "moat trajectory" and "culture." This allows them to amplify their unique edge by systematically flagging changes across a vast universe of data, rather than just automating simple tasks.
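As a rough illustration of the 'research partner' pattern, the sketch below scans a universe of documents and flags possible moat-trajectory changes for analyst review. The keyword scorer is a deliberately crude stand-in for WCM's proprietary model; every name, keyword, and threshold here is assumed.

```python
# Toy stand-in for a moat-trajectory flagger; the keyword lists are
# illustrative, not WCM's actual signals.
MOAT_WEAKENING = {"share loss", "price cuts", "new entrant", "churn"}
MOAT_STRENGTHENING = {"pricing power", "switching costs", "network effects"}

def moat_trajectory_score(text: str) -> int:
    """Positive = moat likely strengthening, negative = likely weakening."""
    t = text.lower()
    return (sum(kw in t for kw in MOAT_STRENGTHENING)
            - sum(kw in t for kw in MOAT_WEAKENING))

def flag_changes(filings: dict[str, str], threshold: int = 1) -> dict[str, int]:
    """Return only tickers whose score clears the alert threshold either way."""
    scores = {tkr: moat_trajectory_score(doc) for tkr, doc in filings.items()}
    return {tkr: s for tkr, s in scores.items() if abs(s) >= threshold}

filings = {"ACME": "Management cited pricing power and rising switching costs.",
           "BETA": "The quarter saw share loss and price cuts from a new entrant."}
print(flag_changes(filings))  # {'ACME': 2, 'BETA': -3}
```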
Rather than building one deep, complex decision tree that would rely on ever-smaller data subsets, MDT's model uses an ensemble method. It combines a 'forest' of many shallow trees, each asking only two to five questions, to maintain statistical robustness while capturing complexity.
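A sketch of the shallow-forest idea with synthetic data and illustrative parameters (MDT's actual tree counts and depths aren't public beyond the two-to-five-questions figure):

```python
# Shallow-forest sketch: many depth-capped trees averaged together, rather
# than one deep tree whose lower splits rest on tiny data subsets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=5000) > 0).astype(int)

# One deep tree would keep splitting until leaves hold only a handful of
# rows; capping depth keeps every leaf statistically meaningful, and the
# ensemble recovers the complexity a single shallow tree would miss.
forest = RandomForestClassifier(n_estimators=500, max_depth=4, random_state=0)
forest.fit(X, y)
print(min(est.get_depth() for est in forest.estimators_),
      max(est.get_depth() for est in forest.estimators_))
```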
As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
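A toy flavor of the weights-versus-behavior distinction, far simpler than real mechanistic interpretability work on neural networks, but it shows the contrast with black-box testing: instead of only probing inputs and outputs, we read the learned weights to see why the model predicts what it does.

```python
# Toy illustration only: real mechanistic interpretability analyzes neural
# network internals, but the weights-vs-behavior contrast is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
y = (X[:, 1] > 0).astype(int)   # ground-truth rule: feature 1 alone decides

model = LogisticRegression().fit(X, y)

# Black-box testing would probe inputs and outputs; inspecting the weights
# exposes the mechanism directly: only feature 1 carries real weight.
print(np.round(model.coef_, 2))
```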
The firm doesn't simply declare a factor obsolete. Its process begins by observing, within the transparent 'glass box' model, that a factor (like book-to-price) is driving fewer and fewer trades. That observation prompts a formal backtest to confirm that removing the factor won't harm performance.
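A hedged sketch of how that monitoring step could work: count how many splits in the fitted forest actually ask a question about a given factor, and watch the count across refits. The data, factor indices, and forest settings are illustrative.

```python
# Illustrative decay monitor: how often does a factor drive splits in the
# current forest? A count trending toward zero across refits flags the
# factor (say, book-to-price at index 0) for a formal removal backtest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def factor_usage(forest, factor_idx: int) -> int:
    """Count splits across all trees that ask a question about this factor."""
    return sum(int(np.sum(est.tree_.feature == factor_idx))
               for est in forest.estimators_)

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 5))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
forest = RandomForestClassifier(n_estimators=200, max_depth=3,
                                random_state=0).fit(X, y)

print(factor_usage(forest, factor_idx=0))
```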
GSB professors warn that professionals who merely use AI as a black box—passing queries and returning outputs—risk minimizing their own role. To remain valuable, leaders must understand the underlying models and assumptions to properly evaluate AI-generated solutions and maintain control of the decision-making process.
Unlike many AI tools that hide the model's reasoning, Spiral displays it by default. This intentional design choice frames the AI as a "writing partner," helping users understand its perspective, spot misunderstandings, and collaborate more effectively, which builds trust in the process.
MDT deliberately avoids competing on acquiring novel, expensive datasets (informational edge). Instead, they focus on their analytical edge: applying sophisticated machine learning tools to long-history, high-quality standard datasets like financials and prices to find differentiated insights.