Rather than building one deep, complex decision tree, whose later splits would rely on increasingly small data subsets, MDT's model uses an ensemble method. It combines a 'forest' of many shallow trees, each with only two to five questions, to maintain statistical robustness while still capturing complexity.
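A minimal pure-Python sketch of the idea (not MDT's actual system): bag many depth-1 "stump" trees, each trained on a bootstrap resample, and combine them by majority vote. All names and the toy data are illustrative.

```python
import random
from collections import Counter

def train_stump(rows, labels):
    """Fit a depth-1 tree: choose the (feature, threshold) split that
    minimizes misclassification count on this resample."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            l_pred = Counter(left).most_common(1)[0][0]
            r_pred = Counter(right).most_common(1)[0][0]
            errors = sum(y != l_pred for y in left) + sum(y != r_pred for y in right)
            if best is None or errors < best[0]:
                best = (errors, f, t, l_pred, r_pred)
    if best is None:  # degenerate resample: fall back to the majority class
        maj = Counter(labels).most_common(1)[0][0]
        return lambda row: maj
    _, f, t, l_pred, r_pred = best
    return lambda row: l_pred if row[f] <= t else r_pred

def train_forest(rows, labels, n_trees=25, seed=0):
    """Bag many shallow trees: each sees a bootstrap resample of the data,
    so no single tree ever conditions on a tiny, overfit subset."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(rows)) for _ in rows]
        forest.append(train_stump([rows[i] for i in idx], [labels[i] for i in idx]))
    return forest

def predict(forest, row):
    """Majority vote across the ensemble."""
    return Counter(tree(row) for tree in forest).most_common(1)[0][0]
```

Each stump alone is weak, but the vote across many resamples is robust; real forests add randomized feature subsets and deeper (two-to-five-question) trees.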

Related Insights

The 'company age' factor is not predictive on its own. MDT's decision tree model uses it to create context, asking different questions about young companies versus mature ones. For example, valuation proves to be a much more important factor for older, established businesses.
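A toy sketch of a contextual split (thresholds and factor names invented for illustration): age itself carries no signal, it only routes the decision to a different question.

```python
def score_company(age_years, revenue_growth, valuation_multiple):
    """Two-level tree where 'company age' is never a signal itself;
    it only decides which factor to interrogate next.
    All thresholds are hypothetical."""
    if age_years < 10:            # young company: ask about growth
        return "buy" if revenue_growth > 0.30 else "pass"
    else:                         # mature company: ask about valuation
        return "buy" if valuation_multiple < 15 else "pass"
```

Note that two companies with identical valuations can get opposite answers purely because age changed which branch, and therefore which question, applied.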

Due to signal loss from cookie deprecation, no single model, whether multi-touch attribution (MTA) or marketing mix modeling (MMM), is sufficient. The new gold standard is using all available algorithms together in a machine learning framework, allowing them to influence each other for a more accurate ROI picture.
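The simplest version of letting the models influence each other is a calibration step, sketched here with invented numbers: rescale granular MTA channel credits so they respect the aggregate incrementality MMM measured.

```python
def triangulate(mta_credits, mmm_total):
    """Rescale channel-level MTA credits to sum to the MMM-measured total.
    A one-shot calibration sketch; real frameworks iterate, letting each
    model constrain the other's estimates."""
    s = sum(mta_credits.values())
    return {ch: credit * mmm_total / s for ch, credit in mta_credits.items()}
```

For example, if MTA credits search with 60 and social with 40 but MMM says only 80 units of revenue were truly incremental, the calibrated credits become 48 and 32.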

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
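A toy illustration of what weight-level explanation means in the easiest possible case, a linear scorer, where the prediction decomposes exactly into per-feature contributions. Mechanistic interpretability aims for this kind of decomposition inside deep networks, where it is far harder; names and values here are invented.

```python
def explain_linear(weights, features, names):
    """Each feature's contribution is weight * value, so the score
    decomposes exactly; rank contributions by magnitude to show a
    regulator *why* the model decided as it did."""
    contribs = {n: w * x for n, w, x in zip(names, weights, features)}
    score = sum(contribs.values())
    return score, sorted(contribs.items(), key=lambda kv: -abs(kv[1]))
```

Black-box testing could only tell you the score was 5.0; reading the weights tells you income drove it.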

Instead of opaque 'black box' algorithms, MDT uses decision trees that allow their team to see and understand the logic behind every trade. This transparency is crucial for validating the model's decisions and identifying when a factor's effectiveness is decaying over time.
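A minimal sketch of that transparency (not MDT's implementation): represent a small tree as nested dicts and record every test taken on the way to a decision, so a human can audit the exact logic behind each output.

```python
def trace(tree, row, path=()):
    """Walk a small decision tree, returning both the decision and a
    human-readable list of every test that was applied."""
    if not isinstance(tree, dict):          # reached a leaf
        return tree, list(path)
    feat, thresh = tree["feature"], tree["threshold"]
    branch = "left" if row[feat] <= thresh else "right"
    step = f"{feat}={row[feat]} {'<=' if branch == 'left' else '>'} {thresh}"
    return trace(tree[branch], row, path + (step,))
```

Because every decision comes with its path, a factor whose branch stops paying off is visible directly in the audit trail, which is how decay gets spotted.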

The effectiveness of an AI system isn't solely dependent on the model's sophistication. It's a collaboration between high-quality training data, the model itself, and the contextual understanding of how to apply both to solve a real-world problem. Neglecting data or context leads to poor outcomes.

To improve the quality and accuracy of an AI agent's output, spawn multiple sub-agents with competing or adversarial roles. For example, a code review agent finds bugs, while several "auditor" agents check for false positives, resulting in a more reliable final analysis.
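A skeletal sketch of the adjudication step, with plain callables standing in for the LLM sub-agents: a finding survives only if enough auditor agents confirm it.

```python
def adjudicate(findings, auditors, quorum=2):
    """Keep only findings that at least `quorum` auditors confirm.
    Each auditor is a callable returning True (real issue) or False
    (false positive); in practice each would be a separate agent call."""
    kept = []
    for finding in findings:
        votes = sum(1 for audit in auditors if audit(finding))
        if votes >= quorum:
            kept.append(finding)
    return kept
```

The quorum threshold is the tuning knob: raising it trades recall for precision in the final report.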

The most fundamental challenge in AI today is not scale or architecture, but the fact that models generalize dramatically worse than humans. Solving this sample efficiency and robustness problem is the true key to unlocking the next level of AI capabilities and real-world impact.

Fine-tuning an AI model is most effective when you use high-signal data. The best source for this is the set of difficult examples where your system consistently fails. The processes of error analysis and evaluation naturally curate this valuable dataset, making fine-tuning a logical and powerful next step after prompt engineering.
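A minimal sketch of that curation loop, with a generic callable standing in for the system under evaluation: run the eval set through the current model and keep only the failures as fine-tuning pairs. Field names are illustrative.

```python
def curate_finetune_set(examples, model):
    """Keep the eval cases the current system gets wrong; these
    high-signal failures become the fine-tuning dataset.
    `model` is any callable mapping input -> prediction."""
    hard = []
    for ex in examples:
        if model(ex["input"]) != ex["expected"]:
            hard.append({"prompt": ex["input"], "completion": ex["expected"]})
    return hard
```

The same harness that measured accuracy thus doubles as the data pipeline, which is why fine-tuning slots in naturally after prompt engineering and evaluation.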

MDT deliberately avoids competing on acquiring novel, expensive datasets (informational edge). Instead, they focus on their analytical edge: applying sophisticated machine learning tools to long-history, high-quality standard datasets like financials and prices to find differentiated insights.

Instead of offering a model selector, creating a proprietary, branded model allows a company to chain different specialized models for various sub-tasks (e.g., search, generation). This not only improves overall performance but also provides business independence from the pricing and launch cycles of a single frontier model lab.
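A structural sketch of the chaining pattern, with plain callables as stand-ins for the specialized models: the product exposes one branded interface, and swapping a vendor behind either step changes only a callable, not the product.

```python
def branded_model(query, search_model, generation_model):
    """One product-facing entry point chaining two specialized models:
    a retrieval step narrows context, then a generator writes the answer.
    Both arguments are stand-ins for whatever models the company routes to."""
    documents = search_model(query)
    return generation_model(query, documents)
```

Because routing lives inside the company's own interface, a frontier lab's price change or model launch becomes an internal swap rather than a user-visible migration.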