Even in hyper-quantitative fields, relying solely on logical models is a losing strategy. Stanford professor Sandy Pentland notes that traders who observe the behavior of other humans consistently outperform those who rely on models alone, because that observation provides context on edge cases and tail risks that equations by themselves cannot capture.
Cliff Asness argues that quant strategies like value investing persist through all technological eras because their true edge is arbitraging consistent human behaviors like over-extrapolation. As long as people get swept up in narratives and misprice assets, the quantitative edge will remain.
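To make the mechanism concrete, here is a minimal sketch using entirely synthetic data: if over-extrapolation systematically overprices glamour stocks, a simple cross-sectional sort on a value signal (book-to-price here) should earn a long-cheap, short-expensive spread. Every number below is an invented assumption for illustration, not AQR's actual methodology.

```python
# A minimal sketch of a cross-sectional value signal on synthetic data.
# Premise: if investors over-extrapolate narratives, expensive stocks
# (low book-to-price) should underperform cheap ones on average.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500  # hypothetical universe of 500 stocks

stocks = pd.DataFrame({
    "book_to_price": rng.lognormal(mean=-0.5, sigma=0.6, size=n),
})
# Assume (purely for illustration) a small positive link between
# cheapness and next-period return, buried in noise.
stocks["next_return"] = (0.02 * np.log(stocks["book_to_price"])
                         + rng.normal(0, 0.10, n))

# Rank on the signal; long the cheapest decile, short the most expensive.
deciles = pd.qcut(stocks["book_to_price"], 10, labels=False)
long_short = (stocks.loc[deciles == 9, "next_return"].mean()
              - stocks.loc[deciles == 0, "next_return"].mean())
print(f"hypothetical long-short value spread: {long_short:.2%}")
```

The edge here is behavioral, not technological, which is why it can persist even as the tooling changes.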
In fields like finance, communities with strong internal communication and vested interests make better long-term decisions than purely quantitative models. The group's "shared wisdom" provides a broader, more contextual view of risks and opportunities that myopic mathematical approaches often miss.
Certain individuals have a proven, high success rate in their domain. Rather than relying solely on your own intuition or A/B testing, treat these people as APIs. Query them for feedback on your ideas to get a high-signal assessment of your blind spots and chances of success.
Professor Sandy Pentland warns that AI systems often fail because they incorrectly model humans as logical individuals. In reality, 95% of human behavior is driven by "social foraging"—learning from cultural cues and others' actions. Systems ignoring this human context are inherently brittle.
As quantitative models and AI dominate traditional strategies, the only remaining source of alpha is in "weird" situations. These are unique, non-replicable events, like the Elon Musk-Twitter saga, that lack historical parallels for machines to model. Investors must shift from finding undervalued assets to identifying structurally strange opportunities where human judgment has an edge.
Even when AI performs tasks like chess at a superhuman level, humans still gravitate toward watching other imperfect humans compete. This suggests our engagement stems from fallibility, surprise, and the shared experience of making mistakes, qualities that perfectly optimized AI lacks, which limits how far it can culturally replace human performance.
Pentland cites Nobel laureate Danny Kahneman's estimate that 95% of human behavior is learned by observing others, and argues that AI systems should be designed to complement this "social foraging" nature: AI should act as an advisor that provides context, rather than assume its users are purely logical decision-makers.
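As a toy illustration of what "designing for social foraging" could mean, the sketch below blends an agent's private estimate with the observed behavior of peers. The 95% weight echoes the figure cited above but is otherwise an arbitrary modeling assumption, and the function is hypothetical, not drawn from any published system.

```python
# A toy model of "social foraging": an agent's choice probability blends
# a private estimate with observed peer behavior. All numbers here are
# illustrative assumptions, not parameters from Pentland's research.
import numpy as np

def choice_probability(private_value: float,
                       peer_adoption_rate: float,
                       social_weight: float = 0.95) -> float:
    """Blend individual judgment with social observation.

    social_weight=0.95 mirrors the claim that most behavior is socially
    learned; a purely "logical" model of the user would set it to 0.
    """
    blended = ((1 - social_weight) * private_value
               + social_weight * peer_adoption_rate)
    return float(np.clip(blended, 0.0, 1.0))

# An advisor-style AI would surface the social context explicitly rather
# than override it: despite a weak private signal (0.2), the agent leans
# toward the option most peers chose (0.8).
print(choice_probability(private_value=0.2, peer_adoption_rate=0.8))  # 0.77
```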
Cliff Asness explains that integrating machine learning into investment processes involves a crucial trade-off: AI models can identify complex, non-linear patterns that outperform traditional methods, but their inner workings are often uninterpretable, forcing a departure from intuitively understood strategies.
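A small sketch can show the trade-off Asness describes. Below, synthetic "returns" depend on an interaction term: a linear model, whose three coefficients are fully interpretable, misses it, while a gradient-boosted model captures it without offering a comparable human-readable story. The data-generating process and the choice of scikit-learn are illustrative assumptions.

```python
# Sketch of the interpretability trade-off on synthetic data with a
# non-linear signal. The data-generating process is invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
# Returns depend on an interaction no linear factor model can express.
y = 0.5 * X[:, 0] * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.5, 2000)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

linear = LinearRegression().fit(X_train, y_train)
boosted = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# The linear model is transparent (three coefficients) but misses the
# interaction; the boosted model fits it but is effectively a black box.
print("linear R^2: ", round(linear.score(X_test, y_test), 3))
print("boosted R^2:", round(boosted.score(X_test, y_test), 3))
print("linear coefficients:", np.round(linear.coef_, 3))
```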
Advanced AIs, like those that play StarCraft, can dominate human experts in controlled scenarios but collapse when faced with a minor surprise. This reveals a critical vulnerability: human investors can generate alpha by focusing on situations where unforeseen events or "thick tail" risks are likely, since these are the blind spots of purely algorithmic strategies.
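The same failure mode can be demonstrated in a few lines: a model fit on one regime scores well on more data from that regime and degrades sharply the moment the inputs shift outside it. Everything below is synthetic and illustrative, not a claim about any real trading system.

```python
# Sketch of algorithmic brittleness under distribution shift: a model
# trained on one regime collapses when inputs move outside it.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

def regime(n, shift=0.0):
    """Generate synthetic (X, y) data, optionally shifted to a new regime."""
    X = rng.normal(loc=shift, size=(n, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
    return X, y

X_train, y_train = regime(5000)            # the "historical" regime
X_same, y_same = regime(1000)              # more of the same
X_weird, y_weird = regime(1000, shift=4)   # a minor-looking surprise

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
# In-regime accuracy is excellent; on the shifted data the tree ensemble
# can only extrapolate flatly, so the score typically collapses, often
# below zero.
print("in-regime R^2:", round(model.score(X_same, y_same), 3))
print("shifted R^2:  ", round(model.score(X_weird, y_weird), 3))
```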
Amateurs playing basketball compete on a horizontal plane, while NBA pros add a vertical dimension (dunking). Similarly, individual investors cannot beat quantitative funds at their game of speed, data, and leverage. The only path to winning is to change the game's dimensions entirely by focusing on "weird," qualitative factors that algorithms are not built to understand.