
While human analysts tend to think linearly (e.g., higher oil -> inflation -> higher rates), LLMs process repercussions simultaneously across many dimensions (e.g., the impact on ethanol, drillers, producers, and the yield curve). This allows for a much faster and more comprehensive understanding of market events.
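As a toy illustration of that fan-out, one can model the event space as a directed graph and walk every downstream effect at once. The edges below are illustrative assumptions, not a real macro model:

```python
# Toy shock-propagation graph: one event ("higher oil") fans out to many
# second-order effects at once, rather than along a single linear chain.
# The edge list is an invented example, not real economic structure.
from collections import deque

impacts = {
    "higher oil": ["inflation", "ethanol demand", "driller capex", "producer margins"],
    "inflation": ["higher rates"],
    "higher rates": ["yield curve shift"],
}

def propagate(event):
    """Breadth-first walk over all downstream effects of one event."""
    seen, queue = [], deque([event])
    while queue:
        node = queue.popleft()
        for effect in impacts.get(node, []):
            if effect not in seen:
                seen.append(effect)
                queue.append(effect)
    return seen

print(propagate("higher oil"))
```

Breadth-first order means first-order effects surface before second-order ones, which mirrors the linear analyst path as a special case of the wider fan-out.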

Related Insights

LLMs predict the next token in a sequence. The brain's cortex may function as a general prediction engine capable of "omnidirectional inference"—predicting any missing information from any available subset of inputs, not just what comes next. This offers a more flexible and powerful form of reasoning.
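A minimal sketch of what "predicting any missing information from any available subset" could mean, using a toy joint distribution over three binary variables. All probabilities are invented for illustration:

```python
# Toy "omnidirectional inference": given a joint distribution over three
# binary variables, infer any missing variable conditioned on any observed
# subset -- not just "the next one in a sequence".
# joint[(a, b, c)] = P(A=a, B=b, C=c); values are illustrative.
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05, (0, 1, 0): 0.10, (0, 1, 1): 0.15,
    (1, 0, 0): 0.05, (1, 0, 1): 0.15, (1, 1, 0): 0.10, (1, 1, 1): 0.20,
}
VARS = ("A", "B", "C")

def infer(target, observed):
    """P(target | observed) for any target variable and any observed subset."""
    idx = VARS.index(target)
    dist = {0: 0.0, 1: 0.0}
    for assignment, p in joint.items():
        if all(assignment[VARS.index(v)] == val for v, val in observed.items()):
            dist[assignment[idx]] += p
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

# Predict "forward" (C from A) or "backward" (A from B and C)
# with the same machinery.
print(infer("C", {"A": 1}))
print(infer("A", {"C": 1, "B": 0}))
```

A next-token predictor only ever computes one of these conditionals (the forward one); the point of the contrast is that the same joint supports every direction of query.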

The complexity in LLMs isn't intelligence emerging in silicon; it reflects our own. These models are deep because they encode the vast, causally powerful structure of human language and culture. We are looking at a high-resolution imprint of our own collective mind.

The perception of a 'critically thinking' AI doesn't come from a single, powerful model. It's the result of layering multiple LLMs, each with a very specific, targeted task: one for orchestrating, one for executing actions, and another for responding. This specificity yields far better results than a generalist approach.
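A sketch of that layered setup, with `call_llm` standing in for any chat-completion API; the prompts, routing labels, and stub replies are purely illustrative assumptions:

```python
# Sketch of a multi-level LLM pipeline: each stage is a separate, narrowly
# prompted model call. `call_llm` is a stand-in for a real provider API;
# here it fakes replies so the sketch runs end to end.
def call_llm(system_prompt: str, user_input: str) -> str:
    if "classify" in system_prompt:
        return "lookup"                       # orchestrator picks a tool
    if "execute" in system_prompt:
        return "AAPL closed at $N (stub)"     # actioner runs the task
    return f"Summary for the user: {user_input}"  # responder phrases it

def orchestrate(question: str) -> str:
    # Level 1: a narrow classifier decides *what kind* of work is needed.
    task = call_llm("You classify requests into tools.", question)
    # Level 2: a task-specific model executes that one kind of work.
    raw = call_llm(f"You execute the '{task}' tool.", question)
    # Level 3: a separate model turns the raw result into a user-facing answer.
    return call_llm("You respond to the user.", raw)

print(orchestrate("What did Apple close at today?"))
```

The design point is that each system prompt constrains one model to one job, so no single call has to classify, act, and phrase the answer at once.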

The significant leap in LLMs isn't just better text generation, but their ability to autonomously execute complex, sequential tasks. This 'agentic behavior' allows them to handle multi-step processes like scientific validation workflows, a capability earlier models lacked, moving them beyond single-command execution.
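The difference from single-command execution can be sketched as a loop in which a planner chooses the next step based on the history so far. `plan`, `execute`, and the three-step workflow below are illustrative stand-ins, not any particular agent framework:

```python
# Minimal agent loop: the model repeatedly picks an action, observes the
# result, and feeds it back -- instead of answering in one shot.
def plan(history):
    """Stub planner: walk a fixed three-step workflow, then stop.
    In a real agent this call would be an LLM choosing the next step."""
    steps = ["fetch_data", "run_check", "write_report"]
    done = [step for step, _ in history]
    for step in steps:
        if step not in done:
            return step
    return "finish"

def execute(action):
    return f"{action}: ok"   # stand-in for a real tool call

def run_agent():
    history = []
    while True:
        action = plan(history)
        if action == "finish":
            break
        history.append((action, execute(action)))
    return history

for step, obs in run_agent():
    print(step, "->", obs)
```

The loop, not any single call, is what carries the multi-step workflow: each observation changes what the planner does next.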

While both humans and LLMs perform Bayesian updating, humans possess a critical additional capability: causal simulation. When a pen is thrown, a human simulates its trajectory to dodge it—a causal intervention. LLMs are stuck at the level of correlation and cannot perform these essential simulations.
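What "causal simulation" might look like in code, as a toy: stepping a simple kinematic model forward to predict where the thrown pen lands, rather than pattern-matching past throws. Flat ground, no air resistance; everything except the gravitational constant is an illustrative assumption:

```python
# A "causal simulation" in the insight's sense: run a physical model forward
# to predict an outcome, instead of correlating with past observations.
G = 9.81  # gravitational acceleration, m/s^2

def simulate_throw(x0, y0, vx, vy, dt=0.01):
    """Step the pen's flight forward until it hits the ground (y <= 0)."""
    x, y, t = x0, y0, 0.0
    while y > 0:
        x += vx * dt        # horizontal motion: constant velocity
        vy -= G * dt        # vertical motion: gravity decelerates, then pulls down
        y += vy * dt
        t += dt
    return x, t             # landing position and flight time

landing_x, flight_t = simulate_throw(x0=0.0, y0=1.5, vx=4.0, vy=2.0)
print(f"pen lands near x = {landing_x:.2f} m after {flight_t:.2f} s")
```

Dodging corresponds to querying the model at intermediate times, i.e. intervening on a simulated world rather than retrieving a remembered correlation.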

While summarization is useful, AI's unique power is creating a massive grid comparing perspectives from management, sell-side analysts, and expert calls on key business drivers. This helps investors quickly identify the most critical debates for deeper research.
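One minimal way to represent such a grid and surface the debates; all drivers, sources, and stances below are invented placeholders, not real data:

```python
# Perspective grid: rows are key business drivers, columns are sources,
# cells are each source's stance. Every value here is an invented example.
grid = {
    "pricing power":  {"management": "bullish", "sell-side": "bullish", "expert calls": "bearish"},
    "China demand":   {"management": "neutral", "sell-side": "neutral", "expert calls": "neutral"},
    "margin outlook": {"management": "bullish", "sell-side": "bearish", "expert calls": "bearish"},
}

def debates(grid):
    """Drivers where sources disagree -- the spots worth deeper research."""
    return [driver for driver, views in grid.items() if len(set(views.values())) > 1]

print(debates(grid))  # drivers with at least two different stances
```

Unanimous rows drop out automatically, so the investor's attention lands only where management, the sell side, and experts actually diverge.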

While AI can easily generate checklists and templates, its transformative potential comes from its reasoning capabilities. It can parse decades of industry data to suggest a course of action and, more importantly, articulate the arguments and counterarguments, educating the user on the second-order consequences of their decisions.

We can now prove that LLMs are not just correlating tokens but are developing sophisticated internal world models. Techniques like sparse autoencoders untangle the network's dense activations, revealing distinct, manipulable concepts like "Golden Gate Bridge." This conclusively demonstrates a deeper, conceptual understanding within the models.
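The shape of the sparse-autoencoder idea can be sketched in a few lines: a dense activation vector is projected into a wider feature space where most entries are zero. The weights below are hand-picked to make the point; real SAEs learn them from millions of activations:

```python
# Minimal sparse-autoencoder encoder (pure Python, toy sizes): a dense
# 2-dim activation maps to a wider, mostly-zero feature vector, where each
# nonzero entry is meant to align with one human-interpretable concept.
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# 2-dim "residual stream" activation -> 4 candidate feature directions.
W_enc = [
    [ 1.0,  1.0],   # feature 0: fires on direction (1, 1)
    [ 1.0, -1.0],   # feature 1: fires on direction (1, -1)
    [-1.0,  1.0],   # feature 2
    [-1.0, -1.0],   # feature 3
]

activation = [0.9, 0.8]                       # dense, hard to read directly
features = relu(matvec(W_enc, activation))    # sparse: most entries are zero
print(features)
```

ReLU plus an overcomplete basis is what makes the code sparse: only the few features whose directions match the activation fire, and those are the candidates for concepts like "Golden Gate Bridge".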

Hunt reveals their initial, hand-built models were like a small net that missed most signals. The probabilistic approach of modern LLMs allowed them to build a vastly more effective system, exceeding their 5-6x improvement estimate by orders of magnitude.

In global macro, theses often rest on small samples (e.g., only a handful of historical recessions). AI expands the effective sample by identifying fundamentally similar crises across different countries and eras, or by modeling the underlying economic logic deeply enough that a large sample becomes less necessary for conviction.
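A toy version of the cross-country matching step: encode each crisis as a feature vector and rank the closest analogs by distance. All episode names and numbers are invented placeholders to show the mechanics, not real macro data:

```python
# Analog matching across eras: represent each crisis as a feature vector
# (e.g., debt/GDP, fixed-currency flag, banking-stress score) and rank
# the most similar historical episodes by Euclidean distance.
import math

crises = {
    "US 2008":        [0.9, 0.0, 0.95],
    "Sweden 1992":    [0.7, 1.0, 0.90],
    "Argentina 2001": [0.5, 1.0, 0.60],
    "Japan 1997":     [0.8, 0.0, 0.85],
}

def nearest(target, k=2):
    """The k most similar crises to `target`, nearest first."""
    t = crises[target]
    others = [(name, math.dist(t, v)) for name, v in crises.items() if name != target]
    return sorted(others, key=lambda pair: pair[1])[:k]

print(nearest("US 2008"))  # closest historical analogs by feature distance
```

In practice the features would be learned or far richer, but the principle is the same: similarity in the feature space, not shared country or decade, defines the sample.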