AI can quickly surface data in financial reports, but it can't replicate an expert's ability to spot crucial connections and second-order effects. This lulls investors into a false sense of security: they come to rely on a tool that supplies information without the judgment to interpret it correctly.

Related Insights

Using AI to generate content without adding human context simply transfers the intellectual effort to the recipient. This creates rework and confusion, can damage professional relationships, and helps explain the low ROI seen in many AI initiatives.

As platforms like AlphaSense automate the grunt work of research, the advantage is no longer in finding information. The new "alpha" for investors comes from asking better, more creative questions, identifying cross-industry trends, and being more adept at prompting the AI to uncover non-obvious connections.

Ken Griffin is skeptical of AI's role in long-term investing. He argues that because AI models are trained on historical data, they excel at static problems, ones whose underlying rules don't change. Investing, however, requires predicting a future that may not resemble the past, a dynamic, forward-looking task where these models inherently struggle.

By replacing the foundational, detail-oriented work of junior analysts, AI prevents them from gaining the hands-on experience needed to build sophisticated mental models. This will lead to a future shortage of senior leaders with the deep judgment that only comes from being "in the weeds."

AI models latch onto the most statistically efficient correlation in the data, even when it is logically flawed. One system learned to associate rulers in medical images with cancer, rather than the lesions themselves, because doctors often photograph a ruler alongside suspicious spots to measure them. This highlights the profound risk of deploying opaque AI systems in critical fields.
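A minimal sketch of this failure mode on synthetic data (the feature names, the 95% co-occurrence rate, and the noise levels are illustrative assumptions, not details of the actual medical system): a classifier learns to lean on a spurious "ruler present" flag that tracks the label in training, then collapses when that shortcut disappears at deployment.

```python
# Toy demonstration of shortcut learning: assumed, illustrative numbers only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)                  # 1 = malignant, 0 = benign (toy labels)

# Weak but causal signal: lesion irregularity, very noisy.
lesion = y + rng.normal(0.0, 1.5, n)
# Strong spurious signal: a ruler co-occurs with malignancy 95% of the time
# in training, because doctors measure suspicious spots -- correlation, not cause.
ruler = np.where(rng.random(n) < 0.95, y, 1 - y)

X_train = np.column_stack([lesion, ruler])
clf = LogisticRegression().fit(X_train, y)
print("learned weights [lesion, ruler]:", clf.coef_[0])  # ruler weight dominates
print("training accuracy:", clf.score(X_train, y))       # looks excellent

# Deployment: no rulers in real-world images, so the shortcut vanishes and
# accuracy collapses toward what the weak causal feature alone supports.
X_deploy = np.column_stack([lesion, np.zeros(n)])
print("deployment accuracy:", clf.score(X_deploy, y))
```

The model is rewarded for the easiest correlation available, and nothing in the training metrics reveals that the correlation is meaningless outside the lab.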

While AI can accelerate tasks like writing, the real learning happens during the creative process itself. By outsourcing the 'doing' to AI, we risk losing the ability to think critically and synthesize information. Early research suggests this offloading is physically remapping our brains, eroding our ability to think on our feet.

Unlike deterministic SaaS software, which behaves the same way every run, AI is probabilistic and doesn't work perfectly out of the box. Achieving 'human-grade' performance (e.g., 99.9% reliability) requires continuous tuning and expert guidance, countering the hype that AI is an immediate, hands-off solution.
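A back-of-the-envelope illustration of why the 99.9% bar matters (the per-step reliabilities below are assumed for the arithmetic, not measured benchmarks): a probabilistic system's per-step success rate compounds multiplicatively across a multi-step workflow, so even 99% per step falls well short of human-grade end-to-end reliability.

```python
# Illustrative arithmetic only: end-to-end success of a pipeline of
# independent steps is p ** steps, so per-step reliability must be
# pushed very close to 1.0 before long workflows become dependable.
for p in (0.90, 0.99, 0.999):
    for steps in (1, 5, 20):
        print(f"per-step {p:.3f} x {steps:2d} steps -> end-to-end {p**steps:.3f}")
```

At 99% per step, a 20-step workflow succeeds end to end only about 82% of the time; at 99.9%, about 98%, which is why that last fraction of a percent demands so much ongoing tuning.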

GSB professors warn that professionals who merely treat AI as a black box, feeding in queries and passing along outputs, risk minimizing their own role. To remain valuable, leaders must understand the underlying models and assumptions to properly evaluate AI-generated solutions and maintain control of the decision-making process.

Advanced AI tools like "deep research" models can produce vast amounts of information, like 30-page reports, in minutes. This creates a new productivity paradox: the AI's output capacity far exceeds a human's finite ability to verify sources, apply critical thought, and transform the raw output into authentic, usable insights.

The most significant recent AI advance is models' ability to use chain-of-thought reasoning, not just retrieve data. However, most business users are unaware of this "deep research" capability and continue using AI as a simple search tool, missing its transformative potential for complex problem-solving.