While AI can effectively replicate an executive's communication style or past decisions, it falls short in capturing their capacity for continuous learning and adaptation. A leader’s judgment evolves with new context, a dynamic process that current AI models struggle to keep pace with.

Related Insights

By replacing the foundational, detail-oriented work of junior analysts, AI prevents them from gaining the hands-on experience needed to build sophisticated mental models. This will lead to a future shortage of senior leaders with the deep judgment that only comes from being "in the weeds."

CMO Laura Kneebush argues that trying to "get good at AI" is futile because it evolves too quickly. Instead, leaders should focus on building organizations that are "good in a world that's going to constantly change," treating AI as one part of a continuous learning culture.

Treat advanced AI systems not as software with binary outcomes, but as new employees, each with a unique persona. They can offer diverse, non-obvious insights and a different "chain of thought," sometimes catching issues even human experts miss and providing complementary perspectives.

Off-the-shelf AI models can only go so far. The true bottleneck for enterprise adoption is "digitizing judgment"—capturing the unique, context-specific expertise of employees within that company. A document's meaning can change entirely from one company to another, requiring internal labeling.

Pega's CTO warns leaders not to confuse managing AI with managing people. AI is software that is configured, coded, and tested. People require inspiration, development, and leadership. Treating AI like a human team member is a fundamental error that leads to poor management of both technology and people.

AI is commoditizing knowledge by making vast amounts of data accessible. Therefore, the leaders who thrive will not be those with the most data, but those with the most judgment. The key differentiator will be the uniquely human ability to apply wisdom, context, and insight to AI-generated outputs to make effective decisions.

GSB professors warn that professionals who merely use AI as a black box—passing queries and returning outputs—risk minimizing their own role. To remain valuable, leaders must understand the underlying models and assumptions to properly evaluate AI-generated solutions and maintain control of the decision-making process.

AI models excel at specific tasks (like evals) because they are trained exhaustively on narrow datasets, akin to a student practicing 10,000 hours for a coding competition. While they become experts in that domain, they fail to develop the broader judgment and generalization skills needed for real-world success.

The central challenge for current AI is not merely sample efficiency but a more profound failure to generalize. Models generalize "dramatically worse than people," which is the root cause of their brittleness, inability to learn from nuanced instruction, and unreliability compared to human intelligence. Solving this is the key to the next paradigm.

A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves on a job over time, an LLM is stateless. It doesn't truly learn from interactions; it's the same static model for every user, which is a major barrier to AGI.
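The statelessness described above can be made concrete with a small sketch. This is a hypothetical toy function, not a real LLM API; it simply illustrates that because the model is a pure function of its input, any appearance of "memory" within a conversation comes from the caller resending the full transcript on every turn.

```python
# Illustrative toy: an LLM endpoint behaves like a pure function of its
# input transcript. It retains nothing between calls, so the caller must
# resend the entire history each turn to simulate continuity.

def stateless_model(history: list[str]) -> str:
    """Stand-in for an LLM call: the output depends only on the input
    transcript, never on any previous invocation."""
    return f"reply to {len(history)} message(s)"

# Turn 1: the model sees only the first message.
history = ["My name is Ada."]
history.append(stateless_model(history))

# Turn 2: the model has no memory of turn 1. The caller appends the new
# message and resends everything; drop the history and the model "forgets"
# the user entirely.
history.append("What is my name?")
history.append(stateless_model(history))

# Identical inputs always yield identical behavior: the weights are
# frozen, so nothing about any individual user persists in the model.
assert stateless_model(["hi"]) == stateless_model(["hi"])
```

This is the sense in which the model is "static": unlike an employee who accumulates experience on the job, every user interacts with the same frozen weights, and all per-user context lives outside the model.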