AI accelerates data retrieval, but it creates a dangerous knowledge gap. Junior employees can find facts (e.g., in a financial statement) without the experience-based judgment to understand their deeper connections and second-order consequences for the business.
Using generative AI to produce work bypasses the reflection and effort required to build strong knowledge networks. This outsourcing of thinking leads to poor retention and a diminished ability to evaluate the quality of AI-generated output, echoing historical findings on how calculator use affected math skills.
Previously, data analysis required deep proficiency in tools like Excel. Now, AI platforms handle the technical manipulation, making the ability to ask insightful business questions—not technical skill—the most valuable asset for generating insights.
A key concern is that AI will automate the tasks done by entry-level workers, reducing hiring for these roles. This poses a long-term strategic risk: companies may fail to develop a pipeline of future managers, who traditionally build foundational skills early in their careers.
Although new graduates are AI-native, they often lack the business experience and strategic context to manage AI tools effectively. Companies will instead prioritize senior leaders with high AI literacy who can achieve massive productivity gains, creating a challenging job market for recent graduates and a leaner organizational structure.
By replacing the foundational, detail-oriented work of junior analysts, AI prevents them from gaining the hands-on experience needed to build sophisticated mental models. This will lead to a future shortage of senior leaders with the deep judgment that only comes from being "in the weeds."
AI can quickly find data in financial reports but cannot replicate an expert's ability to see crucial connections and second-order effects. This lulls investors into a false sense of security: they come to rely on a tool that provides information without the wisdom to interpret it correctly.
While cheating is a concern, a more insidious danger is students using AI to bypass deep cognitive engagement. They can produce correct answers without retaining knowledge, creating a cumulative learning deficit that is difficult to detect and remedy.
GSB professors warn that professionals who treat AI as a black box, simply feeding in queries and passing along the outputs, risk reducing their own role to a conduit. To remain valuable, leaders must understand the underlying models and assumptions well enough to evaluate AI-generated solutions and keep control of the decision-making process.
AI models excel at narrow tasks (such as benchmark evals) because they are trained exhaustively on those domains, akin to a student practicing 10,000 hours for a coding competition. They become experts within that domain but fail to develop the broader judgment and generalization skills needed for real-world success.
The most significant recent AI advance is models' ability to use chain-of-thought reasoning, not just retrieve data. However, most business users are unaware of this 'deep research' capability and continue using AI as a simple search tool, missing its transformative potential for complex problem-solving.