AI thrives in domains with fixed, written rules and searchable histories, like programming. In ambiguous areas like organizational conflict or political negotiation, where context is unwritten and lives in people's heads, its performance plummets. Its confident output masks this unreliability, posing a danger to decision-makers.

Related Insights

AI excels at learning fixed rules, like in chess or identifying a cat. However, it falters in domains like financial markets or politics where the 'game' is adversarial and multiplayer. Any successful AI strategy is quickly identified and countered, rendering it ineffective.

AI performs poorly in areas where expertise rests on unwritten 'taste' or intuition rather than documented knowledge. If the correct approach doesn't appear in the training data and isn't explicitly supplied by human trainers, models will struggle with that problem.

Unlike coding, where context is centralized (the IDE, the repo) and output is testable, general knowledge work is scattered across apps. AI struggles to synthesize this fragmented context, and the quality of its output (e.g., a strategy memo) is hard to verify objectively, limiting agent effectiveness.

AI should not be seen as a plug-and-play solution but as a magnifier of the current culture. If an organization struggles with trust, communication, or judgment, AI will amplify those weaknesses rather than solve them.

Messy AI-generated code ("slop") can still result in a functional product, hiding imperfections from the end user. In knowledge work, a slightly "off" AI-generated contract or memo creates immediate legal or business risk, as there is no interface to abstract away the sloppiness.

Today's AI systems exhibit "jagged intelligence"—strong performance on many tasks but inconsistent reliability on others. This prevents full job replacement because being 95% effective is insufficient when the remaining 5% involves crucial edge cases, judgment, and discretion that still require human oversight.

AI can process vast information but cannot replicate human common sense, which is the sum of lived experiences. This gap makes it unreliable for tasks requiring nuanced judgment, authenticity, and emotional understanding, posing a significant risk to brand trust when used without oversight.

Contrary to human intuition, a massive, well-documented domain makes an AI's job easier, not harder: more documentation means more 'maps' for the AI to navigate. By contrast, a simple human conflict can be unsolvable for an AI because its context is never formalized or archived, leaving an information void.

AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior, data manipulation, and unexpected events.

While AI can effectively replicate an executive's communication style or past decisions, it falls short of capturing their capacity for continuous learning and adaptation. A leader's judgment evolves with new context, a dynamic process that current AI models struggle to keep pace with.