General-purpose LLMs generate responses that gravitate toward the statistical average of vast training datasets. When used for leadership advice, they risk promoting a 'median' leadership style. This not only stifles authenticity but can also reinforce historical biases embedded in the training data.

Related Insights

Wisdom emerges from the contrast of diverse viewpoints. If future generations are educated by a few dominant AI models, they will all learn from the same worldview. This intellectual monoculture could stifle the fringe thinking and unique perspectives that have historically driven breakthroughs.

Leaders are often trapped "inside the box" of their own assumptions when making critical decisions. By providing AI with context and assigning it an expert role (e.g., "world-class chief product officer"), you can prompt it to ask probing questions that reveal your biases and lead to more objective, defensible outcomes.
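
A minimal sketch of this prompting pattern, assuming the OpenAI Python SDK; the model name, role description, and decision context below are illustrative placeholders rather than a prescribed setup:

```python
# Assign the model an expert role and ask it to probe assumptions before
# advising. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; all prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

decision_context = (
    "We plan to sunset our legacy product line next quarter to free up "
    "engineering capacity for the new platform."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are a world-class chief product officer. Do not give "
                "advice yet. First ask the five most probing questions that "
                "would expose hidden assumptions or biases in this decision."
            ),
        },
        {"role": "user", "content": decision_context},
    ],
)
print(response.choices[0].message.content)
```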

Effective leadership AI shouldn't force conformity. Instead of producing 'AI soup,' specialized tools should act as intelligence engines that help leaders identify their unique, authentic style and provide recommendations on how to turn their differences into their greatest strengths.

AI expert Andrej Karpathy suggests treating LLMs as simulators, not entities. Instead of asking, "What do you think?", ask, "What would a group of [relevant experts] say?" This elicits a wider range of simulated perspectives and avoids the biases inherent in forcing the LLM to adopt a single, artificial persona.
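
A hedged sketch of that reframing, again assuming the OpenAI Python SDK; the panel composition and question are invented for illustration:

```python
# Simulator framing: ask the model to voice a panel of perspectives instead
# of forcing a single artificial persona. Assumes the OpenAI Python SDK;
# the expert roster and question are illustrative.
from openai import OpenAI

client = OpenAI()

# Entity framing (narrower): "What do you think of our pricing strategy?"
# Simulator framing (wider): sample several simulated expert voices at once.
panel_prompt = (
    "What would a panel consisting of a CFO, a behavioral economist, and a "
    "longtime customer each say about our pricing strategy? Give three "
    "clearly labeled viewpoints and note where they disagree."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": panel_prompt}],
)
print(response.choices[0].message.content)
```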

The most significant risk of AI is that leaders abdicate their own judgment and become mediocre content generators. Instead, view AI as a collaborative partner: your role as the leader is to define the prompt, provide context, challenge biases, and apply discernment to the output, solidifying your own strategic value.

Hands-on AI model training shows that AI is not an objective engine; it's a reflection of its trainer. If the training data or prompts are narrow, the AI will also be narrow, failing to generalize. This process reveals that the model is "only as deep as I tell it to be," highlighting the human's responsibility.
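
A toy demonstration of that narrowness effect, assuming NumPy and scikit-learn; the sine-wave task is invented purely for illustration:

```python
# A model trained on a narrow slice of the input space scores well inside
# that slice and collapses outside it. Task and ranges are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# "Narrow" training data: inputs drawn only from [0, 2].
X_train = rng.uniform(0, 2, size=(500, 1))
model = RandomForestRegressor(random_state=0).fit(X_train, np.sin(X_train).ravel())

# Inside the training range, the fit looks excellent...
X_in = rng.uniform(0, 2, size=(200, 1))
print("in-range R^2:     ", model.score(X_in, np.sin(X_in).ravel()))

# ...but the model knows nothing about sin(x) beyond what it was shown.
X_out = rng.uniform(4, 6, size=(200, 1))
print("out-of-range R^2: ", model.score(X_out, np.sin(X_out).ravel()))
```

A random forest simply extrapolates its nearest training values, so the out-of-range score is typically strongly negative: the model is exactly as deep as the data it was given.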

The true danger of LLMs in the workplace isn't just sloppy output, but the erosion of deep thinking. The arduous process of writing forces structured, first-principles reasoning. By making it easy to generate plausible text from bullet points, LLMs allow users to bypass this critical thinking process, leading to shallower insights.

Richard Sutton, author of "The Bitter Lesson," argues that today's LLMs are not truly "bitter lesson-pilled." Their reliance on finite, human-generated data introduces inherent biases and limitations, contrasting with systems that learn from scratch purely through computational scaling and environmental interaction.
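
To make the contrast concrete, here is a minimal sketch of the paradigm Sutton favors: a tabular Q-learner on an invented toy corridor that improves purely through environmental interaction and reward, with no human-generated examples (environment and hyperparameters are illustrative):

```python
# Tabular Q-learning: every bit of "knowledge" comes from reward feedback,
# not from curated human data. Toy corridor and constants are illustrative.
import random

N_STATES, GOAL = 6, 5              # corridor cells 0..5, reward at cell 5
ACTIONS = (-1, +1)                 # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):               # episodes
    s = 0
    for _ in range(50):            # cap episode length
        # epsilon-greedy action choice, breaking ties randomly
        if random.random() < epsilon or Q[(s, -1)] == Q[(s, 1)]:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # the update is driven entirely by the environment's reward signal
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
        if s == GOAL:
            break

# learned policy: move right (+1) in every cell, discovered without supervision
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)})
```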

The rise of LLMs sets a new bar for leadership communication: the "GPT test." If a public figure's statements or writings are indistinguishable from what ChatGPT could generate, they will fail to build an authentic brand. This forces a shift toward genuine originality and unpolished thought.

While AI can effectively replicate an executive's communication style or past decisions, it falls short in capturing their capacity for continuous learning and adaptation. A leader’s judgment evolves with new context, a dynamic process that current AI models struggle to keep pace with.
