Meetings often suffer from groupthink, where consensus is prioritized over critical thinking. AI can be used to disrupt this by introducing alternative perspectives and challenging assumptions. Even if the AI's points are not perfect, they serve the crucial function of breaking the gravitational pull toward premature agreement.

Related Insights

Leaders are often trapped "inside the box" of their own assumptions when making critical decisions. By providing AI with context and assigning it an expert role (e.g., "world-class chief product officer"), you can prompt it to ask probing questions that reveal your biases and lead to more objective, defensible outcomes.

Users who treat AI as a collaborator—debating with it, challenging its outputs, and engaging in back-and-forth dialogue—see superior outcomes. This mindset shift produces not just efficiency gains, but also higher quality, more innovative results compared to simply delegating discrete tasks to the AI.

Instead of seeking consensus, your primary role in a group meeting is to surface disagreements. This brings out the real challenges and priorities that are usually discussed behind closed doors, giving you the full picture of the problem before you ever present a solution.

Move beyond simple prompts by designing detailed interactions with specific AI personas, like a "critic" or a "big thinker." This allows teams to debate concepts back and forth, transforming AI from a task automator into a true thought partner that amplifies rigor.

Log your major decisions and expected outcomes into an AI, but explicitly instruct it to challenge your thinking. Since most AIs are designed to be agreeable, you must prompt them to be critical. This practice helps you uncover flaws in your logic and improve your strategic choices.
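The decision-log practice can be sketched as a small script. This is a minimal illustration, not a prescribed format: the entry fields and the exact "challenge me" wording are assumptions for the example.

```python
# Sketch of a decision journal that pairs each logged decision with a
# prompt explicitly instructing the model NOT to be agreeable.
from dataclasses import dataclass


@dataclass
class DecisionEntry:
    decision: str
    rationale: str
    expected_outcome: str


def build_challenge_prompt(entry: DecisionEntry) -> str:
    """Format a review prompt that counters the model's default
    tendency to agree, per the practice described above."""
    return (
        "You are reviewing a strategic decision. Do not be agreeable: "
        "your job is to find flaws in the logic, surface hidden "
        "assumptions, and name the most likely failure modes.\n\n"
        f"Decision: {entry.decision}\n"
        f"Rationale: {entry.rationale}\n"
        f"Expected outcome: {entry.expected_outcome}\n\n"
        "List the three strongest objections to this decision."
    )


entry = DecisionEntry(
    decision="Sunset the legacy reporting module in Q3",
    rationale="Usage has dropped 40% year over year",
    expected_outcome="Support load falls with no measurable churn",
)
prompt = build_challenge_prompt(entry)
```

The key design choice is that the critical instruction is baked into every review prompt, so you cannot forget to ask for pushback on any individual decision.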

Default AI models are often people-pleasers that will agree with flawed technical ideas. To get genuine feedback, create a dedicated AI project with a system prompt defining it as your "CTO." Instruct it to be the complete technical owner, to challenge your assumptions, and to avoid being agreeable.
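A system prompt like the one described might look as follows. This is a hedged sketch using the common role/content message format (as in the OpenAI Python SDK); the persona wording and the model name in the comment are illustrative assumptions, not the author's exact prompt.

```python
# Illustrative "critical CTO" system prompt, expressed in the
# role/content chat-message format used by most LLM APIs.
CTO_SYSTEM_PROMPT = (
    "You are my CTO and the complete technical owner of this system. "
    "Challenge my assumptions, point out risks I have missed, and do "
    "not agree with me just to be helpful. If an idea is flawed, say "
    "so directly and explain why."
)


def build_review_messages(proposal: str) -> list:
    """Pair the persona with a concrete technical proposal to review."""
    return [
        {"role": "system", "content": CTO_SYSTEM_PROMPT},
        {"role": "user", "content": "Review this proposal critically:\n" + proposal},
    ]


messages = build_review_messages(
    "Replace our Postgres cluster with a single SQLite file."
)
# With the OpenAI SDK, these messages would then be sent via e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Putting the persona in the system message rather than the user message matters: it persists across the whole conversation, so the model stays critical even as the back-and-forth continues.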

Go beyond using AI for research by codifying your North Star, OKRs, and strategic goals into a personalized AI agent. Before important meetings, use this agent as a "thought partner" to pressure-test your ideas, check for alignment with your goals, and identify blind spots. This 10-minute exercise dramatically improves meeting focus and outcomes.

Instead of using AI as a compliant assistant, program it to be a challenging "sparring partner." Ask it to find holes in your logic or anticipate all the critical questions your CEO might ask. This transforms it from a content generator into a powerful strategic tool for preparation.

AI models tend to be overly optimistic. To get a balanced market analysis, explicitly instruct AI research tools like Perplexity to act as a "devil's advocate." This helps uncover risks, challenge assumptions, and make it easier for product managers to say "no" to weak ideas quickly.

Bret Taylor avoids using AI to generate strategic documents, which he believes short-circuits his own thinking process; instead, he uses it as a critical partner. He writes his own strategy notes and then prompts ChatGPT to critique them and find flaws. This leverages AI's analytical power without sacrificing the deliberative process of writing.