We scan new podcasts and send you the top 5 insights daily.
To avoid the trap of adopting the last opinion you heard, Galloway suggests a modern tactic: after reading something, prompt an AI to "make an argument against this." This low-friction method forces you to confront counterarguments, either tempering your view or strengthening your conviction with a more robust understanding of the topic.
By default, AI models are designed to be agreeable. To get true value, explicitly instruct the AI to act as a critic or "devil's advocate." Ask it to challenge your assumptions and list potential risks. This exposes blind spots and leads to stronger, more resilient strategies than you would develop with a simple "yes-man" assistant.
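As a sketch of this pattern, here is one way a "devil's advocate" persona could be wired into a chat-style API. The prompt wording, function names, and the commented-out OpenAI call are illustrative assumptions, not from the source:

```python
# Minimal sketch of a "devil's advocate" setup. The system message is what
# overrides the model's default agreeableness; the exact wording is an
# illustrative assumption.

CRITIC_SYSTEM_PROMPT = (
    "You are a devil's advocate. Do not agree by default. "
    "Challenge the user's assumptions, list concrete risks, "
    "and point out weaknesses in their reasoning."
)

def build_critic_messages(idea: str) -> list[dict]:
    """Package an idea with the critic persona for a chat-completion call."""
    return [
        {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
        {"role": "user", "content": f"Critique this idea:\n\n{idea}"},
    ]

# Hypothetical usage with the openai client (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_critic_messages("We should launch in Q3."),
# )
# print(reply.choices[0].message.content)
```

The essential move is putting the critic instruction in the system role, so it persists across every turn rather than being diluted by later user messages.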
To sharpen your thinking, use ChatGPT as a Socratic partner. Feed it your argument and ask it to generate both supporting points and strong counterarguments. This dialectical process helps you anticipate objections and refine your position, leading to a more robust final synthesis.
Before publishing, feed your work to an AI and ask it to find all potential criticisms and holes in your reasoning. This pre-publication stress test helps identify blind spots you would otherwise miss, leading to stronger, more defensible arguments.
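A pre-publication stress test like the one described above might look like this as a prompt builder; the function and its wording are illustrative assumptions:

```python
# Sketch of a pre-publication stress test: hand the draft to a model and
# ask only for criticisms, explicitly forbidding praise so the default
# agreeableness cannot leak back in.

def stress_test_prompt(draft: str) -> str:
    """Ask for holes and weaknesses, not praise, before publishing."""
    return (
        "Review the draft below before publication. List every potential "
        "criticism, logical hole, and unsupported claim you can find. "
        "Do not include praise or positive feedback.\n\n"
        "--- DRAFT ---\n" + draft
    )
```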
Instead of accepting a single answer, prompt the AI to generate multiple options and then argue the pros and cons of each. This "debating partner" technique forces the model to stress-test its own logic, leading to more robust and nuanced outputs for strategic decision-making.
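The "debating partner" technique above is naturally a two-step prompt chain: first request several options, then make the model argue both sides of each. This sketch assumes a generic chat-capable model; the exact wording is illustrative:

```python
# Sketch of the "debating partner" pattern as two chained prompts.

def options_prompt(question: str, n: int = 3) -> str:
    """Step 1: request multiple distinct options instead of one answer."""
    return (
        f"Give exactly {n} distinct options for: {question} "
        f"Number them 1 to {n} and keep each to one sentence."
    )

def debate_prompt(options_text: str) -> str:
    """Step 2: make the model argue both sides of its own suggestions."""
    return (
        "For each option below, argue the strongest pro AND the strongest "
        "con, then say which option survives the debate best and why.\n\n"
        + options_text
    )
```

In practice you would send `options_prompt(...)` first, then feed the model's reply into `debate_prompt(...)` in a second call, so the model stress-tests output it has already committed to.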
Log your major decisions and expected outcomes into an AI, but explicitly instruct it to challenge your thinking. Since most AIs are designed to be agreeable, you must prompt them to be critical. This practice helps you uncover flaws in your logic and improve your strategic choices.
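A decision log with a built-in challenge instruction could be sketched as follows; the data structure and the prompt wording are illustrative assumptions, but they capture the key move of telling the model not to validate you:

```python
# Sketch of an AI-assisted decision journal. Logging the decision and the
# expected outcome makes the later critique concrete and checkable.
from dataclasses import dataclass

@dataclass
class DecisionEntry:
    decision: str
    expected_outcome: str
    reasoning: str

def challenge_prompt(entry: DecisionEntry) -> str:
    """Turn a logged decision into a prompt that demands criticism."""
    return (
        "I logged this decision. Do not validate it. Instead, find the "
        "three most likely ways my reasoning fails and what evidence "
        "would reveal each failure early.\n\n"
        f"Decision: {entry.decision}\n"
        f"Expected outcome: {entry.expected_outcome}\n"
        f"My reasoning: {entry.reasoning}"
    )
```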
AI models tend to be overly optimistic. To get a balanced market analysis, explicitly instruct AI research tools like Perplexity to act as a "devil's advocate." This helps uncover risks, challenge assumptions, and makes it easier for product managers to say "no" to weak ideas quickly.
AI models often default to being agreeable (sycophancy), which limits their value as a thought partner. To get valuable, critical feedback, users must explicitly instruct the AI in their prompt to take on a specific persona, such as a skeptic or a harsh editor, to challenge their ideas.
Instead of banning AI, educators should teach students how to prompt it effectively to improve their decision-making. This includes forcing it to cite sources, generate counterarguments, and explain its reasoning, turning AI into a tool for critical inquiry rather than just an answer machine.
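The three behaviors named above (citing sources, generating counterarguments, explaining reasoning) can be baked into a single reusable template students could be taught; the wording is an illustrative assumption:

```python
# Sketch of a classroom prompt template that forces citation, reasoning,
# and counterarguments, turning the model into a critical-inquiry tool
# rather than an answer machine.

def critical_inquiry_prompt(claim: str) -> str:
    """Build a prompt that demands sources, reasoning, and counterarguments."""
    return (
        f"Regarding the claim: \"{claim}\"\n"
        "1. State your answer, then explain your reasoning step by step.\n"
        "2. Cite the sources your answer relies on (say so if uncertain).\n"
        "3. Give the two strongest counterarguments to your own answer."
    )
```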
Standard AI models are often overly supportive. To get genuine, valuable feedback, explicitly instruct your AI to act as a critical thought partner. Use prompts like "push back on things" and "feel free to challenge me" to break the AI's default agreeableness and turn it into a true sparring partner.
Meetings often suffer from groupthink, where consensus is prioritized over critical thinking. AI can be used to disrupt this by introducing alternative perspectives and challenging assumptions. Even if the AI's points are not perfect, they serve the crucial function of breaking the gravitational pull toward premature agreement.
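One way to operationalize this in a meeting is to paste the emerging consensus into a model and ask it to dissent from specific stakeholder perspectives. The function and wording below are illustrative assumptions:

```python
# Sketch of using a model as a designated dissenter: feed it the position
# the room is converging on and the stakeholders whose views may be
# underrepresented, and ask it to argue the other side.

def dissent_prompt(consensus_summary: str, stakeholders: list[str]) -> str:
    """Request the alternative viewpoints the room may be missing."""
    who = ", ".join(stakeholders)
    return (
        "A meeting is converging on the position below. Before we commit, "
        f"argue against it from the perspective of each of: {who}. "
        "Flag any assumption the group has not questioned.\n\n"
        + consensus_summary
    )
```

Even imperfect objections from the model give the room permission to voice doubts, which is the anti-groupthink function described above.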