To combat misinformation, present learners with two plausible-sounding pieces of information—one true, one false—and ask them to determine which is real. This method powerfully demonstrates their own fallibility and forces them to learn the cues that differentiate truth from fiction.

Related Insights

Schools ban AI tools like ChatGPT, fearing they enable cheating, but this is profoundly shortsighted. The quality of an AI's output depends entirely on the critical thinking behind the user's input. This makes AI the first truly scalable tool for teaching children how to think critically, a skill far more valuable than memorization.

Political arguments often stall because people use loaded terms like 'critical race theory' with entirely different meanings. Before debating, ask the other person to define the term. This simple step often reveals that the core disagreement is based on a misunderstanding, not a fundamental clash of values.

Treat AI as a critique partner. After synthesizing research, explain your takeaways and then ask the AI to analyze the same raw data to report on patterns, themes, or conclusions you didn't mention. This is a powerful method for revealing analytical blind spots.
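A minimal sketch of that workflow, assuming the OpenAI Python SDK as the backend (the model name, file names, and prompt wording below are illustrative assumptions, not part of the original insight):

```python
# Sketch: ask a model to surface patterns in the raw data that your own
# synthesis did not mention. Assumes the OpenAI Python SDK (openai>=1.0);
# model name, file names, and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_research = open("interview_notes.txt").read()  # your raw data (assumed file)
my_takeaways = open("my_synthesis.md").read()      # your own conclusions (assumed file)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever you have access to
    messages=[
        {"role": "system",
         "content": ("You are a critique partner. Analyze the raw data "
                     "independently and report only patterns, themes, or "
                     "conclusions that the user's summary does not mention.")},
        {"role": "user",
         "content": f"RAW DATA:\n{raw_research}\n\nMY TAKEAWAYS:\n{my_takeaways}"},
    ],
)
print(response.choices[0].message.content)  # candidate blind spots
```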

AI models tend to be overly optimistic. To get a balanced market analysis, explicitly instruct AI research tools like Perplexity to act as a "devil's advocate." This uncovers risks, challenges assumptions, and makes it easier for product managers to say "no" to weak ideas quickly.
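One way to make that instruction stick is to bake it into a reusable system prompt. The sketch below assumes a generic OpenAI-compatible chat API rather than Perplexity's own product, and the prompt text, function name, and model name are illustrations, not documented features; in a research tool's UI you would simply paste the same instruction at the start of your query.

```python
# Sketch: wrap any market-analysis question in a devil's-advocate system
# prompt. Assumes the OpenAI Python SDK; prompt wording, function name, and
# model name are illustrative.
from openai import OpenAI

DEVILS_ADVOCATE = (
    "Act as a devil's advocate for a product manager. For every claim, "
    "surface the strongest counter-evidence, hidden risks, and untested "
    "assumptions. Do not soften conclusions to be agreeable."
)

def skeptical_analysis(question: str, model: str = "gpt-4o") -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DEVILS_ADVOCATE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical question, purely for illustration.
    print(skeptical_analysis("Should we enter the home EV charger market next year?"))
```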

When you confront seemingly false claims in a discussion, arguing back with counter-facts is often futile. A better approach is to get curious about the background, context, and assumptions that underpin the other person's belief, as most "facts" are more complex than they appear.

When presented with direct facts, our brains use effortful reasoning, which is prone to defensive reactions. Stories transport us, engaging different, more social brain systems. This allows us to analyze a situation objectively, as if observing others, making us more receptive to the underlying message.

The AI debate is becoming polarized as influencers and politicians present subjective beliefs with high conviction, treating them as non-negotiable facts. This hinders balanced, logic-based conversations. It is crucial to distinguish testable beliefs from objective truths to foster productive dialogue about AI's future.

Instead of letting AI erode critical thinking by handing out instant answers, leverage its "guided learning" capabilities. These features teach the process of solving a problem rather than just giving the solution, turning AI into a Socratic mentor that accelerates learning and strengthens problem-solving.

Instead of personally challenging a guest, read a critical quote about them from another source. This reframes you as a neutral moderator offering them a chance to respond, rather than as an attacker, and the guest has likely already prepared an answer to well-known criticisms anyway.

Standard AI models are often overly supportive. To get genuine, valuable feedback, explicitly instruct your AI to act as a critical thought partner. Use prompts like "push back on things" and "feel free to challenge me" to break the AI's default agreeableness and turn it into a true sparring partner.
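As a hedged sketch, those phrases can live in a standing system prompt that persists across a whole session, so the default agreeableness stays switched off turn after turn (again assuming the OpenAI Python SDK and an illustrative model name):

```python
# Sketch: a persistent "challenge me" system prompt kept across turns in a
# simple chat loop. Assumes the OpenAI Python SDK; model name is illustrative.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": ("You are a critical thought partner. Push back on things, "
                "question weak reasoning, and feel free to challenge me "
                "rather than simply agreeing."),
}]

while True:
    user_input = input("you> ")
    if not user_input:  # empty line ends the session
        break
    history.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```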