We scan new podcasts and send you the top 5 insights daily.
Rather than banning AI, one professor engages students by explaining how LLMs work: they predict likely word sequences with no regard for truth. Framing the tool as a "bullshitter," fundamentally at odds with a historian's duty to factual accuracy, turns a potential cheating aid into a lesson in epistemology.
The education system is fixated on preventing AI-assisted cheating, missing the larger point: AI is making the traditional "test" and its associated skills obsolete. The focus must shift from policing tools to a radical curriculum overhaul that prioritizes durable human skills like ethical judgment and creative problem-solving.
Schools ban AI tools like ChatGPT for fear they enable cheating, but this is profoundly shortsighted. The quality of an AI's output depends entirely on the critical thinking behind the user's input. This makes AI the first truly scalable tool for teaching children how to think critically, a skill far more valuable than memorization.
ASU's president argues that if an AI can answer an assignment, the assignment has failed. The educator's role must evolve to use AI to 'up the game,' forcing students to ask more sophisticated questions, making the quality of the query—not the synthesized answer—the hallmark of learning.
Following philosopher Harry Frankfurt's definition, a bullshitter is someone who disregards truth entirely to achieve a desired effect. Oxford philosopher Carissa Véliz argues LLMs fit this model perfectly, as they are designed to please and engage users, not track truth. They will say whatever works, true or not, to satisfy the user.
Instead of policing AI use, a novel strategy is for teachers to show students what AI produces on an assignment and grade it as a 'B-'. This sets a clear baseline, reframing AI as a starting point and challenging students to use human creativity and critical thinking to achieve a higher grade.
To get maximum intellectual value from AI, explicitly instruct it to challenge you. Using prompts like 'Tell me why I'm wrong' or 'Identify my blind spots' transforms AI from a sycophantic assistant into a powerful tool for stress-testing ideas and overcoming cognitive dissonance.
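As a minimal sketch, this "challenge me" pattern can be expressed as a reusable prompt wrapper. The `challenge_prompt` helper and its exact wording below are illustrative assumptions, not any specific product's API:

```python
# Hypothetical helper: wrap a draft argument in a "challenge me" prompt.
# The function name and phrasing are illustrative, not from any real tool.

CHALLENGE_INSTRUCTIONS = [
    "Tell me why I'm wrong.",
    "Identify my blind spots.",
    "List the strongest counterarguments to my position.",
]

def challenge_prompt(draft: str) -> str:
    """Build a prompt asking the model to stress-test `draft`
    instead of agreeing with it."""
    bullets = "\n".join(f"- {line}" for line in CHALLENGE_INSTRUCTIONS)
    return (
        "Do not simply agree with the following argument. Instead:\n"
        f"{bullets}\n\n"
        f"Argument:\n{draft}"
    )

prompt = challenge_prompt("AI bans in schools protect academic integrity.")
```

The point of the wrapper is that the adversarial framing is baked in up front, so the model's default agreeableness never gets a chance to take over.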
Using AI to merely generate artifacts fosters overconfidence due to the "authorship fallacy," where you love what you create regardless of quality. A better approach is using AI as a Socratic tutor to teach you a process, enhancing your skills rather than replacing them.
Instead of allowing AI to atrophy critical thinking by providing instant answers, leverage its "guided learning" capabilities. These features teach the process of solving a problem rather than just giving the solution, turning AI into a Socratic mentor that can accelerate learning and problem-solving abilities.
Instead of banning AI, educators should teach students how to prompt it effectively to improve their decision-making. This includes forcing it to cite sources, generate counterarguments, and explain its reasoning, turning AI into a tool for critical inquiry rather than just an answer machine.
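The same idea can be sketched as a "critical inquiry" template that bakes those three requirements into every question. The helper name and rule wording are assumptions for illustration, not part of any curriculum or API:

```python
# Hypothetical sketch: turn a bare question into one that demands
# sources, counterarguments, and explicit reasoning.

REQUIREMENTS = (
    "Cite the sources behind each factual claim.",
    "Present the strongest counterargument to your own answer.",
    "Explain your reasoning step by step before stating a conclusion.",
)

def inquiry_prompt(question: str) -> str:
    """Attach the critical-inquiry rules to a student's question."""
    rules = "\n".join(f"{i}. {r}" for i, r in enumerate(REQUIREMENTS, start=1))
    return f"Question: {question}\n\nAnswer under these rules:\n{rules}"

prompt = inquiry_prompt("Did the printing press cause the Reformation?")
```

A teacher could even ask students to submit the prompt alongside the answer, making the quality of the query itself part of what gets graded.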
Historically, well-structured, grammatically correct writing served as a reliable heuristic for an intelligent and serious author. AI completely breaks this connection by allowing anyone to generate perfectly polished prose for any idea, no matter how absurd, removing a key filter for evaluating content.