
In response to AI making take-home assignments unreliable, universities are reverting to "old-school" assessment methods like in-class blue book exams, spontaneous writing sessions, and oral exams to ensure student work is authentic.

Related Insights

The education system is fixated on preventing AI-assisted cheating, missing the larger point: AI is making the traditional "test" and its associated skills obsolete. The focus must shift from policing tools to a radical curriculum overhaul that prioritizes durable human skills like ethical judgment and creative problem-solving.

To determine whether an employee critically engaged with AI-generated content, skip reading the lengthy document and instead question them directly on its substance. Their ability to confidently defend, elaborate on, and explain the material is the true test of their understanding and ownership of the work.

The recent surge in academic dishonesty is less about a moral decline and more a result of new AI tools making cheating easier to execute and significantly harder for educators to prove.

With LLMs making remote coding tests unreliable, the new standard is face-to-face interviews focused on practical problems. Instead of abstract algorithms, candidates are asked to fix failing tests or debug code, assessing their real-world problem-solving skills which are much harder to fake.

ASU's president argues that if an AI can answer an assignment, the assignment has failed. The educator's role must evolve to use AI to 'up the game,' forcing students to ask more sophisticated questions, making the quality of the query—not the synthesized answer—the hallmark of learning.

Professor Alan Blinder reveals that the rise of generative AI has created such a high risk of academic dishonesty that his department has abandoned modern assessment methods. They are reverting to proctored, in-class, handwritten exams, an example of "technological regress" as a defense against new tech.

AI makes cheating easier, undermining grades as a motivator. More importantly, AI enables continuous, nuanced assessment that renders one-off standardized tests obsolete. This forces a necessary shift from a grade-driven to a learning-driven education system.

Instead of simply banning AI to prevent cheating, one school district experimented with increasing test frequency. This counterintuitively motivated students to use guided AI learning features to master the material rather than just fetch homework answers, demonstrating the need to rethink educational workflows.

Instead of policing AI use, a novel strategy is for teachers to show students what AI produces on an assignment and grade it as a 'B-'. This sets a clear baseline, reframing AI as a starting point and challenging students to use human creativity and critical thinking to achieve a higher grade.

National tests in Sweden revealed human evaluators for oral exams were shockingly inconsistent, sometimes performing worse than random chance. While AI grading has its own biases, they can be identified and systematically adjusted, unlike hidden human subjectivity.