While universities adopt AI to streamline application reviews, they are simultaneously deploying AI detection tools to ensure applicants aren't using it for their essays. This creates a technological cat-and-mouse game, escalating the complexity and stakes of the college admissions process for both sides.
The education system is fixated on preventing AI-assisted cheating, missing the larger point: AI is making the traditional "test" and its associated skills obsolete. The focus must shift from policing tools to a radical curriculum overhaul that prioritizes durable human skills like ethical judgment and creative problem-solving.
Schools ban AI like ChatGPT fearing it's a tool for cheating, but this is profoundly shortsighted. The quality of an AI's output is entirely dependent on the critical thinking behind the user's input. This makes AI the first truly scalable tool for teaching children how to think critically, a skill far more valuable than memorization.
Candidates are embedding hidden text and instructions in their resumes to game automated AI hiring platforms. This 'prompt hacking' tactic, reportedly found in up to 10% of applications by one firm, represents a new front in the cat-and-mouse game between applicants and the algorithms designed to filter them.
The purpose of quirky interview questions has evolved. Beyond just assessing personality, questions about non-work achievements or hypothetical scenarios are now used to jolt candidates out of scripted answers and expose those relying on mid-interview AI prompts for assistance.
Professor Alan Blinder reveals that the rise of generative AI has created such a high risk of academic dishonesty that his department has abandoned modern assessment methods. They are reverting to proctored, in-class, handwritten exams, an example of "technological regress" as a defense against new tech.
While cheating is a concern, a more insidious danger is students using AI to bypass deep cognitive engagement. They can produce correct answers without retaining knowledge, creating a cumulative learning deficit that is difficult to detect and remedy.
When companies use black-box AI for hiring, it creates a no-win 'arms race.' Applicants use prompt injection and other tricks to game the system, while companies build countermeasures to detect them. This escalatory cycle is a 'war of attrition' where the underlying goal of finding the right candidate is lost.
As AI renders cover letters useless for signaling candidate quality, employers are shifting their screening processes. They now rely more on assessments that are harder to cheat on, such as take-home coding challenges and automated AI interviews. This moves the evaluation from subjective text analysis to more objective, skill-based demonstrations early in the hiring funnel.
Generative AI's appeal highlights a systemic issue in education. When grades—impacting financial aid and job prospects—are tied solely to finished products, students rationally use tools that shortcut the learning process to achieve the desired outcome under immense pressure from other life stressors.
Since AI assistants make it easy for candidates to complete take-home coding exercises, simply evaluating the final product is no longer an effective screening method. The new best practice is to require candidates to build with AI and then explain their thought process, revealing their true engineering and problem-solving skills.