Contrary to the sales pitch, AI tools can create more work for educators. The time required to verify facts, fix AI-generated errors, and correct hallucinations in lesson plans or translations often negates any initial time savings, a pattern also observed with software coders.
The problem with bad AI-generated work ('slop') isn't just poor writing. It's that subtle inaccuracies or lost context can derail meetings and spark long, draining debates. The resulting cognitive overload makes it hard for teams to make sense of the material, and ultimately costs more in human time than it saves.
Using AI to generate content without adding human context simply transfers the intellectual effort to the recipient. This creates rework, confusion, and can damage professional relationships, explaining the low ROI seen in many AI initiatives.
The primary issue with low-effort AI-generated work is not its poor quality, but how it transfers the cognitive burden of correction and completion to the recipient. Such output masquerades as finished work while creating interpersonal friction and hidden rework, fundamentally shifting responsibility for the task's success.
Some engineering teams use AI in a way that produces a high volume of code riddled with mistakes. This forces them to rewrite large portions, sometimes without AI assistance, ultimately slowing them down. The issue is not the tool, but the lack of best practices for its application.
A Workday study reveals a critical blind spot in AI productivity metrics. While tools save time, roughly 37% of that saved time is offset by the need for rework—verifying information, correcting errors, and rewriting content. This dramatically reduces the net value and ROI of the technology.
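The arithmetic of that offset is worth making explicit. A minimal sketch, using a hypothetical 10 hours of gross weekly savings (the 37% rework share is from the study; the hours figure is an illustrative assumption):

```python
# Back-of-the-envelope calculation of the rework offset described above.
# Only the 37% rework share comes from the Workday study; the gross
# hours-saved figure is a made-up example input.

hours_saved_gross = 10.0   # hypothetical hours an AI tool saves per week
rework_share = 0.37        # fraction of saved time consumed by rework

hours_lost_to_rework = hours_saved_gross * rework_share
hours_saved_net = hours_saved_gross - hours_lost_to_rework

print(f"Gross savings: {hours_saved_gross:.1f} h")   # 10.0 h
print(f"Rework cost:   {hours_lost_to_rework:.1f} h") # 3.7 h
print(f"Net savings:   {hours_saved_net:.1f} h")      # 6.3 h, not 10
```

In other words, a tool advertised as saving ten hours delivers closer to six once verification and rewriting are counted.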
Instead of leading to less work, agentic AI tools are causing users to work longer hours. The core reason is psychological: the tools are so effective at generating output that the opportunity cost of not working feels immense. The result is a mix of exhilaration and anxiety, with time itself becoming the bottleneck.
A recent study found that AI assistants actually slowed down programmers working on complex codebases. More importantly, the programmers mistakenly believed the AI was speeding them up. This suggests a general human bias towards overestimating AI's current effectiveness, which could lead to flawed projections about future progress.
Research highlights "work slop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading cognitive labor and damaging perceptions of the sender's capability and trustworthiness.
While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase is just fixing last week's AI-generated "slop," the real productivity gain is much lower than headlines suggest.
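The "rework tax" in that example reduces to a one-line calculation. A minimal sketch using the figures from the item itself (the function name and structure are mine, not the study's):

```python
# Sketch of the "rework tax" arithmetic: if a share of the extra output
# is just fixing earlier AI-generated slop, the real gain shrinks.
# Figures (40% more code, half of the increase being rework) are the
# example's own; the function is a hypothetical helper.

def net_productivity_gain(gross_gain: float, rework_fraction: float) -> float:
    """Net gain after discounting the share of the increase that is rework.

    gross_gain: e.g. 0.40 for shipping 40% more code
    rework_fraction: share of that increase spent fixing prior slop
    """
    return gross_gain * (1.0 - rework_fraction)

print(net_productivity_gain(0.40, 0.5))  # 0.2 -> a 20% real gain, not 40%
```

A 40% jump in shipped code with half the increase going to cleanup is really a 20% gain, half of what the headline number suggests.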
The perceived time-saving benefits of using AI for lesson planning may be misleading. As with coders who must fix AI-generated mistakes, educators may spend so much time correcting flawed outputs that the net efficiency gain is zero or even negative, a factor often overlooked in the rush to adopt new tools.