Processes like grant writing and college admissions rely on formulaic "bullshit work" that AI excels at. The inevitable flood of AI-generated "slop" applications will make human review untenable, forcing these legacy systems to either fundamentally reform their evaluation criteria or collapse under the volume.

Related Insights

The problem with bad AI-generated work ("slop") isn't just poor writing. It's that subtle inaccuracies or lost context can derail meetings and spark long, energy-wasting debates. The resulting cognitive overload makes it difficult for teams to make sense of the work, and ultimately costs more in human time than it saves.

The internet's value stems from an economy of unique human creations. AI-generated content, or "slop," replaces this with low-quality, soulless output, breaking the internet's economic engine. This trend now appears in VC pitches, with founders presenting AI-generated ideas they don't truly understand.

As AI agents become reliable for complex, multi-step tasks, the critical human role will shift from execution to verification. New jobs will emerge focused on overseeing agent processes, analyzing their chain-of-thought, and validating their outputs for accuracy and quality.

AI enables rapid book creation by generating chapters and citing sources. This creates a new problem: authors can produce works on complex topics without ever reading the source material or developing deep understanding. This "AI slop" presents a veneer of expertise without the genuine, internalized knowledge its human creator never actually acquired.

While universities adopt AI to streamline application reviews, they are simultaneously deploying AI detection tools to ensure applicants aren't using it for their essays. This creates a technological cat-and-mouse game, escalating the complexity and stakes of the college admissions process for both sides.

AI can produce scientific claims and codebases thousands of times faster than humans. However, the meticulous work of validating these outputs remains a human task. This growing gap between generation and verification could create a backlog of unproven ideas, slowing true scientific advancement.

Research highlights "work slop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading cognitive labor and damaging perceptions of the sender's capability and trustworthiness.

Advanced AI tools like "deep research" models can produce vast amounts of information, such as 30-page reports, in minutes. This creates a new productivity paradox: the AI's output capacity far exceeds a human's finite ability to verify sources, apply critical thought, and transform the raw output into authentic, usable insights.

Job seekers use AI to generate resumes en masse, forcing employers to use AI filters to manage the volume. This creates a vicious cycle where more AI is needed to beat the filters, resulting in a "low-hire, low-fire" equilibrium. While activity seems high, actual hiring has stalled, masking a significant economic disruption.

Professionals are using AI to write detailed reports, while their managers use AI to summarize them. This creates a feedback loop where AI generates content for other AIs to consume, with humans acting merely as conduits. This "AI slop" replaces deep thought with inefficient, automated communication.