A METR study found that expert programmers were less productive with AI tools. The speaker suggests this is because developers believed they were faster while in fact getting distracted (e.g., checking social media) as they waited for the AI, highlighting a dangerous gap between perceived and actual productivity.
While many believe AI will primarily help average performers become great, LinkedIn's experience shows the opposite. Their top talent were the first and most effective adopters of new AI tools, using them to become even more productive. This suggests AI may amplify existing talent disparities.
Human intuition is a poor gauge of AI's actual productivity benefits. One study found that developers reported feeling significantly faster with AI coding tools even though objective measurements showed no speedup. The real value may come from enabling tasks that otherwise wouldn't be attempted, rather than simply accelerating existing workflows.
The process of struggling with and solving hard problems is what builds engineering skill. Constantly available AI assistants act like a "slot machine for answers," removing this productive struggle. This encourages "vibe coding" and may prevent engineers from developing deep problem-solving expertise.
A paradox of rapid AI progress is the widening "expectation gap." As users become accustomed to AI's power, their expectations grow even faster than the technology improves. The result is a persistent feeling of frustration, even though the tools are objectively better than they were a year ago.
AI coding assistants won't make fundamental skills obsolete. Instead, they act as a force multiplier that widens the gap between engineers: great engineers use AI to augment their deep understanding and become exceptional, while mediocre engineers who rely on it blindly fall further behind.
Research highlights "workslop": AI output that appears polished but lacks human context. It forces coworkers to spend significant time fixing it, effectively offloading cognitive labor onto them and damaging perceptions of the sender's capability and trustworthiness.
While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found that AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase is just fixing last week's AI-generated "slop," the real productivity gain is much lower than headlines suggest.
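To make that arithmetic concrete, here is a minimal sketch of the rework-tax calculation; the function name and the figures are illustrative, not taken from the study itself.

```python
# Minimal sketch of the "rework tax" arithmetic (illustrative numbers only).

def net_output_gain(gross_gain: float, rework_share_of_gain: float) -> float:
    """Net output gain after discounting rework.

    gross_gain           -- fractional increase in shipped code (0.40 for +40%)
    rework_share_of_gain -- fraction of that increase spent reworking prior AI output
    """
    return gross_gain * (1.0 - rework_share_of_gain)


if __name__ == "__main__":
    # Example from the text: +40% more code, half of the increase is rework.
    gain = net_output_gain(gross_gain=0.40, rework_share_of_gain=0.50)
    print(f"Net productivity gain: {gain:.0%}")  # -> 20%, half the headline figure
```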
The perceived time-saving benefits of using AI for lesson planning may be misleading. Much like coders who must fix AI-generated mistakes, educators may spend so much time correcting flawed outputs that the net efficiency gain is zero or even negative, a factor often overlooked in the rush to adopt new tools.
Data on AI tool adoption among engineers is conflicting. One A/B test showed that the highest-performing senior engineers gained the biggest productivity boost. However, other companies report that opinionated senior engineers are the most resistant to AI tools, viewing the generated output as subpar.
AI tools can generate vast amounts of verbose code on command, making metrics like 'lines of code' easily gameable and meaningless for measuring true engineering productivity. This practice introduces complexity and technical debt rather than indicating progress.
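As a hypothetical illustration of how easily a lines-of-code metric can be gamed, the two functions below behave identically, yet the padded version scores several times higher on raw line count while adding nothing but noise.

```python
# Two behaviorally identical functions: the verbose one inflates a
# lines-of-code metric roughly fivefold without adding any value.

def total_positive_concise(values):
    # Idiomatic one-liner: sum of the positive values.
    return sum(v for v in values if v > 0)


def total_positive_verbose(values):
    # Same behavior, padded out the way verbose generated code often is.
    result = 0
    for index in range(len(values)):
        current_value = values[index]
        if current_value > 0:
            result = result + current_value
        else:
            result = result + 0
    return result


assert total_positive_concise([3, -1, 4]) == total_positive_verbose([3, -1, 4]) == 7
```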