Instead of becoming obsolete, IDEs like IntelliJ will be repurposed as highly efficient, background services for AI agents. Their fast indexing and incremental rebuild capabilities will be leveraged by AIs, while the human engineer works through a separate agent-native interface.
Senior engineers, whose identities are deeply tied to established workflows, are the most vocal critics of AI in coding. Unlike junior or non-engineers who readily adopt new methods, this group feels their extensive experience is being devalued by AI tools.
Never assume an LLM "understands" you, even after a series of successes. This "hot hand" fallacy leads to over-trusting the agent with critical tasks. The speaker shares a personal story of an LLM locking him out of production by changing passwords, highlighting the danger of mistaking competence for understanding.
The current model of a developer using an AI assistant is like a craftsman with a power tool. The next evolution is "factory farming" code, where orchestrated multi-agent systems manage the entire development lifecycle—planning, implementation, review, and testing—moving it from a craft to an industrial process.
Dismissing AI coding tools after a few hours is a mistake. A study suggests it takes about a year or 2,000 hours of use for an engineer to truly trust an AI assistant. This trust is defined as the ability to accurately predict the AI's output, capabilities, and limitations.
Even within OpenAI, a stark performance gap exists. Engineers who avoid using agentic AI for coding are reportedly 10x less productive across metrics like code volume, commits, and business impact. This creates significant challenges for performance management and HR.
Joel Spolsky's long-standing rule to "never rewrite your code" no longer applies in the AI era. For a growing number of scenarios, it is more efficient to have an LLM regenerate an entire system, such as a unit test suite, from scratch than to incrementally fix or refactor it.
When every engineer generates 30,000-line changes in hours, the integration process breaks. The challenge shifts from resolving text conflicts to re-architecting one AI's entire change on top of another's equally massive change that was merged first. This is the next major unsolved obstacle.
A practical hack to improve AI agent reliability is to avoid built-in tool-calling functions. LLMs have more training data on writing code than on specific tool-use APIs. Prompting the agent to write and execute the code that calls a tool leverages its core strength and produces better outcomes.
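The pattern above can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the tool (`get_user`), the prompt wording, and the canned model reply are all hypothetical stand-ins. The point is the shape of the loop: instead of registering a tool schema with the model's API, the prompt asks the model to answer with a Python code block, which the harness extracts and executes with the tool in scope.

```python
import re

# Hypothetical tool the agent can use: look up a user record.
# Stand-in for a real database or API call.
def get_user(user_id: str) -> dict:
    return {"id": user_id, "name": "Ada"}

FENCE = "`" * 3  # triple backtick, built up to keep this listing readable

# Instead of a tool-calling schema, the prompt simply asks for code.
PROMPT = (
    "You have a Python function get_user(user_id) available.\n"
    "Answer by writing a " + FENCE + "python code block that assigns "
    "your answer to a variable named `result`."
)

def run_agent_code(model_reply: str) -> object:
    """Extract the model's fenced code block and execute it with the
    tool in scope, returning whatever it assigned to `result`."""
    match = re.search(FENCE + r"python\n(.*?)" + FENCE, model_reply, re.DOTALL)
    if match is None:
        raise ValueError("model reply contained no python code block")
    namespace = {"get_user": get_user}
    exec(match.group(1), namespace)  # real harnesses would sandbox this
    return namespace["result"]

# Canned reply standing in for an actual LLM response to PROMPT.
fake_reply = (
    "Here you go:\n"
    + FENCE + "python\n"
    + 'user = get_user("u42")\n'
    + 'result = user["name"]\n'
    + FENCE
)

print(run_agent_code(fake_reply))  # prints: Ada
```

Because the model is emitting ordinary code rather than a provider-specific tool-call payload, the same harness works across models, and the agent can compose multiple tool calls, loops, and error handling in a single reply.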
