An engineering org was effective at using AI to delegate and assess code, but failed to tackle large problems. The missing piece was a dedicated 'planning phase' to scaffold significant work before execution. Without it, the org's AI-driven compounding of learnings was limited to small, incremental gains.

Related Insights

Companies that experiment endlessly with AI but fail to operationalize it face the biggest risk of falling behind. The danger lies not in ignoring AI, but in lacking the change management and workflow redesign needed to move from small-scale tests to full integration.

The most significant productivity gains come from applying AI to every stage of development, including research, planning, product marketing, and status updates. Limiting AI to just code generation misses the larger opportunity to automate the entire engineering process.

AI tools accelerate development but don't improve judgment, creating a risk of building solutions for the wrong problems more quickly. Premortems become more critical to combat this "false confidence of faster output" and force the shift from "can we build it?" to "should we build it?".

High productivity isn't about using AI for everything. It's a disciplined workflow: breaking a task into sub-problems, using an LLM for high-leverage parts like scaffolding and tests, and reserving human focus for the core implementation. This avoids the sunk cost of forcing AI on unsuitable tasks.
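This routing discipline can be sketched as a tiny triage function. Everything here is illustrative: the `SubProblem` type, the `kind` labels, and `assign_owner` are hypothetical names, not part of any described tooling.

```python
from dataclasses import dataclass

@dataclass
class SubProblem:
    """One piece of a larger task, tagged by the kind of work it is."""
    name: str
    kind: str  # "scaffolding", "tests", or "core"

def assign_owner(sub: SubProblem) -> str:
    """Route high-leverage boilerplate (scaffolding, tests) to the LLM;
    reserve the core implementation for human focus."""
    return "llm" if sub.kind in ("scaffolding", "tests") else "human"

# Break the task into sub-problems before touching any tool.
task = [
    SubProblem("generate API client boilerplate", "scaffolding"),
    SubProblem("write unit tests for the parser", "tests"),
    SubProblem("design the caching strategy", "core"),
]

plan = {sub.name: assign_owner(sub) for sub in task}
```

The point of the sketch is that the decomposition happens first, and the "use AI here?" decision is made per sub-problem rather than per project, which is what avoids the sunk cost of forcing AI onto unsuitable work.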

Instead of letting codebases become harder to manage over time, use an AI agent to create a "compounding engineering" system. Codify the learnings from each feature build (successful plans, bug fixes, tests) back into the agent's prompts and tools, making future development faster and easier.
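A minimal sketch of that feedback loop, assuming the agent reads a persistent instructions file at the start of every session. The filename `LEARNINGS.md` and both function names are hypothetical, standing in for whatever prompt or tool store a given agent uses.

```python
from pathlib import Path

LEARNINGS = Path("LEARNINGS.md")  # hypothetical persistent prompt file

def record_learning(category: str, note: str) -> None:
    """After each feature build, append what worked under a category heading."""
    with LEARNINGS.open("a", encoding="utf-8") as f:
        f.write(f"## {category}\n- {note}\n\n")

def load_agent_context() -> str:
    """Prepend all accumulated learnings to the agent's base prompt,
    so every new session starts with the compounded knowledge."""
    history = LEARNINGS.read_text(encoding="utf-8") if LEARNINGS.exists() else ""
    return "You are the project coding agent.\n\n" + history

record_learning("Bug fixes", "Always null-check the pagination cursor.")
context = load_agent_context()
```

The compounding comes from the asymmetry: recording a learning costs one line, but every future session pays nothing extra to benefit from it.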

Organizations fail when they push teams directly into using AI for business outcomes ("architect mode"). Instead, they must first provide dedicated time and resources for unstructured play ("sandbox mode"). This experimentation phase is essential for building the skills and comfort needed to apply AI effectively to strategic goals.

Without a strong foundation in customer problem definition, AI tools simply accelerate bad practices. Teams that habitually jump to solutions without a clear "why" will find themselves building rudderless products at an even faster pace. AI makes foundational product discipline more critical, not less.

Implementing AI tools in a company that lacks a clear product strategy and deep customer knowledge doesn't speed up successful development; it only accelerates aimless activity. True acceleration comes from applying AI to a well-defined direction informed by user understanding.

Many AI projects become expensive experiments because companies treat AI as a trendy add-on to existing systems rather than fundamentally re-evaluating the underlying business processes and organizational readiness. This leads to issues like hallucinations and incomplete tasks, turning potential assets into costly failures.

Teams that become over-reliant on generative AI as a silver bullet are destined to fail. True success comes from teams that remain "maniacally focused" on user and business value, using AI with intent to serve that purpose, not as the purpose itself.