Reviewing numerous diffs provides a natural context to debate practices and influence engineers, without resorting to top-down directives. Good feedback spreads virally as those engineers apply the lessons in their own work and reviews, creating a scalable way to elevate code quality.
Cisco is developing its AI Defense product entirely with AI-written code, with human engineers acting as "spec developers." This fundamentally changes the software development lifecycle, making code review, not code creation, the primary bottleneck, and pointing to a future where engineering productivity is redefined around specification and review rather than typing speed.
Contrary to the belief that AI levels the playing field, senior engineers extract more value from it. They leverage their experience to guide the AI, critically review its output as they would a junior hire's code, and correct its mistakes. This allows them to accelerate their workflow without blindly shipping low-quality code.
To overcome the challenge of reviewing AI-generated code, have different LLMs like Claude and Codex review the code. Then, use a "peer review" prompt that forces the primary LLM to defend its choices or fix the issues raised by its "peers." This adversarial process catches more bugs and improves overall code quality.
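A minimal sketch of that cross-review loop in Python, using the official Anthropic and OpenAI SDKs; the model IDs and the wording of the "peer review" prompt are illustrative placeholders, not the exact prompt described above.

```python
# Cross-review sketch: one LLM writes code, a different model family
# critiques it, then the first must defend or fix each point raised by
# its "peer". Assumes ANTHROPIC_API_KEY and OPENAI_API_KEY are set;
# model IDs below are placeholders.
import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()
openai_client = OpenAI()

def claude_chat(prompt: str) -> str:
    resp = claude.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def peer_review(code: str) -> str:
    # A second model family reviews the code, reducing shared blind spots.
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model id
        messages=[{
            "role": "user",
            "content": f"Review this code for bugs, edge cases, and style:\n\n{code}",
        }],
    )
    return resp.choices[0].message.content

def defend_or_fix(code: str, review: str) -> str:
    # The "peer review" prompt: the original model must justify each
    # choice with a concrete reason or return corrected code; it cannot
    # silently ignore the critique.
    prompt = (
        "A peer reviewer raised the issues below against your code. "
        "For each issue, either defend the original choice with a concrete "
        "reason or return corrected code.\n\n"
        f"CODE:\n{code}\n\nREVIEW:\n{review}"
    )
    return claude_chat(prompt)

code = claude_chat("Write a Python function that merges overlapping intervals.")
review = peer_review(code)
final = defend_or_fix(code, review)
print(final)
```

The adversarial structure is the point: because the reviewer comes from a different model family, its failure modes overlap less with the author's, and the defend-or-fix step forces an explicit resolution of every disagreement.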
A key driver for AI prototyping adoption at Atlassian was design leadership actively using the new tools to build and share their own prototypes in reviews. Seeing leaders, including skip-level managers, demonstrate the tools' value created powerful top-down social proof that encouraged individual contributors to engage.
Mentoring's value increases when done outside your direct org. It becomes a two-way street: you learn about other parts of the business, and you can plant seeds of influence and better engineering practices that can grow and spread organically throughout the company.
Forcing innovations to "scale" via top-down mandates often fails by robbing local teams of ownership. A better approach is to let good ideas "spread." If a solution is truly valuable, other teams will naturally adopt it. This pull-based model ensures change sticks and evolves.
As AI writes most of the code, the highest-leverage human activity will shift from reviewing pull requests to reviewing the AI's research and implementation plans. Collaborating on the plan provides a narrative walkthrough of the upcoming changes, allowing for high-level course correction before hundreds of lines of bad code are ever generated.
Countering the "quality over quantity" mantra in software engineering, Robinhood's internal data reveals a positive correlation between the number of code lines contributed and the quality of that code. This suggests that top-performing engineers excel in both volume and craftsmanship.
Treat code reviews like a system to be automated. Tally every piece of feedback you give in a spreadsheet. Once a specific comment appears a few times, write a custom lint rule to automate that check for everyone. This scales your impact and frees you up for higher-level feedback.
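A minimal sketch of one such automated check, using only the Python standard library's `ast` module. The rule here, flagging bare `print()` calls in favor of logging, is a hypothetical stand-in for whatever comment tops your tally; in a real project you would package this as a flake8/ESLint plugin or a CI step.

```python
# Lint rule sketch: once "use logging, not print" shows up a few times
# in your review tally, encode it as an automated check everyone gets.
import ast
import sys

def find_print_calls(source: str, filename: str) -> list[str]:
    """Return one warning per bare print() call, with file and line number."""
    warnings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            warnings.append(
                f"{filename}:{node.lineno}: use logging instead of print()"
            )
    return warnings

if __name__ == "__main__":
    exit_code = 0
    for path in sys.argv[1:]:
        with open(path) as f:
            for warning in find_print_calls(f.read(), path):
                print(warning)
                exit_code = 1  # non-zero exit fails CI, so the rule enforces itself
    sys.exit(exit_code)
```

Run against a set of files (e.g. `python check_prints.py src/*.py`, a hypothetical invocation) in CI, and the feedback you used to type by hand now applies to every pull request automatically.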
As AI generates more code, the core engineering task evolves from writing to reviewing. Developers will spend significantly more time evaluating AI-generated code for correctness, style, and reliability, fundamentally changing daily workflows and skill requirements.