AI tools are taking over foundational research and drafting, tasks traditionally done by junior associates. This automation disrupts the legal profession's apprenticeship model, raising questions about how future senior lawyers will gain essential hands-on experience and skills.
AI will eliminate the tedious 'hazing' phase of a junior developer's career. Instead of spending years on boilerplate code and simple bug fixes, new engineers will enter an 'officer's school,' immediately focusing on high-level strategic tasks like system architecture and complex problem-solving.
AI is restructuring engineering teams. A future model involves a small group of senior engineers defining processes and reviewing code, while AI and junior engineers handle production. This raises a critical question: how will junior engineers develop into senior architects in this new paradigm?
Although new graduates are AI-native, they often lack the business experience and strategic context to manage AI tools effectively. Companies will instead prioritize senior leaders with high AI literacy who can deliver massive productivity gains, creating a challenging job market for recent graduates and a leaner organizational structure.
By replacing the foundational, detail-oriented work of junior analysts, AI prevents them from gaining the hands-on experience needed to build sophisticated mental models. This will lead to a future shortage of senior leaders with the deep judgment that only comes from being "in the weeds."
AI tools are so novel they neutralize the advantage of long-term experience. A junior designer who is curious and quick to adopt AI workflows can outperform a veteran who is slower to adapt, creating a major career reset based on agency, not tenure.
An informal poll of the podcast's audience shows nearly a quarter of companies have already reduced hiring for entry-level roles. This is a tangible, early indicator that AI-driven efficiency gains are displacing junior talent, not just automating tasks.
While AI "hallucinations" grab headlines, the more systemic risk is lawyers becoming overly reliant on AI and failing to perform due diligence. The LexisNexis CEO predicts an attorney will eventually lose their license not because the AI failed, but because the human failed to properly review the work.
The immediate threat of AI is to entry-level white-collar jobs, not senior roles. Senior staff can now use AI to perform the "grunt work" of research and drafting previously assigned to apprentices. This automates away the bottom rung of the traditional career ladder, making it harder for new talent to enter professions like law, finance, and consulting.
Harvey is building agentic AI for law by modeling it on the human workflow where a senior partner delegates a high-level task to a junior associate. The associate (or AI agent) then breaks it down, researches, drafts, and seeks feedback, with the entire client matter serving as the reinforcement learning environment.
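To make that delegation pattern concrete, here is a minimal Python sketch of the workflow described above: a senior partner hands down a high-level request, the agent plans sub-tasks, researches, drafts, and revises against partner feedback, with the client matter accumulating the full record. This is purely illustrative; the class and function names (Matter, AssociateAgent, run_matter) are invented for this example and are not Harvey's actual architecture or API.

```python
# Hypothetical sketch of the partner-to-associate delegation loop.
# All names here are invented for illustration, not Harvey's implementation.

from dataclasses import dataclass, field


@dataclass
class Matter:
    """A client matter: the request plus the running record of work and feedback."""
    request: str                      # high-level instruction from the senior partner
    research_notes: list = field(default_factory=list)
    drafts: list = field(default_factory=list)


class AssociateAgent:
    """Plays the junior-associate role: decompose, research, draft, revise."""

    def plan(self, matter: Matter) -> list:
        # Break the partner's request into concrete sub-tasks.
        return [f"research precedent for: {matter.request}",
                f"draft memo addressing: {matter.request}"]

    def research(self, subtask: str) -> str:
        # Stand-in for retrieval over case law or internal documents.
        return f"[notes] {subtask}"

    def draft(self, matter: Matter) -> str:
        # Produce a draft grounded in the accumulated research notes.
        return f"Draft memo on '{matter.request}' citing {len(matter.research_notes)} sources."

    def revise(self, draft: str, comment: str) -> str:
        return f"{draft} (revised per: {comment})"


def run_matter(matter: Matter, agent: AssociateAgent, partner_review) -> str:
    """One delegation cycle: plan, research, draft, then iterate on partner feedback."""
    for subtask in agent.plan(matter):
        matter.research_notes.append(agent.research(subtask))
    draft = agent.draft(matter)
    matter.drafts.append(draft)
    # Partner feedback acts as the reward signal; the whole matter is the environment.
    comment = partner_review(draft)
    while comment != "approved":
        draft = agent.revise(draft, comment)
        matter.drafts.append(draft)
        comment = partner_review(draft)
    return draft


if __name__ == "__main__":
    reviews = iter(["tighten the analysis of jurisdiction", "approved"])
    final = run_matter(Matter("motion to dismiss strategy"),
                       AssociateAgent(),
                       partner_review=lambda d: next(reviews))
    print(final)
```

The point of the sketch is the loop structure: the agent, like an associate, iterates against feedback on a single persistent matter rather than answering one-off prompts, which is what makes the matter itself usable as a reinforcement learning environment.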