A top-tier lawyer’s value mirrors that of a distinguished engineer: it's not just their network, but their ability to architect complex transactions. They can foresee subtle failure modes and understand the entire system's structure, a skill derived from experience with non-public processes and data—the valuable 'reasoning traces' AI models lack.

Related Insights

Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.

As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes a model's weights and internal activations to understand how it reasons, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
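
To make "analyzing weights and activations" concrete, here is a minimal sketch of activation probing on a toy PyTorch model. The model, hook, and names are illustrative stand-ins, not any lab's actual interpretability tooling.

```python
import torch
import torch.nn as nn

# Toy two-layer network standing in for one block of a larger model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

# Mechanistic interpretability starts by reading internal activations,
# not just final outputs. A forward hook captures them mid-inference.
captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

model[1].register_forward_hook(save_activation("post_relu"))

logits = model(torch.randn(1, 16))

# Which hidden units fired for this input? Stable, human-legible firing
# patterns are the raw material of a regulator-facing explanation.
active = (captured["post_relu"] > 0).nonzero(as_tuple=True)[1]
print("active hidden units:", active.tolist())
```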

With AI agents automating raw code generation, an engineer's role is evolving beyond pure implementation. To stay valuable, engineers must now cultivate a deep understanding of business context and product taste to know *what* to build and *why*, not just *how*.

AI models lack access to the rich, contextual signals from physical, real-world interactions. Humans will remain essential because their job is to participate in this world, gather unique context from experiences like customer conversations, and feed it into AI systems, which cannot glean it on their own.

Unlike coding with its verifiable unit tests, complex legal work lacks a binary success metric. Harvey addresses this reinforcement learning challenge by treating senior partner feedback and edits as the "reward function," mirroring how quality is judged in the real world. The ultimate verification is long-term success, like a merger avoiding future litigation.
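
As a sketch of what that could look like mechanically, the function below scores a draft by its similarity to the partner-edited version, so fewer edits mean a higher reward. This formulation is an assumption for illustration; the source doesn't specify Harvey's actual reward.

```python
from difflib import SequenceMatcher

def edit_based_reward(model_draft: str, partner_edit: str) -> float:
    """Hypothetical reward: how little the partner had to change.

    An untouched draft scores 1.0; a heavily rewritten one approaches 0.0.
    """
    return SequenceMatcher(None, model_draft, partner_edit).ratio()

draft = "The indemnity obligations survive termination for two years."
edited = "The indemnity obligations survive termination for three (3) years."
print(f"reward: {edit_based_reward(draft, edited):.2f}")
```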

In a group of 100 experts training an AI, the top 10% will often drive the majority of the model's improvement. This creates a power law dynamic where the ability to source and identify this elite talent becomes a key competitive moat for AI labs and data providers.
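
A quick simulation shows how a heavy-tailed distribution of contributions produces this dynamic. The Pareto distribution and its shape parameter are assumptions chosen for illustration, not sourced numbers.

```python
import random

random.seed(0)

# 100 experts whose individual contributions follow a heavy-tailed
# Pareto distribution (shape parameter is illustrative only).
contributions = sorted(
    (random.paretovariate(1.2) for _ in range(100)), reverse=True
)

top10_share = sum(contributions[:10]) / sum(contributions)
print(f"top 10 experts' share of total improvement: {top10_share:.0%}")
```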

Off-the-shelf AI models can only go so far. The true bottleneck for enterprise adoption is "digitizing judgment": capturing the unique, context-specific expertise of a company's own employees. The same document can mean entirely different things at two different companies, which is why internal labeling is required.
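
A minimal sketch of what that internal labeling might capture, attaching the judgment and its rationale to the document; the companies, labels, and schema here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LabeledDoc:
    text: str
    company: str
    label: str       # the judgment being digitized
    rationale: str   # why, in this company's context

# The identical clause carries opposite judgments at two companies;
# labels like these are what off-the-shelf models cannot supply.
clause = "Payment terms: net 90."
examples = [
    LabeledDoc(clause, "AcmeRetail", "standard",
               "Net 90 is our default with large suppliers."),
    LabeledDoc(clause, "BoltStartup", "high_risk",
               "Net 90 would strain our runway; escalate to finance."),
]
```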

The key technical skill for an AI PM is not deep knowledge of model architecture but a higher-level understanding of how to orchestrate AI components. Knowing what AI can do and how systems connect is more valuable than knowing the specifics of fine-tuning or RAG implementation.
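
The contrast is easier to see in code. The sketch below composes retrieval, drafting, and validation purely through their contracts, which is the level an AI PM reasons at; every function is a hypothetical stub.

```python
# Orchestration-level thinking: treat each AI capability as a black box
# with a known contract and focus on how they compose. All stubs below
# are hypothetical placeholders for real components.
def retrieve(query: str) -> list[str]:
    return ["relevant policy excerpt", "prior matter summary"]

def draft(query: str, context: list[str]) -> str:
    return f"Draft answer to {query!r} grounded in {len(context)} sources."

def validate(answer: str) -> bool:
    return answer.startswith("Draft answer")  # stand-in for a real check

def pipeline(query: str) -> str:
    context = retrieve(query)
    answer = draft(query, context)
    if not validate(answer):
        raise ValueError("failed validation; route to human review")
    return answer

print(pipeline("Can we terminate the lease early?"))
```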

The most significant recent AI advance is models' ability to use chain-of-thought reasoning, not just retrieve data. However, most business users are unaware of this 'deep research' capability and continue using AI as a simple search tool, missing its transformative potential for complex problem-solving.
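
The gap is visible in the prompts themselves; the `ask` helper below is a hypothetical stand-in for any chat-model API.

```python
def ask(prompt: str) -> str:
    # Hypothetical stand-in for a chat-model call; swap in a real client.
    return f"[model response to: {prompt[:40]}...]"

# How most business users prompt: AI as a search box.
lookup = ask("What is the statute of limitations for breach of contract?")

# Chain-of-thought / deep-research use: ask the model to reason through
# a multi-step problem rather than retrieve a single fact.
analysis = ask(
    "Our contract was breached in 2019 but we discovered it in 2024. "
    "Step by step: which limitations rules apply, does a discovery rule "
    "help, and what facts would we need to toll the clock?"
)
print(lookup)
print(analysis)
```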

As AI makes it incredibly easy to build products, the market will be flooded with options. The critical, differentiating skill will no longer be technical execution but human judgment: deciding *what* should exist, which features matter, and the right distribution strategy. Synthesizing these elements is where future value lies.