Venture capitalist Keith Rabois observes a new behavior: founders are using ChatGPT for initial legal research and then presenting those findings to challenge or verify the advice given by their expensive law firms, shifting the client-provider power dynamic.

Related Insights

To ensure accuracy in its legal AI, LexisNexis unexpectedly hired a large number of lawyers, not just data scientists. These legal experts are crucial for reviewing AI output, identifying errors, and training the models, highlighting the essential role of human domain expertise in specialized AI.

To save time with busy clients, create a "synthetic" version of each client as a GPT trained on their public statements and past feedback. This lets teams get work 80-90% of the way to alignment internally, reserving actual human interaction for high-level strategy.

Contrary to its reputation for slow tech adoption, the legal industry is rapidly embracing advanced AI agents. The sheer volume of work and potential for efficiency gains are driving swift innovation, with firms even hiring lawyers specifically to help with AI product development.

Instead of walking into a pitch unprepared, Reid Hoffman advises founders to use large language models to pre-emptively critique their business idea. Prompting an AI to act as a skeptical VC helps founders anticipate tough questions and strengthen their narrative before meeting real investors.
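The pattern Hoffman describes is straightforward to set up: put the skeptical-VC persona in the system message and the pitch in the user message. A minimal sketch, where the function name and prompt wording are illustrative and the message format follows the common chat-API convention:

```python
def build_skeptical_vc_messages(pitch: str) -> list[dict]:
    """Assemble a chat-style message list that asks a model to
    critique a startup pitch as a skeptical venture capitalist."""
    system_prompt = (
        "You are a skeptical venture capitalist reviewing a pitch. "
        "Identify the weakest claims about market size, competition, "
        "defensibility, and unit economics, then ask the five hardest "
        "questions you would raise in a partner meeting."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": pitch},
    ]

# The resulting list can be passed to any chat-completion endpoint.
messages = build_skeptical_vc_messages("We sell AI-drafted NDAs to startups.")
```

Keeping the persona in the system message, rather than mixed into the pitch text, makes it easy to reuse the same critique across revised drafts.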

Current AI tools are empowering laypeople to generate a flood of low-quality legal filings. This "sludge" overwhelms the courts and creates more work for skilled attorneys who must respond to the influx of meritless litigation, ironically boosting demand for the very profession AI is meant to disrupt.

VC Keith Rabois highlights a core conflict: law firms billing by the hour are disincentivized from adopting AI that makes associates more efficient, as it reduces revenue. This explains why corporate legal departments are faster adopters—their goal is to cut costs.

While AI "hallucinations" grab headlines, the more systemic risk is lawyers becoming overly reliant on AI and failing to perform due diligence. The LexisNexis CEO predicts an attorney will eventually lose their license not because the AI failed, but because the human failed to properly review the work.

A leader's most valuable use of AI isn't for automation, but as a constant "thought partner." By articulating complex business, legal, or financial decisions to an AI and asking it to pose clarifying questions, leaders can refine their own thinking and arrive at more informed conclusions, much like talking a problem through out loud.
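The thought-partner pattern boils down to one instruction: have the model ask questions before it gives any advice. A minimal sketch, assuming the same chat-message format as above (the prompt wording is illustrative, not a prescribed template):

```python
def build_thought_partner_messages(decision: str) -> list[dict]:
    """Frame a decision so the model interrogates it before advising."""
    system_prompt = (
        "Act as a thought partner, not an advisor. Do not recommend a "
        "course of action yet. Instead, ask clarifying questions that "
        "surface unstated assumptions, missing constraints, and "
        "second-order effects of the decision described below."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": decision},
    ]

messages = build_thought_partner_messages(
    "Should we switch our firm from hourly billing to flat fees?"
)
```

The "do not recommend yet" constraint is the key design choice: it forces the exchange into the Socratic back-and-forth the insight describes, rather than an immediate answer.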

Craig Hewitt argues ChatGPT is a consumer product. For serious business tasks, agentic AI tools like Manus (built on Claude) are superior, offering web browsing, data aggregation, and code generation that go far beyond a simple chat interface.

Harvey is building agentic AI for law by modeling it on the human workflow where a senior partner delegates a high-level task to a junior associate. The associate (or AI agent) then breaks it down, researches, drafts, and seeks feedback, with the entire client matter serving as the reinforcement learning environment.