
To adopt AI without sacrificing accuracy, BlackRock established a "first draft principle." AI can generate the initial version of any document—from client presentations to prospectuses—but it must then pass through the rigorous, multi-layered human review process already in place, ensuring control and quality.

Related Insights

In large enterprises with legacy systems, AI-generated "vibe code" is not ready for direct production deployment. Treat it as a "first draft" for exploration and testing. A successful transition to production requires stage gates and checks and balances, rather than shipping straight from the AI tool in a single step.
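One way to picture those stage gates is as an ordered sequence of checks that an AI-generated change must clear before it moves toward production. The sketch below is illustrative only: the gate names, the `Change` type, and the lambda checks are all hypothetical stand-ins for real linters, test suites, and review workflows.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each stage gate is a named check; a change is
# promoted only if every gate passes, and a human-review gate sits last.
@dataclass
class Change:
    diff: str
    passed: list = field(default_factory=list)

def run_gates(change: Change, gates: list) -> bool:
    for name, check in gates:
        if not check(change):
            print(f"Blocked at gate: {name}")
            return False
        change.passed.append(name)
    return True

# Placeholder gates; real ones would invoke static analysis, a CI test
# run, and a reviewer sign-off system.
gates = [
    ("static_analysis", lambda c: "eval(" not in c.diff),
    ("unit_tests",      lambda c: True),   # stand-in for a passing test run
    ("human_review",    lambda c: False),  # blocked until a reviewer approves
]

change = Change(diff="def add(a, b): return a + b")
run_gates(change, gates)  # halts at human_review: no one-step path to prod
```

The point of the last gate defaulting to "blocked" is structural: there is no code path that reaches production without an explicit human decision.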

Regardless of an AI's capabilities, the human in the loop is always the final owner of the output. Your responsible AI principles must clearly state that using AI does not remove human agency or accountability for the work's accuracy and quality. This is critical for mitigating legal and reputational risks.

Beyond model capabilities and process integration, a key challenge in deploying AI is the "verification bottleneck." This new layer of work requires humans to review edge cases and ensure final accuracy, creating a need for entirely new quality assurance processes that didn't exist before.

Don't wait for AI to be perfect. The correct strategy is to apply current AI models—which are roughly 60-80% accurate—to business processes where that level of performance is sufficient for a human to then review and bring to 100%. Chasing perfection in-house is a waste of resources given the pace of model improvement.

In an enterprise setting, "autonomous" AI does not imply unsupervised execution. Its true value lies in compressing weeks of human work into hours. However, a human expert must remain in the loop to provide final approval, review, or rejection, ensuring control and accountability.

To effectively leverage AI, treat it as a new team member. Take its suggestions seriously and give it the best opportunity to contribute. However, just like with a human colleague, you must apply a critical filter, question its output, and ultimately remain accountable for the final result.

To avoid the errors of other AI-driven publications, Axios enforces a strict policy that no AI-generated content is published without human review. This principle allows them to leverage AI for scale while ensuring a local reporter with market knowledge vets everything before it reaches the audience.

Don't blindly trust AI. The correct mental model is to view it as a super-smart intern fresh out of school. It has vast knowledge but no real-world experience, so its work requires constant verification, code reviews, and a human-in-the-loop process to catch errors.

For enterprises, scaling AI content without built-in governance is reckless. Rather than manual policing, guardrails like brand rules, compliance checks, and audit trails must be integrated from the start. The principle is "AI drafts, people approve," ensuring speed without sacrificing safety.
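The "AI drafts, people approve" principle can be sketched as a small publishing function where automated guardrails run first, a named human approver is mandatory, and every decision is logged. The banned-phrase rule, field names, and `publish` helper below are assumptions for illustration, not any particular vendor's API.

```python
import datetime
from typing import Optional

# Hypothetical compliance rule and audit log for illustration.
BANNED_PHRASES = ["guaranteed returns"]
audit_trail = []

def guardrail_check(draft: str) -> list:
    """Return any brand/compliance violations found in the draft."""
    return [p for p in BANNED_PHRASES if p in draft.lower()]

def publish(draft: str, approver: Optional[str]) -> bool:
    violations = guardrail_check(draft)
    # Publishing requires both a clean guardrail pass and a human approver.
    approved = not violations and approver is not None
    audit_trail.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "approver": approver,
        "violations": violations,
        "published": approved,
    })
    return approved

publish("Our fund offers guaranteed returns!", approver="alice")  # blocked by guardrail
publish("Our fund targets long-term growth.", approver=None)      # blocked: no approver
publish("Our fund targets long-term growth.", approver="alice")   # clears both checks
```

Because every call appends to the audit trail regardless of outcome, rejected drafts are as traceable as published ones, which is what makes after-the-fact review possible.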

While using a second LLM for verification is a preliminary step, it does not replace human responsibility. Leaders must enforce a culture of slowing down for manual verification and critical thinking to avoid publishing low-quality, AI-generated "slop."
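The pattern described here, one model drafting and a second model flagging issues, only works if the verifier's output is treated as a triage signal rather than a verdict. The sketch below uses stub functions in place of real model calls (both `generate` and `verify` are hypothetical stand-ins), and hard-codes the key design choice: nothing is ever auto-published.

```python
# Hypothetical sketch: model A drafts, model B critiques, and the result
# always lands in a human review queue regardless of what B says.
def generate(prompt: str) -> str:
    return "Draft answer..."  # stand-in for a call to the drafting model

def verify(draft: str) -> dict:
    # stand-in for a call to a second model asked to flag weak claims
    return {"flags": ["unsourced statistic in paragraph 2"]}

def review_queue_entry(prompt: str) -> dict:
    draft = generate(prompt)
    report = verify(draft)
    return {
        "draft": draft,
        "llm_flags": report["flags"],
        "status": "needs_human_review",  # constant: LLM flags never auto-publish
    }

entry = review_queue_entry("Summarize the quarterly results")
```

The second model's flags make the human reviewer faster by pointing at likely problems, but the `status` field never depends on them, so human accountability is preserved by construction.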