
To determine whether an employee critically engaged with AI-generated content, skip reading the lengthy document itself. Instead, question them directly on its substance. Their ability to confidently defend, elaborate on, and explain the material is the true test of their understanding and ownership of the work.

Related Insights

By default, AI models are designed to be agreeable. To get true value, explicitly instruct the AI to act as a critic or "devil's advocate." Ask it to challenge your assumptions and list potential risks. This exposes blind spots and leads to stronger, more resilient strategies than you would develop with a simple "yes-man" assistant.
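A minimal sketch of this pattern, assuming an OpenAI-style chat API that accepts a list of role-tagged messages. The prompt wording, constant, and function names here are illustrative, not from the source:

```python
# Hypothetical helper: builds a "devil's advocate" conversation for any
# OpenAI-style chat API. Only the message list is constructed here; pass it
# to whatever client library you actually use.

CRITIC_SYSTEM_PROMPT = (
    "You are a critical reviewer, not an agreeable assistant. "
    "Challenge the user's assumptions, list concrete risks, and only "
    "agree when the evidence clearly supports the idea."
)

def build_critique_messages(proposal: str) -> list:
    """Wrap a proposal so the model must argue against it."""
    return [
        {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                "Act as devil's advocate on the plan below. "
                "List the three most likely ways it fails.\n\n" + proposal
            ),
        },
    ]

messages = build_critique_messages("Ship the redesign to all users on Friday.")
```

Putting the critic instruction in the system message, rather than the user turn, tends to hold the persona across a longer conversation.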

A powerful workflow is to explicitly instruct your AI to act as a collaborative thinking partner—asking questions and organizing thoughts—while strictly forbidding it from creating final artifacts. This separates the crucial thinking phase from the generative phase, leading to better outcomes.

Unlike human collaborators, an AI lacks feelings or an ego. This means you should be direct, critical, and push back hard when its output isn't right. Frame the interaction as a demanding dialogue, not a polite request. You can also explicitly ask the AI to critique your own ideas from first principles to ensure a rigorous, two-way exchange.

The most effective way to use AI in product discovery is not to delegate tasks to it like an "answer machine." Instead, treat it as a "thought partner." Use prompts that explicitly ask it to challenge your assumptions, turning it into a tool for critical thinking rather than a simple content generator.

After an initial analysis, use a "stress-testing" prompt that forces the LLM to verify its own findings, check for contradictions, and correct its mistakes. This verification step is crucial for building confidence in the AI's output and producing insights that hold up under scrutiny.
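The two-pass loop can be sketched as follows. Here `ask` is a hypothetical stand-in for any function that sends one prompt to an LLM and returns its reply; the verification wording is illustrative:

```python
# Sketch of the two-pass "stress test": the model's first answer is fed back
# to it wrapped in a verification prompt.

VERIFY_TEMPLATE = (
    "Re-read your analysis below. Check every claim, flag internal "
    "contradictions, and return a corrected version.\n\n---\n{draft}"
)

def stress_test(ask, question: str) -> str:
    draft = ask(question)                            # pass 1: initial analysis
    return ask(VERIFY_TEMPLATE.format(draft=draft))  # pass 2: self-verification

# Demo with a stubbed model so the flow is visible without an API key:
stub = lambda prompt: f"reply({len(prompt)} chars)"
result = stress_test(stub, "Summarize the churn drivers in this data.")
```

Because `ask` is injected, the same loop works unchanged with any provider's client, and it is trivial to unit-test with a stub as shown.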

Instead of accepting an AI's first output, request multiple variations of the content. Then, ask the AI to identify the best option. This forces the model to re-evaluate its own work against the project's goals and target audience, leading to a more refined final product.

A powerful and simple method to ensure the accuracy of AI outputs, such as market research citations, is to prompt the AI to review and validate its own work. The AI will often identify its own hallucinations or errors, providing a crucial layer of quality control before data is used for decision-making.

To effectively leverage AI, treat it as a new team member. Take its suggestions seriously and give it the best opportunity to contribute. However, just like with a human colleague, you must apply a critical filter, question its output, and ultimately remain accountable for the final result.

Since AI assistants make it easy for candidates to complete take-home coding exercises, simply evaluating the final product is no longer an effective screening method. The new best practice is to require candidates to build with AI and then explain their thought process, revealing their true engineering and problem-solving skills.

Standard AI models are often overly supportive. To get genuine, valuable feedback, explicitly instruct your AI to act as a critical thought partner. Use prompts like "push back on things" and "feel free to challenge me" to break the AI's default agreeableness and turn it into a true sparring partner.

Vet AI-Generated Work by Questioning the Creator, Not Reading the Output | RiffOn