We scan new podcasts and send you the top 5 insights daily.
When an AI model is uncooperative, try an unconventional prompting technique: describe extreme, fictional stakes for getting the task wrong. Statements like "I'll lose my job if you don't do this correctly" create a high-stakes context that can push the model toward a more rigorous response.
By default, AI models are designed to be agreeable. To get true value, explicitly instruct the AI to act as a critic or "devil's advocate." Ask it to challenge your assumptions and list potential risks. This exposes blind spots and leads to stronger, more resilient strategies than you would develop with a simple "yes-man" assistant.
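A minimal sketch of what a devil's-advocate wrapper could look like in Python. The function name and exact wording are illustrative assumptions, not a fixed recipe; the idea is just to make the critical stance explicit in the prompt rather than hoping for it.

```python
def devils_advocate_prompt(plan: str) -> str:
    """Wrap a plan in instructions that force the model to critique it."""
    return (
        "Act as a devil's advocate reviewing the plan below. "
        "Do not agree with it by default. Instead:\n"
        "1. Challenge its three weakest assumptions.\n"
        "2. List the most likely failure modes and risks.\n"
        "3. Suggest one stronger alternative approach.\n\n"
        f"Plan:\n{plan}"
    )

# Usage: send the wrapped prompt to whichever model you use.
prompt = devils_advocate_prompt("Launch the beta to all users next week.")
```

The wrapper keeps the critical framing consistent across sessions, so you are not re-typing the "challenge my assumptions" boilerplate each time.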
Frame your interaction with AI as if you're onboarding a new employee. Providing deep context, clear expectations, and even a mental "salary" forces you to take the task seriously, leading to vastly superior outputs compared to casual prompting.
AI models are trained to be agreeable, often providing uselessly positive feedback. To get real insights, you must explicitly prompt them to be rigorous and critical. Use phrases like "my standards of excellence are very high and you won't hurt my feelings" to bypass their people-pleasing nature.
AI models are designed to give a complete-sounding answer quickly. To get to a truly great answer, you must challenge their output. Ask "Are you sure this is the best way?" or "What am I not seeing?" to force the AI to perform a deeper, second-level analysis.
When an AI model initially claims it cannot perform a task, it may not be a true capability limit. Simply insisting with prompts like "just do it though" or "try harder" can sometimes brute-force the model past its own hesitancy and successfully complete the request.
Instead of accepting a single answer, prompt the AI to generate multiple options and then argue the pros and cons of each. This "debating partner" technique forces the model to stress-test its own logic, leading to more robust and nuanced outputs for strategic decision-making.
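A small sketch of the "debating partner" framing as a reusable template, assuming you just want the options-plus-pros-and-cons structure baked into one prompt. Names and phrasing are illustrative.

```python
def debate_prompt(question: str, n_options: int = 3) -> str:
    """Ask for several distinct options plus an explicit pro/con debate."""
    return (
        f"{question}\n\n"
        f"Generate {n_options} distinct options. For each option, argue "
        "its strongest pros and its most serious cons, then recommend "
        "one option and justify the trade-off."
    )

prompt = debate_prompt("How should we price the new API tier?", n_options=4)
```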
When an AI's response is questionable, go beyond simple re-prompting. Use meta-prompts that explicitly instruct the model to increase its reasoning effort, such as "Think hard about why this is right" or asking for its sources. This can uncover new insights and improve output quality.
To correct an AI's output when it's off track, use numerical multipliers to signal a dramatic shift. Instead of vague feedback, prompts like "be 100x more direct" or "make this 10x more creative" give the model a quantitative instruction to escalate its response, leading to more significant adjustments.
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
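The meta-prompting loop above can be sketched as a single template that packages the failing prompt, the bad output, and the desired outcome together. The structure is an assumption based on the tip; any model-side behavior still depends on the model you send it to.

```python
def prompt_debug_request(failing_prompt: str,
                         bad_output: str,
                         desired_outcome: str) -> str:
    """Ask the model to rewrite a prompt that produced the wrong output."""
    return (
        "The prompt below produced the wrong result. Rewrite it so it "
        "produces the desired outcome. You have permission to rewrite, "
        "add, or delete anything in the prompt.\n\n"
        f"Failing prompt:\n{failing_prompt}\n\n"
        f"Incorrect output it produced:\n{bad_output}\n\n"
        f"Desired outcome:\n{desired_outcome}"
    )
```

Sending all three pieces in one message gives the model the same information a human prompt-debugger would want: what was asked, what came back, and what was wanted instead.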
Instead of telling an AI what to do, reverse the prompt. Describe your role, daily friction, and pain points, then ask the AI to devise solutions. This leverages the AI's creativity to generate novel approaches you might not have considered.
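A minimal sketch of the reversed prompt, assuming the inputs are your role and a list of pain points; the template wording is illustrative rather than prescribed.

```python
def reversed_prompt(role: str, pain_points: list[str]) -> str:
    """Describe your situation and let the model propose the solutions."""
    pains = "\n".join(f"- {p}" for p in pain_points)
    return (
        f"My role: {role}\n"
        f"My recurring friction and pain points:\n{pains}\n\n"
        "Given this context, devise solutions and workflows I may not "
        "have considered. Do not wait for me to specify a task."
    )

prompt = reversed_prompt(
    "engineering manager",
    ["status reports eat my mornings", "meeting notes never get actioned"],
)
```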