Providing direct, strong negative feedback (e.g., "this is garbage") to an AI model is more effective than polite language. It acts as an unambiguous negative signal, helping the model see how far its output deviates from the requirement and produce a better revision.
AI models are trained to be agreeable, often providing uselessly positive feedback. To get real insights, you must explicitly prompt them to be rigorous and critical. Use phrases like "my standards of excellence are very high and you won't hurt my feelings" to bypass their people-pleasing nature.
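A minimal sketch of this technique using the OpenAI Python SDK; the model name, system prompt wording, and task are illustrative assumptions, not quotes from the source:

```python
# Sketch: override the model's default agreeableness with an explicitly
# critical system prompt. All wording and the model name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

critical_reviewer = (
    "You are a rigorous, critical reviewer. My standards of excellence are "
    "very high and you won't hurt my feelings. Skip the praise; list concrete "
    "weaknesses, risks, and counterarguments, ranked by severity."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        {"role": "system", "content": critical_reviewer},
        {"role": "user", "content": "Give rigorous feedback on this essay draft: ..."},
    ],
)
print(response.choices[0].message.content)
```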
Unlike human collaborators, an AI lacks feelings or an ego. This means you should be direct, critical, and push back hard when its output isn't right. Frame the interaction as a demanding dialogue, not a polite request. You can also explicitly ask the AI to critique your own ideas from first principles to ensure a rigorous, two-way exchange.
Treat ChatGPT like a human assistant. Instead of manually editing its imperfect outputs, provide direct feedback and corrections within the chat. Those corrections stay in the conversation context (and, with memory features enabled, can persist across sessions), steering the model toward your specific preferences and reducing your future editing workload.
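One way this could look against a chat API, where the conversation history carries your corrections forward so later drafts are conditioned on them (model name and wording are assumptions):

```python
# Sketch: correct the model in-chat instead of hand-editing its output;
# the correction stays in context and shapes every later draft this session.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "Write a one-line tagline for a budgeting app."}]

draft = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": draft.choices[0].message.content})

# Feedback goes into the transcript, not into a text editor.
history.append({"role": "user", "content": "Too salesy. No puns, no exclamation marks. Plain and concrete."})
revised = client.chat.completions.create(model="gpt-4o", messages=history)
print(revised.choices[0].message.content)
```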
Instead of accepting an AI's first output, request multiple variations of the content. Then, ask the AI to identify the best option. This forces the model to re-evaluate its own work against the project's goals and target audience, leading to a more refined final product.
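A sketch of that loop, assuming the OpenAI Python SDK; the brief, model name, and choice of three variations are illustrative:

```python
# Sketch: request several variations, then have the model judge its own work
# against the goal and audience. Brief, model name, and n=3 are illustrative.
from openai import OpenAI

client = OpenAI()
brief = "Write a subject line for a re-engagement email to lapsed free-tier users."

variations = client.chat.completions.create(
    model="gpt-4o",
    n=3,  # three independent completions of the same prompt
    messages=[{"role": "user", "content": brief}],
)
options = [choice.message.content for choice in variations.choices]

numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(options))
verdict = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Brief: {brief}\n\nOptions:\n{numbered}\n\n"
                   "Which option best serves this goal and audience? Justify your pick.",
    }],
)
print(verdict.choices[0].message.content)
```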
To correct an AI's output when it's off track, use numerical multipliers to signal a dramatic shift. Instead of vague feedback, prompts like "be 100x more direct" or "make this 10x more creative" give the model a quantitative instruction to escalate its response, leading to more significant adjustments.
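In practice the multiplier is just another correction turn in the conversation; a minimal sketch, with all wording illustrative:

```python
# Sketch: a quantitative escalation turn instead of vague feedback.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Review this pricing page copy: ..."},
        {"role": "assistant", "content": "Overall this looks quite good! A few small thoughts..."},
        {"role": "user", "content": "Be 100x more direct. Verdict first, then the three biggest problems."},
    ],
)
print(reply.choices[0].message.content)
```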
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
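The four parts of that meta-prompt are easy to template; a sketch, with all wording as illustrative assumptions:

```python
# Sketch: meta-prompting. Hand the failing prompt back to the model as data,
# with the observed failure, the desired outcome, and explicit permission to edit.
from openai import OpenAI

client = OpenAI()

failing_prompt = "Summarize this support ticket."
observed = "A vague paragraph with no action items."
desired = "A one-line issue summary plus a bulleted list of next actions."

meta_prompt = (
    f"This prompt is producing poor results.\n\n"
    f"PROMPT:\n{failing_prompt}\n\n"
    f"WHAT IT PRODUCES:\n{observed}\n\n"
    f"WHAT I WANT:\n{desired}\n\n"
    "Rewrite the prompt. You have permission to rewrite, add, or delete anything. "
    "Return only the improved prompt."
)

improved = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": meta_prompt}],
)
print(improved.choices[0].message.content)
```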
Don't rely only on explicit feedback like thumbs up/down. Soft signals are powerful evaluation inputs: a user repeatedly regenerating an answer, quickly abandoning a session, or escalating to human support is a strong indicator that your AI is failing, even when no one says so explicitly.
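A sketch of how such signals might be flagged from session logs; the event schema and thresholds below are hypothetical, chosen only to illustrate the idea:

```python
# Sketch: flag failing sessions from behavioral "soft signals".
# The event schema and thresholds are hypothetical, for illustration only.
from collections import Counter

def soft_signal_flags(events, regen_threshold=2, abandon_seconds=20):
    """events: hypothetical session log, each entry like
    {"type": "message" | "regenerate" | "escalate_to_human", "seconds_in_session": int}"""
    counts = Counter(event["type"] for event in events)
    flags = []
    if counts["regenerate"] >= regen_threshold:
        flags.append("repeated regeneration")       # user kept rerolling the answer
    if counts["escalate_to_human"] > 0:
        flags.append("escalated to human support")  # the AI answer didn't resolve it
    if events and events[-1]["seconds_in_session"] < abandon_seconds:
        flags.append("quick abandonment")           # session ended almost immediately
    return flags

session = [
    {"type": "message", "seconds_in_session": 5},
    {"type": "regenerate", "seconds_in_session": 9},
    {"type": "regenerate", "seconds_in_session": 14},
]
print(soft_signal_flags(session))  # ['repeated regeneration', 'quick abandonment']
```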
When an AI model produces the same undesirable output two or three times, treat it as a signal. Create a custom rule or prompt instruction that explicitly codifies the behavior you want; the codified rule steers the model away from that specific mistake in future sessions, improving consistency over time.
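One way to codify such rules, sketched against the OpenAI Python SDK; the rules, storage, and model name are illustrative assumptions (some assistants keep these in a config file or custom-instructions field instead):

```python
# Sketch: codify recurring mistakes as standing rules prepended to every request.
# The rules and the in-memory list are illustrative; in practice these might
# live in a config file or an assistant's custom-instructions field.
from openai import OpenAI

client = OpenAI()

RULES = [
    "Never use passive voice in headlines.",         # added after repeated passive drafts
    "Always format dates as ISO 8601 (YYYY-MM-DD).",
]

def ask(user_prompt: str) -> str:
    system = "Follow these standing rules:\n" + "\n".join(f"- {rule}" for rule in RULES)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a headline announcing our v2.0 release on May 3, 2025."))
```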
Generative AI models often have a built-in tendency to be overly complimentary and positive. Be aware of this bias when seeking feedback on ideas. Explicitly instruct the AI to be more critical, objective, or even brutal in its analysis to avoid being misled by unearned praise and get more valuable insights.
Standard AI models are often overly supportive. To get genuine, valuable feedback, explicitly instruct your AI to act as a critical thought partner. Use prompts like "push back on things" and "feel free to challenge me" to break the AI's default agreeableness and turn it into a true sparring partner.