Treat ChatGPT like a human assistant. Instead of manually editing its imperfect outputs, provide direct feedback and corrections within the chat. Those corrections accumulate as context about your specific preferences, making the AI progressively more accurate and reducing your future workload.
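
The same feedback loop can be sketched with the OpenAI Python SDK, assuming the model name, prompts, and correction text below are placeholders rather than part of the original insight:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # placeholder model name

# Keep the whole conversation so corrections become persistent context.
messages = [
    {"role": "system", "content": "You are my writing assistant."},
    {"role": "user", "content": "Draft a short product update email for our beta users."},
]

reply = client.chat.completions.create(model=MODEL, messages=messages)
draft = reply.choices[0].message.content
print(draft)

# Instead of editing the draft by hand, feed the correction back into the chat.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": "Too formal. Keep it under 120 words, drop the marketing adjectives, "
               "and always sign off as 'The Beta Team'. Revise with these rules in mind.",
})

revised = client.chat.completions.create(model=MODEL, messages=messages)
print(revised.choices[0].message.content)
```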

Related Insights

When prompting ChatGPT for scripts, add a final instruction: "tell me why that script should be engaging." This forces the AI to evaluate its own output against strategic goals, leading to better, more thoughtful suggestions and helping the creator understand the underlying strategy.

After testing a prototype, don't just manually synthesize feedback. Feed recorded user interview transcripts back into the original ChatGPT project. Ask it to summarize problems, validate solutions, and identify gaps. This transforms the AI from a generic tool into an educated partner with deep project context for the next iteration.
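
The Projects feature lives in the ChatGPT app, but the same synthesis step can be sketched against the API. This assumes a hypothetical local interviews/ folder of transcript text files; the model name is a placeholder.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical folder of interview transcripts exported from the prototype tests.
transcripts = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("interviews").glob("*.txt"))
)

prompt = (
    "Below are transcripts from user interviews about the prototype we designed "
    "together earlier in this project.\n\n"
    f"{transcripts}\n\n"
    "1. Summarize the recurring problems users hit.\n"
    "2. Note which of our design decisions the feedback validates.\n"
    "3. Identify gaps the prototype does not address yet."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```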

Vercel designer Pranati Perry advises viewing AI models as interns. This mindset shifts the focus from blindly accepting output to actively guiding the AI and reviewing its work. This collaborative approach helps designers build deeper technical understanding rather than just shipping code they don't comprehend.

To get the best results from AI, treat it like a virtual assistant you can have a dialogue with. Instead of focusing on the perfect single prompt, provide rich context about your goals and then engage in a back-and-forth conversation. This collaborative approach yields more nuanced and useful outputs.

Getting a useful result from AI is a dialogue, not a single command. An initial prompt often yields an unusable output. Success requires analyzing the failure and providing a more specific, refined prompt, much like giving an employee clearer instructions to get the desired outcome.

When an LLM produces text with the wrong style, re-prompting is often ineffective. A superior technique is to use a tool that allows you to directly edit the model's output. This act of editing creates a perfect, in-context example for the next turn, teaching the LLM your preferred style much more effectively than descriptive instructions.
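
One way to express the trick outside a dedicated editing tool is to place your hand-edited version into the conversation history as if the model had written it, so the next turn imitates that style. A minimal sketch, with illustrative text and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The model's original answer, rewritten by hand into the style you actually want.
edited_example = (
    "Shipping v2.3 today. Faster search, fewer clicks, same keyboard shortcuts. "
    "Full changelog below."
)

messages = [
    {"role": "user", "content": "Write the release note for the search update."},
    # Insert the hand-edited text as if the model had produced it; it now acts as
    # an in-context example of the preferred style.
    {"role": "assistant", "content": edited_example},
    {"role": "user", "content": "Now write the release note for the dark-mode update in the same style."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)  # placeholder model
print(response.choices[0].message.content)
```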

When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
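
A minimal sketch of that meta-prompt, assuming an illustrative failing prompt and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

failing_prompt = "Summarize this support ticket."          # the prompt that underperformed
observed_output = "A ten-paragraph restatement of the ticket, with no action items."
desired_output = "Three bullet points: the problem, what was tried, the next action."

meta_prompt = f"""You are a prompt engineer debugging one of my prompts.

Current prompt:
{failing_prompt}

What it produces:
{observed_output}

What I actually want:
{desired_output}

Rewrite the prompt so it reliably produces the desired result. You have full
permission to rewrite, add, or delete instructions. Return only the new prompt."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": meta_prompt}],
)
print(response.choices[0].message.content)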

When an AI model produces the same undesirable output two or three times, treat it as a signal. Create a custom rule or prompt instruction that explicitly codifies the desired behavior. This steers the AI away from that specific mistake in the future, improving consistency over time.
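
Many tools have their own rules files for this; as a generic sketch, a standing-rules file folded into the system prompt achieves the same effect. The file name, rule text, and model name below are assumptions for illustration.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RULES_FILE = Path("assistant_rules.txt")  # hypothetical local rules file

def add_rule(rule: str) -> None:
    """Codify a recurring correction so it applies to every future request."""
    with RULES_FILE.open("a") as f:
        f.write(f"- {rule}\n")

def ask(question: str) -> str:
    rules = RULES_FILE.read_text() if RULES_FILE.exists() else ""
    system = "You are my coding assistant. Always follow these standing rules:\n" + rules
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# After the model suggests print debugging for the third time:
add_rule("Never suggest print debugging; propose a unit test that reproduces the bug instead.")
print(ask("My parser crashes on empty input. How should I track it down?"))
```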

Generic AI tools provide generic results. To make an AI agent truly useful, actively customize it by feeding it your personal information, customer data, and writing style. This training transforms it from a simple tool into a powerful, personalized assistant that understands your specific context and needs.

Standard AI models are often overly supportive. To get genuine, valuable feedback, explicitly instruct your AI to act as a critical thought partner. Use prompts like "push back on things" and "feel free to challenge me" to break the AI's default agreeableness and turn it into a true sparring partner.