To simulate interview coaching, feed your written answers to case study questions into an LLM. Prompt it to score you against a specific rubric (structured thinking, user focus, etc.), quote your exact weak phrases, explain why each one falls short, and suggest a stronger alternative. The result is structured, actionable feedback rather than generic encouragement.
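A minimal sketch of this loop using the OpenAI Python SDK; the model name, rubric dimensions, and file name are illustrative assumptions, not a fixed recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC_PROMPT = """You are a strict case-interview coach. Score the candidate's
answer from 1-5 on each dimension: structured thinking, user focus,
prioritization, and communication. For every score below 5, quote the exact
weak phrase, explain why it is weak, and suggest a stronger rewrite."""

# Hypothetical file holding your written case-study answer.
answer = open("my_case_answer.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[
        {"role": "system", "content": RUBRIC_PROMPT},
        {"role": "user", "content": answer},
    ],
)
print(response.choices[0].message.content)
```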
When hiring, top firms like McKinsey value a candidate's ability to articulate a deliberate, logical problem-solving process as much as their past successes. Having a structured method shows you can reliably tackle novel challenges, whereas simply pointing to past wins might suggest luck or context-specific success.
After testing a prototype, don't just synthesize feedback manually. Feed recorded user interview transcripts back into the original ChatGPT project and ask it to summarize problems, validate proposed solutions, and identify gaps. This turns the AI from a generic tool into an informed partner with deep project context for the next iteration.
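If you'd rather script this step than paste transcripts into a ChatGPT project, a hedged sketch (the directory layout and synthesis prompt are assumptions):

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Concatenate all interview transcripts from a hypothetical "interviews/" folder.
transcripts = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("interviews").glob("*.txt"))
)

synthesis_prompt = (
    "From these user interview transcripts: 1) summarize recurring problems, "
    "2) note which of our proposed solutions the feedback validates, and "
    "3) flag gaps that no current solution addresses.\n\n" + transcripts
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": synthesis_prompt}],
)
print(response.choices[0].message.content)
```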
Voice mode offers a more natural and effective way to practice for interviews than text-based AI. For best results, provide the AI with your resume and the job description for the role. This allows it to tailor questions, provide more relevant feedback, and simulate a real interview scenario.
Founders can use AI pitch deck analyzers as a "sparring partner" to receive objective feedback and iteratively improve their narrative. This allows them to identify weaknesses and strengthen their pitch without burning valuable relationships with real VCs on a premature version.
Many AI tools expose the model's reasoning before generating an answer. Reading this internal monologue is a powerful debugging technique. It reveals how the AI is interpreting your instructions, allowing you to quickly identify misunderstandings and improve the clarity of your prompts for better results.
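Where a tool doesn't surface a native reasoning trace, you can approximate the technique by asking the model to externalize its interpretation before answering. A sketch, with the tag names and request text as assumptions:

```python
from openai import OpenAI

client = OpenAI()

# Ask the model to state how it read the instructions before answering, so
# misunderstandings of the prompt become visible and debuggable.
prompt = """Before answering, restate in an <interpretation> block how you
understand my request and any assumptions you are making. Then answer in an
<answer> block.

Request: Summarize the attached roadmap for an executive audience."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
text = response.choices[0].message.content

# Read the interpretation first; if it's off, fix the prompt, not the answer.
print(text.split("<answer>")[0])
```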
Don't let performance reviews sit in a folder. Upload your official review and peer feedback into a custom GPT to create a personal improvement coach. You can then reference it when working on new projects, asking it to check for your known blind spots and ensure you're actively addressing the feedback.
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
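A minimal sketch of that meta-prompting loop; the failing prompt, bad output, and desired outcome shown here are placeholder examples:

```python
from openai import OpenAI

client = OpenAI()

failing_prompt = "Summarize this PRD in one paragraph."      # the prompt that misfired
bad_output = "A 400-word summary covering every section..."  # what you actually got
desired = "Three crisp sentences a VP could read in ten seconds."

meta_prompt = f"""This prompt is not producing what I want.

PROMPT:
{failing_prompt}

WHAT IT PRODUCED:
{bad_output}

WHAT I WANT INSTEAD:
{desired}

Diagnose why the prompt fails, then rewrite it. You have permission to
rewrite, add, or delete anything in the original prompt."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": meta_prompt}],
)
print(response.choices[0].message.content)  # diagnosis plus the repaired prompt
```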
Instead of just asking an AI to write a product requirements document (PRD), first provide it with a "Socratic questioning" template. The LLM then acts as a thinking partner, asking challenging, open-ended questions about the problem and solution. This upfront thinking process produces a significantly more robust final document.
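One way to wire this up, assuming a simple system-prompt template and a multi-turn conversation (the template wording and feature example are illustrative):

```python
from openai import OpenAI

client = OpenAI()

SOCRATIC_TEMPLATE = """You are a Socratic thinking partner. Before any PRD is
written, ask me one challenging, open-ended question at a time about the
problem, the user, and the proposed solution. Probe my assumptions. Only
after I say 'draft it' should you write the PRD."""

messages = [
    {"role": "system", "content": SOCRATIC_TEMPLATE},
    {"role": "user", "content": "I want a PRD for in-app onboarding checklists."},
]

# First turn: the model responds with a probing question rather than a draft.
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)

# In practice you would loop: append your answer to `messages`, call the API
# again, and continue until you're satisfied, then send 'draft it'.
```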
After receiving feedback that his writing was too long, a PM built a custom GPT to make his messages more concise. He fed it newsletters and books from writing experts, creating a personalized coach that helped him apply the feedback in his daily work and earned better engagement from colleagues.
Standard AI models are often overly supportive. To get genuine, valuable feedback, explicitly instruct your AI to act as a critical thought partner. Use prompts like "push back on things" and "feel free to challenge me" to break the AI's default agreeableness and turn it into a true sparring partner.
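The same instruction works as a reusable system prompt when calling a model programmatically; a sketch, with the prompt wording and the sample plan as assumptions:

```python
from openai import OpenAI

client = OpenAI()

CRITIC_SYSTEM = """You are a critical thought partner, not a cheerleader.
Push back on things. Feel free to challenge me. If my reasoning has a flaw,
name it directly before offering anything constructive. Never open with
praise."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": CRITIC_SYSTEM},
        {"role": "user", "content": "Here's my plan to launch in Q3: ..."},
    ],
)
print(response.choices[0].message.content)
```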