When iterating on content like an email, re-prompting can cause unwanted changes. Use the 'Canvas' feature to create a Google Doc-like environment within the chat. This allows you to lock in parts you like, manually tweak specific words or sentences, and then use that refined version as the basis for further AI generation.
To build an effective custom GPT, perfect your comprehensive prompt in the main chat interface first. Manually iterate until you consistently get the desired output. This learning process ensures the prompt behind your automated GPT is reliable and high-quality before you commit to building it.
Complex AI-generated assets like slide decks are often not directly editable. The new creative workflow is not about manual tweaks but about refining prompts and regenerating the output. Mastery of this iterative process is becoming a critical skill for creative professionals.
The handoff between AI generation and manual refinement is a major friction point. Tools like Subframe solve this by allowing users to seamlessly switch between an 'Ask AI' mode for generative tasks and a 'Design' mode for manual, Figma-like adjustments on the same canvas.
Treat ChatGPT like a human assistant. Instead of manually editing its imperfect outputs, provide direct feedback and corrections within the chat. This trains the AI on your specific preferences, making it progressively more accurate and reducing your future workload.
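A minimal sketch of that feedback loop using the OpenAI Python SDK (an assumption; the tip describes the ChatGPT interface, and the model name is only an example). The correction is sent as another user message in the same conversation history, so the next draft reflects it instead of you editing by hand:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Running conversation history: corrections accumulate here rather than
# being fixed by hand, so later drafts reflect earlier feedback.
messages = [
    {"role": "user", "content": "Draft a short welcome email for new newsletter subscribers."}
]

first_draft = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first_draft.choices[0].message.content})

# Give feedback in-chat rather than editing the draft yourself.
messages.append({"role": "user", "content": (
    "Too formal. Keep it under 100 words, drop the corporate sign-off, "
    "and always call the newsletter 'The Weekly Dispatch'."
)})

revised = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revised.choices[0].message.content)
```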
Achieve higher-quality results by asking the AI to first generate an outline or plan. Then refine that plan with follow-up prompts before asking for the final execution. Course-correcting at the plan stage avoids wasting effort on flawed one-shot outputs.
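A sketch of the outline-first flow, again assuming the OpenAI Python SDK and an example model name; the plan is revised in follow-up turns before the full draft is requested:

```python
from openai import OpenAI

client = OpenAI()

def ask(messages):
    # Helper for one chat turn; model name is an example.
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

history = [{"role": "user", "content": (
    "Outline a blog post on onboarding remote engineers. Headings only."
)}]
outline = ask(history)
history.append({"role": "assistant", "content": outline})

# Course-correct the plan before any full draft is written.
history.append({"role": "user", "content": (
    "Cut the section on office equipment and add one on async communication."
)})
history.append({"role": "assistant", "content": ask(history)})

# Only now ask for the full execution, based on the approved outline.
history.append({"role": "user", "content": "Write the full post following this outline."})
print(ask(history))
```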
Instead of struggling to craft an effective prompt, ask the AI to generate it for you. Describe your goal and ask ChatGPT to 'write me the perfect ChatGPT prompt for this with exact wording, format, and style.' This meta-prompting technique leverages the AI's own capabilities for better results.
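The meta-prompting step can be chained directly, as in this sketch (OpenAI Python SDK assumed; the goal, prompts, and model name are illustrative):

```python
from openai import OpenAI

client = OpenAI()

goal = "a weekly status update email to stakeholders, upbeat but concise"

# Step 1: ask the model to write the prompt itself.
meta = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        f"Write me the perfect ChatGPT prompt for generating {goal}, "
        "with exact wording, format, and style requirements."
    )}],
)
generated_prompt = meta.choices[0].message.content

# Step 2: use the generated prompt to produce the actual output.
result = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": generated_prompt}],
)
print(result.choices[0].message.content)
```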
When an LLM produces text with the wrong style, re-prompting is often ineffective. A superior technique is to use a tool that allows you to directly edit the model's output. This act of editing creates a perfect, in-context example for the next turn, teaching the LLM your preferred style much more effectively than descriptive instructions.
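One way to apply this outside a dedicated editing tool is to feed your hand-edited version back as the assistant's own message, so it becomes the in-context example for the next turn. A sketch, assuming the OpenAI Python SDK; the example text is invented:

```python
from openai import OpenAI

client = OpenAI()

# The model's original draft, hand-edited into exactly the style you want.
edited_example = (
    "Quick heads-up: the beta ships Thursday. No action needed on your end; "
    "we'll send login details the night before."
)

messages = [
    {"role": "user", "content": "Write a short product update to customers."},
    {"role": "assistant", "content": edited_example},  # your edit, not the raw output
    {"role": "user", "content": (
        "Great, now write one announcing the pricing change in the same style."
    )},
]

resp = client.chat.completions.create(model="gpt-4o", messages=messages)
print(resp.choices[0].message.content)
```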
Instead of asking an LLM to generate a full email, create a workflow where it produces individual sections, each with its own specific strategy and prompt. A human editor then reviews the assembled piece for tone and adds "spontaneity elements" like GIFs or timely references to retain a human feel.
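A rough sketch of the section-by-section workflow, assuming the OpenAI Python SDK; the section names and prompts are illustrative, not a fixed template:

```python
from openai import OpenAI

client = OpenAI()

# One prompt per section, each encoding its own strategy.
section_prompts = {
    "subject_line": "Write a curiosity-driven subject line for a product-launch email.",
    "opening": "Write a two-sentence opening that references a current industry trend.",
    "body": "Write three short paragraphs explaining the launch, benefit-first.",
    "cta": "Write a single-sentence call to action with low-pressure wording.",
}

sections = {}
for name, prompt in section_prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    sections[name] = resp.choices[0].message.content

# Assemble for human review; the editor adjusts tone and adds the
# "spontaneity elements" (GIFs, timely references) by hand afterwards.
draft = "\n\n".join(sections[name] for name in section_prompts)
print(draft)
```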
Instead of trying to write a complex prompt from scratch, first create the perfect output yourself within a ChatGPT canvas, polishing it until it's exactly what you want. Then, ask the AI to write the detailed system prompt that would have reliably generated that specific output. This method ensures your prompts are precise and effective.
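The same reverse-engineering step can be run through the API, sketched here with the OpenAI Python SDK; the polished text below is a stand-in for whatever you finished in the canvas:

```python
from openai import OpenAI

client = OpenAI()

# Paste the output you already polished by hand; this text is a placeholder.
polished_example = (
    "Subject: Your March report is ready\n\n"
    "Hi Dana, the March numbers are attached. Revenue is up 4% month over month; "
    "two items need your sign-off by Friday. Details below."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        "Here is an example of exactly the output I want:\n\n"
        f"{polished_example}\n\n"
        "Write a detailed system prompt that would reliably produce outputs "
        "in this structure, tone, and length from a short user request."
    )}],
)
system_prompt = resp.choices[0].message.content  # reuse this in future calls
print(system_prompt)
```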
Shift away from the traditional model of drafting content yourself and asking AI for edits. Instead, leverage the AI's near-infinite output capacity to generate a wide range of initial ideas or drafts. This allows you to quickly identify patterns, discard unworkable concepts, and focus your energy on high-level refinement rather than initial creation.
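A sketch of batch drafting with the OpenAI Python SDK, using the standard `n` and `temperature` parameters to request several independent drafts to scan for patterns (the brief and parameter values are examples only):

```python
from openai import OpenAI

client = OpenAI()

brief = "Tagline ideas for a note-taking app aimed at researchers."

# Request a batch of independent completions in one call.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": brief}],
    n=5,              # five independent drafts to compare
    temperature=1.0,  # keep variety high at the idea stage
)

for i, choice in enumerate(resp.choices, 1):
    print(f"--- Draft {i} ---\n{choice.message.content}\n")
```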