AI is incredibly fast at generating the initial version of a feature. For small, precise changes, however, such as altering a color or a line of text, a direct visual editor is much faster than prompting the AI again. An effective workflow blends both approaches.
When iterating on a Gemini 3.0-generated app, the host uses the annotation feature to draw directly on the preview to request changes. This visual feedback loop yields more precise, context-specific design adjustments than ambiguous text descriptions alone.
When iterating on content like an email, re-prompting can cause unwanted changes. Use the 'Canvas' feature to create a Google Doc-like environment within the chat. This allows you to lock in parts you like, manually tweak specific words or sentences, and then use that refined version as the basis for further AI generation.
The handoff between AI generation and manual refinement is a major friction point. Tools like Subframe solve this by allowing users to seamlessly switch between an 'Ask AI' mode for generative tasks and a 'Design' mode for manual, Figma-like adjustments on the same canvas.
When using "vibe-coding" tools, feed changes one at a time: first typography, then a header image, then a specific feature. A single long list of desired changes can confuse the AI and lead to poor results. This step-by-step iteration and refinement yields a better final product.
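A minimal sketch of this one-change-per-turn loop. The chat call itself is hypothetical (a stand-in reply is used), but it shows the pattern: each focused request is appended to the running conversation so the next change builds on the last.

```python
# Hedged sketch: submit one focused change per turn instead of one long list,
# keeping the conversation history so each request builds on the previous one.
# The actual model call is hypothetical; a stand-in reply is used here.

changes = [
    "update the typography to a cleaner sans-serif",
    "replace the header image",
    "add a dark-mode toggle",
]

history = []
for change in changes:
    history.append({"role": "user", "content": change})
    # reply = chat_model.send(history)  # hypothetical API call
    reply = f"Applied: {change}"        # stand-in response for illustration
    history.append({"role": "assistant", "content": reply})

# After the loop, history holds three focused request/response pairs.
```

The point of the structure is that the model only ever has to reason about one delta at a time, while the accumulated history preserves everything it has already applied.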
Cursor's visual editor allows designers to make minor adjustments to UI elements like padding and spacing directly, bypassing the need for constant AI prompting. This speeds up experimentation but doesn't replace dedicated design tools like Figma.
While AI tools excel at generating initial drafts of code or designs, their editing capabilities are poor. The difficulty of making specific changes often forces creators to discard the AI output and start over, as editing is where the "magic" breaks down.
Don't accept the false choice between AI generation and professional editing tools. The best workflows integrate both, allowing for high-level generation and fine-grained manual adjustments without giving up critical creative control.
Instead of describing UI changes with text alone, Google's AI Studio allows users to annotate a screenshot—drawing boxes and adding comments—to create a powerful multimodal prompt. The AI understands the combined visual and textual context to execute precise changes.
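As data, such a multimodal prompt is simply an image part and a text part sent together in one request. The sketch below mirrors the common shape of image-capable chat APIs; the field names ("mime_type", "data") and placeholder bytes are illustrative, not any specific SDK's exact schema.

```python
# Hedged sketch: pairing an annotated screenshot with a text instruction in
# a single multimodal prompt. Field names follow a common convention but are
# illustrative rather than a particular SDK's schema.

def build_multimodal_prompt(screenshot_png: bytes, instruction: str) -> list:
    """Return a parts list combining the drawn-on screenshot and the text."""
    return [
        {"mime_type": "image/png", "data": screenshot_png},  # visual context
        instruction,                                          # textual request
    ]

parts = build_multimodal_prompt(
    b"\x89PNG...",  # placeholder bytes standing in for the annotated image
    "Move the boxed button into the highlighted header area",
)
# These parts would then be passed together to an image-capable model.
```

Because the drawn annotations travel with the text, the instruction "the boxed button" is unambiguous: the model resolves it against the box in the image rather than guessing from words alone.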
A practical AI workflow for product teams is to screenshot their current application and prompt an AI to clone it with modifications. This allows for rapid visualization of new features and UI changes, creating an efficient feedback loop for product development.
Shift away from the traditional model of drafting content yourself and asking AI for edits. Instead, leverage the AI's near-infinite output capacity to generate a wide range of initial ideas or drafts. This allows you to quickly identify patterns, discard unworkable concepts, and focus your energy on high-level refinement rather than initial creation.
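One way to picture this generate-wide-then-filter loop: batch-produce candidates, score them cheaply, and keep only a shortlist for hand refinement. The draft generator below stands in for any LLM call, and the scoring heuristic is purely illustrative.

```python
# Hedged sketch of "generate many, then filter": produce a batch of candidate
# drafts, score them cheaply, and keep a shortlist for manual refinement.
# generate_draft stands in for an LLM call; the scoring rule is illustrative.

def generate_draft(seed: int) -> str:
    openings = ["Quick update:", "Heads up:", "A short note:"]
    return f"{openings[seed % len(openings)]} draft variant {seed}"

def score(draft: str) -> int:
    # Cheap heuristic filter; here, shorter drafts score higher.
    return -len(draft)

drafts = [generate_draft(i) for i in range(20)]
shortlist = sorted(drafts, key=score, reverse=True)[:3]  # top 3 kept for editing
```

The design choice is that the expensive resource is your attention, not the model's output, so the human step moves from drafting to curating a pre-scored shortlist.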