
As designers increasingly use AI to generate and refine work, their ability to provide precise, articulate verbal feedback becomes paramount. The language used in critique is now the direct input for the agent, making a strong design vocabulary more critical than ever.

Related Insights

AI models are trained to be agreeable, often providing uselessly positive feedback. To get real insights, you must explicitly prompt them to be rigorous and critical. Use phrases like "my standards of excellence are very high and you won't hurt my feelings" to bypass their people-pleasing nature.
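A minimal sketch of this technique: wrap the task in an anti-sycophancy preamble before sending it to a chat model. The model call itself is out of scope here; `make_critical_prompt` is a hypothetical helper, not a library API.

```python
# Assumed pattern: prepend explicit rigor instructions so the model
# does not default to agreeable, uselessly positive feedback.
RIGOR_PREAMBLE = (
    "Act as a rigorous, critical reviewer. My standards of excellence are "
    "very high and you won't hurt my feelings. Do not soften your feedback; "
    "list concrete weaknesses before any strengths."
)

def make_critical_prompt(task: str) -> str:
    """Prepend anti-sycophancy instructions to a feedback request."""
    return f"{RIGOR_PREAMBLE}\n\nTask:\n{task}"

prompt = make_critical_prompt("Critique this landing page copy: ...")
```

The exact phrasing matters less than the explicit permission to be harsh; without it, most models optimize for approval rather than accuracy.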

To create a high-quality AI agent of oneself, an expert can't just rely on their public work. They must manually document their nuanced style and judgment into a system of prompts and triggers. This shifts the burden of creating a good AI product from the platform to the creator, asking them to codify their intuition.
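One hedged sketch of what "a system of prompts and triggers" might look like: the expert writes trigger-to-rule pairs, and only the rules matching the current task are assembled into the system prompt. All names and rules here are illustrative assumptions, not a real platform's API.

```python
# Hypothetical codification of an expert's style judgments as
# trigger -> rule pairs. The rules themselves are invented examples.
STYLE_RULES = {
    "typography": "Never use more than two typefaces; prefer a 1.25 modular scale.",
    "color": "Accent colors mark interactive elements only; body text stays near-black.",
    "tone": "Plain, direct sentences; no marketing superlatives.",
}

def build_system_prompt(topics: list[str]) -> str:
    """Collect the rules whose triggers match the current task's topics."""
    matched = [rule for topic, rule in STYLE_RULES.items() if topic in topics]
    return "Apply these style judgments:\n" + "\n".join(f"- {r}" for r in matched)

system = build_system_prompt(["typography", "tone"])
```

The burden the insight describes lives in writing `STYLE_RULES`: the expert must externalize judgments they normally apply intuitively.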

The ability to effectively communicate with AI models through prompting is becoming a core competency for all roles. Excelling at prompt engineering is a key differentiator, enabling individuals to enhance their creativity, collaboration, and overall effectiveness, regardless of their technical background.

AI tools rarely produce perfect results initially. The user's critical role is to serve as a creative director, not just an operator. This means iteratively refining prompts, demanding better scripts, and correcting logical flaws in the output to avoid generic, low-quality content.

Instead of manually refining a complex prompt, create a process where an AI agent evaluates its own output. By providing a framework for self-critique, including quantitative scores and qualitative reasoning, the AI can iteratively enhance its own system instructions and achieve a much stronger result.
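The loop described above can be sketched as follows. Here `model` and `critique` are stand-ins (a real setup would call an LLM for both); the point is the control flow of generate, score, and fold the critique back into the system instructions.

```python
def model(prompt: str) -> str:
    # Stand-in for an LLM call; returns a dummy draft.
    return f"draft based on: {prompt[:40]}"

def critique(output: str, attempt: int) -> tuple[int, str]:
    """Stand-in self-critique: a 1-10 score plus qualitative reasoning.
    Scores improve each round purely to make the sketch runnable."""
    return min(10, 6 + attempt), "Hierarchy is clearer; spacing still uneven."

def refine(instructions: str, max_rounds: int = 4, target: int = 9) -> str:
    """Iteratively improve system instructions via scored self-critique."""
    for attempt in range(max_rounds):
        output = model(instructions)
        score, reasoning = critique(output, attempt)
        if score >= target:
            break
        # Fold the quantitative score and qualitative reasoning back in.
        instructions += f"\nRevision note (score {score}/10): {reasoning}"
    return instructions
```

The framework for self-critique (scoring rubric, reasoning format, stopping threshold) is what the user designs; the iteration itself is delegated to the agent.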

Vague commands like "improve the design" yield poor AI-generated results. Instead, use intentional, constraint-based language. Words such as "subtle," "refine," and "consistent" act as guardrails, prompting the agent to produce more cohesive and professional outputs rather than making broad, unpredictable changes.
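A small illustrative helper for this idea: rewrite a vague request into a constrained one before prompting. The guardrail wording is an assumption drawn from the examples above, not a fixed vocabulary.

```python
def add_guardrails(request: str) -> str:
    """Append constraint-based language to a vague design request."""
    return (request.rstrip(".") +
            ": make subtle refinements, keep spacing and type consistent "
            "with the existing system, and do not restructure the layout.")

constrained = add_guardrails("Improve the design.")
```

The difference is scope: "improve the design" licenses arbitrary changes, while the constrained version bounds the agent to cohesive, incremental edits.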

Instead of accepting an AI's first output, request multiple variations of the content. Then, ask the AI to identify the best option. This forces the model to re-evaluate its own work against the project's goals and target audience, leading to a more refined final product.
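A sketch of this generate-then-select flow, with a hypothetical `ask` callable standing in for the LLM call; the two prompt shapes are the substance here.

```python
def ask(prompt: str) -> str:
    # Stand-in for an LLM call.
    return f"[model response to: {prompt[:30]}...]"

def best_of_n(brief: str, n: int = 3) -> str:
    """Request n variations, then ask the model to pick the strongest."""
    variations = [ask(f"{brief}\nProduce variation {i + 1} of {n}.")
                  for i in range(n)]
    numbered = "\n\n".join(f"Option {i + 1}:\n{v}"
                           for i, v in enumerate(variations))
    selection_prompt = (
        f"{numbered}\n\nAgainst the project's goals and target audience, "
        "which option is strongest, and why?"
    )
    return ask(selection_prompt)
```

Asking for the justification ("and why?") is what forces the re-evaluation step; a bare "pick one" often yields an arbitrary choice.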

Claude Design overcomes a non-designer's inability to articulate specific feedback by offering multiple distinct variations upfront. This smart feature shifts the interaction model from iterative prompting ('make it better') to direct selection, dramatically accelerating the design cycle for those without a design vocabulary.

With AI, designers are no longer just guessing user intent to build static interfaces. Their new primary role is to facilitate the interaction between a user and the AI model, helping users communicate their intent, understand the model's response, and build a trusted relationship with the system.

AI models often default to being agreeable (sycophancy), which limits their value as a thought partner. To get valuable, critical feedback, users must explicitly instruct the AI in their prompt to take on a specific persona, such as a skeptic or a harsh editor, to challenge their ideas.