'Taste' is a collection of specific preferences, not an abstract feeling. Document what makes an output 'good' by creating universal rules (e.g., 'write at a ninth-grade level,' 'avoid cheesy quotes,' 'no em dashes'). Feeding these documented rules to an AI transforms your subjective taste into repeatable instructions for consistent results.
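As a minimal sketch of that idea (assuming the OpenAI Python SDK; the model name and the rules themselves are placeholders), the documented rules can simply ride along as the system prompt on every request:

```python
from openai import OpenAI

# Documented "taste": each rule is a concrete, checkable preference, not a vibe.
TASTE_RULES = """\
- Write at a ninth-grade reading level.
- Avoid cheesy quotes and motivational platitudes.
- No em dashes; use commas or separate sentences.
- Prefer concrete numbers over adjectives like "significant".
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": f"Follow these style rules in every reply:\n{TASTE_RULES}"},
        {"role": "user", "content": "Draft a launch announcement for our new reporting dashboard."},
    ],
)
print(response.choices[0].message.content)
```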

Related Insights

Instead of manually crafting complex instructions, first iterate with an AI until you achieve the perfect output. Then, provide that output back to the AI and ask it to write the 'system prompt' that would have generated it. This reverse-engineering process creates reusable, high-quality instructions for consistent results.
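A hedged sketch of that reverse-engineering step, again assuming the OpenAI Python SDK; the file name approved_draft.md and the model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# The output you iterated your way to and now consider ideal (hypothetical file).
perfect_output = open("approved_draft.md").read()

meta_prompt = (
    "Below is a piece of writing I consider ideal. "
    "Write the system prompt that, given a similar request, would reliably "
    "produce writing with this tone, structure, and level of detail. "
    "Return only the system prompt.\n\n"
    f"---\n{perfect_output}\n---"
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[{"role": "user", "content": meta_prompt}],
)
print(reply.choices[0].message.content)  # save this as your reusable system prompt
```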

AI-generated text often falls back on clichés and recognizable patterns. To combat this, create a master prompt that includes a list of banned words (e.g., "innovative," "excited to") and common LLM phrases. This forces the model to generate more specific, higher-impact, and human-like copy.
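One way to make the ban list do double duty, both as a clause in the master prompt and as a post-generation check, sketched in plain Python with an illustrative list:

```python
import re

# Illustrative ban list; extend it with the clichés your drafts keep attracting.
BANNED = ["innovative", "excited to", "game-changer", "delve", "in today's fast-paced world"]

def banned_words_clause() -> str:
    """Render the ban list as a clause for a master prompt."""
    return "Never use these words or phrases: " + "; ".join(BANNED) + "."

def violations(text: str) -> list[str]:
    """Return any banned phrases that slipped into the model's output."""
    return [p for p in BANNED if re.search(re.escape(p), text, flags=re.IGNORECASE)]

draft = "We're excited to announce an innovative dashboard."
print(banned_words_clause())
print(violations(draft))  # ['innovative', 'excited to']
```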

The concept of "taste" is demystified as the crucial human act of defining boundaries for what is good or right. An LLM, having seen everything, lacks opinion. Without a human specifying these constraints, AI will only produce generic, undesirable output—or "AI slop." The creator's opinion is the essential ingredient.

To avoid generic AI-generated text, use the LLM as a critic rather than a writer. By providing a detailed style guide that you co-created with the AI, its feedback on your drafts becomes highly specific and aligned with your personal goals, audience, and tone.
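A sketch of that critic mode, assuming the OpenAI Python SDK; style_guide.md and draft_post.md are stand-ins for your co-created guide and your own draft:

```python
from openai import OpenAI

client = OpenAI()

style_guide = open("style_guide.md").read()  # the guide you co-wrote with the model
draft = open("draft_post.md").read()         # your own writing, not the model's

critique = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[
        {"role": "system", "content": (
            "You are an editor. Critique the draft strictly against this style guide. "
            "Do not rewrite it; list specific violations and suggested fixes.\n\n" + style_guide
        )},
        {"role": "user", "content": draft},
    ],
)
print(critique.choices[0].message.content)
```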

When an LLM produces text with the wrong style, re-prompting is often ineffective. A superior technique is to use a tool that allows you to directly edit the model's output. This act of editing creates a perfect, in-context example for the next turn, teaching the LLM your preferred style much more effectively than descriptive instructions.
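The same effect can be approximated in plain API calls by replaying your edited version as the assistant turn in the conversation history. The sketch below assumes the OpenAI Python SDK; the messages are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# The assistant turn below is NOT what the model originally produced:
# it is that output after you edited it into your preferred style.
# Replaying it as history gives the model a concrete example to imitate.
messages = [
    {"role": "user", "content": "Summarize this week's release notes for customers."},
    {"role": "assistant", "content": "Three fixes shipped this week. The big one: exports "
                                     "no longer time out on large workspaces."},
    {"role": "user", "content": "Now do the same for next week's planned changes."},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # assumption: model name
print(reply.choices[0].message.content)
```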

To avoid generic, creatively lazy AI output ("slop"), Atlassian's Sharif Mansour injects three key ingredients: the team's unique "taste" (style/opinion), specific organizational "knowledge" (data and context), and structured "workflow" (deployment in a process). This moves beyond simple prompting to create differentiated results.
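Purely as an illustration of the three-ingredient idea (not Atlassian's actual system), the ingredients can be composed into a single prompt for one step of a workflow:

```python
def build_prompt(taste: str, knowledge: str, workflow_step: str, request: str) -> str:
    """Combine taste, knowledge, and workflow context into one prompt (illustrative)."""
    return (
        f"Style and opinions to follow:\n{taste}\n\n"
        f"Relevant internal context:\n{knowledge}\n\n"
        f"You are performing this step of our process: {workflow_step}\n\n"
        f"Task: {request}"
    )

print(build_prompt(
    taste="Plain language, no hype, lead with the customer problem.",
    knowledge="Q3 survey: 62% of churned users cited confusing onboarding.",
    workflow_step="Draft the problem statement for a project brief.",
    request="Write the problem statement for improving onboarding.",
))
```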

To codify a specific person's "taste" in writing, the team fed the DSPy framework a dataset of tweets with thumbs up/down ratings and explanations. DSPy then optimized a prompt that created an AI "judge" capable of evaluating new content with 76.5% accuracy against that person's preferences.
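A minimal sketch of that setup, assuming DSPy 2.5+ with an OpenAI-backed model; the signature, examples, and metric here are illustrative, not the team's actual dataset or pipeline:

```python
import dspy

# Assumption: DSPy >= 2.5 and an OpenAI-compatible model; swap in your own provider.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class JudgeTweet(dspy.Signature):
    """Decide whether a tweet matches the author's documented preferences."""
    tweet: str = dspy.InputField()
    verdict: bool = dspy.OutputField(desc="True if the author would give this a thumbs up")

judge = dspy.ChainOfThought(JudgeTweet)

# Illustrative stand-ins for the real dataset of rated tweets (which also carried explanations).
trainset = [
    dspy.Example(tweet="Shipped a fix that cuts cold-start time in half.", verdict=True).with_inputs("tweet"),
    dspy.Example(tweet="Thrilled to announce an innovative new synergy!", verdict=False).with_inputs("tweet"),
]

def agrees(example, prediction, trace=None):
    """Metric: does the AI judge's call match the human rating?"""
    return example.verdict == prediction.verdict

# Optimize the prompt so the judge's verdicts line up with the human ratings.
optimized_judge = dspy.BootstrapFewShot(metric=agrees).compile(judge, trainset=trainset)
print(optimized_judge(tweet="Our roadmap, minus the buzzwords.").verdict)
```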

The best AI models are trained on data that reflects deep, subjective qualities, not just simple criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and is shaped by the values of the frontier lab that trained the model.

Instead of writing a style guide from scratch, feed your most successful and on-brand articles, emails, and web pages into an AI model. This process allows the AI to capture the essence of your unique voice, creating a foundational asset for generating new, consistent content at scale.
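A sketch of that distillation step, assuming the OpenAI Python SDK; the best_content folder, the output file, and the model name are placeholders:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical folder holding your best-performing, on-brand writing.
exemplars = "\n\n---\n\n".join(p.read_text() for p in Path("best_content").glob("*.md"))

result = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption
    messages=[{
        "role": "user",
        "content": "These pieces represent our voice at its best. Distill a style guide from them: "
                   "tone, sentence length, structure, vocabulary to prefer and to avoid.\n\n" + exemplars,
    }],
)
Path("style_guide.md").write_text(result.choices[0].message.content)
```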

An effective skill goes beyond a simple instruction. It should be structured like an expert's toolkit, including established frameworks (e.g., AIDA for copywriting), a scoring system for evaluation, and a defined output template for consistency and clarity.
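For instance, a skill could be represented as a small structure that renders its framework, rubric, and template into the prompt; the Python below is illustrative, not any specific product's skill format:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A skill as an expert toolkit, not a one-line instruction (names are illustrative)."""
    framework: str        # an established method the model should apply
    rubric: str           # how the output will be scored
    output_template: str  # the exact shape the answer must take

    def to_prompt(self, task: str) -> str:
        return (
            f"Apply this framework: {self.framework}\n"
            f"Score your draft against this rubric before answering: {self.rubric}\n"
            f"Format the final answer exactly like this template:\n{self.output_template}\n\n"
            f"Task: {task}"
        )

copywriting = Skill(
    framework="AIDA (Attention, Interest, Desire, Action)",
    rubric="1-5 each for clarity, specificity, and a single concrete call to action.",
    output_template="Headline:\nBody (<=120 words):\nCall to action:",
)
print(copywriting.to_prompt("Write landing-page copy for a time-tracking app."))
```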