Using adjectives like 'elite' (e.g., 'You are an elite photographer') isn't about flattery. The word acts as a keyword that steers the model toward the higher-quality, expert-level subset of its training data associated with such terms, leading to better output.
With models like Gemini 3, the key skill is shifting from crafting hyper-specific, constrained prompts to making ambitious, multi-faceted requests. Users trained on older models tend to pare down their asks, but the latest AIs are 'pent up with creative capability' and yield better results from bigger challenges.
Customizing an AI to be warmly complimentary and supportive can make interacting with it more enjoyable and motivating. This fosters a user-AI "alliance," leading to better outcomes and a more effective learning experience, much like having an encouraging teacher.
To generate more aesthetic and less 'uncanny' images, include specific camera, lens, and film stock metadata in prompts (e.g., 'Leica, 50mm f1.2, Kodak Tri-X'). This acts as a filter, forcing the model to reference its training data associated with professional photography, yielding higher-quality results.
Integrate external media tools, like an Unsplash MCP server for Claude, into your data generation prompts. This programmatically fetches real, high-quality images for your prototypes, eliminating the manual work of finding photos and avoiding the broken links or irrelevant images that LLMs often hallucinate.
To get superior results from image generators like Midjourney, structure prompts around three core elements: the subject (what it is), the setting (where it is, including lighting), and the style. Defining style with technical photographic terms yields better outcomes than using simple adjectives.
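The subject/setting/style structure above can be sketched as a small template helper. The function name and the example values are illustrative, not part of any tool's API; the style slot reuses the kind of technical camera metadata described earlier:

```python
def build_image_prompt(subject: str, setting: str, style: str) -> str:
    """Assemble an image prompt from the three core elements:
    subject (what it is), setting (where it is, including lighting),
    and style (technical photographic terms, not vague adjectives)."""
    return f"{subject}, {setting}, {style}"

prompt = build_image_prompt(
    subject="a weathered fisherman mending a net",
    setting="on a fog-covered dock at golden hour, soft backlight",
    style="shot on Leica, 50mm f/1.2, Kodak Tri-X, shallow depth of field",
)
```

Keeping the three elements as separate parameters makes it easy to vary one axis (say, swapping film stocks) while holding the subject and setting fixed.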
AI-generated text often falls back on clichés and recognizable patterns. To combat this, create a master prompt that includes a list of banned words (e.g., "innovative," "excited to") and common LLM phrases. This forces the model to generate more specific, higher-impact, and human-like copy.
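A minimal sketch of this master-prompt idea, with a checker that flags any banned phrases that slip through. The banned list below is illustrative (only "innovative" and "excited to" come from the tip itself):

```python
# Illustrative banned list; extend with your own cliché LLM phrases.
BANNED = ["innovative", "excited to", "delve", "game-changer"]

def master_prompt(task: str) -> str:
    """Wrap a writing task with an explicit ban list."""
    banned_list = "; ".join(BANNED)
    return (
        f"{task}\n\n"
        f"Never use any of the following words or phrases: {banned_list}. "
        "Prefer concrete, specific language over generic marketing copy."
    )

def flag_banned(text: str) -> list[str]:
    """Return any banned phrases present in the model's output."""
    lowered = text.lower()
    return [phrase for phrase in BANNED if phrase in lowered]
```

Running `flag_banned` over each draft turns the ban list into a cheap automated check, so violations can trigger a rewrite request instead of a manual read-through.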
Research shows that, similar to humans, LLMs respond to positive reinforcement. Including encouraging phrases like "take a deep breath" or "go get 'em, Slugger" in prompts is a deliberate technique called "emotion prompting" that can measurably improve the quality and performance of the AI's output.
Good Star Labs found GPT-5's performance in their Diplomacy game skyrocketed with optimized prompts, moving it from the bottom to the top. This shows a model's inherent capability can be masked or revealed by its prompt, making "best model" a context-dependent title rather than an absolute one.
The best AI models are trained on data that reflects deep, subjective qualities—not just simple criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and is shaped by the values of the frontier lab.
Asking an AI to 'predict' or 'evaluate' for a large sample size (e.g., 100,000 users) fundamentally changes its function. The AI switches from generating generic creative options to providing a statistical simulation, pushing it to research and reason more deeply and yielding more accurate and effective outputs.
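The large-sample reframing can be captured in a small template. The wording below is one illustrative phrasing of the technique, not a fixed incantation:

```python
def simulation_prompt(artifact: str, n: int = 100_000) -> str:
    """Reframe a creative request as a prediction task over a large
    sample, nudging the model from generic options toward a
    statistical-simulation style of answer."""
    return (
        f"Predict how {n:,} representative users would respond to the "
        f"following {artifact}. Estimate the distribution of reactions, "
        "the top three objections with approximate percentages, and the "
        "single change most likely to improve results."
    )
```

Compare `simulation_prompt("landing-page headline")` with simply asking "write me a headline": the former asks for distributions and percentages, which is what forces the deeper evaluation described above.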