To generate more aesthetically pleasing and less 'uncanny' images, include specific camera, lens, and film-stock metadata in prompts (e.g., 'Leica, 50mm f/1.2, Kodak Tri-X'). This acts as a filter, steering the model toward the parts of its training data associated with professional photography and yielding higher-quality results.
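A minimal sketch of this in Python; the gear names and extra cues are illustrative, not a canonical list:

```python
# Minimal sketch: append camera, lens, and film-stock cues to a base prompt.
# The specific gear names below are illustrative examples, not requirements.
PHOTO_CUES = "shot on a Leica, 50mm f/1.2 lens, Kodak Tri-X 400 film"

def with_photo_metadata(subject: str, cues: str = PHOTO_CUES) -> str:
    """Return an image prompt anchored in professional-photography vocabulary."""
    return f"{subject}, {cues}, natural light, shallow depth of field"

print(with_photo_metadata("portrait of a street musician at dusk"))
# -> "portrait of a street musician at dusk, shot on a Leica, 50mm f/1.2 lens, ..."
```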
With models like Gemini 3, the key skill is shifting from crafting hyper-specific, constrained prompts to making ambitious, multi-faceted requests. Users accustomed to older models tend to pare down their asks, but the latest models are 'pent up with creative capability' and yield better results from bigger challenges.
To overcome AI's tendency for generic descriptions of archival images, Tim McLear's scripts first extract embedded metadata (location, date). This data is then included in the prompt, acting as a "source of truth" that guides the AI to produce specific, verifiable outputs instead of just guessing based on visual content.
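These are not McLear's actual scripts, but a minimal sketch of the same idea using Pillow: read the embedded EXIF metadata and inject it into the prompt as ground truth. (Location data lives in the GPS IFD and is omitted here for brevity.)

```python
# Sketch: pull embedded EXIF metadata and inject it into the prompt
# as a "source of truth" the model must not contradict.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_facts(path: str) -> dict:
    """Return a {tag_name: value} dict of the image's embedded EXIF metadata."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def archival_prompt(path: str) -> str:
    facts = exif_facts(path)
    # Keep a few high-signal fields; GPS coordinates would need the GPS IFD.
    known = ", ".join(f"{k}: {v}" for k, v in facts.items()
                      if k in ("DateTime", "Make", "Model"))
    return (
        "Describe this archival photograph for a catalog entry.\n"
        f"Treat the following embedded metadata as ground truth: {known}.\n"
        "Do not guess details that contradict it."
    )
```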
Instead of accepting default AI designs, proactively source superior design elements. Use pre-vetted Google Fonts pairings for typography, and find specific Midjourney style-reference (--sref) codes on social platforms like X to generate unique, high-quality images that match your desired aesthetic.
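For the image half of this tip, a rough sketch of assembling a prompt around a style-reference code; the --sref value below is a placeholder, not a real code:

```python
# Sketch: drop a vetted style-reference code into a Midjourney prompt.
# STYLE_REF is a placeholder -- substitute a code you actually found and vetted.
STYLE_REF = "1234567890"

def midjourney_prompt(subject: str, sref: str = STYLE_REF) -> str:
    return f"{subject} --sref {sref} --ar 3:2"

print(midjourney_prompt("launch-party invite card, art-deco linework"))
```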
Using adjectives like 'elite' (e.g., 'You are an elite photographer') isn't about flattery. The word acts as a keyword that steers the AI toward the higher-quality, expert-level subset of its training data associated with such terms, leading to better output.
Integrate external media tools, like an Unsplash MCP for Claude, into your data generation prompts. This programmatically fetches real, high-quality images for your prototypes, eliminating the manual work of finding photos and avoiding the broken links or irrelevant images that LLMs often hallucinate.
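As a rough illustration of the setup step, the snippet below registers an MCP server in Claude Desktop's config. The package name "unsplash-mcp-server", the env-var name, and the macOS config path are assumptions; follow the documentation of whichever server you actually install.

```python
# Sketch: register a (hypothetical) Unsplash MCP server in Claude Desktop's config.
# "unsplash-mcp-server" and UNSPLASH_ACCESS_KEY are assumed names; the path below
# is the macOS location of the config file.
import json
import pathlib

config_path = (pathlib.Path.home()
               / "Library/Application Support/Claude/claude_desktop_config.json")
config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("mcpServers", {})["unsplash"] = {
    "command": "npx",
    "args": ["-y", "unsplash-mcp-server"],          # hypothetical package name
    "env": {"UNSPLASH_ACCESS_KEY": "<your-access-key>"},
}
config_path.write_text(json.dumps(config, indent=2))
```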
To get superior results from image generators like Midjourney, structure prompts around three core elements: the subject (what it is), the setting (where it is, including lighting), and the style. Defining style with technical photographic terms yields better outcomes than using simple adjectives.
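A minimal template for that subject / setting / style structure; the example values are purely illustrative:

```python
# Minimal template for the subject / setting / style prompt structure.
def image_prompt(subject: str, setting: str, style: str) -> str:
    return f"{subject}, {setting}, {style}"

print(image_prompt(
    subject="weathered fishing boat",
    setting="foggy harbor at dawn, soft diffused light",
    style="35mm film photograph, wide aperture, muted color palette",
))
```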
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
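One way to make that repeatable is a small template that packages the failing prompt, the diagnosis, and the explicit permission to edit; the wording here is a sketch, not a canonical formula:

```python
# Sketch of a meta-prompt: hand the model its own failing prompt plus a diagnosis,
# and explicitly authorize it to rewrite, add, or delete instructions.
def build_meta_prompt(failing_prompt: str, bad_output: str, desired_output: str) -> str:
    return (
        "The prompt below is not producing what I want.\n\n"
        f"--- PROMPT ---\n{failing_prompt}\n\n"
        f"--- WHAT IT PRODUCED ---\n{bad_output}\n\n"
        f"--- WHAT I ACTUALLY WANT ---\n{desired_output}\n\n"
        "Rewrite the prompt so it reliably produces the desired result. "
        "You may rewrite, add, or delete any instruction."
    )
```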
To create unique, on-brand invite cards at scale, the designer chained multiple AI tools together: Midjourney for initial concepts, custom models trained on Civit AI, and FAL AI to blend those models and parameterize prompts for batch generation. This workflow goes well beyond single-prompt image creation.
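The model blending is tool-specific, but the prompt-parameterization step can be as simple as crossing one on-brand template with per-card variables. A rough sketch, with an invented template and variable values, and the actual calls to the blended FAL model omitted:

```python
# Rough sketch of the prompt-parameterization step: cross one on-brand template
# with per-card variables to get a unique prompt for every invite.
from itertools import product

TEMPLATE = "invite card, {motif} motif, {palette} palette, embossed type, in the house style"
motifs = ["botanical linework", "art-deco sunburst", "constellation map"]
palettes = ["ivory and gold", "midnight blue", "terracotta"]

prompts = [TEMPLATE.format(motif=m, palette=p) for m, p in product(motifs, palettes)]
for prompt in prompts:
    print(prompt)  # each prompt would be sent to the blended model for generation
```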
The best AI models are trained on data that reflects deep, subjective qualities, not just simple criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and it is shaped by the values of the frontier lab that builds the model.
Leverage AI as an idea generator rather than a final execution tool. By prompting for multiple "vastly different" options—like hover effects—you can review a range of possibilities, select a promising direction, and then iterate, effectively using AI to explore your own taste.
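A small sketch of that divergent-options prompt; the wording is illustrative, not a canonical formula:

```python
# Sketch: ask for deliberately divergent options first, then iterate on the winner.
def options_prompt(element: str, n: int = 5) -> str:
    return (
        f"Give me {n} vastly different options for {element}. "
        "Make them genuinely distinct from one another (different motion, timing, "
        "and visual metaphor) so I can pick a direction before we refine it."
    )

print(options_prompt("the hover effect on the pricing cards"))
```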