We scan new podcasts and send you the top 5 insights daily.
Midjourney's mood board feature can average out the aesthetics of multiple images, leading to generic results. For more precise control, use individual images as style references (via the `--sref` parameter). This lets the model pull more distinct and impactful stylistic elements.
Instead of writing prompts from scratch, upload visual references (like a mood board) to ChatGPT. Ask it to describe the visual qualities and language of the images, then use that output as a detailed prompt for AI image generators to replicate the desired style.
Instead of relying on complex text prompts, use a curated mood board as a direct visual input. Generative models like Midjourney can interpret the aesthetic, color, and style from images more effectively than from descriptive words, acting as a powerful communication shortcut.
Instead of accepting default AI designs, proactively source superior design elements. Use pre-vetted Google Font combinations for typography and find specific Midjourney style-reference codes on social platforms like X to generate unique, high-quality images that match your desired aesthetic.
To generate more aesthetic and less 'uncanny' images, include specific camera, lens, and film stock metadata in prompts (e.g., 'Leica, 50mm f1.2, Kodak Tri-X'). This acts as a filter, forcing the model to reference its training data associated with professional photography, yielding higher-quality results.
Instead of random prompting, break down any desired photo into its fundamental components like shot type, lighting, camera, and lens. Controlling these variables gives you precise, repeatable results and makes iteration faster, as you know exactly which element to adjust.
To get superior results from image generators like Midjourney, structure prompts around three core elements: the subject (what it is), the setting (where it is, including lighting), and the style. Defining style with technical photographic terms yields better outcomes than using simple adjectives.
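The subject/setting/style decomposition above can be sketched as a small prompt builder. This is an illustrative helper, not a Midjourney API; the function name and example values are assumptions.

```python
def build_prompt(subject: str, setting: str, style: str) -> str:
    """Assemble an image-generation prompt from the three core elements:
    subject (what it is), setting (where it is, including lighting), and
    style (technical photographic terms rather than vague adjectives)."""
    return ", ".join([subject, setting, style])

prompt = build_prompt(
    subject="portrait of a jazz trumpeter",
    setting="dim club stage, single warm spotlight",
    style="Leica, 50mm f1.2, Kodak Tri-X, shallow depth of field",
)
print(prompt)
```

Keeping each element in its own slot makes iteration systematic: to fix lighting you change only `setting`, to fix the look you change only `style`.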
If a reference image has an overpowering element (like bright green eyeshadow or bubblegum), it can hijack the generation. Instead of complex negative prompts, simply crop the distracting element out of the reference image and re-upload it to guide the AI toward your intended focus.
Prototyping a new product from scratch risks creating a generic, "AI slop" design. To avoid this, use "inspiration sourcing": find screenshots from other apps (e.g., on Mobbin) that have the design aesthetic you want, and feed them to the AI as a style reference for specific features.
To maintain visual consistency in AI-generated videos, don't rely on text-to-video prompts alone. First, create a library of static 'ingredient' images for characters, settings, and props. Then, feed these reference images into the AI for each scene to ensure a coherent look and feel across all clips.
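The 'ingredient' workflow above can be sketched as a loop that reuses the same reference images for every scene, attached via Midjourney's `--cref` (character reference) and `--sref` (style reference) parameters. The ingredient names and URLs here are placeholders, not real assets.

```python
# Hypothetical ingredient library: the same reference URLs are reused
# for every scene so characters and overall style stay consistent.
INGREDIENTS = {
    "hero": "https://example.com/refs/hero.png",        # character ref
    "style": "https://example.com/refs/moodboard.png",  # style ref
}

def scene_prompt(action: str, ingredients: dict) -> str:
    """Build one scene's prompt, attaching the shared character and
    style references with Midjourney's --cref / --sref parameters."""
    return (
        f"{action} "
        f"--cref {ingredients['hero']} "
        f"--sref {ingredients['style']}"
    )

scenes = [
    "hero walks through a rainy neon alley",
    "hero sits alone in a diner at dawn",
]
prompts = [scene_prompt(action, INGREDIENTS) for action in scenes]
for p in prompts:
    print(p)
```

Because every scene's prompt carries the same references, the clips share a look even though the text portion of each prompt changes.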
Midjourney's personalization feature lets you train a preference profile by rating images. Create distinct profiles for different aesthetics (e.g., '2025 iPhone Style'). Applying a profile's code adds a consistent, unique layer to your generations that goes beyond what a single prompt or style reference can achieve.