Product teams often use placeholder text and duplicate UI components, but users don't provide good feedback on unrealistic designs. A prototype with authentic, varied content—even if the UI is simpler—will elicit far more valuable user feedback because it feels real.
Instead of providing a vague functional description, feed AI prototyping tools a detailed JSON data model first. This separates data from UI generation, forcing the AI to build a more realistic and higher-quality experience around concrete data, avoiding ambiguity and poor assumptions.
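For instance, the data model handed to the tool might look like the sketch below, written as TypeScript for clarity. This is a hypothetical trip-planning example; every field and value is illustrative.

```typescript
// Hypothetical data model for a trip-planning prototype. The shape is the
// point: concrete entities and varied, real-feeling values.
interface Activity {
  title: string;
  neighborhood: string;
  priceEUR: number;
  imageUrl: string;
}

interface Trip {
  destination: string;
  startDate: string; // ISO 8601
  endDate: string;
  activities: Activity[];
}

const parisTrip: Trip = {
  destination: "Paris",
  startDate: "2025-06-12",
  endDate: "2025-06-16",
  activities: [
    {
      title: "Musée d'Orsay, skip-the-line entry",
      neighborhood: "7th arrondissement",
      priceEUR: 16,
      imageUrl: "https://example.com/orsay.jpg",
    },
    {
      title: "Picnic along Canal Saint-Martin",
      neighborhood: "10th arrondissement",
      priceEUR: 0,
      imageUrl: "https://example.com/canal.jpg",
    },
  ],
};
```

With the data fixed first, the UI prompt can stay short: the concrete records answer most of the questions the AI would otherwise guess at.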
To get superior results from image generators like Midjourney, structure prompts around three core elements: the subject (what it is), the setting (where it is, including lighting), and the style. Defining style with technical photographic terms yields better outcomes than using simple adjectives.
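As a rough sketch, the three slots can be composed mechanically; the example strings below are illustrative, and Midjourney itself just receives the assembled comma-separated text.

```typescript
// Minimal prompt composer for the subject / setting / style structure.
// The descriptors are illustrative, not canonical Midjourney syntax.
function imagePrompt(subject: string, setting: string, style: string): string {
  return [subject, setting, style].join(", ");
}

const prompt = imagePrompt(
  "a street-food vendor plating pad thai",           // subject: what it is
  "night market in Bangkok, warm tungsten lighting", // setting: where it is, including light
  "shot on a Leica, 50mm f/1.2, Kodak Tri-X"         // style: technical photographic terms
);
// => "a street-food vendor plating pad thai, night market in Bangkok, ..."
```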
Integrate external media tools, like an Unsplash MCP server for Claude, into your data-generation prompts. The tool fetches real, high-quality images programmatically, eliminating the manual work of finding photos and avoiding the broken links or irrelevant images that LLMs often hallucinate.
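Under the hood, such a tool is doing little more than calling Unsplash's public search API. Here is a minimal sketch of the equivalent call, assuming you have an Unsplash access key; an MCP server essentially exposes something like this to Claude as a callable tool.

```typescript
// Fetch a real image URL from Unsplash's search endpoint. Assumes a valid
// access key; error handling is omitted for brevity.
async function fetchImageUrl(
  query: string,
  accessKey: string
): Promise<string | undefined> {
  const res = await fetch(
    `https://api.unsplash.com/search/photos?query=${encodeURIComponent(query)}&per_page=1`,
    { headers: { Authorization: `Client-ID ${accessKey}` } }
  );
  const data = await res.json();
  // Each result carries several sizes; "regular" is a reasonable prototype default.
  return data.results?.[0]?.urls?.regular;
}

// Usage: fill the imageUrl fields in the mock data with real photos.
// const url = await fetchImageUrl("bangkok night market", process.env.UNSPLASH_ACCESS_KEY!);
```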
Instead of creating mock data from scratch, provide an LLM with your existing production data schema as a JSON file. You can then prompt the AI to augment this schema with new fields and realistic data needed to prototype a new feature, seamlessly extending your current data model.
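In practice, the prompt can simply inline the exported schema. Below is a hypothetical sketch; the schema excerpt and the loyalty-rewards feature are invented for illustration.

```typescript
// Hypothetical excerpt of a production schema, exported as JSON.
const existingData = {
  user: { id: "u_1042", name: "Ana Ribeiro", plan: "pro" },
  orders: [{ id: "o_881", total: 42.5, status: "shipped" }],
};

// The augmentation request rides along with the real shape.
const prompt = `
Here is our current data model as JSON:
${JSON.stringify(existingData, null, 2)}

We are prototyping a loyalty-rewards feature. Extend this model with a
points balance on the user and a pointsEarned field on each order, then
generate 10 realistic sample records that keep all existing fields intact.
`;
```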
The data-driven prototyping approach separates the UI from the content. This enables rapid iteration, allowing you to generate entirely new versions or localizations of a prototype (e.g., a trip to Thailand instead of Paris) simply by swapping a single JSON data file, without altering any code.
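A minimal sketch of the swap, with hypothetical file names and a toy loader; note that the rendering code never changes.

```typescript
import { readFileSync } from "node:fs";

// The UI depends only on this shape, never on any particular trip.
type Trip = { destination: string; activities: { title: string }[] };

function loadTrip(path: string): Trip {
  return JSON.parse(readFileSync(path, "utf8")) as Trip;
}

// Swapping the file swaps the entire prototype:
const trip = loadTrip("data/thailand.json"); // previously "data/paris.json"
console.log(`${trip.destination}: ${trip.activities.length} activities`);
```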
Using adjectives like 'elite' (e.g., 'You are an elite photographer') isn't about flattery. The word acts as a keyword that steers the model toward the higher-quality, expert-level portion of its training data associated with such terms, producing better output.
To generate more aesthetic and less 'uncanny' images, include specific camera, lens, and film stock metadata in prompts (e.g., 'Leica, 50mm f1.2, Kodak Tri-X'). This acts as a filter, forcing the model to reference its training data associated with professional photography, yielding higher-quality results.
