An early Synthetic Users experiment involved an AI agent, "Captain Planet," representing the environmental impact of product decisions. This highlights a novel use case for LLMs: modeling the needs of non-human entities (communities, ecosystems, future generations) in strategic planning.

Related Insights

To save time with busy clients, create a "synthetic" version of them as a custom GPT trained on their public statements and past feedback. This allows teams to get work 80-90% of the way to alignment internally, ensuring human interaction is focused on high-level strategy.
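
This kind of persona can be assembled as a system prompt. A minimal sketch, assuming a generic chat-completion backend; the helper names (`build_client_persona`, `call_llm`) and prompt wording are illustrative, not a prescribed method:

```python
# Sketch: folding a client's public statements and past feedback into a
# "synthetic client" system prompt. call_llm is a stub standing in for
# any chat-completion API.

def build_client_persona(name, statements, past_feedback):
    """Assemble a system prompt that impersonates the client."""
    quotes = "\n".join(f"- {s}" for s in statements)
    notes = "\n".join(f"- {f}" for f in past_feedback)
    return (
        f"You are a synthetic stand-in for {name}. Answer as they would.\n"
        f"Public statements:\n{quotes}\n"
        f"Past feedback on our work:\n{notes}\n"
        "Flag anything they would likely push back on."
    )

def call_llm(system_prompt, question):
    # Placeholder for a real chat-completion call.
    return f"[synthetic reply to: {question}]"

def ask_synthetic_client(persona_prompt, question):
    return call_llm(persona_prompt, question)
```

Internal reviewers would iterate against `ask_synthetic_client` before the real client ever sees a draft.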

Create distinct AI agents representing key executives (e.g., CEO, CMO, CSO). By posing strategic questions to each, you can simulate how different departments might react, identify potential misalignments in priorities, and refine proposals before presenting them to real stakeholders.
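
The pattern above amounts to fanning one question out to several role-prompted agents. A minimal sketch, assuming a generic chat API behind a stubbed `call_llm`; the role prompts are invented for illustration:

```python
# Sketch: polling a panel of executive personas on the same question and
# collecting their (possibly conflicting) reactions side by side.

EXEC_PERSONAS = {
    "CEO": "You prioritize growth and long-term strategy.",
    "CMO": "You prioritize brand consistency and customer acquisition.",
    "CSO": "You prioritize sustainability and regulatory risk.",
}

def call_llm(role_prompt, question):
    # Placeholder for a real chat-completion call.
    return f"[reaction shaped by '{role_prompt}' to: {question}]"

def simulate_panel(question, personas=EXEC_PERSONAS):
    """Return each persona's reaction so misalignments can be compared."""
    return {title: call_llm(prompt, question) for title, prompt in personas.items()}
```

Scanning the returned dict for contradictory answers is where the misalignments surface.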

An agent can be trained on a user's entire output to build a "human replica." This model helps other agents resolve complex questions by navigating the inherent contradictions in human thought (e.g., financial self vs. personal self), enabling better autonomous decision-making.
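
One way to picture "navigating contradictions" is to treat the replica as several weighted selves that vote. This is a toy sketch of that idea only; the selves, preference scores, and decision rule are all invented assumptions, not a published method:

```python
# Sketch: a "human replica" that answers a yes/no question by consulting
# several internal "selves" with conflicting preferences and summing
# their weighted votes. All values below are illustrative.

SELVES = {
    "financial_self":    {"buy_new_laptop": -1},  # averse to spending
    "professional_self": {"buy_new_laptop": +2},  # values better tools
    "leisure_self":      {"buy_new_laptop": +1},
}

def decide(question, selves=SELVES, weights=None):
    """Positive weighted total means 'yes'; otherwise 'no'."""
    weights = weights or {name: 1.0 for name in selves}
    total = sum(weights[name] * prefs.get(question, 0)
                for name, prefs in selves.items())
    return "yes" if total > 0 else "no"
```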

Early AI agents are unreliable and behave in non-human ways. Framing them as "virtual collaborators" sets them up for failure. A creative metaphor, like "fairies," correctly frames them as non-human entities with unique powers and flaws. This manages expectations and unlocks a rich vein of product ideas based on the metaphor's lore.

A UK startup has found that LLMs can generate accurate, simulated focus group discussions. By creating diverse digital personas, the AI reproduces the nuanced and often surprising feedback that typically requires expensive and slow in-person research, especially in politics.

A study with Colgate-Palmolive found that large language models can accurately mimic real consumer behavior and purchase intent. This validates the use of "synthetic consumers" for market research, enabling companies to replace costly, slow human surveys with scalable AI personas for faster, richer product feedback.
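
A synthetic-consumer survey can be reduced to prompting each persona for a purchase-intent rating and aggregating. A hedged sketch, assuming a 1-5 intent scale; the personas and the stubbed `ask_persona` (standing in for an LLM call) are illustrative:

```python
# Sketch: surveying synthetic consumer personas for purchase intent
# (1-5) and averaging the ratings.

PERSONAS = [
    "budget-conscious parent of two",
    "early-adopter tech enthusiast",
    "retiree loyal to familiar brands",
]

def ask_persona(persona, product_concept):
    # Stub for an LLM call such as:
    # "As a {persona}, rate your purchase intent for {product_concept}, 1-5."
    canned = {
        "budget-conscious parent of two": 3,
        "early-adopter tech enthusiast": 5,
        "retiree loyal to familiar brands": 2,
    }
    return canned[persona]

def survey(product_concept, personas=PERSONAS):
    """Return mean purchase intent across the synthetic panel."""
    ratings = [ask_persona(p, product_concept) for p in personas]
    return sum(ratings) / len(ratings)
```

The same loop scales to hundreds of personas at marginal cost, which is the economic argument over human panels.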

Moltbook, a social network exclusively for AI agents that has attracted over 1.5 million users, represents the emergence of digital spaces where non-human entities create content and interact. This points to a future where marketing and analysis may need to target autonomous AI, not just humans.

Create AI agents that embody key executive personas to monitor operations. A "CFO agent" could audit for cost efficiency while a "brand agent" checks for compliance. This system surfaces strategic conflicts that require a human-in-the-loop to arbitrate, ensuring alignment.
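
The escalation logic is the core of this pattern: unanimous agents act automatically, disagreement routes to a human. A minimal sketch with rule-based stand-ins for the LLM reviewers; the budget cap and messaging check are invented thresholds:

```python
# Sketch: two monitoring agents review a proposed action; agreement is
# handled automatically, disagreement escalates to a human arbiter.

def cfo_agent(action):
    """Approve only if cost stays under a hypothetical budget cap."""
    return action["cost"] <= 10_000

def brand_agent(action):
    """Approve only if the action uses vetted messaging."""
    return action["messaging"] in {"approved", "reviewed"}

def review(action):
    verdicts = {"CFO": cfo_agent(action), "brand": brand_agent(action)}
    if all(verdicts.values()):
        return "auto-approve"
    if not any(verdicts.values()):
        return "auto-reject"
    return "escalate-to-human"  # agents disagree: human arbitrates
```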

Unlike traditional software, AI products have unpredictable user inputs and LLM outputs (non-determinism). They also require balancing AI autonomy (agency) with user oversight (control). These two factors fundamentally change the product development process, requiring new approaches to design and risk management.

A new product development principle for AI is to observe the model's "latent demand"—what it attempts to do on its own. Instead of just reacting to user hacks, Anthropic builds tools to facilitate the model's innate tendencies, inverting the traditional user-centric approach.
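
Operationally, observing latent demand can mean mining transcripts for actions the model keeps attempting that no tool supports. A sketch under assumed data shapes; the transcript format and `SUPPORTED_TOOLS` set are illustrative, and this is not Anthropic's actual tooling:

```python
# Sketch: tallying attempted tool calls that have no backing tool,
# surfacing what the model "wants" to do, most-wanted first.

from collections import Counter

SUPPORTED_TOOLS = {"search", "calculator"}

def latent_demand(transcripts):
    """Count unsupported tool-call attempts across transcripts."""
    attempts = Counter()
    for transcript in transcripts:
        for call in transcript["tool_calls"]:
            if call not in SUPPORTED_TOOLS:
                attempts[call] += 1
    return attempts.most_common()
```

A recurring unsupported call at the top of this list is a candidate for the next tool to build.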

AI Can Model Abstract Stakeholders Like 'The Planet' in Product Decisions | RiffOn