AI models personalize responses based on user history and profile data, including your employer. Ask an LLM what it thinks of your company and the answer will be skewed by that context. To get an unbiased picture, marketers must query the AI through synthetic personas that represent their actual target customers.
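A minimal sketch of the persona approach, using the OpenAI Python SDK as a stand-in for whatever chat tool you use; the persona, company name, and model are illustrative placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona standing in for a real target customer
persona = (
    "You are Dana, a 38-year-old operations manager at a mid-sized logistics "
    "firm. You evaluate software vendors pragmatically and have no prior "
    "relationship with the company being discussed."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "What is your honest impression of Acme Analytics? "
                       "What would make you consider or reject them?",
        },
    ],
)
print(response.choices[0].message.content)
```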

Related Insights

The most common marketing phrases generated by ChatGPT are now so overused they cause a 15% drop in audience engagement. Marketers must use a follow-up prompt to 'un-AI' the content, specifically telling the tool to remove generic phrases, corporate tone, and predictable language to regain authenticity.
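A hedged sketch of such a follow-up pass, again via the OpenAI Python SDK; the draft copy and the exact "un-AI" wording are illustrative starting points:

```python
from openai import OpenAI

client = OpenAI()

draft = "In today's fast-paced digital landscape, our game-changing platform ..."

# Follow-up pass that strips generic AI phrasing from an earlier draft
un_ai_prompt = (
    "Rewrite the copy below. Remove stock marketing phrases (e.g. "
    "'game-changing', 'in today's fast-paced world'), drop the corporate "
    "tone, vary the sentence rhythm, and keep every factual claim intact.\n\n"
    + draft
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": un_ai_prompt}],
)
print(response.choices[0].message.content)
```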

While AI efficiently transcribes user interviews, true customer insight comes from ethnographic research—observing users in their natural environment. What people say is often different from their actual behavior. Don't let AI tools create a false sense of understanding that replaces direct observation.

A study with Colgate-Palmolive found that large language models can accurately mimic real consumer behavior and purchase intent. This validates the use of "synthetic consumers" for market research, enabling companies to replace costly, slow human surveys with scalable AI personas for faster, richer product feedback.

Instead of a generalist AI, LinkedIn built a suite of specialized internal agents for tasks like trust reviews, growth analysis, and user research. These agents are trained on LinkedIn's unique historical data and playbooks, providing critiques and insights that generic external tools cannot match.

To test complex AI prompts for tasks like customer persona generation without exposing sensitive company data, first ask the AI to create realistic, synthetic data (e.g., fake sales call notes). This allows you to safely develop and refine prompts before applying them to real, proprietary information, overcoming data privacy hurdles in experimentation.
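A sketch of that two-step workflow, assuming the OpenAI Python SDK; the prompts, model name, and helper function are illustrative:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative model name


def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


# Step 1: have the model invent stand-in data so nothing proprietary is exposed
fake_notes = ask(
    "Generate five realistic but entirely fictional B2B sales call notes for a "
    "mid-market HR software vendor. Invent all names, companies, and figures."
)

# Step 2: develop and refine the real prompt against the synthetic data
personas = ask(
    "From the sales call notes below, derive three customer personas with "
    "goals, objections, and buying triggers:\n\n" + fake_notes
)
print(personas)
# Once the prompt behaves well here, swap fake_notes for the real notes.
```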

Go beyond using AI for data synthesis. Leverage it as a critical partner to stress-test your strategic opinions and assumptions. AI can challenge your thinking, identify conflicts in your data, and help you refine your point of view, ultimately hardening your final plan.
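One way this can look in practice, sketched with the OpenAI Python SDK; the plan, data snippets, and prompt wording are placeholders:

```python
from openai import OpenAI

client = OpenAI()

plan = "Q3 plan: shift 40% of paid-search budget to LinkedIn ads ..."   # your draft
data = "Channel report: LinkedIn CAC up 18% QoQ; search CAC flat ..."   # your evidence

critique = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "Here is a strategic plan and the data behind it. Challenge the "
            "plan: list the assumptions it depends on, flag places where the "
            "data conflicts with the conclusions, and state the strongest "
            "opposing view.\n\nPLAN:\n" + plan + "\n\nDATA:\n" + data
        ),
    }],
)
print(critique.choices[0].message.content)
```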

Do not blindly trust an LLM's evaluation scores. The biggest mistake is showing stakeholders metrics that don't match their perception of product quality. To build trust, first hand-label a sample of data with binary outcomes (good/bad), then compare the LLM judge's scores against these human labels to ensure agreement before deploying the eval.
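A minimal sketch of that validation step in plain Python; the labels, scores, and threshold are invented for illustration:

```python
# Hand-labeled sample: 1 = good, 0 = bad (invented for illustration)
human_labels = [1, 1, 0, 1, 0, 0, 1, 0]
# Scores from the LLM judge on the same items, 0.0-1.0 (also invented)
judge_scores = [0.92, 0.81, 0.35, 0.64, 0.58, 0.12, 0.88, 0.20]

THRESHOLD = 0.5  # hypothetical cutoff mapping judge scores to binary outcomes
judge_labels = [1 if s >= THRESHOLD else 0 for s in judge_scores]

matches = sum(h == j for h, j in zip(human_labels, judge_labels))
agreement = matches / len(human_labels)
print(f"Judge/human agreement: {agreement:.0%}")  # 88% on this toy sample

# Inspect disagreements before trusting the judge at scale
for i, (h, j) in enumerate(zip(human_labels, judge_labels)):
    if h != j:
        print(f"Item {i}: human={h}, judge={j} (score {judge_scores[i]})")
```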

AI models tend to be overly optimistic. To get a balanced market analysis, explicitly instruct AI research tools like Perplexity to act as a "devil's advocate." This helps uncover risks, challenge assumptions, and makes it easier for product managers to say "no" to weak ideas quickly.
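A sketch of the devil's-advocate framing, using the OpenAI SDK as a stand-in (the same instruction can be typed directly into Perplexity); the wording and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Devil's-advocate framing; the exact wording is a starting point, not a formula
system = (
    "Act as a devil's advocate. For every claim in your analysis, surface the "
    "strongest counter-evidence, name the riskiest assumptions, and state what "
    "would have to be true for this idea to fail."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system},
        {
            "role": "user",
            "content": "Assess the market opportunity for an AI-powered "
                       "expense-report assistant.",
        },
    ],
)
print(response.choices[0].message.content)
```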

Many companies fail with AI prospecting because their outputs are generic. The key to success isn't the AI tool but the quality of the data fed into it and relentless prompt iteration. It took the speakers six months, not six weeks, to outperform traditional methods, underscoring the need for patience and deep customization informed by sales-team feedback.

LLMs learn from existing internet content. Breeze's founder found that because his partner had a larger online footprint, GPT incorrectly named the partner as a co-founder. This demonstrates a new urgency for founders to publish content to control their brand's narrative in the age of AI.