The debate over whether LLMs are truly "intelligent" is academic. The practical test for product builders is whether the tool produces valuable outputs that lead to better decisions, regardless of the underlying mechanism.
When confronting criticism of disruptive technology, determine whether the objection stems from a fundamental moral belief. For such critics, no amount of data will change their mind because they believe the technology "should not exist" on principle, which makes evidence-based arguments ineffective.
The process of user research, such as conducting interviews, can become overvalued. The ultimate objective is to build good products that solve real problems for people. The methods used to achieve that outcome are secondary to the outcome itself.
Many vocal critics of a new technology base their skepticism on preconceived notions, not direct experience. Their opposition is often rooted in a desire for it *not* to work. Directly asking if they've used the product can expose this bias and reframe the conversation around actual results.
When engaging with a vocal critic online, especially an influential one, the goal isn't to convert them. The strategic objective is to present your case for the "people on the fence" who are observing and might otherwise only hear the critic's unchallenged viewpoint.
Feeling embarrassed when looking back at early versions of your product or career milestones shouldn't be seen as negative. It is a strong signal that you have made significant progress and that your standards and capabilities have improved over time.
An early Synthetic Users experiment involved an AI agent, "Captain Planet," representing the environmental impact of product decisions. This highlights a novel use case for LLMs: modeling the needs of non-human entities (communities, ecosystems, future generations) in strategic planning.
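As a concrete illustration, here is a minimal sketch of such a non-human stakeholder agent, assuming the OpenAI Python SDK (any chat-completion API would do); the persona prompt and function name are hypothetical and inspired by the "Captain Planet" experiment, not the original implementation.

```python
# Hypothetical sketch: an agent persona that speaks for non-human stakeholders.
from openai import OpenAI

client = OpenAI()

NON_HUMAN_STAKEHOLDER = (
    "You speak for stakeholders who cannot attend the meeting: the local "
    "ecosystem, affected communities, and future generations. Evaluate "
    "product decisions only from their perspective."
)

def environmental_review(decision: str) -> str:
    """Ask the non-human stakeholder agent to weigh in on a product decision."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": NON_HUMAN_STAKEHOLDER},
            {
                "role": "user",
                "content": f"Proposed decision:\n{decision}\n\nWhat would you push back on?",
            },
        ],
    )
    return resp.choices[0].message.content
```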
Just as one human interview can go off-track, a single AI-generated interview can produce anomalous results. Running a larger batch of synthetic interviews allows you to identify outliers and focus on the "center of gravity" of the responses, increasing the reliability of the overall findings.
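A minimal sketch of this batch-and-filter idea, assuming the OpenAI Python SDK for both chat and embeddings; the persona, sample size, and 80% cutoff are illustrative assumptions, not Synthetic Users' actual method.

```python
# Hypothetical sketch: run a batch of synthetic interviews, drop outliers,
# and keep the "center of gravity" of the responses.
import numpy as np
from openai import OpenAI

client = OpenAI()

def synthetic_interview(persona: str, question: str) -> str:
    """Run one synthetic interview turn for a given persona."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"You are being interviewed. Answer as: {persona}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def embed(texts: list[str]) -> np.ndarray:
    """Embed responses so we can measure how far each sits from the group."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def center_of_gravity(persona: str, question: str, n: int = 20, keep: float = 0.8) -> list[str]:
    """Run n interviews, discard the farthest outliers, return the consensus set."""
    answers = [synthetic_interview(persona, question) for _ in range(n)]
    vectors = embed(answers)
    centroid = vectors.mean(axis=0)
    distances = np.linalg.norm(vectors - centroid, axis=1)
    cutoff = np.quantile(distances, keep)  # keep the ~80% closest to the centroid
    return [a for a, d in zip(answers, distances) if d <= cutoff]
```

The same aggregation logic works with any embedding model; the point is that averaging over a batch makes any single anomalous interview easy to spot and discard.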
A single LLM struggles with complex, multi-goal tasks. By breaking a task down and assigning specific roles (e.g., planner, interviewer, critic) to a "swarm" of agents, each can perform its bounded task more effectively, leading to a higher quality overall result.
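A minimal sketch of this role-splitting pattern, again assuming the OpenAI Python SDK; the three roles and the planner-to-interviewer-to-critic pipeline are illustrative assumptions rather than the actual agent swarm.

```python
# Hypothetical sketch: decompose one research task into bounded, role-specific calls.
from openai import OpenAI

client = OpenAI()

ROLES = {
    "planner": "Break the research goal into 3-5 focused interview questions.",
    "interviewer": "Answer the questions as the target persona, in first person.",
    "critic": "Review the interview transcript. Flag vague or implausible answers.",
}

def run_role(role: str, task: str) -> str:
    """One bounded LLM call: a single role with a single, narrow task."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ROLES[role]},
            {"role": "user", "content": task},
        ],
    )
    return resp.choices[0].message.content

def swarm_interview(goal: str, persona: str) -> dict:
    """Planner -> interviewer -> critic, each doing its bounded part."""
    plan = run_role("planner", f"Research goal: {goal}")
    transcript = run_role("interviewer", f"Persona: {persona}\nQuestions:\n{plan}")
    review = run_role("critic", f"Goal: {goal}\nTranscript:\n{transcript}")
    return {"plan": plan, "transcript": transcript, "critique": review}
```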
Unlike traditional desk research which finds existing data, generative AI can infer responses for novel scenarios not present in training data. It builds an internal "model of human nature," allowing it to generate plausible answers to new questions, effectively creating research that was never done.
