Unlike traditional software, AI products have unpredictable user inputs and LLM outputs (non-determinism). They also require balancing AI autonomy (agency) with user oversight (control). These two factors fundamentally change the product development process, requiring new approaches to design and risk management.

Related Insights

The need for explicit user transparency is most critical for non-deterministic systems like LLMs, where even creators don't always know why an output was generated. Unlike a simple rules engine with predictable outcomes, AI's "black box" nature requires giving users more context to build trust.

Unlike traditional deterministic products, AI models are probabilistic; the same query can yield different results. This uncertainty requires designers, PMs, and engineers to align on flexible expectations rather than fixed workflows, fundamentally changing the nature of collaboration.
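
To make the probabilistic point concrete, here is a minimal, self-contained sketch using a toy next-token distribution (not a real model or API): temperature-weighted sampling over the exact same input can produce a different output on every run.

```python
import random

# Toy next-token distribution for one fixed prompt. A real LLM produces
# a probability distribution like this at every decoding step.
CANDIDATES = ["refund", "replacement", "escalate", "apologize"]
WEIGHTS = [0.40, 0.30, 0.20, 0.10]

def sample_response(temperature: float) -> str:
    """Sample one 'completion' for the same query.

    Higher temperature flattens the distribution, making unlikely
    outputs more common; temperature -> 0 approaches a fixed argmax.
    """
    if temperature == 0:
        return CANDIDATES[WEIGHTS.index(max(WEIGHTS))]
    adjusted = [w ** (1.0 / temperature) for w in WEIGHTS]
    return random.choices(CANDIDATES, weights=adjusted, k=1)[0]

# Same query, five runs: the answer can differ every time.
print([sample_response(temperature=1.0) for _ in range(5)])
```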

To build trust, users need Awareness (know when AI is active), Agency (have control over it), and Assurance (confidence in its outputs). This framework, from a former Google DeepMind PM, provides a clear model for designing trustworthy AI experiences by mimicking human trust signals.

As AI evolves from single-task tools to autonomous agents, the human role transforms. Instead of simply using AI, professionals will need to manage and oversee multiple AI agents, ensuring their actions are safe, ethical, and aligned with business goals, acting as a critical control layer.
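
A hypothetical sketch of that control layer, under the assumption that every agent routes its proposed action through one review gate before anything executes (the `ProposedAction` type and agent names are illustrative, not any specific framework's API):

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    agent_name: str
    description: str   # what the agent wants to do
    reversible: bool   # irreversible actions demand stricter review

def human_approves(action: ProposedAction) -> bool:
    """Stand-in for a real review UI: the overseer sees what each
    agent intends to do and explicitly allows or blocks it."""
    answer = input(f"[{action.agent_name}] {action.description} -- approve? (y/n) ")
    return answer.strip().lower() == "y"

def control_layer(proposals: list[ProposedAction]) -> None:
    for action in proposals:
        # Auto-approve only reversible actions; anything irreversible
        # waits for the human overseer.
        if action.reversible or human_approves(action):
            print(f"EXECUTE {action.agent_name}: {action.description}")
        else:
            print(f"BLOCKED {action.agent_name}: {action.description}")

control_layer([
    ProposedAction("billing-agent", "draft a refund email", reversible=True),
    ProposedAction("ops-agent", "delete stale customer records", reversible=False),
])
```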

Features like custom commands and sub-agents can look like reliable, deterministic workflows. However, because they are built on non-deterministic LLMs, they fail unpredictably. This misleads users into trusting a fragile abstraction and ultimately results in a poor experience.
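
A minimal sketch of why the abstraction is fragile, with a stubbed `call_llm` standing in for a real model: the custom command promises structured JSON, but the model underneath only *probably* returns it, so the wrapper has to validate and fail loudly rather than pretend determinism.

```python
import json
import random

def call_llm(prompt: str) -> str:
    """Stub for a real model call: usually returns valid JSON,
    sometimes doesn't -- that's the non-determinism underneath."""
    if random.random() < 0.9:
        return '{"command": "summarize", "target": "report.txt"}'
    return "Sure! Here's the command you asked for: summarize report.txt"

def run_custom_command(prompt: str, max_retries: int = 2) -> dict:
    """Looks like a deterministic API, but can only be *probably* correct.
    Validating output and surfacing failure beats a silently fragile facade."""
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)
            if "command" in parsed:
                return parsed
        except json.JSONDecodeError:
            pass  # model ignored the schema; retry
    raise RuntimeError("LLM output never matched the expected schema")

print(run_custom_command("summarize report.txt as JSON"))
```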

Unlike traditional software, AI products are evolving systems. The role of an AI PM shifts from defining fixed specifications to managing uncertainty, bias, and trust. The focus is on creating feedback loops for continuous improvement and establishing guardrails for model behavior post-launch.
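
One hypothetical shape for "guardrails plus feedback loop" (the blocked-topics policy below is an assumption for illustration): check every model output against explicit policies before it reaches the user, and log what was served or blocked so the team can tune behavior against real post-launch data.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrails")

BLOCKED_TOPICS = ("medical diagnosis", "legal advice")  # illustrative policy

def guardrail_check(output: str) -> bool:
    """Return True if the output is safe to show the user."""
    lowered = output.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def serve_with_feedback_loop(model_output: str) -> str:
    if guardrail_check(model_output):
        # Log what shipped; later user ratings join this record so the
        # team can retrain or re-prompt against observed behavior.
        log.info("served output=%r", model_output)
        return model_output
    log.warning("blocked output=%r", model_output)
    return "I can't help with that, but here's a safe alternative..."

print(serve_with_feedback_loop("Here is a summary of your meeting notes."))
print(serve_with_feedback_loop("My medical diagnosis is..."))
```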

AI's unpredictability requires more than just better models. Product teams must work with researchers on training data and specific evaluations for sensitive content. Simultaneously, the UI must clearly differentiate between original and AI-generated content to facilitate effective human oversight.
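
A small, assumed data model for the oversight point: tag every piece of content with its provenance so the UI can render AI-generated spans differently from the user's original text (the `Source` categories here are illustrative).

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    USER = "user"            # original, human-written
    AI = "ai-generated"      # produced by the model
    AI_EDITED = "ai-edited"  # human-reviewed AI output

@dataclass
class ContentSpan:
    text: str
    source: Source

def render(spans: list[ContentSpan]) -> str:
    """Console stand-in for a real UI: visibly mark non-human content."""
    parts = []
    for span in spans:
        if span.source is Source.USER:
            parts.append(span.text)
        else:
            parts.append(f"[{span.source.value}] {span.text}")
    return " ".join(parts)

print(render([
    ContentSpan("Q3 revenue grew 12%.", Source.USER),
    ContentSpan("This likely reflects seasonal demand.", Source.AI),
]))
```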

It's a common misconception that advancing AI reduces the need for human input. In reality, the probabilistic nature of AI demands increased human interaction and tighter collaboration among product, design, and engineering teams to align goals and navigate uncertainty.

Unlike deterministic SaaS software that works consistently, AI is probabilistic and doesn't work perfectly out of the box. Achieving "human-grade" performance (e.g., 99.9% reliability) requires continuous tuning and expert guidance, countering the hype that AI is an immediate, hands-off solution.
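
A sketch of how a team might quantify that gap, with a stubbed `run_task` and an assumed ~97% out-of-the-box accuracy: sample the task many times and compare the observed pass rate to the "human-grade" target.

```python
import random

TARGET_RELIABILITY = 0.999  # the "human-grade" bar from the text

def run_task() -> bool:
    """Stub for one end-to-end run of the AI feature; a real harness
    would call the model and score the output against a rubric."""
    return random.random() < 0.97  # assumed out-of-the-box accuracy

def measure_reliability(trials: int = 10_000) -> float:
    passes = sum(run_task() for _ in range(trials))
    return passes / trials

observed = measure_reliability()
print(f"observed: {observed:.4f}, target: {TARGET_RELIABILITY}")
print("needs tuning" if observed < TARGET_RELIABILITY else "at target")
```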

Fully autonomous AI agents are not yet viable in enterprises. Alloy Automation builds "semi-deterministic" agents that combine AI's reasoning with deterministic workflows, escalating to a human when confidence is low to ensure safety and compliance.
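
Alloy's exact mechanism isn't detailed here, so this is a hedged sketch of the general pattern: the model proposes a step with a confidence score, deterministic code executes known-safe steps, and anything below an assumed threshold escalates to a human.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff, tuned per deployment

@dataclass
class AgentStep:
    action: str
    confidence: float  # model's self-reported or eval-derived score

# Deterministic workflow steps the agent may invoke directly.
SAFE_ACTIONS = {
    "lookup_order": lambda: "order #1234 found",
    "send_status_email": lambda: "status email queued",
}

def execute(step: AgentStep) -> str:
    if step.action in SAFE_ACTIONS and step.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence on a known workflow: run deterministically.
        return SAFE_ACTIONS[step.action]()
    # Low confidence or unknown action: hand off for safety/compliance.
    return f"ESCALATED to human: {step.action} (conf={step.confidence:.2f})"

print(execute(AgentStep("lookup_order", confidence=0.95)))
print(execute(AgentStep("issue_refund", confidence=0.60)))
```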