By giving Cowork a distinct, less-polished tab, Anthropic sets the expectation that it is an evolving feature. This strategy lets the team ship daily, gather feedback on a "bleeding edge" product, and avoid disrupting the core, stable chat experience.
Mandating AI usage can backfire by making the tools feel like a threat. A better approach is to create "safe spaces" for exploration. Atlassian runs "AI builders weeks," blocking off synchronous time for cross-functional teams to tinker together. The celebrated outcome is learning, not a finished product, which removes pressure and encourages genuine experimentation.
Unlike standard chatbots where you wait for a response before proceeding, Cowork allows users to assign long-running tasks and queue new requests while the AI is working. This shifts the interaction from a turn-by-turn conversation to a delegated task model.
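To make the delegated-task model concrete, here is a minimal sketch of a queue that accepts new requests while earlier ones are still running. The shapes and names (Task, TaskQueue, runAgentTask) are illustrative assumptions, not Cowork's actual API.

```typescript
// Illustrative delegated-task queue (not Cowork's real API): enqueue()
// returns immediately so the user can keep adding requests while earlier
// tasks run in the background.
type TaskStatus = "queued" | "running" | "done";

interface Task {
  id: string;
  prompt: string;
  status: TaskStatus;
}

class TaskQueue {
  private tasks: Task[] = [];
  private draining = false;

  // Accept a new request and return right away; processing happens later.
  enqueue(prompt: string): Task {
    const task: Task = { id: crypto.randomUUID(), prompt, status: "queued" };
    this.tasks.push(task);
    void this.drain();
    return task;
  }

  // Work through queued tasks one at a time in the background.
  private async drain(): Promise<void> {
    if (this.draining) return;
    this.draining = true;
    let next: Task | undefined;
    while ((next = this.tasks.find((t) => t.status === "queued"))) {
      next.status = "running";
      await runAgentTask(next.prompt); // stand-in for the long-running AI call
      next.status = "done";
    }
    this.draining = false;
  }
}

// Placeholder for the long-running model call.
async function runAgentTask(prompt: string): Promise<void> {
  console.log(`working on: ${prompt}`);
  await new Promise((resolve) => setTimeout(resolve, 1_000));
}
```

The point of the sketch is that enqueue() never blocks: a second or third request simply lands in the queue while the first is still being worked on, which is the core difference from a turn-by-turn chat.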
To introduce powerful features without overwhelming users, design interactions that reveal functionality contextually. For instance, instead of a tutorial on zooming, have the UI automatically zoom out when space becomes limited. This makes the feature discoverable and its purpose immediately obvious.
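As a rough illustration of that kind of contextual disclosure, the sketch below adjusts zoom only once content outgrows the viewport, so the behavior surfaces exactly when it is needed. The Canvas shape and fitContentToViewport are hypothetical names, not drawn from any product discussed here.

```typescript
// Hypothetical contextual-disclosure sketch: the canvas zooms itself out
// only when content no longer fits, instead of teaching users a zoom control.
interface Canvas {
  zoom: number;          // current zoom factor, 1 = 100%
  contentWidth: number;  // content width in px at zoom 1
  viewportWidth: number; // visible width in px
}

function fitContentToViewport(canvas: Canvas): number {
  // Leave the zoom alone while everything still fits.
  if (canvas.contentWidth * canvas.zoom <= canvas.viewportWidth) {
    return canvas.zoom;
  }
  // Zoom out just enough to show everything, with a readable floor.
  const fitted = canvas.viewportWidth / canvas.contentWidth;
  return Math.max(fitted, 0.25);
}
```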
A dual-track launch strategy is most effective. Ship small, useful improvements on a weekly cadence to demonstrate momentum and reliability. For major, innovative features that represent a step-change, consolidate them into a single, high-impact "noisy" launch to capture maximum attention.
True speed isn't shipping broken products to everyone; it is responsible iteration with opt-in user groups. This approach distinguishes valuable A/B experiments from unacceptable "spaghetti at the wall" testing by targeting willing early adopters who understand a feature's experimental status.
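A simple way to enforce that distinction in code is an explicit opt-in gate: experimental surfaces are evaluated only for users who have joined the early-adopter group. The User shape and function below are assumptions for illustration, not any specific product's API.

```typescript
// Illustrative opt-in gate: experimental features are only shown to users
// who have explicitly joined the early-adopter group. The User shape and
// field names are assumptions, not a real product's schema.
interface User {
  id: string;
  optedIntoExperiments: boolean;
  enrolledExperiments: Set<string>;
}

function canSeeExperiment(user: User, experimentId: string): boolean {
  // Users who never opted in are excluded outright, regardless of rollout.
  if (!user.optedIntoExperiments) return false;
  return user.enrolledExperiments.has(experimentId);
}
```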
Separate innovation teams have traditionally created cultural friction, but AI now makes them more viable. Because a small group can go from idea to prototype quickly and with minimal resources, it can explore the "next frontier" without derailing the core product org, provided clear handoff rules exist.
Large AI labs like OpenAI are not always the primary innovators in product experience. Instead, a "supply chain of product ideas" exists where startups first popularize new interfaces, like templated creation. The labs then observe what works and integrate these proven concepts into their own platforms.
Historically, resource-intensive prototyping (requiring designers and tools like Figma) was reserved for major features. AI tools cut prototype creation to minutes, letting PMs de-risk even minor features with user testing and solution discovery, which raises the success rate of features across the entire product.
The V0 team dogfoods their own AI prototyping tool to define and communicate new features internally. Instead of writing specification documents, PMs build and share working prototypes. This provides immediate clarity and sparks more effective, tangible feedback from the entire team.
The panel suggests a best practice for AI prototyping tools: use them for pinpointed interactions or small, specific user flows. Once a prototype grows to encompass the entire product, it's more efficient to move directly into the codebase, as you're past the point of exploration.