Brian Chesky compares the current state of AI interfaces to the MS-DOS era: a functional but primitive way to interact with powerful new technology. He believes the chatbot is not AI's final form and that a "multi-touch" moment is needed, in which devices and apps are completely reimagined for an AI-native consumer world.

Related Insights

Figma CEO Dylan Field predicts we will look back at current text prompting for AI as a primitive, command-line interface, similar to MS-DOS. The next major opportunity is to create intuitive, use-case-specific interfaces—like a compass for AI's latent space—that allow for more precise control beyond text.

Current text-based prompting for AI is a primitive, temporary phase, similar to MS-DOS. The future lies in more intuitive, constrained, and creative interfaces that allow for richer, more visual exploration of a model's latent space, moving beyond just natural language.

As AI makes digital content increasingly artificial and indistinguishable from reality, the value of authentic, in-person human connection will skyrocket. The most powerful counter-positioning against the AI trend isn't less technology, but using technology to facilitate more tangible, "real"-world interactions.

Comparing chat interfaces to the MS-DOS command line, Atlassian's Sharif Mansour argues that while chat is a universal entry point for AI, it's the worst interface for specialized tasks. The future lies in verticalized applications with dedicated UIs built on top of conversational AI, just as apps were built on DOS.

While chatbots are an effective entry point, they are limiting for complex creative tasks. The next wave of AI products will feature specialized user interfaces that combine fine-grained, gesture-based controls for professionals with hands-off automation for simpler tasks.

The current chatbot interface is not the final form for AI. Drawing a parallel to the personal computer's evolution from text prompts to GUIs and web browsers, Marc Andreessen argues that radically different and superior user experiences for AI are yet to be invented.

AI is best understood not as a single tool, but as a flexible underlying interface. It can manifest as a chat box for some, but its real potential is in creating tailored workflows that feel native to different roles, like designers or developers, without forcing everyone into a single interaction model.

Chatbots are fundamentally linear, a format ill-suited to complex tasks like planning a trip. The next generation of AI products will use AI as a co-creation tool within a more flexible, canvas-like interface, allowing users to manipulate and organize AI-generated content non-linearly.

Brian Chesky applies the classic "overestimate in a year, underestimate in a decade" framework to AI. He argues that despite hype, daily life hasn't changed much yet. The true shift will occur in 3-5 years, once the top 50 consumer apps are rebuilt as AI-native products.

Figma's CEO likens current text prompts to MS-DOS: functional but primitive. He sees a massive opportunity in designing intuitive, use-case-specific interfaces that move beyond language to help users 'steer the spaceship' of complex AI models more effectively.