The future of AI interaction won't be a multitude of specialized apps. Instead, interaction will likely converge on a small number of powerful, generalized input boxes that intelligently route user intent, much like the Chrome address bar or Google's main search page.
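
As a concrete illustration of that routing idea, here is a minimal TypeScript sketch of a single input box dispatching to specialized handlers. Everything in it is an invented stand-in: the intent labels, the regex-based `classifyIntent` (where a real system would call a model), and the handlers themselves.

```typescript
type Intent = "search" | "navigate" | "compose" | "calculate";

// A trivial stand-in for what would be a model-backed intent classifier.
function classifyIntent(query: string): Intent {
  if (/^https?:\/\//.test(query)) return "navigate";
  if (/^[\d\s+\-*/().]+$/.test(query)) return "calculate";
  if (/^(write|draft|reply)/i.test(query)) return "compose";
  return "search";
}

// Specialized tools hidden behind one generalized surface.
const handlers: Record<Intent, (query: string) => string> = {
  search: (q) => `Searching for: ${q}`,
  navigate: (q) => `Opening: ${q}`,
  compose: (q) => `Drafting text for: ${q}`,
  calculate: (q) => `Evaluating: ${q}`,
};

// One input box, many destinations.
function route(query: string): string {
  return handlers[classifyIntent(query)](query);
}

console.log(route("https://example.com"));    // Opening: https://example.com
console.log(route("write a thank-you note")); // Drafting text for: ...
```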

Related Insights

Figma CEO Dylan Field predicts we will look back at current text prompting for AI as a primitive, command-line interface, similar to MS-DOS. The next major opportunity is to create intuitive, use-case-specific interfaces—like a compass for AI's latent space—that allow for more precise control beyond text.

Comparing chat interfaces to the MS-DOS command line, Atlassian's Sharif Mansour argues that while chat is a universal entry point for AI, it's the worst interface for specialized tasks. The future lies in verticalized applications with dedicated UIs built on top of conversational AI, just as apps were built on DOS.

A huge portion of product development involves creating user interfaces for backend databases. AI-powered inference engines will allow users to state complex goals in natural language, bypassing the need for traditional UIs and fundamentally changing software development.
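
To make that concrete, here is a hedged TypeScript sketch of the pattern: a natural-language goal is mapped to a structured operation against the backend, in place of a hand-built CRUD screen. The `Operation` shape and `inferOperation` are hypothetical; a real system would call a model with the database schema in context and validate its output before executing anything.

```typescript
// The user states a goal; an inference step emits a structured operation,
// replacing the filter dropdowns a designer would normally build.
interface Operation {
  action: "filter" | "update";
  table: string;
  where: Record<string, string | number>;
  set?: Record<string, string | number>;
}

// Pretend model: maps one example goal onto a structured operation.
function inferOperation(goal: string): Operation {
  // A real system would prompt an LLM with the schema in context and
  // validate the JSON it returns before touching the database.
  return {
    action: "filter",
    table: "orders",
    where: { status: "overdue", region: "EU" },
  };
}

const op = inferOperation("show me all overdue EU orders");
console.log(op); // the inferred structure stands in for a whole UI
```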

The primary interface for managing AI agents won't be simple chat, but sophisticated IDE-like environments for all knowledge workers. This paradigm of "macro delegation, micro-steering" will create new software categories like the "accountant IDE" or "lawyer IDE" for orchestrating complex AI work.
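
A minimal sketch of what "macro delegation, micro-steering" could look like in code, assuming invented `agentPlan` and `askHuman` stand-ins: the agent owns the whole task, while review checkpoints keep the human in the loop.

```typescript
interface Step { description: string; needsReview: boolean; }

type Verdict = "approve" | "redirect";

// Stand-in for an agent decomposing a delegated task.
function agentPlan(task: string): Step[] {
  return [
    { description: `Outline approach for: ${task}`, needsReview: true },
    { description: "Draft the work product", needsReview: false },
    { description: "Final result ready", needsReview: true },
  ];
}

// Stand-in for the human-in-the-loop UI; a real "lawyer IDE" would
// show a diff view or inline review here. We approve everything.
function askHuman(step: Step): Verdict {
  return "approve";
}

function orchestrate(task: string): void {
  for (const step of agentPlan(task)) {        // macro delegation
    if (step.needsReview && askHuman(step) === "redirect") {
      console.log(`Human redirected at: ${step.description}`);
      return;                                  // micro-steering
    }
    console.log(`Done: ${step.description}`);
  }
}

orchestrate("review this contract for indemnity risk");
```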

For years, Google has integrated AI as features into existing products like Gmail. Its new "Antigravity" IDE represents a strategic pivot to building applications from the ground up around an "agent-first" principle. This suggests a future where AI is the core foundation of a product, not just an add-on.

The best UI for an AI tool is a direct function of the underlying model's power. A more capable model unlocks more autonomous "form factors." For example, the sudden rise of CLI agents was only possible once models like Claude 3 became capable enough to reliably handle multi-step tasks.

AI is best understood not as a single tool, but as a flexible underlying interface. It can manifest as a chat box for some, but its real potential is in creating tailored workflows that feel native to different roles, like designers or developers, without forcing everyone into a single interaction model.

The primary interface for AI is shifting from a prompt box to a proactive system. Future applications will observe user behavior, anticipate needs, and suggest actions for approval, mirroring the initiative of a high-agency employee rather than waiting for commands.
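
That observe, anticipate, approve loop can be sketched in a few lines. Everything below is illustrative: the event shapes, the simple threshold heuristic standing in for a model, and the approval step.

```typescript
interface UserEvent { kind: "opened" | "edited"; target: string; }

interface Suggestion { action: string; reason: string; }

// Toy heuristic standing in for a model that anticipates intent:
// repeated edits to one document trigger a proposal.
function anticipate(events: UserEvent[]): Suggestion | null {
  const edited = events.filter((e) => e.kind === "edited");
  if (edited.length >= 3 && edited.every((e) => e.target === edited[0].target)) {
    return {
      action: `Draft a summary of ${edited[0].target}`,
      reason: "You have edited this document repeatedly.",
    };
  }
  return null; // nothing worth proposing yet
}

// The system acts only after the user approves.
function onApproval(s: Suggestion, approved: boolean): void {
  console.log(approved ? `Running: ${s.action}` : "Dismissed.");
}

const log: UserEvent[] = [
  { kind: "edited", target: "q3-report.md" },
  { kind: "edited", target: "q3-report.md" },
  { kind: "edited", target: "q3-report.md" },
];

const s = anticipate(log);
if (s) onApproval(s, true); // the user stays in the approval seat
```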

AI will fundamentally change user interfaces. Instead of designers pre-building UIs, AI will generate the necessary "forms and lists" on the fly based on a user's natural language request. For the first time, the user, not the developer, will be the one creating the interface.
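
One plausible mechanism, sketched below in TypeScript: the model emits a declarative schema of "forms and lists," and a generic client renders it. The schema shape, `generateUI`, and `render` are assumptions for illustration, not a real API.

```typescript
type Field =
  | { kind: "text"; label: string }
  | { kind: "select"; label: string; options: string[] };

interface GeneratedUI { title: string; fields: Field[]; }

// Stand-in for a model turning a natural-language request into a schema.
function generateUI(request: string): GeneratedUI {
  return {
    title: request,
    fields: [
      { kind: "text", label: "Trip name" },
      { kind: "select", label: "Destination", options: ["Lisbon", "Kyoto"] },
    ],
  };
}

// The renderer only needs to understand the schema, never the task,
// so no one has to pre-build a UI for each use case.
function render(ui: GeneratedUI): string {
  const rows = ui.fields.map((f) =>
    f.kind === "select"
      ? `[${f.label}: ${f.options.join(" | ")}]`
      : `[${f.label}: ______]`
  );
  return [ui.title, ...rows].join("\n");
}

console.log(render(generateUI("Plan a team offsite")));
```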
