Today's Prompt-Based AI Interfaces Are Stuck in an 'MS-DOS Era'

Figma's CEO likens current text prompts to MS-DOS: functional but primitive. He sees a massive opportunity in designing intuitive, use-case-specific interfaces that move beyond language to help users 'steer the spaceship' of complex AI models more effectively.

Related Insights

Figma CEO Dylan Field predicts we will look back at current text prompting for AI as a primitive, command-line interface, similar to MS-DOS. The next major opportunity is to create intuitive, use-case-specific interfaces—like a compass for AI's latent space—that allow for more precise control beyond text.

Current text-based prompting for AI is a primitive, temporary phase, similar to MS-DOS. The future lies in more intuitive, constrained, and creative interfaces that allow for richer, more visual exploration of a model's latent space, moving beyond just natural language.

Comparing chat interfaces to the MS-DOS command line, Atlassian's Sharif Mansour argues that while chat is a universal entry point for AI, it's the worst interface for specialized tasks. The future lies in verticalized applications with dedicated UIs built on top of conversational AI, just as apps were built on DOS.

The primary interface for managing AI agents won't be simple chat, but sophisticated IDE-like environments for all knowledge workers. This paradigm of "macro delegation, micro-steering" will create new software categories like the "accountant IDE" or "lawyer IDE" for orchestrating complex AI work.

While chatbots are an effective entry point, they are limiting for complex creative tasks. The next wave of AI products will feature specialized user interfaces that combine fine-grained, gesture-based controls for professionals with hands-off automation for simpler tasks.

The current chatbot interface is not the final form for AI. Drawing a parallel to the personal computer's evolution from text prompts to GUIs and web browsers, Marc Andreessen argues that radically different and superior user experiences for AI are yet to be invented.

AI is best understood not as a single tool, but as a flexible underlying interface. It can manifest as a chat box for some users, while its real potential lies in tailored workflows that feel native to different roles, such as designers or developers, without forcing everyone into a single interaction model.

Open-ended prompts overwhelm new users who don't know what's possible. A better approach is to productize AI into specific features: use familiar UI elements like sliders and dropdowns to capture user intent, then construct a complex prompt from those inputs behind the scenes. This makes powerful AI accessible without requiring prompt-engineering skills.
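To make the pattern concrete, here is a minimal TypeScript sketch; the control names and the `buildPrompt` helper are hypothetical, invented for this illustration rather than taken from any product:

```typescript
// Hypothetical controls for an image-generation feature: one free-text
// field plus constrained dropdowns and a slider.
interface ImageStyleControls {
  subject: string;                                      // free-text field
  style: "photorealistic" | "watercolor" | "line-art";  // dropdown
  palette: "warm" | "cool" | "monochrome";              // dropdown
  detail: number;                                       // slider, 0-100
}

// Translate the constrained UI state into the full prompt the user never sees.
function buildPrompt(c: ImageStyleControls): string {
  const detailPhrase =
    c.detail > 66 ? "Highly detailed, with intricate textures."
    : c.detail > 33 ? "Moderately detailed."
    : "Minimal, with a clean composition.";

  return [
    `A ${c.style} illustration of ${c.subject}.`,
    detailPhrase,
    `Use a ${c.palette} color palette.`,
  ].join(" ");
}

// The user only touched familiar controls, never wrote a prompt:
console.log(buildPrompt({
  subject: "a lighthouse at dusk",
  style: "watercolor",
  detail: 80,
  palette: "warm",
}));
// -> "A watercolor illustration of a lighthouse at dusk.
//     Highly detailed, with intricate textures. Use a warm color palette."
```

The design point is that the only open-ended input is the subject field; every other dimension of the prompt is constrained to values the product team has already vetted, so users get the benefit of a carefully engineered prompt without ever seeing it.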

Widespread adoption of AI for complex tasks like "vibe coding" is limited not just by model intelligence, but by the user interface. Current paradigms like IDE plugins and chat windows are insufficient. Anthropic's team believes a new interface is needed to unlock the full potential of models like Sonnet 4.5 for production-level app building.

The shift from command-line interfaces to visual canvases like OpenAI's Agent Builder mirrors the historical move from MS-DOS to Windows. This abstraction layer makes sophisticated AI agent creation accessible to non-technical users, signaling a pivotal moment for mainstream adoption beyond the engineering community.
