AI models are already remarkably capable, but their creative potential is bottlenecked by plain text prompts. The next breakthrough will be sophisticated user interfaces that let creators edit scenes, control characters, and direct AI with precision, unlocking widespread adoption.
Figma CEO Dylan Field predicts we will look back on today's text prompting for AI as a primitive command-line interface, akin to MS-DOS. The next major opportunity is to build intuitive, use-case-specific interfaces, a compass for navigating a model's latent space, that help users "steer the spaceship" of complex models with more precision than text alone.
As models become more powerful, the primary challenge shifts from improving capabilities to giving humans better ways to specify what they want. Natural language is too ambiguous and code is too rigid, which leaves room for a new abstraction layer for intent.
Current text-based prompting for AI is a primitive, temporary phase, similar to MS-DOS. The future lies in more intuitive, constrained, and creative interfaces that allow for richer, more visual exploration of a model's latent space, moving beyond just natural language.
While chatbots are an effective entry point, they are limiting for complex creative tasks. The next wave of AI products will feature specialized user interfaces that combine fine-grained, gesture-based controls for professionals with hands-off automation for simpler tasks.
Sam Altman argues there is a massive "capability overhang" where models are far more powerful than current tools allow users to leverage. He believes the biggest gains will come from improving user interfaces and workflows, not just from increasing raw AI intelligence.
The next frontier for conversational AI is not just better text, but "Generative UI"—the ability to respond with interactive components. Instead of describing the weather, an AI can present a weather widget, merging the flexibility of chat with the richness of a graphical interface.
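To make the pattern concrete, here is a minimal sketch of a Generative UI response, assuming a hypothetical schema in which the model returns typed component blocks alongside prose. None of the type or function names below come from any real product's API.

```typescript
// Hypothetical sketch of a Generative UI response: instead of returning only
// prose, the model returns typed component specs the client can render.
// All names here (ChatResponse, WeatherWidget, render) are illustrative.

type WeatherWidget = {
  kind: "weather";
  location: string;
  tempC: number;
  condition: "sunny" | "cloudy" | "rain" | "snow";
};

type TextBlock = { kind: "text"; body: string };

// A response is a mix of prose and interactive components.
type ChatResponse = { blocks: (TextBlock | WeatherWidget)[] };

function render(response: ChatResponse): string {
  return response.blocks
    .map((block) =>
      block.kind === "weather"
        ? `[widget: ${block.location}, ${block.tempC}°C, ${block.condition}]`
        : block.body
    )
    .join("\n");
}

// The model answers "What's the weather in Oslo?" with a widget, not a paragraph.
const response: ChatResponse = {
  blocks: [
    { kind: "text", body: "Here's the current forecast:" },
    { kind: "weather", location: "Oslo", tempC: 4, condition: "cloudy" },
  ],
};

console.log(render(response));
```

The key design choice is that the component spec is structured data, so the client decides how to render it, rather than the model emitting markup directly.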
Chatbots are fundamentally linear, a format ill-suited to complex tasks like planning a trip. The next generation of AI products will use AI as a co-creation tool inside a more flexible, canvas-like interface, letting users manipulate and organize AI-generated content non-linearly.
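As a rough illustration of what "non-linear" means at the data level, here is a hedged sketch of a canvas model in which each AI-generated block carries a spatial position rather than a place in a transcript. All names are invented for the example.

```typescript
// Hypothetical data model for a canvas-style AI workspace: instead of a
// linear chat transcript, each AI-generated block has a position and can be
// moved, grouped, or regenerated independently. Names are illustrative.

type CanvasBlock = {
  id: string;
  prompt: string;          // the request that produced this block
  content: string;         // the AI-generated result
  position: { x: number; y: number };
  groupId?: string;        // optional grouping, e.g. "day-1-itinerary"
};

// Non-linear manipulation: reposition a block without touching any "history".
function moveBlock(blocks: CanvasBlock[], id: string, x: number, y: number): CanvasBlock[] {
  return blocks.map((b) => (b.id === id ? { ...b, position: { x, y } } : b));
}

// Trip planning as spatial blocks, not a chat thread.
const canvas: CanvasBlock[] = [
  { id: "flights", prompt: "Find flights to Lisbon", content: "...", position: { x: 0, y: 0 } },
  { id: "hotels", prompt: "Suggest hotels near Alfama", content: "...", position: { x: 320, y: 0 } },
];

console.log(moveBlock(canvas, "hotels", 0, 240));
```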
Widespread adoption of AI for complex tasks like "vibe coding" is limited not just by model intelligence but by the user interface. Current paradigms like IDE plugins and chat windows are insufficient. Anthropic's team believes a new interface is needed to unlock the full potential of models like Claude Sonnet 4.5 for production-level app building.
The shift from command-line interfaces to visual canvases like OpenAI's Agent Builder mirrors the historical move from MS-DOS to Windows. This abstraction layer makes sophisticated AI agent creation accessible to non-technical users, signaling a pivotal moment for mainstream adoption beyond the engineering community.
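For intuition, the canvas such a builder presents typically serializes to a declarative graph of nodes and edges. The sketch below assumes a made-up schema and is not Agent Builder's actual format.

```typescript
// Hypothetical serialized form of a visual agent canvas: nodes are steps
// (model calls, tools, conditionals) and edges define control flow.
// This schema is an assumption for illustration, not Agent Builder's format.

type AgentNode =
  | { id: string; type: "model"; instructions: string }
  | { id: string; type: "tool"; name: string }
  | { id: string; type: "condition"; expression: string };

type AgentEdge = { from: string; to: string; when?: "true" | "false" };

type AgentGraph = { nodes: AgentNode[]; edges: AgentEdge[] };

// A support triage agent: classify the ticket, then route to a tool or a reply.
const graph: AgentGraph = {
  nodes: [
    { id: "classify", type: "model", instructions: "Label the ticket: billing or technical." },
    { id: "isBilling", type: "condition", expression: "label == 'billing'" },
    { id: "refundTool", type: "tool", name: "issue_refund" },
    { id: "answer", type: "model", instructions: "Draft a technical support reply." },
  ],
  edges: [
    { from: "classify", to: "isBilling" },
    { from: "isBilling", to: "refundTool", when: "true" },
    { from: "isBilling", to: "answer", when: "false" },
  ],
};

console.log(`Agent graph with ${graph.nodes.length} nodes`);
```

The abstraction is the point: a non-technical user drags boxes on a canvas, while the runtime consumes a plain graph like this one.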