Future coding interfaces will move beyond read-only chat logs. They will treat the AI conversation as an editable 'multi-buffer'—a new type of document that aggregates code snippets from across a project. This will allow developers to directly manipulate code within the conversational flow itself.
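To make the idea concrete, here is a minimal sketch of what such an editable multi-buffer might look like as a data structure. The names (`Excerpt`, `MultiBuffer`) are hypothetical; a real editor would also need to sync edits bidirectionally with the underlying files.

```typescript
// Hypothetical sketch: a multi-buffer aggregates editable excerpts
// from many files into one conversational document.

interface Excerpt {
  filePath: string;   // source file this excerpt mirrors
  startLine: number;  // first line of the excerpt in that file
  lines: string[];    // the excerpt's current (editable) content
}

class MultiBuffer {
  private excerpts: Excerpt[] = [];

  // Pull a snippet from a project file into the conversation.
  addExcerpt(filePath: string, startLine: number, lines: string[]): void {
    this.excerpts.push({ filePath, startLine, lines });
  }

  // Edit the excerpt in place, as you would inside the chat transcript.
  editExcerpt(index: number, newLines: string[]): void {
    this.excerpts[index].lines = newLines;
  }

  // Describe the writes needed to push every edited excerpt back to
  // its source file; a real editor would apply them to disk.
  flush(): Excerpt[] {
    return this.excerpts.map((e) => ({ ...e }));
  }
}
```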
Figma CEO Dylan Field predicts we will look back at current text prompting for AI as a primitive, command-line interface, similar to MS-DOS. The next major opportunity is to create intuitive, use-case-specific interfaces—like a compass for AI's latent space—that allow for more precise control beyond text.
AI's impact on coding is unfolding in stages. Phase 1 was autocomplete (Copilot). We're now in Phase 2, defined by interactive agents where developers orchestrate tasks with prompts. Phase 3 will be true automation, where agents independently handle complete, albeit simpler, development workflows without direct human guidance.
Because AI agents can work autonomously in the background, developers can now code collaboratively while on calls. They can brainstorm, kick off a feature build, and have it ready for production by the end of the meeting, transforming coding from a solo, heads-down activity into a social one.
Instead of leaving users stuck with rigid software, a future powered by decentralized AI could let them modify their tools directly. For example, a doctor frustrated with an electronic medical record system could use natural language to instantly change the software to fit their workflow, reclaiming control over their digital environment.
The primary interface for managing AI agents won't be simple chat, but sophisticated IDE-like environments for all knowledge workers. This paradigm of "macro delegation, micro-steering" will create new software categories like the "accountant IDE" or "lawyer IDE" for orchestrating complex AI work.
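A rough sketch of what "macro delegation, micro-steering" could look like, assuming a simple task model; the `Delegation` and `Subtask` types here are invented for illustration, not any product's API.

```typescript
// Hypothetical sketch: one high-level goal fans out into agent
// subtasks that the user can inspect and redirect individually.

type SubtaskStatus = "queued" | "running" | "needs-review" | "done";

interface Subtask {
  id: string;
  description: string;
  status: SubtaskStatus;
  feedback: string[]; // micro-steering: targeted corrections
}

class Delegation {
  readonly subtasks: Subtask[];

  // Macro step: hand the agent an entire goal at once.
  constructor(goal: string, plannedSteps: string[]) {
    console.log(`Delegated: ${goal}`);
    this.subtasks = plannedSteps.map((description, i) => ({
      id: `task-${i}`,
      description,
      status: "queued",
      feedback: [],
    }));
  }

  // Micro step: steer one subtask without restarting the rest.
  steer(id: string, note: string): void {
    const task = this.subtasks.find((t) => t.id === id);
    if (task) {
      task.feedback.push(note);
      task.status = "needs-review";
    }
  }
}

// Usage: an "accountant IDE" might delegate a quarterly close,
// then steer only the one subtask that went sideways.
const close = new Delegation("Close Q3 books", [
  "Reconcile bank statements",
  "Accrue outstanding invoices",
  "Draft variance report",
]);
close.steer("task-2", "Compare variances against Q2, not Q1.");
```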
The current model of separate design files and codebases is inefficient. Future tools will enable designers to directly manipulate production code through a visual canvas, eliminating the handoff process and creating a single, shared source of truth for the entire team.
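One way to picture a shared source of truth is a component whose visual properties live in the same record the production code reads, so a canvas edit is just a patch to that record. The sketch below assumes exactly that; every name in it is hypothetical.

```typescript
// Hypothetical sketch: the canvas and the codebase share one record,
// so a designer's visual edit patches the same props the running
// component reads, with no separate design file to hand off.

interface ButtonStyle {
  background: string;
  radiusPx: number;
  label: string;
}

// The production component reads from this single record...
const primaryButton: ButtonStyle = {
  background: "#3b82f6",
  radiusPx: 6,
  label: "Submit",
};

// ...and a visual-canvas edit is expressed as a patch to it.
function applyCanvasEdit(
  style: ButtonStyle,
  patch: Partial<ButtonStyle>
): ButtonStyle {
  return { ...style, ...patch };
}

const updated = applyCanvasEdit(primaryButton, { radiusPx: 12 });
console.log(updated); // { background: "#3b82f6", radiusPx: 12, label: "Submit" }
```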
While "vibe coding" tools are excellent for sparking interest and building initial prototypes, transitioning a project into a maintainable product requires learning the underlying code. AI code editors like Cursor act as the next step, helping users bridge the gap from prompt-based generation to hands-on software engineering.
The next frontier for conversational AI is not just better text, but "Generative UI"—the ability to respond with interactive components. Instead of describing the weather, an AI can present a weather widget, merging the flexibility of chat with the richness of a graphical interface.
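A minimal sketch of the idea, assuming the model can return a typed component spec instead of plain text; the `AIResponse` union and `weather-widget` component are illustrative, not any particular framework's API.

```typescript
// Hypothetical sketch of a Generative UI response: the model returns
// either prose or a typed component spec the client renders live.

interface WeatherProps {
  city: string;
  tempC: number;
  condition: "sunny" | "cloudy" | "rain";
}

type AIResponse =
  | { kind: "text"; content: string }
  | { kind: "component"; component: "weather-widget"; props: WeatherProps };

// The chat client branches on the response kind.
function render(response: AIResponse): string {
  switch (response.kind) {
    case "text":
      return response.content;
    case "component":
      // A real client would mount an interactive widget here.
      return `<WeatherWidget city="${response.props.city}" temp=${response.props.tempC} />`;
  }
}

// "What's the weather in Oslo?" yields a widget, not a paragraph.
console.log(
  render({
    kind: "component",
    component: "weather-widget",
    props: { city: "Oslo", tempC: 14, condition: "cloudy" },
  })
);
```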
Chatbots are fundamentally linear, a structure ill-suited to complex tasks like planning a trip. The next generation of AI products will use AI as a co-creation tool within a more flexible, canvas-like interface, allowing users to manipulate and organize AI-generated content non-linearly.
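A sketch of the underlying shift, assuming AI outputs become positioned canvas nodes rather than transcript lines; the types here are hypothetical.

```typescript
// Hypothetical sketch: on a canvas, each AI output is a movable node
// rather than the next line of a transcript, so a trip plan can be
// rearranged spatially instead of scrolled linearly.

interface CanvasNode {
  id: string;
  content: string;   // AI-generated text, image reference, etc.
  x: number;         // spatial position, not chat order
  y: number;
  linkedTo: string[]; // user-drawn relationships between nodes
}

const tripCanvas: CanvasNode[] = [
  { id: "flights", content: "CDG -> NRT, Oct 3", x: 40, y: 40, linkedTo: [] },
  { id: "tokyo", content: "3 days: Shibuya, Tsukiji", x: 240, y: 40, linkedTo: ["flights"] },
  { id: "kyoto", content: "2 days: temples, Arashiyama", x: 240, y: 180, linkedTo: ["tokyo"] },
];

// Reorganizing the plan is a position change, not a new prompt.
function moveNode(nodes: CanvasNode[], id: string, x: number, y: number): void {
  const node = nodes.find((n) => n.id === id);
  if (node) Object.assign(node, { x, y });
}

moveNode(tripCanvas, "kyoto", 40, 180); // place Kyoto before Tokyo visually
```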
The next IDE evolution will transform the codebase into a dynamic 'metadata backbone'. By capturing a continuous history of edits and conversations, it will allow all context—discussions, decisions, feedback—to be permanently anchored to specific lines of code, unlike today's static, snapshot-based Git workflows.
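A minimal sketch of such an anchor, assuming line-level tracking; real systems would track edits at a much finer grain, and the names here are invented.

```typescript
// Hypothetical sketch of a "metadata backbone": discussions and
// decisions are anchored to code ranges, and anchors are remapped as
// edits shift lines, rather than living in a detached snapshot.

interface Anchor {
  filePath: string;
  startLine: number;
  endLine: number;
  note: string; // decision, review comment, or AI conversation link
}

// When lines are inserted, move or grow the anchor so it stays
// attached to the code it describes.
function remapAnchor(anchor: Anchor, insertedAtLine: number, count: number): Anchor {
  if (insertedAtLine > anchor.endLine) return anchor; // edit below: no-op
  if (insertedAtLine <= anchor.startLine) {
    // Edit above: the whole range shifts down.
    return {
      ...anchor,
      startLine: anchor.startLine + count,
      endLine: anchor.endLine + count,
    };
  }
  // Edit inside the range: the range grows to contain it.
  return { ...anchor, endLine: anchor.endLine + count };
}

const decision: Anchor = {
  filePath: "src/auth.ts",
  startLine: 42,
  endLine: 48,
  note: "Chose JWT over sessions; see linked design discussion",
};

// Ten lines added at the top of the file: the note stays attached.
console.log(remapAnchor(decision, 1, 10)); // startLine: 52, endLine: 58
```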