
Unlike many AI tools that produce a final, unchangeable output, Canva's AI generates a standard, multi-layered file. This lets users treat the AI's output as a first draft that they can refine using familiar drag-and-drop tools, bridging the gap between generation and creation.

Related Insights

Most generative AI tools get users 80% of the way to their goal, but refining the final 20% is difficult without starting over. The key innovation of tools like the AI video animator Waffer is enabling iterative, precise edits via text commands (e.g., "zoom in at 1.5 seconds"). This level of control is the next major step for creative AI tools.
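The mechanics behind such a tool are not public, but the core idea of turning a natural-language command into a structured, timestamped edit can be sketched roughly. Everything below (the `EditCommand` shape, the parsing rules) is a hypothetical illustration, not Waffer's actual implementation:

```python
import re
from dataclasses import dataclass

# Hypothetical structure for a timestamped edit operation.
@dataclass
class EditCommand:
    action: str        # e.g. "zoom"
    factor: float      # zoom multiplier, defaults to 1.0 if unspecified
    at_seconds: float  # timestamp the edit applies to

def parse_command(text: str) -> EditCommand:
    """Parse a natural-language edit like 'zoom in at 1.5 seconds'."""
    action = "zoom" if "zoom" in text else "unknown"
    factor_match = re.search(r"([\d.]+)x", text)
    time_match = re.search(r"at ([\d.]+) seconds?", text)
    return EditCommand(
        action=action,
        factor=float(factor_match.group(1)) if factor_match else 1.0,
        at_seconds=float(time_match.group(1)) if time_match else 0.0,
    )

cmd = parse_command("zoom in at 1.5 seconds")
```

The point of the structured output is that the same command can be re-issued with a tweaked timestamp or factor, which is exactly the kind of iterative refinement one-shot generators lack.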

Canva's CEO views "one-shot generation" as the first, limited phase of AI. The next frontier, or "AI 2.0," involves iterative and agentic orchestration where the AI acts as a creative partner, helping to refine a design through a series of adjustments rather than just creating a single final output.

The handoff between AI generation and manual refinement is a major friction point. Tools like Subframe solve this by allowing users to seamlessly switch between an "Ask AI" mode for generative tasks and a "Design" mode for manual, Figma-like adjustments on the same canvas.

To maximize efficiency and control costs, treat AI ad generators as a starting point, not a final solution. Use them to create initial concepts and copy. Once an ad is "close enough," export it and perform final visual edits in a dedicated design tool like Canva, avoiding expensive AI credit usage for minor tweaks.

Rather than relying on a single monolithic model, Canva's AI works by orchestrating its entire suite of existing, specialized features, such as its background remover. A single user prompt can trigger multiple tools in sequence to generate a complex, layered design, leveraging years of product development.

The ability for Canva's AI to orchestrate complex designs across documents, presentations, and videos wasn't a recent development. It was built on a decade of investment in a single, flexible design format, which provided the necessary architectural foundation for a design-focused foundational model.

Canva views its AI as the third evolution of design interfaces. The first was pixel-based (e.g., Photoshop), the second was object-based (classic Canva), and the new era is concept-based, where users describe an idea and the AI generates an editable first draft.

While AI tools excel at generating initial drafts of code or designs, their editing capabilities are poor. The difficulty of making specific changes often forces creators to discard the AI output and start over, as editing is where the "magic" breaks down.

Don't accept the false choice between AI generation and professional editing tools. The best workflows integrate both, allowing for high-level generation and fine-grained manual adjustments without giving up critical creative control.

AI is incredibly fast for generating the initial version of a feature. However, for small, precise changes like altering a color or text, using a direct visual editor is much faster and more efficient than prompting the AI again. An effective workflow blends both approaches.