While GenAI continues the "learn by example" paradigm of machine learning, its ability to create novel content like images and language is a fundamental step-change. It moves beyond simply predicting patterns to generating entirely new outputs, representing a significant evolution in computing.

Related Insights

While more data and compute yield steady, predictable improvements, true step-function advances in AI come from unpredictable algorithmic breakthroughs like Transformers. These creative leaps are the hardest to produce on demand, making them the highest-leverage, yet riskiest, area for investment and research focus.

AI won't replace designers because it lacks taste and subjective opinion. Instead, as AI gets better at generating highly opinionated (though not perfect) designs, it will serve as a powerful exploration tool. This plants more flags in the option space, allowing human designers to react, curate, and push the most promising directions further, amplifying their strategic role.

Don't view generative AI video as just a way to make traditional films more efficiently. Ben Horowitz sees it as a fundamentally new creative medium, much like movies were to theater. It enables entirely new forms of storytelling by making visuals that once required massive budgets accessible to anyone.

With models like Gemini 3, the key skill is shifting from crafting hyper-specific, constrained prompts to making ambitious, multi-faceted requests. Users trained on older models tend to pare down their asks, but the latest AIs are 'pent up with creative capability' and yield better results from bigger challenges.

By training AI on your personal data, arguments, and communication style, you can leverage it as a creative partner. This allows skilled professionals to reduce the time for complex tasks, like creating a new class, from over 16 hours to just four.

With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.

The next frontier for conversational AI is not just better text, but "Generative UI"—the ability to respond with interactive components. Instead of describing the weather, an AI can present a weather widget, merging the flexibility of chat with the richness of a graphical interface.
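The core mechanic is simple: the model emits a structured component spec instead of prose, and the client dispatches it to a matching renderer. A minimal sketch, assuming a hypothetical JSON spec format and renderer names (none of these come from any real product's API):

```python
import json

def render_weather_widget(props):
    # A real client would draw an interactive card; here we just format text.
    return f"[WeatherWidget] {props['city']}: {props['temp_c']}°C, {props['condition']}"

def render_text(props):
    # Fallback: plain chat text, the behavior of a conventional assistant.
    return props["text"]

# Registry mapping component names in the spec to client-side renderers.
RENDERERS = {
    "weather_widget": render_weather_widget,
    "text": render_text,
}

def render_response(model_output: str) -> str:
    """Dispatch a model's JSON UI spec to the matching component renderer."""
    spec = json.loads(model_output)
    renderer = RENDERERS.get(spec["component"], render_text)
    return renderer(spec["props"])

# Simulated model output: a widget spec rather than a paragraph describing the weather.
reply = '{"component": "weather_widget", "props": {"city": "Oslo", "temp_c": 4, "condition": "light rain"}}'
print(render_response(reply))
```

The design point is that the chat loop stays fully flexible (unknown components degrade gracefully to text) while known components get the richness of a graphical interface.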

The recent leap in AI coding isn't solely from a more powerful base model. The true innovation is a product layer that enables agent-like behavior: the system constantly evaluates and refines its own output, leading to far more complex and complete results than the LLM could achieve alone.
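That product layer can be sketched as a generate → evaluate → refine loop. In this illustrative sketch, `call_llm` is a stub standing in for a real model API (its behavior here is invented so the control flow can run end to end), and the evaluator checks the draft against a concrete test instead of trusting the model's output:

```python
def call_llm(prompt: str) -> str:
    # Stub: a real system would call a hosted LLM here. We simulate a model
    # whose first draft is buggy and whose fix attempt succeeds.
    if "Fix the following code" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # first draft contains a bug

def evaluate(code: str) -> bool:
    """Run the draft against a concrete check rather than trusting it."""
    scope = {}
    exec(code, scope)
    return scope["add"](2, 3) == 5

def agent_loop(task: str, max_iters: int = 3) -> str:
    draft = call_llm(task)
    for _ in range(max_iters):
        if evaluate(draft):
            return draft  # draft passes the check: done
        # Feed the failing draft back to the model for another attempt.
        draft = call_llm(f"Fix the following code:\n{draft}")
    return draft

result = agent_loop("Write a function add(a, b) that returns their sum.")
print(result)
```

The loop, not the base model, is what turns a single fallible completion into a result that has survived its own evaluation.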

When analyzing video, new generative models can create entirely new images that illustrate a described scene, rather than just pulling a direct screenshot. This allows AI to generate its own 'B-roll' or conceptual art that captures the essence of the source material.

Asking an AI to 'predict' or 'evaluate' for a large sample size (e.g., 100,000 users) fundamentally changes its function. The AI automatically switches from generating generic creative options to providing a statistical simulation. This forces it to go deeper in its research and thinking, yielding more accurate and effective outputs.
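The contrast between the two modes can be made concrete with prompt templates. The wording below is purely illustrative, an assumed phrasing rather than a tested recipe:

```python
def creative_prompt(idea: str) -> str:
    # Open-ended ask: tends to elicit generic brainstormed options.
    return f"Suggest some landing-page headlines for {idea}."

def simulation_prompt(idea: str, n_users: int) -> str:
    # Framing the ask as a prediction over a large population pushes the
    # model toward a statistical-simulation stance with explicit reasoning.
    return (
        f"Predict how {n_users:,} target users would respond to each "
        f"landing-page headline for {idea}. Estimate click-through rates "
        "and explain the reasoning behind each estimate."
    )

print(simulation_prompt("a budgeting app", 100_000))
```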