Unlike LLMs, which stream text as it is generated, image generation forces users to wait for the complete result. An A/B test by one of Fal's customers showed that added latency directly reduces user engagement and the number of images users create, much as slow page loads hurt e-commerce sales.

Related Insights

Distilled models like SDXL Lightning, hyped for real-time demos, failed to retain users. The assumption that they would be used for 'drafting' proved wrong: users consistently prefer to wait for the highest-quality output, making speed secondary to the final result.

The least intrusive way to introduce ads into LLMs is during natural pauses, such as the wait time for a "deep research" query. This interstitial model offers a clear value exchange: the user gets a powerful, free computation sponsored by an advertiser, avoiding disruption to the core interactive experience.

Fal strategically chose not to compete in LLM inference against giants like OpenAI and Google. Instead, they focused on the "net new market" of generative media (images, video), allowing them to become a leader in a fast-growing, less contested space.

AI product quality depends heavily on infrastructure that is far less stable than traditional cloud services. Jared Palmer's team at Vercel monitored key metrics like 'error-free sessions' in near real time. Because inference providers frequently drop requests, this intense, data-driven approach is crucial for building a reliable agentic product.
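
Vercel has not published an exact definition of that metric, but a minimal sketch (the `Session` structure and data here are assumptions) shows how an error-free-session rate differs from a per-request error rate: a single dropped inference request taints the whole session.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    session_id: str
    # One success flag per request made during the session.
    request_ok: list[bool] = field(default_factory=list)

def error_free_session_rate(sessions: list[Session]) -> float:
    """Fraction of sessions in which every request succeeded.

    Stricter and more user-centric than a per-request error rate:
    one failure anywhere marks the whole session as degraded.
    """
    if not sessions:
        return 1.0
    clean = sum(1 for s in sessions if all(s.request_ok))
    return clean / len(sessions)

sessions = [
    Session("a", [True, True, True]),
    Session("b", [True, False, True]),  # provider dropped one request
]
print(f"error-free sessions: {error_free_session_rate(sessions):.0%}")  # 50%
```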

Users mistakenly evaluate AI tools based on the quality of the first output. However, since 90% of the work is iterative, the superior tool is the one that handles a high volume of refinement prompts most effectively, not the one with the best initial result.
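
A toy comparison with made-up scores illustrates why the two evaluation lenses can crown different winners; turn 0 is the first output, later turns are responses to refinement prompts.

```python
# Hypothetical per-turn quality scores (0-1) for two tools.
tool_a = [0.80, 0.82, 0.83, 0.84, 0.85]  # strong first draft, weak refinement
tool_b = [0.70, 0.80, 0.88, 0.93, 0.96]  # weaker start, compounds feedback well

def by_first_output(scores: list[float]) -> float:
    return scores[0]

def by_refinement(scores: list[float]) -> float:
    # Average quality across the refinement turns that dominate real usage.
    return sum(scores[1:]) / len(scores[1:])

for metric in (by_first_output, by_refinement):
    winner = "A" if metric(tool_a) > metric(tool_b) else "B"
    print(f"{metric.__name__}: tool {winner} wins")
# by_first_output: tool A wins
# by_refinement: tool B wins
```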

A 'GenAI solves everything' mindset is flawed. High-latency models are unsuitable for real-time operational needs, like optimizing a warehouse worker's scanning path, which requires millisecond responses. The key is to apply the right tool—be it an optimizer, machine learning, or GenAI—to the specific business problem.
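
As a minimal illustration of the right-tool point, a greedy nearest-neighbor heuristic (with hypothetical pick coordinates) answers a pick-path query in microseconds, where an LLM round trip would take seconds.

```python
import math

def nearest_neighbor_route(start: tuple, picks: list[tuple]) -> list[tuple]:
    """Greedy next-closest-pick ordering: a classic heuristic with
    no model inference in the loop."""
    route, current, remaining = [], start, list(picks)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# Hypothetical (aisle, shelf) coordinates for a pick list.
print(nearest_neighbor_route((0, 0), [(5, 2), (1, 1), (3, 8)]))
# [(1, 1), (5, 2), (3, 8)]
```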

Traditional video models process an entire clip at once, causing delays. Descartes' Mirage model is autoregressive, predicting only the next frame based on the input stream and previously generated frames. This LLM-like approach is what enables its real-time, low-latency performance.
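
Mirage's internals are not public, but the loop described above can be sketched schematically; `predict_next_frame` below is a hypothetical stand-in for the model, and the point is the structure: one frame in, one frame out, conditioned on a bounded window of prior outputs.

```python
import numpy as np

def predict_next_frame(input_frame, history):
    """Stand-in for the model's single-frame prediction. A real
    autoregressive video model conditions on the live input plus its
    own recent outputs; here we just blend them to keep the loop runnable."""
    prev = history[-1] if history else input_frame
    return 0.5 * input_frame + 0.5 * prev

def stream_frames(frames, context_window: int = 8):
    history = []
    for frame in frames:                                # one frame in...
        out = predict_next_frame(frame, history)
        history = (history + [out])[-context_window:]   # bounded context
        yield out                                       # ...one frame out, immediately

toy_stream = [np.random.rand(4, 4) for _ in range(10)]  # 4x4 grayscale toy input
for out in stream_frames(toy_stream):
    pass  # display or encode each frame as soon as it is produced
```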

For marketers running time-sensitive promotions, the traditional ETL process of moving data to a lakehouse for analysis is too slow. By the time insights on campaign performance are available, the opportunity to adjust tactics (like changing a discount for the second half of a day-long sale) has already passed, directly impacting revenue and customer experience.

Pega's CTO advises using the powerful reasoning of LLMs to design processes and marketing offers, then switching at runtime to faster, cheaper, and more consistent predictive models. This avoids the unpredictability, cost, and risk of calling an expensive LLM for every live customer interaction.
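
A minimal sketch of that split, with a placeholder `call_llm` and made-up model weights: the LLM runs once at design time to draft offers, while a tiny logistic model makes every runtime decision.

```python
import math

def call_llm(prompt: str) -> list[str]:
    """Placeholder for an expensive LLM call made once, at design time."""
    return ["10% off annual plan", "free expedited shipping"]

# Design time: the LLM drafts the offer catalog, offline and reviewable.
OFFERS = call_llm("Draft retention offers for churn-risk customers")

# Hypothetical weights from an offline-trained per-offer propensity model.
OFFER_WEIGHTS = {
    "10% off annual plan":     {"bias": -0.2, "tenure_years": 0.4},
    "free expedited shipping": {"bias":  0.1, "tenure_years": -0.1},
}

def acceptance_score(offer: str, features: dict[str, float]) -> float:
    w = OFFER_WEIGHTS[offer]
    z = w["bias"] + sum(w[k] * v for k, v in features.items() if k in w)
    return 1.0 / (1.0 + math.exp(-z))  # logistic propensity

def best_offer(features: dict[str, float]) -> str:
    # Runtime path: no LLM in the loop, so latency and cost stay flat
    # and the same customer always gets the same decision.
    return max(OFFERS, key=lambda o: acceptance_score(o, features))

print(best_offer({"tenure_years": 3.0}))  # "10% off annual plan"
```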

The primary challenge in creating stable, real-time autoregressive video is error accumulation. Like early LLMs getting stuck in loops, video models degrade frame-by-frame until the output is useless. Overcoming this compounding error, not just processing speed, is the core research breakthrough required for long-form generation.
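
A toy random walk (illustrative numbers, not real model dynamics) shows why even a small per-frame bias compounds when each frame conditions on the last, which is why longer rollouts degrade.

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulated_error(n_frames: int, drift: float = 0.002) -> float:
    """Each frame inherits the previous frame's error and adds a small
    biased perturbation, so deviations compound instead of averaging out."""
    error = 0.0
    for _ in range(n_frames):
        error += drift + rng.normal(0.0, 0.01)
    return error

for n in (30, 300, 3000):  # roughly 1 s, 10 s, 100 s of video at 30 fps
    print(f"{n:5d} frames -> accumulated error {accumulated_error(n):+.2f}")
```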