Setting an LLM's temperature to zero should make its output deterministic, but in practice it doesn't. Floating-point addition is non-associative, and when work is batched and parallelized across GPUs, the order in which partial results are combined varies from run to run. Those ordering differences create tiny numerical variations that prevent true determinism.
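
A minimal sketch of the effect in plain Python, with made-up values rather than real activations: summing the same numbers in two different orders yields two slightly different floats.

```python
# Floating-point addition is not associative, so a different reduction order
# (as parallel hardware may produce) gives a slightly different result.
# The values are illustrative, not taken from any real model.

vals = [1e8, 1.0, -1e8, 1e-7] * 1000

left_to_right = 0.0
for v in vals:
    left_to_right += v

reordered = 0.0
for v in sorted(vals):   # same numbers, different order
    reordered += v

print(left_to_right, reordered, left_to_right == reordered)
# The two sums differ in the low-order digits; near a tie between top tokens,
# that is enough to flip an argmax and break bit-for-bit determinism at temperature 0.
```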

Related Insights

The "Bitter Lesson" is not just about using more compute, but leveraging it scalably. Current LLMs are inefficient because they only learn during a discrete training phase, not during deployment where most computation occurs. This reliance on a special, data-intensive training period is not a scalable use of computational resources.

Unlike simple classification (one pass), generative AI performs recursive inference. Each new token (word, pixel) requires a full pass through the model, turning a single prompt into a series of demanding computations. This makes inference a major, ongoing driver of GPU demand, rivaling training.
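
A minimal sketch of that loop; `toy_model` below is a hypothetical stand-in for a full forward pass through the network, and greedy decoding is used only to keep the example short.

```python
import random

def toy_model(tokens, vocab_size=50):
    # Stand-in for one full forward pass of the model over the whole sequence.
    random.seed(sum(tokens))
    return [random.random() for _ in range(vocab_size)]

def generate(prompt_tokens, max_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = toy_model(tokens)  # one full model pass per generated token
        next_token = max(range(len(logits)), key=lambda i: logits[i])  # greedy pick
        tokens.append(next_token)
    return tokens

print(generate([3, 14, 15], max_new_tokens=10))
# A 500-token answer costs roughly 500 forward passes, not one, which is why
# inference keeps GPUs busy long after training is finished.
```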

Features like custom commands and sub-agents can look like reliable, deterministic workflows. However, because they are built on non-deterministic LLMs, they fail unpredictably, misleading users into trusting a fragile abstraction and ultimately delivering a poor experience.

Newer LLMs exhibit a more homogenized writing style than earlier versions like GPT-3. This is due to "style burn-in," where training on outputs from previous generations reinforces a specific, often less creative, tone. The model’s style becomes path-dependent, losing the raw variety of its original training data.

Seemingly simple benchmarks yield wildly different results if not run under identical conditions. Third-party evaluators must run tests themselves because labs often use optimized prompts to inflate scores. Even then, challenges like parsing inconsistent answer formats make truly fair comparison a significant technical hurdle.
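
As one illustration of the parsing problem, here is the kind of answer normalization an evaluator ends up writing; the patterns and output formats below are hypothetical, not any benchmark's official harness.

```python
import re

# The same multiple-choice answer can arrive in many shapes, so scoring code
# accumulates normalization rules like these.
PATTERNS = [
    r"answer\s*(?:is|:)?\s*\(?([A-D])\)?",   # "The answer is (B)" / "Answer: B"
    r"^\(?([A-D])\)?[.)]?\s*$",              # bare "B", "(B)", "B."
]

def extract_choice(text: str):
    for pat in PATTERNS:
        m = re.search(pat, text.strip(), flags=re.IGNORECASE | re.MULTILINE)
        if m:
            return m.group(1).upper()
    return None  # unparseable output is usually scored as wrong

print(extract_choice("After some thought, the answer is (b)."))  # -> "B"
print(extract_choice("B."))                                      # -> "B"
```

Two evaluators with slightly different rules can score the exact same model outputs differently, which is part of why results diverge.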

Model architecture decisions directly impact inference performance. AI company Zyphra pre-selects target hardware and then chooses model parameters, such as a hidden dimension divisible by large powers of two, to align with how GPUs split up workloads, maximizing efficiency from day one.
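
A hedged sketch of the kind of divisibility check that reasoning implies; the dimensions, head count, parallelism degree, and tile width below are illustrative, not Zyphra's actual configuration.

```python
# A hidden dimension that divides cleanly by the attention head count, the
# tensor-parallel degree, and the GPU's preferred tile width avoids padded,
# wasted work when the matrices are split across devices.

def check_hidden_dim(hidden_dim, num_heads, tp_degree, tile=128):
    return {
        "divisible by attention heads": hidden_dim % num_heads == 0,
        "divisible by tensor-parallel degree": hidden_dim % tp_degree == 0,
        f"per-shard width is a multiple of {tile}": (hidden_dim // tp_degree) % tile == 0,
    }

for name, ok in check_hidden_dim(hidden_dim=4096, num_heads=32, tp_degree=8).items():
    print(f"{name}: {'OK' if ok else 'misaligned'}")
# 4096 = 2^12, so every check passes; an arbitrary size like 4000 would not.
```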

AI coding assistants struggle with deep kernel work (CUDA, PTX) because there's little public code to learn from. Furthermore, debugging AI-generated parallel code is extremely difficult because the developer lacks the mental model that comes from writing it, often making AI assistance slower than writing the kernel by hand.

To explain the LLM 'temperature' parameter, imagine a claw machine. A low temperature (zero) is a sharp, icy peak where the claw deterministically grabs the top token. A high temperature melts the peak, allowing the claw to grab more creative, varied tokens from a wider, flatter area.
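
A minimal sketch of temperature-scaled sampling over made-up logits, showing the deterministic argmax at temperature zero and the flatter, more varied distribution at higher temperatures.

```python
import math, random

# Low temperature sharpens the distribution toward the top token (the icy peak);
# high temperature flattens it so lower-ranked tokens get real probability mass.

def sample(logits, temperature):
    if temperature == 0:                      # degenerate case: pure argmax
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.5, 0.3, -1.0]                # made-up next-token scores
print([sample(logits, 0) for _ in range(5)])   # always token 0
print([sample(logits, 1.5) for _ in range(5)]) # a mix of tokens
```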

Salesforce is reintroducing deterministic automation because its generative AI agents struggle with reliability, dropping instructions when given more than eight commands. This pullback signals current LLMs are not ready for high-stakes, consistent enterprise workflows.

The primary performance bottleneck for LLMs is memory bandwidth (moving large weights), making them memory-bound. In contrast, diffusion-based video models are compute-bound, as they saturate the GPU's processing power by simultaneously denoising tens of thousands of tokens. This represents a fundamental difference in optimization strategy.
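
A rough back-of-envelope sketch of that difference in terms of arithmetic intensity (FLOPs per byte moved); every number below is illustrative, and real kernels, batching, and KV caches change the picture.

```python
# Compare FLOPs per byte against a GPU's compute-to-bandwidth ratio (roofline).

def arithmetic_intensity(flops, bytes_moved):
    return flops / bytes_moved

# LLM decode for one token: each fp16 weight (2 bytes) is read once and used
# for roughly 2 FLOPs (multiply + add), so intensity is about 1 FLOP/byte.
params = 70e9
decode_ai = arithmetic_intensity(2 * params, 2 * params)

# Diffusion video denoising touches tens of thousands of tokens per step,
# so each weight read is reused across all of them (activation traffic ignored).
tokens_per_step = 50_000
diffusion_ai = arithmetic_intensity(2 * params * tokens_per_step, 2 * params)

# A GPU with ~1000 TFLOP/s of fp16 compute and ~3 TB/s of memory bandwidth
# needs a few hundred FLOPs per byte before compute becomes the limit.
roofline = 1000e12 / 3e12

print(f"decode:    {decode_ai:.0f} FLOP/byte   -> memory-bound (needs ~{roofline:.0f})")
print(f"diffusion: {diffusion_ai:.0f} FLOP/byte -> compute-bound")
```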