Cerebras overcame the key obstacle to wafer-scale computing—chip defects—by adopting a strategy from memory design. Instead of aiming for a perfect wafer, they built a massive array of identical compute cores with built-in redundancy, allowing them to simply route around any flaws that occur during manufacturing.
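As a rough illustration of the idea (a minimal sketch, not Cerebras's actual redundancy scheme), the snippet below remaps a logical core grid onto a physical grid that contains spare columns, so software still sees a uniform array even when some physical columns hold defective cores.

```python
# Minimal sketch of defect tolerance via spare columns (illustrative only,
# not Cerebras's actual scheme): expose a uniform logical core grid by
# remapping logical columns onto physical columns that tested good.

def build_column_remap(physical_cols, defective_cols, logical_cols):
    """Map each logical column to a non-defective physical column."""
    usable = [c for c in range(physical_cols) if c not in defective_cols]
    if len(usable) < logical_cols:
        raise RuntimeError("not enough spare columns to cover the defects")
    return {logical: usable[logical] for logical in range(logical_cols)}

# Example: 12 physical columns exposing 10 logical ones, with defects
# found in physical columns 3 and 7 during wafer test.
remap = build_column_remap(physical_cols=12, defective_cols={3, 7}, logical_cols=10)
print(remap)  # logical column 3 now routes to physical column 4, and so on
```

This is the same spirit as redundant rows and columns in DRAM: the array is built larger than what is exposed, and defective elements are swapped out at test time rather than scrapping the part.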

Related Insights

To achieve 1000x efficiency, Unconventional AI is abandoning the digital abstraction (bits representing numbers) that has defined computing for 80 years. Instead, they are co-designing hardware and algorithms where the physics of the substrate itself defines the neural network, much like a biological brain.

The next wave of AI silicon may pivot from today's compute-heavy architectures to memory-centric ones optimized for inference. This fundamental shift would allow high-performance chips to be produced on older, more accessible 7-14nm manufacturing nodes, disrupting the current dependency on cutting-edge fabs.

While competitors chased cutting-edge process technology, AI chip company Groq used a more conservative node but loaded its chip with on-die memory (SRAM). That seemingly less advanced architectural choice proved ideally suited to the "decode" phase of AI inference, a critical memory-bound bottleneck, and ultimately led to its licensing deal with NVIDIA.
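A loose, roofline-style back-of-envelope (illustrative numbers, not vendor specifications) shows why decode rewards memory bandwidth: at small batch sizes each generated token must stream the model's weights, so token throughput is capped by bandwidth divided by model size rather than by peak FLOPs.

```python
# Back-of-envelope sketch of why autoregressive decode is memory-bound
# (illustrative numbers, not vendor specs): at small batch sizes, every
# generated token streams the full weight set, so the throughput ceiling
# is roughly memory bandwidth / model size, independent of peak FLOPs.

def decode_ceiling_tokens_per_sec(params_billion, bytes_per_param, bandwidth_tb_s):
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# A 70B-parameter model with 8-bit weights, served from two kinds of memory.
# (In practice, large models are sharded across many SRAM-based chips.)
for label, bw in [("HBM-class DRAM, ~3 TB/s", 3), ("aggregate on-die SRAM, ~80 TB/s", 80)]:
    print(f"{label}: ceiling ~{decode_ceiling_tokens_per_sec(70, 1, bw):.0f} tokens/s")
```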

Software companies struggle to build their own chips because their agile, sprint-based culture clashes with hardware development's demands. Chip design requires a "measure twice, cut once" mentality, as mistakes cost months and millions. This cultural mismatch is a primary reason for failure, even with immense resources.

The plateauing performance-per-watt of GPUs suggests that simply scaling current matrix multiplication-heavy architectures is unsustainable. This hardware limitation may necessitate research into new computational primitives and neural network designs built for large-scale distributed systems, not single devices.

For a hyperscaler, the main benefit of designing a custom AI chip isn't necessarily superior performance, but gaining control. It allows them to escape the supply allocations dictated by NVIDIA and chart their own course, even if their chip is slightly less performant or more expensive to deploy.

GPUs were designed for graphics, not AI. It was a "twist of fate" that their massively parallel architecture suited AI workloads. Chips designed from scratch for AI would be much more efficient, opening the door for new startups to build better, more specialized hardware and challenge incumbents.

OpenAI is designing its custom chip for flexibility, not just raw performance on current models. The team learned that the largest efficiency gains, on the order of 100x, come from evolving algorithms (e.g., moving from dense to sparse transformers), so the hardware must remain adaptable to these future architectural changes.
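As a toy illustration of that point (assumed numbers, not OpenAI's figures), the arithmetic below shows how a sparsely activated model changes the per-token compute profile the hardware has to serve:

```python
# Illustrative comparison (assumed numbers) of per-token compute for a dense
# transformer vs. a sparsely activated, mixture-of-experts-style one: only
# the active parameters are multiplied per token, so activating 1/16 of the
# weights cuts matmul FLOPs per token by roughly 16x before any hardware gains.

def flops_per_token(active_params):
    return 2 * active_params  # one multiply-add per active weight per token

dense = flops_per_token(1.0e12)        # 1T-parameter dense model
sparse = flops_per_token(1.0e12 / 16)  # same parameter count, 1/16 active per token
print(f"dense:  {dense:.2e} FLOPs/token")
print(f"sparse: {sparse:.2e} FLOPs/token  (~{dense / sparse:.0f}x fewer)")
```

Hardware tuned only for today's dense matrix shapes risks missing gains of this kind, which is the argument for building in flexibility.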

The current 2-3 year chip design cycle is a major bottleneck for AI progress: hardware is designed against software requirements that are outdated by the time it ships. By using AI to slash this timeline, companies could enable a massive expansion of custom chips, optimizing performance for many at-scale software workloads.

When building systems with hundreds of thousands of GPUs and millions of components, it's a statistical certainty that something is always broken. Therefore, hardware and software must be architected from the ground up to handle constant, inevitable failures while maintaining performance and service availability.
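A quick probability sketch (with assumed failure rates, not measured fleet data) makes the "always broken" claim concrete:

```python
# Rough sketch of why "something is always broken" at cluster scale
# (illustrative assumptions, not measured fleet data): with N independent
# components, failures arrive constantly and the chance that every
# component is healthy at the same instant is effectively zero.

import math

def expected_failures_per_day(num_components, annual_failure_rate):
    return num_components * annual_failure_rate / 365.0

def prob_all_healthy(num_components, fraction_down_at_any_time):
    # P(no component down) = (1 - p)^N, computed in log space to avoid underflow
    return math.exp(num_components * math.log1p(-fraction_down_at_any_time))

N = 1_000_000  # GPUs plus NICs, optics, DIMMs, fans, power supplies, ...
print(expected_failures_per_day(N, annual_failure_rate=0.02))   # ~55 failures/day
print(prob_all_healthy(N, fraction_down_at_any_time=1e-4))      # ~4e-44
```

With numbers in this range, failure handling cannot be an exception path; checkpointing, rerouting, and graceful degradation have to be part of the baseline design.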
