
While AI models tolerate certain types of noise, EnCharge AI's founder argues this tolerance is a red herring for hardware design. The many layers of software abstraction required for scalable systems cannot handle unpredictable analog noise. Therefore, the underlying hardware must be "brutally accurate" to ensure system integrity.
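The compounding effect behind this argument can be illustrated numerically: small, unpredictable errors injected at each stage of a layered computation accumulate by the output. The sketch below is purely illustrative (plain linear layers with additive Gaussian noise standing in for analog non-ideality); it is not EnCharge AI's design, and all names and parameters are invented for the example.

```python
import random

def matvec(w, x, noise_std=0.0):
    """Matrix-vector product; noise_std is a stand-in for analog
    circuit error added to each output (illustrative only)."""
    out = []
    for row in w:
        acc = sum(wi * xi for wi, xi in zip(row, x))
        if noise_std:
            acc += random.gauss(0.0, noise_std)
        out.append(acc)
    return out

def forward(x, layers, noise_std=0.0):
    """Apply a stack of linear layers; noise injected at every layer
    compounds by the final output."""
    for w in layers:
        x = matvec(w, x, noise_std)
    return x

random.seed(0)
dim, depth = 8, 6
layers = [[[random.uniform(-0.5, 0.5) for _ in range(dim)]
           for _ in range(dim)] for _ in range(depth)]
x = [1.0] * dim

clean = forward(x, layers)
noisy = forward(x, layers, noise_std=0.05)
drift = max(abs(a - b) for a, b in zip(clean, noisy))
print(f"max output drift after {depth} layers: {drift:.4f}")
```

Even a per-layer error that is tiny relative to the signal produces a nonzero drift at the output, which is the kind of unpredictability the software stack above the hardware cannot reason about.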

Related Insights

The AI supply chain is constrained not only by obvious components like TSMC wafers and HBM memory. A significant, often overlooked bottleneck is rack manufacturing—high-speed cables, connectors, and even sheet metal—components that are "sneaky hard" to produce due to extreme power, heat, and signal-integrity demands.

To achieve 1000x efficiency, Unconventional AI is abandoning the digital abstraction (bits representing numbers) that has defined computing for 80 years. Instead, they are co-designing hardware and algorithms where the physics of the substrate itself defines the neural network, much like a biological brain.

EnCharge AI's analog compute design is so efficient that it doesn't need cutting-edge fabrication nodes to achieve significant performance gains. By using older, more accessible 16nm and 12nm processes, the company can avoid the intense competition and supply constraints for TSMC's most advanced nodes.

Digital computing, the standard for 80 years, is too power-hungry for scalable AI. Unconventional AI's Naveen Rao is betting on analog computing, which uses physics to perform calculations, as a more energy-efficient substrate for the unique demands of intelligent, stochastic workloads.

AI chip startup Talos takes a contrarian approach by casting models "straight into silicon" as fixed-function, model-specific hardware. This trades flexibility for massive gains in speed and cost, betting that frontier models will remain stable for 3-12 months at a time, making the "cartridge-swap" model economically viable.

We are building AI, a fundamentally stochastic and fuzzy system, on top of highly precise and deterministic digital computers. Unconventional AI founder Naveen Rao argues this is a profound mismatch. The goal is to build a new computing substrate—analog circuits—that is isomorphic to the nature of intelligence itself.

EnCharge AI's innovation was to reframe in-memory analog compute not as a scaled-up memory problem, but as a high-precision analog design problem. They borrowed techniques from medical and aerospace circuits to overcome noise and enable massive efficiency gains.

Borrowing a term from Formula One, Chris Fregly argues that AI engineers must develop a deep, symbiotic understanding of the full hardware-software stack. Rather than just staying at the Python level, true optimizers must co-design algorithms, software, and hardware, just as a champion driver understands how to build their car.

Instead of competing on speed and energy alone, Normal Computing is designing ASICs that introduce noise as a third optimization vector. These chips are ideal for probabilistic workloads like diffusion models, which are inherently noisy and approximate, mapping the software's physics to the hardware's.
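The idea that some workloads consume noise rather than fight it can be sketched with a toy example: unadjusted Langevin sampling, where each step deliberately injects Gaussian noise to draw samples from a target distribution. This is a minimal stand-in for the class of probabilistic workloads described here, not Normal Computing's actual architecture; the target density, step size, and function names are all assumptions made for the example.

```python
import math
import random

def grad_log_p(x, mu=2.0, sigma=1.0):
    """Score (gradient of log-density) of a 1-D Gaussian N(mu, sigma^2),
    the toy target distribution for this sketch."""
    return -(x - mu) / sigma**2

def langevin_sample(steps=2000, eta=0.05, seed=1):
    """Unadjusted Langevin dynamics: x += eta*score + sqrt(2*eta)*noise.
    The Gaussian noise term is required by the algorithm itself; it is
    the kind of randomness a noisy physical substrate could, in
    principle, supply natively instead of a digital RNG (illustrative)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(steps):
        x += eta * grad_log_p(x) + math.sqrt(2 * eta) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

xs = langevin_sample()
burn = xs[len(xs) // 2:]  # discard burn-in before estimating statistics
mean = sum(burn) / len(burn)
print(f"empirical mean of samples: {mean:.2f} (target 2.0)")
```

The point of the sketch is that the noise term is load-bearing: removing it breaks the sampler. For workloads of this shape, hardware whose physics produces noise "for free" maps naturally onto what the software needs.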

Biological intelligence has no OS or APIs; the physics of the brain *is* the computation. Unconventional AI's CEO Naveen Rao argues that current AI is inefficient because it runs on layers of abstraction. The future is hardware where intelligence is an emergent property of the system's physics.