Cryptographic "Feistel Ciphers" Help Train Neural Networks by Reducing Memory Usage

A technique borrowed from cryptography, the Feistel network, turns arbitrary, even non-invertible, functions into an invertible transformation. Applied to neural network layers ("RevNets"), this lets activations from the forward pass be recomputed during the backward pass instead of stored, trading extra compute for a massive reduction in memory footprint during training.
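
As a minimal sketch of the idea, here is an additive Feistel-style coupling in NumPy; F and G stand in for arbitrary residual sub-networks, and the inverse recovers the block's inputs exactly:

```python
import numpy as np

# Two arbitrary, non-invertible "round functions". In a RevNet these would
# be small residual sub-networks; here they are simple stand-ins.
def F(x):
    return np.tanh(1.7 * x)

def G(x):
    return np.maximum(x, 0.0)  # ReLU

def forward(x1, x2):
    # Additive Feistel coupling: invertible no matter what F and G are.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Recover the inputs exactly, so the backward pass can recompute
    # activations from the outputs instead of storing them.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```

Because each block's input is recoverable from its output, training only needs to hold the current block's activations in memory, at the cost of one extra forward computation per block during backpropagation.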

Related Insights

The progression from early neural networks to today's massive models has been driven primarily by the exponential growth of available computational power, from the initial move to GPUs to a roughly million-fold increase in the compute applied to training a single model.

Google's TurboQuant algorithm enables near-lossless compression of cached context, drastically reducing memory usage and inference costs. This breakthrough could democratize powerful AI by making it far cheaper and faster to run, much as the fictional 'middle-out' compression was a game-changer in the show 'Silicon Valley'.
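
TurboQuant's actual algorithm is not reproduced here; purely as a hedged illustration of why quantizing cached activations shrinks memory, this NumPy sketch compresses a mock KV-cache block to int8 with a per-row scale (the 8-bit format and scaling scheme are illustrative assumptions):

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-row quantization: 4 bytes per value (float32) becomes
    # 1 byte per value plus one scale per row.
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

kv = np.random.randn(128, 64).astype(np.float32)  # a mock KV-cache block
q, scale = quantize_int8(kv)
err = np.abs(dequantize(q, scale) - kv).max()
print(f"{kv.nbytes} -> {q.nbytes + scale.nbytes} bytes, max abs error {err:.4f}")
```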

The history of AI, exemplified by the 2012 AlexNet breakthrough, shows that scaling compute and data on simpler, older algorithms often yields greater advances than designing intricate new ones. This "bitter lesson" suggests prioritizing scalability over algorithmic complexity for future progress.

Autoencoding models (e.g., BERT) are "readers" that fill in blanks, while autoregressive models (e.g., GPT) are "writers." For non-generative tasks like classification, a tiny autoencoding model can match the performance of a massive autoregressive one, offering huge efficiency gains.
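
A quick sketch contrasting the two interfaces, assuming the Hugging Face transformers library is installed; the prompt is arbitrary:

```python
from transformers import pipeline

# "Reader": a masked (autoencoding) model fills in a blank using context
# from both directions at once.
reader = pipeline("fill-mask", model="bert-base-uncased")
print(reader("The capital of France is [MASK].")[0]["token_str"])  # likely "paris"

# "Writer": an autoregressive model continues the text left to right.
writer = pipeline("text-generation", model="gpt2")
print(writer("The capital of France is", max_new_tokens=5)[0]["generated_text"])
```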

A fundamental constraint today is that the model architecture used for training must be the same as the one used for inference. Future breakthroughs could come from lifting this constraint. This would allow for specialized models: one optimized for compute-intensive training and another for memory-intensive serving.

To bridge the learning-efficiency gap between humans and AI, researchers use meta-learning. The technique learns an initialization for a network's weights, giving it a "soft bias" that starts it closer to good solutions and mimics the inherent inductive biases that allow humans to learn efficiently from limited data.
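
A first-order MAML-style sketch on toy 1-D regression tasks (the learning rates, task distribution, and single inner step are illustrative assumptions, not any specific paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task: regress y = w_true * x, with w_true drawn per task.
    w_true = rng.uniform(1.0, 3.0)
    x = rng.normal(size=10)
    return x, w_true * x

def grad(w, x, y):
    # d/dw of mean squared error for the linear model y_hat = w * x.
    return np.mean(2.0 * x * (w * x - y))

w0, inner_lr, outer_lr = 0.0, 0.05, 0.01
for _ in range(5000):
    x, y = sample_task()
    w_adapted = w0 - inner_lr * grad(w0, x, y)   # one inner-loop step
    # First-order MAML: nudge the initialization with the post-adaptation
    # gradient, pulling w0 toward a point from which one step adapts well.
    w0 -= outer_lr * grad(w_adapted, x, y)

print(f"learned init w0 = {w0:.2f}  (task optima lie in [1, 3])")
```

The learned initialization settles near the middle of the task distribution, so a single gradient step on a handful of examples gets close to any task's solution: the "soft bias" in code form.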

Today's transformers are optimized for matrix multiplication (MatMul) on GPUs. However, as compute scales to distributed clusters, MatMul may not be the most efficient primitive. Future AI architectures could be drastically different, built on new primitives better suited for large-scale, distributed hardware.

Research shows it's possible to distinguish and remove model weights used for memorizing facts versus those for general reasoning. Surprisingly, pruning these memorization weights can improve a model's performance on some reasoning tasks, suggesting a path toward creating more efficient, focused AI reasoners.
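
The attribution method that identifies "memorization weights" is the hard part and is not reproduced here; this hedged sketch only shows the pruning mechanic, with hypothetical per-weight scores:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))  # one weight matrix of a model

# Hypothetical per-weight "memorization" scores; computing these is the
# research contribution and is not reproduced here.
mem_score = rng.random(W.shape)

keep = mem_score < np.quantile(mem_score, 0.95)  # prune the top 5%
W_pruned = np.where(keep, W, 0.0)
print(f"removed {W.size - keep.sum()} of {W.size} weights")
```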

The key to continual learning is not just a longer context window, but a new architecture with a spectrum of memory types. "Nested learning" proposes a model with different layers that update at different frequencies—from transient working memory to persistent core knowledge—mimicking how humans learn without catastrophic forgetting.
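
A toy sketch of the multi-timescale idea: "fast" parameters update every step while "slow" parameters update rarely and gently. The two-tier split and schedule are illustrative assumptions, not the nested-learning proposal itself:

```python
import numpy as np

rng = np.random.default_rng(0)
fast = rng.normal(size=8)   # transient "working memory" parameters
slow = rng.normal(size=8)   # persistent "core knowledge" parameters
lr = 0.01

def gradients():
    # Stand-in for real gradients from a training batch.
    return rng.normal(size=8), rng.normal(size=8)

for step in range(1000):
    g_fast, g_slow = gradients()
    fast -= lr * g_fast              # fast tier: updated every step
    if step % 100 == 0:              # slow tier: updated rarely and gently,
        slow -= 0.1 * lr * g_slow    # insulating core knowledge from
                                     # recent noise
```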

Recent AI breakthroughs aren't just from better models, but from clever 'architecture' or 'scaffolding' around them. For example, Claude Code 'cheats' its context window limit by taking notes, clearing its memory, and then reading the notes to resume work. This architectural innovation drives performance.
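
A hypothetical sketch of that note-taking loop; `llm`, `count_tokens`, and the summarization prompt are invented stand-ins, not Claude Code's actual implementation:

```python
CONTEXT_BUDGET = 4000  # tokens; an illustrative limit

def agent_loop(task, llm, count_tokens):
    """Work on `task`, compressing context into notes near the limit.

    `llm` (prompt -> str) and `count_tokens` (str -> int) are hypothetical
    callables standing in for a model API and a tokenizer.
    """
    context = [task]
    while True:
        if count_tokens("\n".join(context)) > CONTEXT_BUDGET:
            # Take notes, clear memory, then resume from the notes.
            notes = llm("Summarize progress and next steps:\n" + "\n".join(context))
            context = [task, "Notes from earlier work:\n" + notes]
        reply = llm("\n".join(context))
        context.append(reply)
        if "DONE" in reply:
            return reply
```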
