The bill regulates not just models trained with massive compute, but also smaller models trained on the output of larger ones ('knowledge distillation'). Because distillation is a key technique Chinese firms use to bypass US export controls on advanced chips, covering it brings those smaller models under the regulatory umbrella.
The decision to allow NVIDIA to sell powerful AI chips to China has a counterintuitive goal. The administration believes that by supplying China, it can "take the air out" of the country's own efforts to build a self-sufficient AI chip ecosystem, thereby hindering domestic firms like Huawei.
The US President's move to centralize AI regulation at the federal level, preempting individual states, is likely a response to lobbying from major tech companies. They need a stable, nationwide framework to protect their massive capital expenditures on data centers; a patchwork of state laws creates uncertainty and the risk of being forced into costly relocations.
China is gaining an efficiency edge in AI by using "distillation": training smaller, cheaper models to imitate the outputs of larger ones. This teacher-and-student approach is much faster and cheaper than training from scratch, challenging the capital-intensive US strategy and highlighting how inefficient and "bloated" current Western foundational models are.
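For intuition, here is a minimal PyTorch sketch of the core idea, not any particular lab's recipe: the small "student" model is trained to match the softened output distribution of the large "teacher", using toy tensors in place of real model outputs.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's, scaled by T^2 to keep gradient magnitudes comparable."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 examples over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)                      # frozen large "teacher"
student_logits = torch.randn(4, 10, requires_grad=True)  # small "student" being trained
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                          # gradients flow only to the student
```

Because the student needs only the teacher's outputs, not its weights or training data, this is far cheaper than training a frontier model from scratch.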
Unable to compete globally on inference-as-a-service due to US chip sanctions, China has pivoted to releasing top-tier open-source models. This serves as a powerful soft power play, appealing to other nations and building a technological sphere of influence independent of the US.
An emerging geopolitical threat is China weaponizing AI by flooding the market with cheap, efficient large language models (LLMs). This strategy, mirroring their historical dumping of steel, could collapse the pricing power of Western AI giants, disrupting the US economy's primary growth engine.
A common misconception is that Chinese AI models are fully open-source. In reality they are often "open-weight": the trained parameters (weights) are published, but the training code and proprietary datasets are not. This provides a competitive advantage by enabling widespread adoption while maintaining some control.
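In practice, "open-weight" means anyone can download the checkpoint and run or fine-tune it, as in this minimal sketch using the Hugging Face transformers library (the model ID below is a placeholder, not a specific release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository name; substitute any published open-weight checkpoint.
model_id = "some-org/open-weight-llm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # downloads the released weights

# What ships: the trained weights plus enough config/inference code to run them.
# What does not ship: the training pipeline, data-curation tooling, and the
# proprietary dataset used to produce those weights.
```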
Chinese AI models like Kimi achieve dramatic cost reductions through specific architectural choices, not just scale. Using a "mixture of experts" design, they activate only a fraction of their total parameters for any given input token, making them far more efficient to run than the "dense" models common in the West.
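A minimal PyTorch sketch of the idea (toy sizes, not Kimi's actual architecture): a router sends each token to its top-k experts, so only those experts' parameters do any work for that token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: a router picks the top-k of n_experts
    for each token, so only a fraction of parameters is active per token."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (n_tokens, d_model)
        scores = self.router(x)                  # (n_tokens, n_experts)
        weights, expert_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # mixing weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)                     # 16 tokens, model width 64
print(TinyMoELayer()(tokens).shape)              # torch.Size([16, 64])
```

With 8 experts and top-2 routing, only about a quarter of the expert parameters do any work per token, which is where the serving-cost savings come from.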
US export controls on AI chips have backfired. Instead of crippling China's AI development, the restrictions gave China a strong incentive to aggressively invest in and accelerate its own semiconductor industry, potentially eroding the US's long-term competitive advantage.
Unlike the US's voluntary approach, China requires AI developers to register their models with the government before public release. The process requires safety testing against a national standard covering 31 risk categories and giving regulators pre-deployment access for approval, creating a de facto licensing regime for consumer AI.
The business model for powerful, free, open-source AI models from Chinese companies may not be direct profit. Instead, it could be a strategy to globally distribute an AI trained on a specific worldview, competing with American models on an ideological rather than purely commercial level.