China remains committed to open-weight models, seeing them as beneficial for innovation. Its primary safety strategy is to remove hazardous knowledge (e.g., bioweapons information) from the training data itself. This makes the public model inherently safer, rather than relying solely on post-training refusal mechanisms that can be circumvented.
Top Chinese officials use the metaphor "if the braking system isn't under control, you can't really step on the accelerator with confidence." This reflects a core belief that robust safety measures enable, rather than hinder, the aggressive development and deployment of powerful AI systems, viewing the two as synergistic.
China's promotion of open-weight models is a strategic maneuver to exert global influence. By controlling the underlying models that answer questions about history, borders, and values, a nation can shape global narratives and project soft power, much like Hollywood did for the U.S.
China's binding regulations mean companies focus safety efforts on the 31 specific risks defined by the government. This compliance-driven approach can leave them less prepared for emergent risks like CBRN or loss of control, as resources are directed toward meeting existing legal requirements rather than proactive, voluntary measures.
A key disincentive for open-sourcing frontier AI models is that the released model weights contain residual information about the training process. Competitors could potentially reverse-engineer the training data set or proprietary algorithms, eroding the creator's competitive advantage.
Unable to compete globally on inference-as-a-service due to US chip sanctions, China has pivoted to releasing top-tier open-source models. This serves as a powerful soft power play, appealing to other nations and building a technological sphere of influence independent of the US.
A common misconception is that Chinese AI is fully open-source. The reality is they are often "open-weight," meaning training parameters (weights) are shared, but the underlying code and proprietary datasets are not. This provides a competitive advantage by enabling adoption while maintaining some control.
Z.AI and other Chinese labs recognize Western enterprises won't use their APIs due to trust and data concerns. By open-sourcing models, they bypass this barrier to gain developer adoption, global mindshare, and brand credibility, viewing it as a pragmatic go-to-market tactic rather than an ideological stance.
Unlike the US's voluntary approach, Chinese AI developers must register their models with the government before public release. This involved process requires safety testing against a national standard of 31 risks and giving regulators pre-deployment access for approval, creating a de facto licensing regime for consumer AI.
Even when air-gapped, commercial foundation models are fundamentally compromised for military use. Their training on public web data makes them vulnerable to "data poisoning," where adversaries can embed hidden "sleeper agents" that trigger harmful behavior on command, creating a massive security risk.
While the U.S. leads in closed, proprietary AI models like OpenAI's, Chinese companies now dominate the leaderboards for open-source models. Because they are cheaper and easier to deploy, these Chinese models are seeing rapid global uptake, challenging the U.S.'s perceived lead in AI through wider diffusion and application.