China's binding regulations mean that companies concentrate their safety efforts on the 31 specific risks defined by the government. This compliance-driven approach can leave them less prepared for emergent risks like CBRN or loss of control, because resources are directed toward meeting existing legal requirements rather than toward proactive, voluntary measures.
In China, mayors and governors are promoted based on their ability to meet national priorities. As AI safety becomes a central government goal, these local leaders are now incentivized to create experimental zones and novel regulatory approaches, driving bottom-up policy innovation that can later be adopted nationally.
Top Chinese officials use the metaphor "if the braking system isn't under control, you can't really step on the accelerator with confidence." This reflects a core belief that robust safety measures enable, rather than hinder, the aggressive development and deployment of powerful AI systems, viewing the two as synergistic.
The emphasis on long-term, unfalsifiable risks like AI superintelligence functions as a strategic diversion. It shifts regulatory and safety efforts away from tangible, immediate problems like model inaccuracy and security vulnerabilities, effectively leaving today's systems without meaningful oversight.
In China, academics have significant influence on policymaking, partly due to a cultural tradition that highly values scholars. Experts deeply concerned about existential AI risks have briefed the highest levels of government, suggesting that policy may be less susceptible to capture by commercial tech interests compared to the West.
The argument that the U.S. must race to build superintelligence before China is flawed. The Chinese Communist Party's primary goal is control. An uncontrollable AI poses a direct existential threat to their power, making them more likely to heavily regulate or halt its development rather than recklessly pursue it.
China remains committed to open-weight models, seeing them as beneficial for innovation. Its primary safety strategy is to remove hazardous knowledge (e.g., bioweapons information) from the training data itself. This makes the public model inherently safer, rather than relying solely on post-training refusal mechanisms that can be circumvented.
AI companies engage in "safety revisionism," shifting the definition from preventing tangible harm to abstract concepts like "alignment" or future "existential risks." This tactic allows their inherently inaccurate models to bypass the traditional, rigorous safety standards required for defense and other critical systems.
Unlike the US's voluntary approach, Chinese AI developers must register their models with the government before public release. This process requires safety testing against a national standard covering 31 risks and granting regulators pre-deployment access for approval, creating a de facto licensing regime for consumer AI.
An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.
The AI safety discourse in China is pragmatic, focusing on immediate economic impacts rather than long-term existential threats. The most palpable fear exists among developers, who directly experience the power of coding assistants and worry about job replacement, a stark contrast to the West's more philosophical concerns.