Three days after the Trump administration published its much-anticipated AI action plan, the Chinese government released its own AI policy blueprint. Was the timing a coincidence? I doubt it.
China’s “Global AI Governance Action Plan” was unveiled on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our WIRED colleague Will Knight was also on the scene.
The atmosphere at WAIC was a stark contrast to Trump’s America-first, regulation-light vision for AI. In his opening speech, Chinese Premier Li Qiang emphasized the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers who highlighted urgent questions the Trump administration seems to be largely ignoring.
At WAIC, Zhou Bowen, who leads the Shanghai AI Lab, one of China’s top AI research institutions, discussed his team’s work on AI safety and suggested the government could play a role in monitoring commercial AI models for vulnerabilities.
In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and a leading voice on AI, expressed hopes for global collaboration. “It would be best if the UK, US, China, Singapore, and other institutes come together,” he said.
The conference included closed-door meetings about AI safety policy issues. Paul Triolo, a partner at the advisory firm DGA-Albright Stonebridge Group, noted that the discussions were productive, despite the absence of American leadership. With the US out of the picture, “a coalition of major AI safety players, co-led by China, Singapore, the UK, and the EU, will now drive efforts to construct guardrails around frontier AI model development,” Triolo told WIRED.
Many Western visitors were surprised by the focus on AI safety in China. “You could literally attend AI safety events nonstop in the last seven days. And that was not the case with some of the other global AI summits,” Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a day-long safety forum in Shanghai with renowned AI researchers like Stuart Russell and Yoshua Bengio.
Comparing China’s AI blueprint with Trump’s action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many thought they would be held back by government censorship. Now, US leaders want to ensure homegrown AI models “pursue objective truth,” an endeavor that, as my colleague Steven Levy wrote, is “a blatant exercise in top-down ideological bias.” China’s AI action plan, in contrast, reads like a globalist manifesto, recommending that the United Nations lead international AI efforts and suggesting governments play a crucial role in regulating the technology.
Although their governments are very different, people in China and the US share similar concerns about AI safety: model hallucinations, discrimination, existential risks, and cybersecurity vulnerabilities. Because the US and China are building frontier AI models on similar architectures and training methods, the societal impacts and risks are largely the same. This also means academic research on AI safety is converging in both countries, including in areas like scalable oversight and interoperable safety testing standards.
However, Chinese and American leaders have different attitudes toward these issues. The Trump administration recently tried and failed to impose a 10-year moratorium on state-level AI regulations. In contrast, Chinese officials, including President Xi Jinping, are increasingly speaking out about the importance of AI guardrails. Beijing has been busy drafting domestic standards and rules for the technology, some of which are already in effect.
As the US adopts unorthodox and inconsistent policies, the Chinese government increasingly appears as the adult in the room. With its new AI action plan, Beijing is trying to seize the moment and send a message: If you want leadership on this world-changing innovation, look here.
I don’t know how effective China’s charm offensive will be in the end, but the global retreat of the US does feel like a once-in-a-century opportunity for Beijing to spread its influence, especially at a time when every country is looking for role models to help manage AI risks.
However, it remains to be seen how eager China’s domestic AI industry will be to embrace this heightened focus on safety. While the Chinese government and academic circles have ramped up their AI safety efforts, industry has been less enthusiastic—just like in the West.
According to a recent report by Concordia AI, Chinese AI labs disclose less information about their safety efforts than their Western counterparts. Of the 13 frontier AI developers in China the report analyzed, only three provided details about safety assessments in their research publications.
Will told me that several tech entrepreneurs he spoke to at WAIC expressed concerns about AI risks such as hallucination, model bias, and criminal misuse. But when it came to AGI, many seemed optimistic about its positive impacts and less concerned about job loss or existential risks. Privately, some entrepreneurs admitted that addressing existential risks isn’t as important to them as scaling, making money, and beating the competition.
However, the clear signal from the Chinese government is that companies should tackle AI safety risks. I wouldn’t be surprised if many startups in the country change their tune. Triolo of DGA-Albright Stonebridge Group expects Chinese frontier research labs to publish more cutting-edge safety work.
Some WAIC attendees see China’s focus on open-source AI as a key part of the picture. “As Chinese AI companies increasingly open-source powerful AIs, their American counterparts are pressured to do the same,” Bo Peng, a researcher who created the open-source large language model RWKV, told WIRED. Peng envisions a future where different nations, including those that do not always agree, work together on AI. “A competitive landscape of multiple powerful open-source AIs is in the best interest of AI safety and humanity's future,” he explained. “Because different AIs naturally embody different values and will keep each other in check.”
Q: What is the World Artificial Intelligence Conference (WAIC)?
A: The World Artificial Intelligence Conference (WAIC) is the largest annual AI event in China, where global leaders, researchers, and industry figures gather to discuss the latest advancements and policies in artificial intelligence.
Q: What is China’s “Global AI Governance Action Plan”?
A: China’s “Global AI Governance Action Plan” is a policy blueprint released by the Chinese government, emphasizing global cooperation and safety in the development and regulation of AI technologies.
Q: How does China's AI policy differ from the US's AI action plan?
A: China’s AI policy focuses on global cooperation, safety, and regulation, while the US action plan under the Trump administration emphasizes an America-first, regulation-light approach.
Q: What are the key concerns regarding AI safety in both China and the US?
A: Key concerns in both countries include model hallucinations, discrimination, existential risks, and cybersecurity vulnerabilities, reflecting the need for robust safety measures in AI development.
Q: What role does open-source AI play in China's AI strategy?
A: Some WAIC attendees see open-source AI as a key part of China’s approach: as Chinese companies open-source powerful models, their American counterparts face pressure to do the same, and a competitive landscape of multiple open-source AIs, each embodying different values, could keep one another in check.