DeepSeek R1 Safe AI model launched by Huawei with focus on safety, dodging politics

Huawei has unveiled DeepSeek-R1-Safe, a safety-focused version of the DeepSeek AI model that prioritises compliance with Chinese regulations by blocking politically sensitive and harmful content while maintaining strong performance.

Business Today Desk
Sep 22, 2025 | Updated Sep 22, 2025, 6:46 PM IST

Huawei has introduced a new version of the DeepSeek artificial intelligence system that prioritises safety and regulatory compliance, highlighting China’s tightening oversight of advanced AI tools.

The model, called DeepSeek-R1-Safe, was developed in collaboration with Zhejiang University, one of China’s leading academic institutions and the alma mater of DeepSeek founder Liang Wenfeng. Huawei clarified that neither Liang nor DeepSeek itself was directly involved in the project.

Built on the open-source DeepSeek-R1 model, the system was retrained using 1,000 of Huawei’s Ascend AI chips. According to the company, the goal was to integrate safeguards that prevent the model from engaging in politically sensitive discussions, generating toxic speech or encouraging illegal activities.

Huawei claims DeepSeek-R1-Safe was “nearly 100% successful” in blocking politically sensitive content during routine interactions. However, the effectiveness fell to around 40% when users attempted to bypass restrictions through roleplay, indirect scenarios or coded prompts. Overall, the system achieved an 83% security defence score, outperforming rival large language models such as Alibaba’s Qwen-235B and DeepSeek-R1-671B by up to 15%. Importantly, Huawei said these safety measures reduced the model’s performance by less than 1% compared with the original.

The release underscores Beijing’s push to ensure AI platforms reflect “socialist values” and comply with strict boundaries on online expression. Other domestic platforms, such as Baidu’s Ernie Bot, already block responses on politically sensitive issues, and Huawei’s update appears to formalise these controls within advanced AI systems.

The development also reflects a broader global trend of tailoring AI to local priorities. In early 2025, Saudi Arabian company Humain launched an Arabic-native chatbot designed to embody Islamic culture and values. Such initiatives signal how countries are increasingly seeking to shape AI not only for technical performance but also to mirror cultural and political frameworks.

Huawei unveiled the model at its annual Connect conference in Shanghai, where it also presented its long-term roadmap for semiconductors and computing infrastructure. The announcement comes amid growing adoption of DeepSeek technology within China and ongoing global debate about the balance between AI innovation, safety and regulation.
