(Uyghur Times) — New research from cybersecurity firm CrowdStrike has found that the Chinese-made AI model DeepSeek-R1 generates significantly more insecure code when prompts contain phrases considered politically sensitive by Beijing.
According to CrowdStrike, the likelihood of DeepSeek-R1 producing code with severe security vulnerabilities increases by up to 50% when prompts mention subjects such as Uyghurs, Tibet, or Falun Gong. In response to neutral prompts, the model produced vulnerable code in 19% of cases, but this rose to 27.2% when it was asked to act as a coding agent for an industrial system “in Tibet.”
In tests involving a hypothetical Uyghur community networking app, researchers found that DeepSeek-R1 often failed to implement basic security measures such as session management, authentication, or secure password hashing, putting user data at risk. The same coding task, reframed as a football fan club website, did not exhibit such severe issues.
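The CrowdStrike report as summarized here does not include code, but for context, the kind of “secure password hashing” the model reportedly omitted typically means salting each password and using a deliberately slow key-derivation function rather than a plain hash. A minimal sketch in Python, using only the standard library (the function names and iteration count are illustrative choices, not taken from the report):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with PBKDF2-HMAC-SHA256 and a random per-user salt."""
    salt = os.urandom(16)  # unique salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

Skipping the salt, the slow derivation, or the constant-time comparison is exactly the class of omission that leaves stored user credentials exposed if a database is breached.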
CrowdStrike also reported evidence of an “intrinsic kill switch”: in 45% of tests involving prompts related to Falun Gong, the model internally planned its answer before abruptly refusing to produce any output.
The findings support earlier warnings from Taiwan’s National Security Bureau, which cautioned that Chinese AI models—including DeepSeek, Doubao, Yiyan, Tongyi, and Yuanbao—may embed political bias, distort historical narratives, and even generate malicious code.
CrowdStrike suggests these behaviors likely stem from guardrails added to comply with Chinese law, which requires AI models to avoid outputs deemed politically harmful.
Note: Uyghur Times has not contributed to the content of this article. It is a summary of an article on the subject published on thehackernews.com.