Taiwan's National Security Bureau (NSB) has issued an urgent warning about several China-developed generative AI language models, saying they contain severe security vulnerabilities and politically steered outputs that could compromise users and distort information across global technology ecosystems.
The warning comes from a jurisdiction at the center of the semiconductor supply chain. Taiwan's findings highlight concerns not only about personal privacy, but also about the integrity of software and AI tools increasingly woven into chip design, cloud services, manufacturing operations, and cross-border digital collaboration.
Security flaws across the board
The assessment, conducted with the Ministry of Justice Investigation Bureau and the National Police Agency's Criminal Investigation Bureau, reviewed five widely used AI models from China: DeepSeek, Doubao, Wenxin Yiyan, Tongyi Qianwen, and Tencent Yuanbao.
Evaluators applied the Ministry of Digital Affairs' "Mobile Application APP Basic Cybersecurity Testing Standards v4.0," which examines issues such as data collection, permission overreach, data sharing, device access, and system information capture.
All five models failed on multiple counts. Tongyi Qianwen violated eleven of fifteen indicators, Doubao and Tencent Yuanbao each logged ten, while Wenxin Yiyan and DeepSeek recorded nine and eight violations, respectively.
The apps commonly demanded real-time location data, attempted to capture screenshots, required agreement to sweeping privacy terms, and collected extensive device information. The NSB says these practices create openings for unauthorized data extraction that could affect corporate users, supply-chain partners, and research organizations.
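The permission-overreach checks described above can be approximated by auditing an app's declared Android permissions. The sketch below is purely illustrative — the manifest snippet and the high-risk permission list are hypothetical examples, not drawn from the NSB report or from any of the reviewed apps:

```python
# Illustrative sketch: flag declared Android permissions of the kind the
# NSB review covers (real-time location, device identifiers, overlays).
# The HIGH_RISK set and the sample manifest are hypothetical.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

HIGH_RISK = {
    "android.permission.ACCESS_FINE_LOCATION",  # real-time location
    "android.permission.READ_PHONE_STATE",      # device identifiers
    "android.permission.SYSTEM_ALERT_WINDOW",   # screen overlay abuse
}

def risky_permissions(manifest_xml: str) -> list[str]:
    """Return declared permissions that appear on the high-risk list."""
    root = ET.fromstring(manifest_xml)
    declared = [
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    ]
    return sorted(p for p in declared if p in HIGH_RISK)

sample = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
  <uses-permission android:name="android.permission.INTERNET"/>
</manifest>"""

print(risky_permissions(sample))
# ['android.permission.ACCESS_FINE_LOCATION']
```

A real audit would also cover runtime behavior (screenshot attempts, network destinations), which static manifest review alone cannot detect.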
Political bias embedded in outputs
Beyond technical weaknesses, the NSB evaluated how the models handle sensitive topics. The content review, based on ten categories, found consistent political bias aligned with the official positions of Beijing.
Outputs described Taiwan as being governed by China's central authorities, portrayed Taiwan as "China Taiwan," and insisted the island is an inalienable part of Chinese territory. These narratives appeared even when queries involved historical or factual context.
The models also avoided or removed terms such as "democracy," "freedom," "human rights," and "Tiananmen Square," indicating deliberate keyword filtering.
The NSB's report further notes that the models were able to generate inflammatory or defamatory text, spread misleading information, and—under certain prompts—produce cyberattack commands or basic exploit code. These behaviors raise concerns that the tools could be misused in coordinated information operations or as vectors for network intrusion.
Risks for global supply chains
For companies outside Taiwan, especially those in sectors that rely on accurate data, technical reliability, and secure digital infrastructure, the findings point to operational risks. AI models are increasingly embedded in research and development, logistics planning, customer engagement systems, and automated decision-making. Biased output or insecure data handling could introduce hidden vulnerabilities into workflows connected to semiconductor manufacturing, cloud computing, or international supply-chain management.
The NSB's conclusions echo similar concerns raised in multiple countries. A central issue is the possibility that user data—ranging from search queries to chat history—could be transferred back to Chinese servers. The report notes that companies operating these AI services may be legally obligated under Chinese laws, including the National Intelligence Law and the Cybersecurity Law, to provide data to government agencies upon request.
Taiwan's security agency advises users to avoid downloading the identified AI applications and urges companies to review their exposure to foreign-developed AI tools that lack transparent data protection mechanisms.
The NSB says it will continue sharing intelligence with international partners to track cross-border risks and reinforce digital resilience.
Article edited by Jerry Chen