OpenAI bans Chinese accounts working for social media surveillance company


OpenAI bans China-linked accounts accused of using ChatGPT to monitor protests abroad and send reports to Chinese authorities.
OpenAI has banned multiple user accounts associated with China after discovering that its artificial intelligence tools, including ChatGPT, were being used to support a suspected surveillance operation targeting online discussions of anti-China protests in Western countries.

The Microsoft-backed technology firm said it had terminated the accounts for violating its usage policies, which prohibit the use of its systems for unauthorised surveillance or in support of authoritarian regimes.

The banned users had reportedly deployed OpenAI’s chatbot to assist in generating software code and sales material for an AI-powered tool designed to monitor online conversations. According to OpenAI’s report, the surveillance system, named the "Qianyue Overseas Public Opinion AI Assistant", was capable of tracking discussions on human rights demonstrations in countries including the United States and the United Kingdom.

AI used for cross-border surveillance, says OpenAI
In a statement reported by Bloomberg, OpenAI’s principal investigator Ben Nimmo said the case illustrates how democratic technologies can be exploited by non-democratic regimes.

“This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or US-based AI for non-democratic purposes, according to the materials they were generating themselves,” Nimmo said.

The company stated that the AI assistant had been used to compile and send reports on protest activity to Chinese authorities, including intelligence personnel and embassy officials. These activities, OpenAI emphasised, are in direct violation of its terms, which forbid the use of its technology to suppress personal freedoms or aid repressive surveillance.

Meta and other AI providers also implicated
OpenAI noted that the banned accounts had not relied solely on ChatGPT. The network reportedly used additional AI models, including Meta’s Llama, to help refine its output. Responding to the claims, Meta acknowledged that it was one of several model providers potentially used in the operation, though it did not confirm direct involvement.

Meta also highlighted the global spread of open-access AI systems, noting that attempts to block certain platforms may have limited impact given the scale of China's own domestic AI development efforts.

“China is already investing more than a trillion dollars to surpass the US technologically,” Meta said. “Chinese tech companies are releasing their own open AI models as fast as companies in the US.”

US government concern over AI misuse
The incident has added to broader concerns within the United States over the potential misuse of artificial intelligence by China. Washington has previously warned that Beijing could use AI technologies to suppress dissent, manipulate online narratives, and conduct digital espionage.

While OpenAI has positioned itself as a leader in responsible AI development, the company has increasingly drawn attention to the potential risks posed by state actors seeking to co-opt its tools for geopolitical or authoritarian aims.

In publishing its findings, OpenAI said it aimed to inform the public and policymakers about emerging threats and the challenge of preventing abuse across globally distributed AI ecosystems.

Context: a growing AI arms race
The revelations come amid an intensifying race between China and the United States over leadership in artificial intelligence. Beijing has invested heavily in building domestic AI capabilities, while the US government has sought to restrict the export of advanced semiconductors and AI models that could boost China’s technological edge.

OpenAI’s enforcement actions underscore the difficulties facing AI developers in balancing openness with security. As models become more powerful and widely available, efforts to prevent misuse—particularly by state-affiliated actors—will likely remain a critical concern for the global tech industry.