China-based influence networks have been using ChatGPT to generate everything from social media comments to performance reviews for their own bosses.
OpenAI researchers found that Chinese actors are leaning on AI to churn out posts and comments in several languages, including English, Chinese, and Urdu, often spreading them across platforms like TikTok, X, Reddit, and Facebook. The apparent goal: sway opinion, monitor discussions, and make the outreach look like it comes from ordinary users rather than coordinated accounts.
One notable campaign, which OpenAI calls Sneer Review, used ChatGPT to generate short comments on topics as varied as the dismantling of the US Agency for International Development and a Taiwanese game critical of the Chinese Communist Party. At times the operation went further, fabricating entire exchanges by generating both the original post and the replies to simulate real engagement.
AI-Fueled Information Campaigns
OpenAI’s report said the Sneer Review actors even asked ChatGPT to write a detailed breakdown of their own internal operations, effectively submitting AI-generated performance reviews that showcased their tactics and results. The content wasn’t limited to public influence posts, either; the network also produced longer articles and fake reports claiming the same game had triggered major controversy online.
Beyond social media posts and performance reviews, another network, posing as journalists and analysts, used ChatGPT to draft posts, write account bios, translate messages into English, and even comb through data. At one point, the network used the tool to analyze content related to a US Senator’s official correspondence, although OpenAI couldn’t confirm whether any of that analysis was ever sent out.
Some campaigns also used ChatGPT to produce their own marketing material, boasting about orchestrating fake social media operations and using social engineering to recruit information sources, claims that matched activity OpenAI observed across the web. OpenAI’s past findings have also described other China-connected influence projects that monitored protests in the West and produced real-time reports fed back to Beijing’s security services.
The crackdown extended beyond China: over the last three months, OpenAI disrupted ten different covert campaigns that relied on its AI tools. Four of those most likely traced back to China, with the remaining groups linked to Russia, Iran, the Philippines, Cambodia, and North Korea.
Ben Nimmo, principal investigator on OpenAI’s intelligence team, described the ongoing campaigns as varied and wide-ranging but generally unsuccessful at attracting attention from real users. Deploying better tools like AI, Nimmo explained, doesn’t necessarily mean a campaign will gain more traction online.