China-linked accounts increasingly leverage generative AI to drive influence operations, from content creation to operational planning.
In early August, two Vanderbilt University professors published an essay exposing a cache of Chinese documents tied to the private firm GoLaxy. These documents revealed sophisticated AI use, not only for creating misleading content targeting audiences in Hong Kong and Taiwan but also for gathering detailed information on U.S. lawmakers—potentially for espionage or future influence campaigns. The essay garnered widespread attention, deservedly so.
However, these findings represent just the tip of the iceberg. A series of reports, takedowns, and incidents over the summer, from OpenAI, Meta, and Graphika, highlights the growing role of AI in China-linked foreign propaganda. Generative AI is now applied not only to content creation but also to operational tasks such as data collection and internal reporting to the state apparatus. This shift marks a new stage in Beijing’s information warfare, demonstrating the potential scope of an AI-driven future and underscoring the urgent need for attention from social media platforms, software developers, and democratic governments.
A review of these findings reveals five major trends:
Earlier campaigns used AI for fake personas and deepfakes, but recent disclosures reveal coordinated efforts to produce entire fake news websites that spread pro-Beijing narratives in multiple languages. Graphika’s Falsos Amigos report identified 11 such sites created between late 2024 and early 2025, employing AI-generated images as logos and cover photos to boost credibility.
These sites largely republished content from China Global Television Network (CGTN), presenting AI-generated summaries in English, French, Spanish, and Vietnamese, with machine-translated full articles. The tone and style varied for different audiences, and CGTN was cited only as a reference—effectively laundering propaganda under the guise of independent reporting.
OpenAI’s June report described similar tactics. Banned ChatGPT accounts had generated names, profile pictures, and personas for “news” pages and for individual accounts posing as U.S. veterans critical of the Trump administration, part of an operation dubbed Uncle Spam that was designed to amplify U.S. political polarization.
China-linked accounts also simulated engagement by posting comments and replies to create the appearance of organic discussion. For example, Pakistani activist Mahrang Baloch, a critic of China’s investments in Balochistan, became the target of a smear campaign on TikTok and Facebook built around a fabricated video, which drew hundreds of AI-generated comments in English and Urdu.
Another operation, Sneer Review, used ChatGPT to produce negative comments about a Taiwanese anti-CCP game and then to craft a “long-form article” claiming the game had provoked public backlash.
These examples show that generative AI is increasingly used to refine prior tactics such as propaganda laundering, covert content dissemination, smear campaigns, and creating fake social media personas.
Beyond content, generative AI is being used to improve operational efficiency. Between March and June 2025, OpenAI disrupted four China-linked operations that used ChatGPT. According to Ben Nimmo, OpenAI’s principal investigator, these operations combined influence campaigns, social engineering, and surveillance.
Coordinated posting networks amplified reach. The 11 fake news domains identified by Graphika were linked to 16 social media accounts across Facebook, Instagram, Mastodon, Threads, and X, mirroring CGTN English’s posting schedule. OpenAI also observed cross-platform activity on TikTok, X, Reddit, Facebook, and Bluesky, highlighting an expanding digital footprint.
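One signal investigators can use to link such accounts is posting-schedule correlation. The sketch below is a minimal illustration of that idea, not Graphika’s published methodology: it compares hour-of-day posting histograms with cosine similarity, and the timestamps are synthetic stand-ins for real platform data.

```python
# Minimal sketch (illustrative, not Graphika's actual pipeline): scoring how
# closely one account's posting schedule tracks another's, using normalized
# hour-of-day histograms and cosine similarity.
from collections import Counter
from datetime import datetime
import math

def hourly_profile(timestamps: list[datetime]) -> list[float]:
    """Normalized 24-bin histogram of posting hours (UTC)."""
    counts = Counter(t.hour for t in timestamps)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(24)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two histograms; 1.0 means identical schedules."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Synthetic example: a state outlet posting at 02:00, 08:00, and 14:00 daily,
# and a suspect account that mirrors that rhythm almost exactly.
outlet_posts = [datetime(2025, 1, d, h) for d in range(1, 8) for h in (2, 8, 14)]
suspect_posts = [datetime(2025, 1, d, h) for d in range(1, 8) for h in (2, 8, 14, 15)]
print(round(cosine(hourly_profile(outlet_posts), hourly_profile(suspect_posts)), 3))
```

Accounts whose schedules closely track a state outlet’s feed score near 1.0, while organic accounts typically diverge; in practice, researchers combine this kind of signal with shared infrastructure and content overlap before drawing conclusions.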
China-linked operators reportedly used AI to optimize posting schedules, content distribution, and engagement. They also requested code to scrape personal data such as profiles and follower lists from X and Bluesky, possibly to better target propaganda. The GoLaxy documents revealed profiles on 117 members of the U.S. Congress and more than 2,000 other political figures, suggesting the potential for highly personalized disinformation.
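How little code such collection requires is easy to underestimate. The sketch below pulls a public follower list from Bluesky’s unauthenticated AppView API (the documented app.bsky.graph.getFollowers endpoint); the handle and page limit are placeholder assumptions, and this illustrates the category of script operators requested, not the code they used.

```python
# Minimal sketch: collecting a public follower list from Bluesky's
# unauthenticated AppView API (app.bsky.graph.getFollowers).
# The handle below is a placeholder, not a real target.
import requests

APPVIEW = "https://public.api.bsky.app/xrpc/app.bsky.graph.getFollowers"

def fetch_followers(handle: str, max_pages: int = 5) -> list[dict]:
    """Page through a profile's public follower list via the cursor field."""
    followers, cursor = [], None
    for _ in range(max_pages):
        params = {"actor": handle, "limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(APPVIEW, params=params, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        followers.extend(data.get("followers", []))
        cursor = data.get("cursor")
        if not cursor:  # no more pages
            break
    return followers

if __name__ == "__main__":
    for f in fetch_followers("example.bsky.social")[:10]:
        print(f["handle"], "-", f.get("displayName", ""))
```

Twenty-odd lines, well within a chatbot’s reach, return structured audience data that could feed a targeting database; X, by contrast, gates equivalent data behind authenticated API access, which helps explain why operators asked the model to write scrapers.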
AI also supported internal reporting. OpenAI found ChatGPT being used to draft internal essays and performance reviews detailing operational steps, which could have allowed real-time refinement of tactics had the activity gone undetected.
These campaigns often focused on issues unrelated to China. Graphika reported content on U.S. food insecurity, tariffs, and youth events promoting ties with China, as well as topics like Iran-Israel tensions and the war in Ukraine. OpenAI highlighted attempts to influence U.S. policy debates, such as the debate over the closure of USAID.
Many operations aimed to meddle in foreign democracies. OpenAI’s Uncle Spam sought to exacerbate U.S. polarization. Meta removed 157 Facebook accounts and 17 Instagram accounts targeting Myanmar, Taiwan, and Japan that posed as locals while criticizing civil resistance movements or political leaders.
Pro-China content was also directed at young audiences in the Global South, promoting the sites as independent media while masking state backing. Graphika found evidence of targeting children aged 8–16, aiming to shape long-term perceptions in geopolitically strategic regions.
While the reports stop short of definitive attribution, the evidence points to China’s propaganda and security apparatus. Graphika found that the 11 domains were registered via Alibaba Cloud, mostly in Beijing, with links to state-owned CGTN affiliates. OpenAI noted a ChatGPT user claiming CCP affiliation. The Vanderbilt essay tied GoLaxy to the Chinese Academy of Sciences and documented its cooperation with intelligence, military, and CCP entities.
Despite pockets of traction, such as TikTok videos reaching 25,000 likes and X accounts drawing 10,000 views, most networks saw limited organic engagement. Resources for detecting and disrupting such operations vary widely across platforms. Meta’s and OpenAI’s takedowns show what sustained investment in defenses can achieve, but smaller platforms, and apps owned by China-based companies, may lack the incentives to act or face pressure not to.
As generative AI use expands worldwide, China’s party-state apparatus continues to leverage these tools for influence campaigns, smearing critics, and disrupting democratic debate. Private and governmental initiatives to enhance transparency, cross-platform collaboration, and resilience are essential to counter future AI-driven manipulation campaigns originating from Beijing.