Meta announced it has neutralized a Chinese disinformation campaign, known as “Spamouflage,” which the social media company called the largest cross-platform covert influence operation it has tracked to date.
The campaign posted positive commentary about China and criticisms about the United States, Western foreign policies, and critics of the Chinese government, including journalists and Chinese medical researchers who focused on COVID-19.
In its Q2 Adversarial Threat Report, Meta said it removed 7,704 Facebook accounts, 954 pages, 15 groups and 15 Instagram accounts for violating its policy against coordinated inauthentic behavior. Meta said Spamouflage has been active in China since 2019 and has targeted regions across the world, including Taiwan, the United States, Australia, the United Kingdom, Japan, as well as global Chinese-speaking audiences.
Based on its most recent investigation, started in late-2022, Meta uncovered a large and prolific covert influence operation that was active on more than 50 platforms and forums, including X (formerly Twitter), YouTube, TikTok, Reddit, Pinterest, Medium, Blogspot, LiveJournal, VKontakte, Vimeo, and dozens of smaller platforms and forums, as well as Facebook and Instagram.
As for the specific content, Meta reported that Spamouflage posts criticizing Chinese virologist Yan Limeng – a frequent target of the operation – also appeared on websites like TripAdvisor. Another target – Chinese-American journalist Jiayang Fan – appeared to have been mentioned on the forum of a Luxembourg newspaper, the Luxemburger Wort.
What does it all mean to security teams?
The move by Meta comes at a time of rising tensions between the United States and China, as multiple high-profile cyber campaigns by hackers linked to Beijing have been uncovered in recent months. Layered on top of that have been contentious debates between U.S. and Chinese leaders over the origins of the pandemic and the fate of Taiwan. The island off the coast of China has become more strategic in recent years because of its importance in the chip industry: Taiwan produces more than 60% of the world’s semiconductors and 90% of the most advanced chips. China has long claimed the island of 23 million people as part of Chinese territory, and national security experts worry Beijing may be mulling a potential invasion sometime over the next decade to force the issue.
While the Meta news has clear importance to social media consumers, policy analysts, election officials, and the intelligence community, it could also have an impact on the day-to-day operations of corporate security teams.
“While on the surface, these online influence operations may seem to have little relevancy...there are several ways they can pose a direct threat to a business or other organization,” said Karim Hijazi, managing director of investment firm SCP & Co. “The core issue here is how these campaigns are 'omni-channel' and utilize more invasive means of proliferating disinformation, like spear-phishing and impersonation efforts that can be very convincing and destructive.”
David Mitchell, chief technical officer at HYAS, said security personnel, whether executives or operators, should pay attention to disinformation campaigns just as they would an attack campaign. Disinformation can target a company directly, and the links it spreads may also carry phishing lures or malware that employees are likely to click on if the message fits their views.
Mitchell floated the idea that verification-for-profit schemes could help crack down on campaigns like Spamouflage.
“While it’s fantastic that Meta has finally taken a proactive stance against disinformation campaigns, this problem is going to continue to get worse during geopolitical strife and upcoming election seasons,” Mitchell said. “Because these platforms do not verify the identity of accounts, nor charge for their services, they are ripe for coordinated nation-state abuse. Dealing with these campaigns will always be a global form of ‘whack-a-mole’ and will not change until social media networks change how they are monetized and valued — just a few dollars per user, per month significantly increases the barrier to entry for malicious actors.”
Ken Westin, field CISO at Panther Labs, added that CISOs of social media, news, and other sites should identify these fake accounts to reduce the amount of disinformation on their applications. Westin said organizations with a social media presence should also make sure their accounts are not compromised and being used to share disinformation.
“The disinformation stories can be used to target an organization, and their own social media accounts may be used to help boost these stories as well,” said Westin. “It’s in the best interests of organizations to weed out disinformation from their sites as publishing these types of stories can erode trust and confuse customers.”