Disinformation campaigns may seem a problem primarily facing social media companies that need to regularly strip false information from their platforms. But the fallout for targeted businesses can be substantial, with security teams often expected to minimize the damage.

The spread of intentionally false information has been part of the business landscape since well before the social media era. In 1928, the makers of So-Bos-So, New York’s most popular product for shooing flies from cattle, successfully sued a smaller brand for instructing salesmen to warn stores they could be “fined for selling” So-Bos-So, which was “subject to government seizure.”

But technology shifted the approach of such campaigns, now often managed by so-called “dark” public relations firms on behalf of businesses or nation states. The risk today lies in speed, virality and how quickly thoughts planted by a disingenuous actor are laundered by real people through retweets and other forms of online distribution.

And that, combined with challenges tied to attribution, makes disinformation a problem for CISOs.

“It’s equivalent to the cartoon snowball rolling down the hill,” said Richard Rushing, the chief information security officer of Motorola and a member of CyberRisk Alliance’s Cybersecurity Collaborative, a forum of CISOs. “If it starts collecting stuff, halfway down the mountain it’s pretty much unstoppable, regardless if it’s false.”

What’s at risk

For companies, disinformation campaigns can result in very real reputational damage or hits to the bottom line.

Consider, for example, the moral panic directed at Wayfair when fringe conservative groups posted conspiracy theories claiming the site was being used to traffic children. Or when conservative activists falsely spread a rumor that Starbucks was holding a “Dreamer Day” to disrupt what they said was a liberal haven. Also telling are statements from Hong Kong authorities that as much as 20 percent of local stock market manipulation happens over social media, particularly in small-cap stocks.

Often companies are targets as part of broader political campaigns. What’s less clear is how often companies are intentionally using these tactics to harm each other the way Russia uses those techniques against the United States.

“The reason we see the geopolitical stuff is that we care about geopolitical stuff,” said Camille Francois, chief innovation officer of the influence campaign monitoring company Graphika. “We aren’t looking for companies targeting other companies.”

But it is happening, she added.

“We’ve had companies come to us and ask us whether negative social media posts are Russian bots,” she added. “We’ve had to tell them, ‘No, those are just people who are mad at you.’” 

Many dark PR firms have been traced to Russia and the Philippines, likely leveraging the same talent and online tactics whether hired to disrupt businesses or governments. To study their capabilities, researchers at Recorded Future hired two Russian-speaking firms in 2019 – one to prop up a fictional British company and one to tear it down. They were able to place an article in a “century-old,” well-established newspaper and several other media sources, as well as operate social media campaigns to boost their influence. 

That said, identifying the entity funding the campaigns is often more difficult. As Francois said, a company could run a campaign claiming Brand X’s product is poisoned, but so long as the tweets don’t end with “so, buy Brand Y,” the effort may be very hard to trace.

It is fair to say, though, that disinformation campaigns are not being initiated by sizable, established companies, which have the sense to know that “success” from such a campaign also heightens the potential negative publicity or legal fallout of being caught, said Sam Small, chief security officer of the ZeroFox online reputation management service.

“Companies of a certain size have in-house counsel or they retain attorneys, and they have chief risk officers, and they have investors and stakeholders who just don’t want to be associated or affiliated with those things,” he said. 

An information security or a marketing problem?

But why is this a CISO problem? Researchers agree that disinformation can be approached as a risk issue, an information issue, a marketing issue, or a security issue.

But there are reasons that many CISOs keep a hand in this game, why companies like ZeroFox and Graphika market and speak at cybersecurity conferences, and why, generally, social media propaganda gets lumped in with other cyberwarfare.

Disinformation “becomes a security problem when threat actors are targeting your business’s ability to operate, or targeting you or your customers or your employees via impersonation,” Small said.

Practically speaking, monitoring for disinformation closely resembles a threat intelligence problem. There are similar asymmetries, similar conceptual processes for verifying legitimate posters and rooting out the phonies, and similar philosophical underpinnings: fake data in, bad results out.
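To make the threat-intelligence analogy concrete, here is a minimal sketch of heuristic account scoring of the kind such monitoring pipelines conceptually perform. The account features, weights, and thresholds below are entirely hypothetical, chosen for illustration; they do not reflect any vendor's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical features a monitoring pipeline might collect per poster
    age_days: int          # account age in days
    followers: int
    following: int
    posts_per_day: float
    duplicate_ratio: float # share of posts that are near-verbatim copies

def inauthenticity_score(a: Account) -> float:
    """Toy heuristic: each suspicious signal adds weight; higher = more bot-like."""
    score = 0.0
    if a.age_days < 30:
        score += 0.3                  # freshly created account
    if a.followers < a.following * 0.1:
        score += 0.2                  # follows many, followed by few
    if a.posts_per_day > 50:
        score += 0.3                  # superhuman posting cadence
    score += 0.2 * a.duplicate_ratio  # copy-paste amplification behavior
    return min(score, 1.0)

def flag_suspects(accounts, threshold=0.5):
    """Return the accounts whose combined signals cross the review threshold."""
    return [a for a in accounts if inauthenticity_score(a) >= threshold]
```

In practice, of course, real systems combine many more signals and human review; the point of the sketch is only that disinformation monitoring, like threat intelligence, is a filtering problem over noisy indicators rather than a single detection rule.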

Rushing, for example, was among the Motorola leaders involved in identifying an appropriate response to disinformation targeting the telecommunications industry at large: online rumors that 5G caused COVID-19. Those claims moved from the fringe to the mainstream, and even led to a full-blown arson attack on telecom infrastructure in the United Kingdom.

Company leadership, on behalf of both internal and external stakeholders, often looks to the CISO to explain how the false message infiltrated the internet and how best to respond. Rushing pointed to a couple of lessons from the 5G experience that go beyond traditional cyber defense strategies. For one, those targeted need to quickly leverage allies in industry, standards bodies, and research and academic groups to put up a unified front, shoot down the false statements, and formulate a response. Rushing also said companies and their security teams need to understand that, when established groups are infected with false information, no issue is too silly to take seriously.

“Most companies are able to handle things they feel are a strategic risk,” agreed Francois. “You just need to consider disinformation a strategic risk and develop an ability to do forensics and assessment, without over-pivoting.”