Disinformation has taken on a new life in the information age. Today, social platforms have become the battlefield for information warfare. Emerging insights reveal that there’s potentially considerable overlap between how malware infects a computer and how malicious ideas work their way into the minds of people. 

Most of the time, we think of malware as a malicious idea, carefully crafted in a computer language, that takes advantage of flaws in information processing to give an attacker the ability to run their instructions on the host system with escalated privileges. Of course, for malware to run, it must have one or more vulnerabilities to exploit. Are there similar vulnerabilities or processing flaws in how people process information? Absolutely. In fact, cognitive scientists have spent the past century developing a body of knowledge related to these flaws.

Ivan Pavlov observed, and B. F. Skinner later expanded upon, the finding that stimuli invoke a response that gets reinforced through conditioning. Solomon Asch showed that inducing conformity was also possible: when a scenario gets carefully organized using factors such as peer pressure, many test subjects go against an obvious truth and align with the group. It’s even possible to watch values shift as people make choices outside of what they consider normal, experience discomfort over those decisions, and then rationalize them. Leon Festinger explained this phenomenon as cognitive dissonance: when an action contradicts a person’s values, the resulting discomfort can be alleviated by manufacturing justifications to support the action.

The last vestiges of belief in the infallibility of human thought were demolished by cognitive scientists Daniel Kahneman and Amos Tversky, pioneers of the study of heuristics and biases. They identified several systemic flaws in human thinking that are innate and ubiquitous; flaws that are practically inescapable.

The Availability Heuristic, for example, describes our tendency to frame a topic in terms of examples that come to mind most readily, rather than doing analysis or research. Many people feel afraid when a plane hits some turbulence, or even have a categorical fear of air travel, although the data indicate they are far more likely to die in a car accident. When people die in a plane crash, it’s reported around the world, making examples come to mind more readily. When people die in cars, from smoking, or from any number of other causes that are more numerous but less salient, the story becomes far less captivating. It’s just one example among the many biases and heuristics that have been identified. As Dan Ariely has pointed out, human beings are “predictably irrational,” and this predictable irrationality has become the foundation of exploitability.

Introducing mental malware

In a disinformation campaign, a carefully crafted string of text gets delivered to exploit flaws in human information processing, allowing the attacker’s ideas to run in the mind of someone susceptible to the attack via a specific cognitive vulnerability. Communication on social platforms reinforces the message, expanding reach and facilitating completion of the attacker’s objectives. Whether hackers attack computers with malware or people with disinformation, the attack flows from vulnerability to exploitation to maintaining and expanding control.

What can security teams do?

Mitigating the impact of mental malware has many similarities to stopping malware on information systems: it comes down to prevention, intelligence, and response to mitigate the worst outcomes. For mental malware, we have to stop attackers from developing the infrastructure required to deliver attacks, as well as improve the level of scrutiny people apply to incoming information. If we examine the Disinformation Kill Chain, the Build phase, where adversaries accumulate compromised accounts and establish bots on social platforms, offers an optimal place for disruption. Increased efforts to prevent account compromise and to identify malicious activity will make it more difficult to successfully carry out disinformation campaigns.
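To make Build-phase disruption more concrete, here is a minimal sketch of one signal a platform might look for: several distinct accounts posting near-identical text within a short time window, a common fingerprint of bot-driven amplification. The data shape, function name, and thresholds are illustrative assumptions, not any platform’s actual detection rule.

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3, window_seconds=300):
    """Flag text posted by several distinct accounts within a short window.

    `posts` is a list of (account_id, timestamp_seconds, text) tuples;
    the field layout and thresholds are hypothetical, for illustration.
    """
    clusters = defaultdict(list)
    for account, ts, text in posts:
        # Normalize whitespace and case so trivial edits don't evade grouping.
        key = " ".join(text.lower().split())
        clusters[key].append((account, ts))

    flagged = []
    for key, events in clusters.items():
        events.sort(key=lambda e: e[1])
        # Flag the text if enough distinct accounts posted it
        # inside any single window of `window_seconds`.
        for _, start in events:
            accounts_in_window = {a for a, t in events
                                  if start <= t <= start + window_seconds}
            if len(accounts_in_window) >= min_accounts:
                flagged.append(key)
                break
    return flagged
```

For example, three accounts posting the same slogan within five minutes would be flagged, while an ordinary post from a single account would not. Real platforms combine many such signals with account-reputation data, but the flow (group, count distinct actors, threshold) is the basic shape of coordination detection.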

Finally, applying “patches” to our thinking may help reduce exploitable vulnerabilities, such as susceptibility to narratives rooted in fear or hate. These mental patches may come in the form of more inclusive thinking, along with counseling and dialog to resolve social issues. Many of the same ideas we have refined over the years to deal with attacks on information systems can also help reduce the risk associated with mental malware.

Brandy Harris, assistant dean, Grand Canyon University (GCU); Joe Urbaszewski, cyber range coordinator, GCU; Kristina Rivera, faculty specialist, GCU; Mike Manrod, CISO, GCE, @CroodSolutions