Nordstrom was hit with a breach a couple of years ago when a contracted employee mishandled data. Today’s columnist, Tony Pepper of Egress, says even companies with the best security awareness programs can experience a breach when staff or anyone else with access to sensitive data becomes overconfident about security.

The phrase “a little knowledge is a dangerous thing” rings true whenever someone’s misplaced confidence, built on incomplete understanding, leads them into trouble. Behavioral psychologists call this the Dunning-Kruger effect: people with a low level of knowledge dangerously overestimate their skill and make errors as a result. This effect partly answers one of the most enduring conundrums in cybersecurity: why do people keep clicking on bad emails, causing email data breaches?

IT security teams grow increasingly frustrated: despite training employees to recognize phishing emails, time after time malicious links are clicked, emails are misdirected or under-protected, and data is compromised.

Perversely, training itself can contribute to insider data breaches. A proportion of employees who attend basic security training courses will experience the Dunning-Kruger effect. They believe they know all they need to know about security and data breach risks, which gives them unjustified confidence in their ability to keep data safe.

Once outside the training room and working among the distractions and pressures of a typical business day, employees make judgments about data security and safe sharing practices that go beyond, or fail to align with, what they have been taught. The little knowledge they have gained breeds overconfidence and can increase the frequency of errors that lead to data breaches. For example, an employee who has received phishing training may think they know all the signs of a phishing email. When faced with one in their inbox, they might, with a misguided sense of confidence, click the link anyway, believing they would have noticed something suspicious.

The need to balance training and technology

Of course, I’m not advocating throwing the training manual out the window – far from it. Training has a pivotal role to play in creating a security-conscious culture by setting expectations of employee behavior and responsibilities. In fact, we enroll every employee at Egress in regular training. But on its own, training will not protect the organization.

However, it also doesn’t make sense to lock down everyday tools like email with data protection solutions that create friction and make those tools difficult to use. This defeats the purpose of email as a productivity aid and typically drives employees toward unsanctioned alternatives, creating an even greater loss of control over corporate data.

Security teams need to strike the balance between productivity and protection; between trusting employees to apply the security training they’ve been given and recognizing that everyone can make an error. It’s simply not possible to train humans out of making mistakes, especially when they are under pressure or – ironically – when they believe that they are not likely to make a mistake in the first place!

We need to create a safety net of security solutions that spot the errors people miss. Contextual machine learning offers us a way to solve this kind of problem.

By analyzing a user’s typical behavior and the data they interact with, contextual machine learning algorithms can identify when behavior becomes inadvertently or intentionally risky. For example, when a spear phishing email closely mimics a genuine contact address, contextual machine learning can identify it as suspicious and warn the user before they reply. Similarly, when a hurried user selects the wrong recipient from a list of autocomplete options, they are alerted that James – their external supplier – does not normally receive the confidential customer details that should have been addressed to James, the company’s customer service manager. Effectively, we intercept the high-risk activity before the mistake gets made.
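To make the misdirected-email example concrete, here is a deliberately simplified sketch of the underlying idea. It is not Egress’s actual product or a real machine learning model – just a toy history lookup standing in for the contextual models described above: each recipient is associated with the categories of data they have previously received, and an outgoing email is flagged when the recipient has never before received that category. All names and categories are invented for illustration.

```python
from collections import defaultdict

class RecipientRiskChecker:
    """Toy stand-in for contextual risk scoring: flag an outgoing email
    when its recipient has never received this category of data before."""

    def __init__(self):
        # Maps recipient address -> set of data categories seen historically.
        self.history = defaultdict(set)

    def record(self, recipient: str, category: str) -> None:
        """Learn from an email the user actually sent."""
        self.history[recipient].add(category)

    def is_risky(self, recipient: str, category: str) -> bool:
        """True if this recipient has no history of receiving this category."""
        return category not in self.history[recipient]

checker = RecipientRiskChecker()
# Normal past behavior: the internal James gets customer data,
# the supplier James gets purchase orders.
checker.record("james@internal.example.com", "confidential-customer-data")
checker.record("james@supplier.example.com", "purchase-orders")

# A hurried user autocompletes the wrong James: flagged as risky.
print(checker.is_risky("james@supplier.example.com", "confidential-customer-data"))  # True
# The intended recipient matches past behavior: no alert.
print(checker.is_risky("james@internal.example.com", "confidential-customer-data"))  # False
```

A production system would score many more signals (sender reputation, message content, time of day) probabilistically rather than with a hard set-membership test, but the principle is the same: compare the current action against the user’s established pattern and interrupt only on a mismatch.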

Working with, not against, human nature

The machine learning approach works best when the software runs in the background, only interrupting the user’s workflow when genuine risks are identified. Employees don’t appreciate constant alerts, which can give rise to click fatigue and – once again – actually increase breach risk as users dismiss alerts about genuine issues. Contextual machine learning minimizes the number of false positive alerts and users know that when an alert does appear, it’s in their best interests to act on it. This way, the tool becomes a trusted ally, not an annoying impediment.  

Human-activated data breaches have remained an intractable cybersecurity risk for too long. Accounting for the Dunning-Kruger effect as part of a multilayered approach helps build a cybersecurity strategy that will actually succeed with real people.

Contextual machine learning represents a huge step forward in reducing human error, while still allowing email to play the crucial role in productivity it was designed to deliver. Allied with ongoing training, it lets users safely apply the knowledge they gain, without putting data at extra risk.

Tony Pepper, co-founder and chief executive officer, Egress