The analytic capabilities are there to pinpoint problem employees. But what to do with them? Bradley Barth reports.
Equifax has a vested interest in reducing risk. Ever since a devastating 2017 data breach affected roughly half of all American citizens, the credit monitoring company has implemented a series of initiatives designed to prevent a similar fiasco in the future. Among them: leveraging the company’s expertise in generating data-driven credit report scores to develop its own monthly scorecarding system that quantifies employees’ cyber risk behaviors.
“We’re doing this because our DNA in Equifax is obviously credit scoring and so we know how to do analytics… on this, and we’re just applying that same skill set to this problem,” said Equifax CISO Jamil Farshchi in a presentation at the 2020 RSA Conference.
Employee risk assessment and scorecarding fall under the larger umbrella of risk assessment services that can also help evaluate third-party vendors, supply chain and one’s own IT environment. Only in this case there’s a human twist: “Just like [with] technology, security professionals would like to know who is most vulnerable to exploitation, who is most targeted, and who could cause the most damage if they were successfully exploited,” says Michael Madon, SVP and GM, security awareness and threat intelligence products, at Mimecast.
While not a new concept, employee risk scorecarding is growing ever more advanced, thanks to better analytics, policies and workflows that make assessments more actionable.
“Sophisticated modelling, metrics and machine learning can allow tailored risk insights to be distributed throughout the organization – even down to the employee level,” says John Gelinne, managing director with Deloitte Risk & Financial Advisory. “Real-time transparency of employee behavior risks can help transform organizational security into a personal, shared responsibility, rather than an effort left solely to the security professionals charged with protecting the company.”
On the other hand, the ability to drill down and analyze each individual employee presents challenges as well, including how to respond to problem workers without them feeling persecuted.
Data Collection & Interpretation
Scoring a person’s cyber risk is very different from more traditional types of employee evaluations that might, for example, collect key performance indicators to measure productivity or efficiency over a period of time, says Alan Brill, senior managing director, cyber risk, at corporate investigations and risk consulting firm Kroll. That’s because cyber risk assessments are predictive and must anticipate future actions instead of simply judging past ones.
“A security scorecard is focused on how an employee is likely to perform on certain tasks, like not clicking on a link or not responding to a business email compromise message,” Brill explains. “For these behaviors, finding out that an employee failed – by clicking on a link, for example – it’s kind of too late.”
The good news is, predictive models are getting better. There is more data to digest, and more advanced analytics and artificial intelligence tools for developing specific employee-centric metrics. Mimecast goes so far as to name names when formulating predictive security risk scores for clients’ employees, from the lowest-level workers to upper executives.
But if this is so, then what kinds of data should companies be crunching?
For starters, “employee risk assessments or scores need to incorporate factors such as the sophistication level and observed performance level of staff” members, says Madon.
Such skills can be measured by conducting ongoing simulations of phishing and smishing scams, and studying employees’ clickthrough rates to see how often they are tricked. Knowledge assessment tests and training exercises are also useful tools that, altogether, provide a “broad understanding of what an organization knows and is a normal starting point for companies,” says Kurt Wescoe, chief architect for security awareness training at Proofpoint.
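As a minimal sketch of the clickthrough measurement described above, the per-employee failure rate can be computed by tallying each worker's results across simulated campaigns. The log format and employee names here are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical sketch: per-employee clickthrough rates from phishing
# simulations. Log entries and names are invented for illustration.
from collections import defaultdict

simulation_log = [
    # (employee_id, clicked_the_lure)
    ("alice", True), ("alice", False), ("alice", False),
    ("bob", True), ("bob", True), ("bob", False),
]

def clickthrough_rates(log):
    """Return each employee's fraction of simulations they fell for."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for employee, did_click in log:
        sent[employee] += 1
        if did_click:
            clicked[employee] += 1
    return {e: clicked[e] / sent[e] for e in sent}

rates = clickthrough_rates(simulation_log)
```

Tracked per employee, these rates become one input to the broader knowledge-assessment baseline Wescoe describes.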
But that’s far from a complete picture. That’s why organizations are also increasingly looking at user behavior histories to spot any patterns or deviations from normal activity that might indicate increased risk.
“Increasingly, potential risk indicators can be measured through user behavior analytics including virtual (e.g. internal and external network traffic monitoring, data exfiltration and access attributes) to non-virtual (e.g. fraud, physical security, time-card violations or other compliance issues, etc.),” says Gelinne. These virtual and non-virtual attributes “can be correlated to provide actionable analytics and real-time ‘tippers’ to proactively identify unexpected behaviors that could unintentionally or intentionally open the company’s front door to hackers.”
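One simple way to correlate the virtual and non-virtual indicators Gelinne mentions is a weighted composite score. The indicator names, weights, and normalization below are purely illustrative assumptions; a real deployment would derive weights from the organization's own loss data.

```python
# Hypothetical sketch: folding virtual and non-virtual risk indicators
# into one weighted score. Names and weights are illustrative only.
WEIGHTS = {
    "phish_click_rate": 0.4,        # virtual: simulation failures
    "offhours_data_transfer": 0.3,  # virtual: anomalous exfiltration signal
    "badge_violations": 0.2,        # non-virtual: physical security
    "timecard_flags": 0.1,          # non-virtual: compliance issues
}

def risk_score(indicators):
    """Each indicator is normalized to [0, 1]; higher means riskier."""
    return sum(
        WEIGHTS[name] * min(max(value, 0.0), 1.0)
        for name, value in indicators.items()
        if name in WEIGHTS
    )

score = risk_score({
    "phish_click_rate": 0.5,
    "offhours_data_transfer": 0.2,
    "badge_violations": 0.0,
    "timecard_flags": 1.0,
})
# 0.4*0.5 + 0.3*0.2 + 0.2*0.0 + 0.1*1.0 = 0.36
```

A sudden jump in an employee's composite score is the kind of real-time "tipper" the quote describes.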
Another important variable is the role employees play in an organization, and the size of the target on their backs. That’s why companies may want to look at attack data: “to focus on which individuals or job functions are the most targeted at an organization, and then combine this with seniority and system and data access levels,” says Madon. After all, attackers know high executives with high-level privileges “have a lot of influence and access, and are very easy to track…”
But role-based risk goes even deeper than that. “How much risk an employee represents… depends on what business-related loss event scenarios they are relevant to,” says Jack Jones, chairman of the FAIR Institute, a non-profit organization founded to develop standard information risk management practices. “For example, a customer support representative is inherently less relevant than the director of innovation if you’re worried about the loss of new product details. Conversely, the [customer service] rep is inherently more relevant than the director of innovation if you’re worried about the loss of customer records.”
“Similarly, the software engineers who are responsible for securely coding a key business application are more relevant to scenarios involving the availability and/or data breach resistance of that application than are the employees in sales,” Jones continues. “So, although role is a crucial parameter, a role’s relevance has to be understood within the context of the loss event scenarios the business cares about.”
Another factor that’s not quite as easy or obvious to calculate is what kinds of controls, processes and protections are already in place to protect employees from, well, themselves. “The capability of a company network to filter out threats must be understood in evaluating the risks that may confront an employee,” says Brill. “For example, if an email system can detect and reject fake-source emails reliably, that’s a risk that is less likely to be faced by an employee. Without such technology, we have to rely on an employee to do a more nuanced job in evaluating emails, what they purport to be and whether they should be responded to.”
With this influx of data, companies can now generate some truly eye-opening metrics. One example is “value at risk” (VaR), a metric that in financial circles generally measures the potential losses that a particular investment may incur.
In terms of one’s workforce, VaR “places risk in dollar or mission terms so employees… have a common understanding of risk,” explains Gelinne’s colleague Kelly Miller Smith, a cyber principal with Deloitte Risk & Financial Advisory. Calculated using cyber risk quantification financial modelling techniques, VaR allows companies to measure an employee’s actions or behaviors against leadership’s priorities – whether those priorities are remediating vulnerabilities or protecting sensitive data. “This can be used to explain the benefits of the individual steps employees can take to mitigate risk,” Smith explains.
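One common quantification approach, shown here as a rough sketch, is Monte Carlo simulation: model how often an employee falls for an attack, sample the resulting losses, and read off a high percentile as the VaR. The attack frequencies and loss magnitudes below are invented for illustration and do not reflect Deloitte's actual modelling.

```python
# Hypothetical sketch of a cyber "value at risk" estimate via Monte Carlo.
# Assumes (unrealistically simply) that every successful phish causes a
# loss; all frequencies and dollar figures are illustrative.
import random

def simulate_annual_loss(click_rate, attempts_per_year=50,
                         loss_per_incident=(10_000, 250_000), rng=random):
    loss = 0.0
    for _ in range(attempts_per_year):
        if rng.random() < click_rate:  # employee falls for the lure
            loss += rng.uniform(*loss_per_incident)
    return loss

def value_at_risk(click_rate, trials=10_000, percentile=0.95, seed=1):
    """95th-percentile annual loss across simulated years."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(click_rate, rng=rng)
                    for _ in range(trials))
    return losses[int(percentile * trials)]
```

Expressing the output in dollars, rather than abstract scores, is what lets leadership weigh one employee behavior against another.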
Whatever the employee metric, however, organizations should be prepared to study it over the long haul, according to Joseph Carson, chief security scientist and Advisory CISO at Thycotic.
“Metrics that only measure a point in time are typically useless. The important metric that matters is how are you improving over time,” says Carson. “Security awareness training is not a checkbox, one-time-only project. It is a continuous learning process.”
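The improvement-over-time measurement Carson calls for can be as simple as fitting a trend line to periodic failure rates. This least-squares slope is a minimal sketch; the quarterly figures are made up.

```python
# Hypothetical sketch: least-squares slope of quarterly phishing-failure
# rates. A negative slope means the rate is dropping, i.e. improving.
def trend(rates):
    n = len(rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(rates) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rates))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

quarterly_failures = [0.40, 0.31, 0.22, 0.15]  # illustrative data
slope = trend(quarterly_failures)  # negative: the program is working
```

A flat or positive slope, by contrast, signals that the training regimen itself needs rethinking.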
A client once asked Kroll to send its employees a simulated phishing email on the very same day the company had sent out an anti-phishing communication. Despite the warning that morning, “50 percent of the employees fell for it,” says Brill. “This is an important reminder. Employee learning is only as good as the instruction they are given. If they perceive it as unimportant, or something that they can sort of click through without really paying attention, you can’t expect effective uptake of the message.”
Indeed, all the risk assessment data in the world doesn’t amount to much if nothing is done to institute meaningful change. For companies, managing poor-scoring employees – including getting them to buy in and comply – constitutes both a challenge and an opportunity.
At RSA, Farshchi said workforce attitude toward risk scoring “improves over time… Initially, it’s kind of rough because people do look at it more punitively, but over time they start to realize, ‘Oh, I can actually have direct control over this. And it’s not like I’m just a victim here.’”
By and large, experts say it’s important that employees not feel shame, but rather derive a positive, constructive outlook from the scorecarding experience – assuming the company cannot prove criminal intent, of course. Also, the assessment process must be foolproof, as false allegations or violations could undermine it.
“People who feel that they are being persecuted, singled out, shamed or treated unfairly are unlikely to be open to full cooperation with security measures,” says Brill, who for this reason encourages companies to involve human resources, legal specialists and labor relations teams in the scorecarding process. “Actions that may seem reasonable to technology specialists may be illegal or may violate a union agreement. Employees who feel that they are being mistreated or discriminated against can file complaints with state or federal agencies leading to investigations and potential penalties.”
Once these precautions are in place, companies have numerous options.
“Response should be predicated on context and cause,” says Jones. “If the organization is worried about malicious acts, then stronger monitoring as a deterrent and/or removing higher-risk employees are appropriate. If your focus is on minimizing security-related errors, then training, reassignment or providing additional resources can be appropriate responses, depending on whether the root cause is awareness, skills or workload.”
Those companies that do retrain should also make sure the regimen is not “basic and generic,” but rather “specific to an employee’s role,” Jones continues.
To support its high-risk workers, Equifax offers self-service retraining, but also the opportunity for these employees to consult with the security team, face to face. “It’s labor intensive, obviously, but it’s a worthy investment,” said Farshchi.
Still, a bad employee risk score doesn’t necessarily mean only the employee needs fixing.
“…[S]ometimes the problem comes down to the policies, processes, or tools that are provided to employees. Unclear, poorly defined expectations and poorly designed processes/tools are often the root cause, versus the employees themselves,” said Jones.
Madon agrees, noting, “People who score as highly vulnerable should certainly be training more, but also should have greater security protections, as well as have technology and business processes that they rely on not be susceptible to single points of failure…”
“Theoretically, if an employee clicks a malicious link in a phishing email, there was a failure of the automated defense tool somewhere in the chain,” says Brill’s colleague Chris Kudless, VP in the Cyber Risk practice at Kroll. “Viewing an employee’s actions through the lens of the automated defense tool enables the security team to identify where in the chain that tool failed and how to shore up that failure…”
While a flawed security tool leaves employees prone to their own mistakes, a strong, effective set of controls actually complements the risk scorecarding process nicely.
“For example, you may choose to enforce two-factor authentication via a CASB [cloud access security broker] solution for a user that has shown a deficiency in password management,” says Wescoe. “Alternatively, for users that are falling for phishing [sim tests], you may decide to apply stronger email security policies and apply sandboxing to all of their emails.”
“This can cut both ways, too,” Wescoe adds, “as you may be able to reduce controls around users that have demonstrated strong security knowledge” during their assessments.
Despite its virtues, the employee risk scorecarding process must continue to be refined. To understand why, one need only look at the scourge of COVID-19 phishing campaigns that have oozed out of the shadows since the start of the coronavirus pandemic, preying on the fears of an anxious public looking for answers in an uncertain time. These campaigns work for a simple reason: Humans remain the most consistently vulnerable element of any business. And malicious actors will stop at nothing to prey on their fallibility.