The cyber risk landscape is complex and dynamic, which makes it inherently challenging to manage. Add to that the fact that organizations have limited resources, and it can feel downright unmanageable. How on Earth are we supposed to get everything done that needs to be done? The simple and unavoidable answer is: we don’t. We can’t. There are just too many things that could demand our attention.
To have any hope of successfully managing cyber risk, we have to be as cost-effective as possible in our choices: choices about which risks to focus on and, once that’s decided, which risk treatments to implement. Because at the end of the day, not all risks, nor all treatments, are created equal. This means we have to be able to compare the risks we face to determine which ones matter most, and compare our treatment options to determine which ones are likely to be most cost-effective. And comparison invariably means measurement.
Unfortunately, the common approach to risk “measurement” today involves someone from audit, infosec, or elsewhere simply rating “risk” on a 1-to-3 or 1-to-5 ordinal scale. Sometimes they rate “likelihood” and “impact” separately on ordinal scales and combine the two values to derive “risk.” Either way, a number of common problems make these ratings unreliable at best. In fact, over the past several years, working with numerous organizations across various industries, I’ve found that between 70% and 90% of these ordinal risk ratings are inaccurate in most organizations. And even when the ratings are accurate, their ordinal nature (red/yellow/green, high/medium/low, or 1-to-n) doesn’t translate into anything meaningful to executives. As a result, decision makers are left to their own subjective interpretation of how much to care about the rating.
Some organizations have tried to overcome the ordinal rating problem with risk-related metrics or key risk indicators (KRIs): patch levels, awareness levels, phishing test results, and so on. The intent is to provide “hard data” and remove the subjectivity of ordinal ratings. Unfortunately, these metrics still don’t mean anything to a business executive. Sure, thresholds are usually defined, but they are almost always defined arbitrarily and subjectively, without a clear understanding of the risk implications in business terms. In other words, how much less risk will the organization have if awareness levels go from 90% to 98%?
The good news is that better approaches to measuring cyber risk exist and are beginning to be widely adopted. A good example is Factor Analysis of Information Risk (FAIR), an open standard adopted by The Open Group. The Open Group even offers a certification for professionals who have attained a level of knowledge regarding FAIR. FAIR is also beginning to appear in degree programs at a number of universities, and the non-profit FAIR Institute has a rapidly growing membership of over a thousand professionals from around the globe.
Of course, measurement methods like FAIR represent change, and change isn’t always welcomed with open arms. There are those in the industry who are leery of, and sometimes even openly dismissive of, risk measurement. Most often, their concerns revolve around data quality and the dynamic nature of the risk landscape. Fortunately, FAIR compensates for uncertain data by leveraging methods and principles that are well established in other fields (e.g., calibrated estimation, PERT distributions, and Monte Carlo simulation). Besides, if you stop to think about it, the data situation doesn’t improve when you wave a wet finger in the air and rate risk on an ordinal scale. At least with modern measurement methods like FAIR, you can faithfully represent data uncertainty in both the inputs and outputs of an analysis.
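To make that concrete, here is a minimal sketch of the general technique, not the official FAIR model: calibrated (minimum, most likely, maximum) estimates feed PERT distributions, a Monte Carlo simulation combines event frequency and loss magnitude, and the result is a loss-exposure range rather than a single ordinal rating. All the estimate ranges below are hypothetical.

```python
import random
import statistics

def pert_sample(low, mode, high, rng=random):
    """Draw one sample from a PERT distribution over [low, high] with the
    given most-likely value, implemented as a scaled Beta distribution."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + rng.betavariate(alpha, beta) * (high - low)

def simulate_annual_loss(freq_range, magnitude_range, trials=10_000):
    """Monte Carlo over calibrated (min, most likely, max) estimates.
    Each trial draws an event frequency for the year, then sums an
    independent loss-magnitude draw for each event."""
    losses = []
    for _ in range(trials):
        events = round(pert_sample(*freq_range))
        losses.append(sum(pert_sample(*magnitude_range) for _ in range(events)))
    return losses

# Hypothetical calibrated estimates: 0-4 loss events per year (most
# likely 1), and $10k-$500k per event (most likely $50k).
losses = simulate_annual_loss((0, 1, 4), (10_000, 50_000, 500_000))
print(f"Median annual loss: ${statistics.median(losses):,.0f}")
print(f"95th percentile:    ${statistics.quantiles(losses, n=20)[-1]:,.0f}")
```

The point of the sketch is the shape of the output: a distribution of annualized loss in dollars, which a decision maker can weigh directly against budgets and risk appetite, instead of a “3 out of 5.”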
The bottom line is that risk measurement has to improve if we want to have a prayer of effectively dealing with the cyber risk landscape. This need has been recognized in a World Economic Forum study and in proposed changes to cyber risk regulations for the financial industry, to name just two examples. At some point in the not-too-distant future, organizations that aren’t using these modern risk management methods are going to be considered deficient. It’s another example of either being on the bus or under it.
About the author: Jack Jones is the EVP of Research and Development at RiskLens. He is the author and creator of the Factor Analysis of Information Risk (FAIR) framework and co-author of a book on FAIR entitled Measuring and Managing Information Risk: A FAIR Approach.