How can you know if a technology or technical process is approaching obsolescence and ready to be redefined or replaced? Age isn't the only indicator that counts, as there are a multitude of “older” technologies that still provide benefits generations after they came into being.
In fact, the amount of net benefit a technology provides is probably the best indicator. When benefit declines to the point where the technology creates more pain than it resolves, you can bet that users and innovators alike will seek out new ways to solve the problem.
For IT security practitioners, questions arise every day as to whether existing security technologies and processes are providing enough net benefit to justify their continued use. If a security control doesn't keep out attackers, or a risk reduction process doesn't actually reduce risk level – why spend the time, money and resources to keep doing the same-old-same-old?
Let's examine the usefulness of vulnerability management (VM) processes in a typical enterprise today. VM is a systematic process of identifying, prioritizing, and then mitigating vulnerabilities. These flaws remain one of the most common vectors attackers and malware use to get into a network. Vulnerabilities are the "secret entrances" to networks, and once an attacker finds a secret entrance – the network can be exploited.
So, VM sounds like a worthwhile endeavor for a security management team, right? A vulnerability management program should reduce the number of vulnerabilities, particularly critical ones, thus reducing the chance of an intrusion, theft, or attack. But as most IT security practitioners will tell you, the long-standing approach to VM is delivering less and less value over time, and threatening to become an obsolete, irrelevant security process.
Why? Let's examine the reasons that “traditional” VM programs fail in delivering sufficient net benefit, and what security teams can do to return VM into a functioning, beneficial security practice.
Problem: Disruptive vulnerability scans conducted infrequently
Solution: Non-invasive vulnerability detection conducted daily
VM starts with finding the weaknesses in the first place. Traditional active vulnerability scanners find flaws by testing lots of signatures against hosts to see if they are present. But active scanning has a high cost. Scans of live systems can disrupt the normal behavior of running processes, causing them to fail. Deploying scanning agents through a network may be difficult and expensive, or access to certain zones may be limited. Active scanning consumes significant system resources, and monitoring and maintaining the scan processes requires significant IT management resources.
Why don't organizations scan more often or more in-depth? Because of these high costs, enterprises often implement active scanning on a limited and infrequent basis.
Is there a better way? Vulnerability information can be derived through non-invasive means, using information already available within networks, from security management systems, patch management systems, asset databases, and other repositories of system and software product data.
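As an illustration of this correlation-based approach, the core logic can be as simple as joining an exported software inventory against a vulnerability feed. The hosts, packages, and the "ADV-0001" advisory ID below are hypothetical placeholders, and a real feed (such as NVD) would require version-range matching rather than exact-key lookups:

```python
# Sketch: passive vulnerability detection by correlating an asset/software
# inventory (e.g. exported from a patch or asset management system) with a
# vulnerability feed - no active scan of the live systems required.

# Inventory: host -> list of (product, version) pairs, as a CMDB might report.
inventory = {
    "web01": [("openssl", "1.0.1f"), ("nginx", "1.4.6")],
    "db01":  [("postgresql", "9.3.1")],
}

# Vulnerability feed: (product, version) -> list of advisory IDs.
# "ADV-0001" is a made-up placeholder; CVE-2014-0160 is Heartbleed.
vuln_feed = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],
    ("postgresql", "9.3.1"): ["ADV-0001"],
}

def detect(inventory, vuln_feed):
    """Return {host: [advisory IDs]} derived purely from existing data."""
    findings = {}
    for host, packages in inventory.items():
        hits = [adv for pkg in packages for adv in vuln_feed.get(pkg, [])]
        if hits:
            findings[host] = hits
    return findings

print(detect(inventory, vuln_feed))
```

The same join works against patch management exports or asset databases; in practice the hard part is normalizing product names and version strings across the different repositories.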
Problem: Too many vulnerabilities – it's overwhelming
Solution: Automated risk-based prioritization relevant to your network
In a typical enterprise network with thousands of systems and hundreds of software packages in use, the number of vulnerabilities present throughout the network can be astronomical. All too frequently, we hear customers say that their vulnerability assessment efforts turn up tens of thousands, hundreds of thousands, even millions of vulnerabilities.
If your goal was simply to show that you have vulnerabilities, you could stop there. But if your goal is to reduce risk level, your work has just begun. Unless you have unlimited resources, you need to figure out which vulnerabilities are the critical ones, then prioritize and tackle them.
The priority given to fixing a vulnerability should depend on the value of the assets it exposes, on the network architecture, on the presence of compensating security controls, on whether exploits are publicly known, and on how difficult an attack would be to carry out.

In a large network, there is no way to do this kind of analysis with basic or non-scalable tools – you need the right tool for the job. Look for sophisticated analytics that can account for the many factors that affect risk in your network, and make sure they perform at the scale required to turn a monumental task into a manageable one.

Taken together, these factors determine whether a vulnerability is truly critical or just a minor issue that can safely be ignored.
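A minimal sketch of what such risk-based scoring could look like. The specific weights and factor names here are assumptions chosen for illustration, not a standard formula; the point is that context multiplies or discounts the raw severity:

```python
# Sketch: contextual risk scoring. Weights and thresholds are illustrative.

def risk_score(vuln):
    """Combine severity with business and network context into 0..100."""
    score = vuln["severity"] * 10      # e.g. a CVSS-style base score, 0..10
    score *= vuln["asset_value"]       # 0..1: business value of the host
    score *= vuln["exposure"]          # 0..1: reachability given the network architecture
    if vuln["exploit_available"]:
        score *= 1.5                   # known exploit code raises urgency
    if vuln["compensating_control"]:
        score *= 0.5                   # an IPS rule or firewall block lowers it
    return min(score, 100.0)

vulns = [
    {"id": "V1", "severity": 9.8, "asset_value": 1.0, "exposure": 0.9,
     "exploit_available": True, "compensating_control": False},
    {"id": "V2", "severity": 9.8, "asset_value": 0.3, "exposure": 0.2,
     "exploit_available": False, "compensating_control": True},
]

# Identical raw severity, very different priorities once context is applied.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["id"], round(risk_score(v), 1))
```

Two vulnerabilities with the same reported "criticality level" can land at opposite ends of the work queue once asset value, exposure, exploits, and controls are taken into account.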
Problem: Difficult to close the loop with remediation
Solution: Link vulnerability remediation with change management processes
62 percent of respondents to the VM survey said that they simply did not have enough resources to devote to remediation. If you find critical vulnerabilities but can't patch them fast enough, the remediation backlog keeps growing, and the VM program becomes irrelevant to the security team. Successful closure through remediation is critical.
One of the best approaches is to link remediation steps directly with change management processes. Vulnerabilities flagged as high priority should kick off a remediation (change) request, resulting in action by the appropriate team and a verification step to confirm that the change succeeded and the vulnerability is resolved.
Linking the change management and VM processes has another benefit – with these processes linked, you can identify how planned network changes such as adding new systems or new network devices could open up risky access for an exploitable vulnerability. Ask your change management vendor to explain how they take vulnerability data into account, or your vulnerability management vendor to explain how they link to your change control data and processes.
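One way this linkage could look in practice (the ticket structure, the priority threshold, and the rescan format below are assumptions, not any particular product's API):

```python
# Sketch: high-priority findings open change requests; a follow-up check
# verifies the fix before the request is closed.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    vuln_id: str
    host: str
    action: str
    status: str = "open"

def open_remediation_requests(findings, threshold=70):
    """Create one change request per finding above the priority threshold."""
    return [
        ChangeRequest(f["id"], f["host"], "patch " + f["package"])
        for f in findings
        if f["priority"] >= threshold
    ]

def verify_and_close(request, rescan):
    """Verification step: close only if a follow-up check no longer sees it."""
    if request.vuln_id not in rescan.get(request.host, []):
        request.status = "closed"
    return request

findings = [
    {"id": "V1", "host": "web01", "package": "openssl",  "priority": 95},
    {"id": "V2", "host": "db01",  "package": "libxml2", "priority": 40},
]

requests = open_remediation_requests(findings)          # only V1 qualifies
done = verify_and_close(requests[0], rescan={"web01": []})
print(done.status)
```

The same flow runs in reverse for planned changes: before a new system or rule is approved, its vulnerability and access implications can be checked against the same data.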
Problem: Difficult to show measurable impact to risk level
Solution: Measure, track, and communicate
For VM to succeed in reducing risk exposure due to vulnerabilities, you have to reduce total risk to the business's information assets faster than new risk exposure accumulates. Unless you can fix ALL vulnerabilities faster than new ones are discovered (and what enterprise can do that?), you have to find the flaws that contribute the MOST to the risk of the organization. This is not at all the same as looking at the "criticality level" reported for a weakness; to figure out risk contribution, you need to determine how a vulnerability could impact your business.
Your VM program needs metrics that can be calculated automatically and tracked frequently to deliver the most management benefit without pain. Identify vulnerability contribution to risk across the organization and by factors appropriate to your business such as business processes or network zones. Measure the time it takes to fix vulnerabilities, and use trend data to tell if the results are improving over time. In addition to giving you objective risk data you can use every day, you'll also be able to show that your newly remodeled VM program is a key element of successful security management.
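A sketch of metrics that can be calculated automatically from remediation records. The dates, records, and the "zone" breakdown are illustrative; a real program would pull this data from the VM and ticketing systems:

```python
# Sketch: mean time to remediate (MTTR), overall and broken down by a
# business-relevant factor such as network zone.

from datetime import date

remediated = [
    {"opened": date(2014, 1, 3),  "closed": date(2014, 1, 20), "zone": "dmz"},
    {"opened": date(2014, 2, 1),  "closed": date(2014, 2, 8),  "zone": "dmz"},
    {"opened": date(2014, 2, 10), "closed": date(2014, 2, 14), "zone": "internal"},
]

def mean_time_to_remediate(records):
    """Average days from discovery to verified fix."""
    days = [(r["closed"] - r["opened"]).days for r in records]
    return sum(days) / len(days)

def mttr_by(records, key):
    """Break the metric down by a factor appropriate to the business."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    return {k: mean_time_to_remediate(v) for k, v in groups.items()}

print(mean_time_to_remediate(remediated))   # overall MTTR in days
print(mttr_by(remediated, "zone"))
```

Tracked over successive reporting periods, these numbers show whether remediation is actually getting faster, and where the slow spots are.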