Is Patch Management a Failing Strategy?

If recent headlines about the Blaster worm sounded familiar, it was with good reason. Earlier this year, another worm was wreaking havoc on the internet, targeting a well-known Microsoft vulnerability.

This worm, known as Slammer, disrupted information systems and transactions across the globe, and even shut down Bank of America's 13,000 ATMs for hours. Hadn't the security industry learned and applied the lessons of Slammer? Judging by the talking heads on television last week, apparently not!

The reality is that very little has changed from Slammer to Blaster, and the reason is that patch management is a failing strategy. The frequent patching required of organizations is in many cases impossible to achieve, and even when a company has the resources to do it, there is no guarantee that it will be enough. Some patches flat out do not work, while others are unacceptable because of the damage they do to things like server performance.

Looking for evidence that patch management is a failing strategy? The vulnerability Slammer exploited was well known and more than six months old at the time of the attack, and patches were available. Yet, somehow, a significant population of MS SQL Servers was unpatched and unprotected. Information about the vulnerability that Blaster targets has been available for weeks, but it takes longer than that for many organizations to apply patches with the resources they have available. For these organizations, patch management is impractical. What would happen if two or three worms were on the loose at the same time? The results would be devastating.

Don't rely on false security

Patches cannot be relied upon to deliver effective front-line security, because they simply aren't applied in a consistent, effective and timely fashion. Indeed, many industry best practices preclude applying patches in an ad hoc manner: changes to production environments need to be tested and proved safe before deployment. This frequently leaves a large window of opportunity in which a vulnerability can be maliciously exploited. Moreover, it's all too easy for more important deadlines, issues or simply today's crisis to interfere, pushing the fix to the bottom of the list indefinitely and leaving your systems perpetually vulnerable.

Intrusion detection systems would have picked up the attack (assuming their signature updates were less than six months old), but few operations centers pay attention to the threats their IDSs detect, because those systems are notorious for creating false positives. This incident shows that even with effective sensors deployed, basic perimeter security is not enough to prevent significant economic damage.

Here's the dilemma: odds are your systems are not as well protected as they ought to be, and your IDS is up to date but being ignored. The volume of false alarms the IDS produces is frequently so great that rules are set to allow only the most important alerts through. This yields false negatives, where lower-risk but valid threats are not flagged. Life gets a whole lot easier for the operator - until a low-risk attack takes advantage of an unprotected server, and all the systems you must protect are suddenly vulnerable.
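
To make that trade-off concrete, here is a minimal Python sketch (with invented alert data and an assumed severity threshold, not any particular product's rules) of how a "most important alerts only" policy quietly discards the low-risk events that matter:

```python
# Hypothetical alert pipeline: only alerts at or above a severity threshold
# reach the operator. The console gets quieter, but the low-severity probes
# against the unpatched server never surface.

SEVERITY_THRESHOLD = 7  # assumed tuning choice: show only high-priority alerts

alerts = [
    {"signature": "SQL buffer overflow attempt", "severity": 9, "target": "db01"},
    {"signature": "RPC endpoint probe", "severity": 3, "target": "web03"},
    {"signature": "Port scan", "severity": 2, "target": "web03"},
]

def filter_alerts(alerts, threshold):
    """Return only alerts at or above the threshold; everything else is dropped."""
    return [a for a in alerts if a["severity"] >= threshold]

for alert in filter_alerts(alerts, SEVERITY_THRESHOLD):
    print(f"ESCALATE: {alert['signature']} on {alert['target']}")

# The two low-severity events against web03 are silently discarded: false
# negatives if web03 happens to be the unpatched server.
```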

The importance of being proactive

Proactive network security management allows you to keep your current practices and procedures while improving your ability to detect valid threats. First, data from IDS and firewalls is analyzed automatically and false positives are removed at the source, so your operators can handle a much greater volume of sensor data more effectively. Second, the data is correlated: events from multiple sensors (IDS, firewalls, anti-virus) are linked together in search of patterns. This surfaces would-be false negatives by identifying 'wide footprint' attacks whose individual events are relatively meaningless, but which together amount to a persistent and potentially devastating threat.
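
As a rough illustration of that correlation idea - using hypothetical sensor names and event data, not any vendor's actual engine - the Python sketch below escalates a source address once several different sensor types have reported it, even though each individual event is low severity:

```python
# Hypothetical multi-sensor correlation: individually minor events from
# different sensor types are grouped by source address, and a source seen by
# several sensors (a "wide footprint") is escalated even though no single
# event would have cleared an alerting threshold on its own.

from collections import defaultdict

events = [
    {"sensor": "ids", "source": "10.0.0.5", "detail": "RPC probe", "severity": 2},
    {"sensor": "firewall", "source": "10.0.0.5", "detail": "Denied port 135", "severity": 1},
    {"sensor": "antivirus", "source": "10.0.0.5", "detail": "Worm dropper quarantined", "severity": 3},
    {"sensor": "firewall", "source": "10.0.0.9", "detail": "Denied port 80", "severity": 1},
]

def correlate(events, min_sensor_types=2):
    """Escalate any source reported by at least `min_sensor_types` distinct sensors."""
    by_source = defaultdict(list)
    for e in events:
        by_source[e["source"]].append(e)
    escalations = []
    for source, evts in by_source.items():
        sensor_types = {e["sensor"] for e in evts}
        if len(sensor_types) >= min_sensor_types:
            escalations.append((source, sorted(sensor_types), len(evts)))
    return escalations

for source, sensors, count in correlate(events):
    print(f"ESCALATE: {source} seen by {', '.join(sensors)} ({count} events)")
```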

The best systems perform these tasks in real time and link security data to network events such as router CPU overload or platform reboots, which allows them to identify potential compromises in progress from unknown (or 'day zero') threats. They also link to vendor knowledge bases, so that third-shift operators can still understand the context and risk that any given threat poses. By delivering timely, accurate and actionable alerts, security event management solutions significantly reduce the risk that an unpatched system will be compromised before you contain the attack.
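
One simple way to picture linking security data to network health events is a time-window join on the affected host. The sketch below is purely illustrative, with invented hosts, timestamps and an assumed ten-minute window:

```python
# Hypothetical sketch: an IDS alert followed shortly by an unexplained reboot
# on the same host is flagged as a possible compromise in progress, even when
# no signature exists for the attack itself.

from datetime import datetime, timedelta

security_alerts = [
    {"time": datetime(2003, 8, 11, 2, 14), "host": "web03", "detail": "Anomalous RPC traffic"},
]
network_events = [
    {"time": datetime(2003, 8, 11, 2, 16), "host": "web03", "detail": "Unexpected reboot"},
    {"time": datetime(2003, 8, 11, 3, 0), "host": "rtr01", "detail": "CPU overload"},
]

WINDOW = timedelta(minutes=10)  # assumed correlation window

def possible_compromises(alerts, events, window=WINDOW):
    """Pair each security alert with network events on the same host within the window."""
    hits = []
    for a in alerts:
        for e in events:
            if e["host"] == a["host"] and abs(e["time"] - a["time"]) <= window:
                hits.append((a, e))
    return hits

for alert, event in possible_compromises(security_alerts, network_events):
    print(f"POSSIBLE COMPROMISE on {alert['host']}: {alert['detail']} followed by {event['detail']}")
```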

The result: greater security, because threats are detected in real time without swamping your emergency teams. And by linking your perimeter and host-based security sensors to identify, correlate and contain attacks, you gain greater confidence that the systems you deploy will remain safe.

Patching is a poor global defensive strategy to rely on, and simply rushing out patches introduces new, unmanageable and unknown risks. There are better ways to approach the problem. The lack of progress from Slammer to Blaster shows that the security industry is not learning the lessons of the past. How many fire drills will it take? What is needed is effective security management, not siloed approaches like patching.

Phil Hollows is vice president of security products for OpenService, a real-time network and security event management vendor.
