
Has There Ever Been a Better Time to Talk Up Vulnerability Assessment?

We often hear that prevention is better than cure.

In 2003, no phrase better describes the right approach to managing IT security.

It is widely acknowledged that 99 percent of all attacks exploit known weaknesses and defective configurations. The bottom line, then, is that only one percent of attacks exploit flaws that defenders could not already have fixed. In the face of a constant flow of viruses, vulnerabilities and worms, even the best administrators lose perspective on the task at hand. The easy response is to install products that act as an additional buffer against attack, but these quickly become ineffective if they are not continually monitored and upgraded. It may be stated without exaggeration that most networks are 'inspected' more frequently by hackers than by their own administrators.

Proactive companies monitor their systems to close security weaknesses and vulnerabilities before malicious code and hackers can exploit them. The recent SQL Slammer worm highlighted how easily vulnerabilities slip through the net and play into the hands of attackers. Monitoring just our own internet gateway, we saw 1,300 probes within the first ten minutes of the worm appearing - more than two per second. Over a four-day period, we logged more than 70,000 probes from the Slammer worm.
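Those rates are easy to verify; a quick Python sketch using only the figures quoted above:

    # Probe-rate arithmetic from the gateway figures above.
    probes_first_ten_minutes = 1300
    rate_per_second = probes_first_ten_minutes / (10 * 60)
    print(f"{rate_per_second:.1f} probes per second")  # ~2.2

    probes_four_days = 70000
    rate_per_hour = probes_four_days / (4 * 24)
    print(f"~{rate_per_hour:.0f} probes per hour sustained over four days")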

Based on this level of activity, the Slammer worm would have compromised a vulnerable server connected to the internet within a few hours. Yet the patch had been available for six months before the worm appeared, and thousands of companies were still affected. Is this really down to administrator complacency, as many would have us believe, or is the reality somewhat different? Consider that CERT research recorded 4,129 vulnerabilities and 82,094 reported incidents in 2002, and the enormity of the system administrator's task begins to come into focus. For many, the number of systems being monitored and the number of patches issued each week would make patching a full-time job - an impossible task for an already busy IT department.

Clearly, the sheer volume of new vulnerabilities poses significant challenges for companies trying to develop a structured, best-practice approach to vulnerability assessment.

The vulnerability life cycle

Vulnerabilities exist in all software packages. It is only a matter of time before they are discovered and then exploited. However, companies can help themselves by testing and assessing all new systems for vulnerable services prior to 'going live', and by reconfiguring weak out-of-the-box settings that make it simple for hackers to gain access to their systems and networks.

Typically, the process of uncovering vulnerabilities follows a series of steps from discovery to clean-up. The period between disclosure and exploitation is shortening all the time, challenging vulnerability assessment products and service providers to update their offerings continuously.

While this lifecycle is shortening, the real problems lie in the effectiveness of the clean-up phase, which relies on companies being vigilant and making infrastructure scanning and patching a core element of their security strategy.

Scanning and patching

In 2003, the enterprise challenge lies in grasping the importance of bolstering and managing the scanning and patching phase. As the vulnerability lifecycle shortens, so too must the scanning interval. The era of the annual audit has to be a thing of the past if security is to be strengthened significantly. Even quarterly scans can leave organizations exposed to more than 1,000 new weaknesses for up to three months at a time - still an unacceptably high level of risk. The more often scans take place, the smaller the window of exposure to new vulnerabilities. If the systems to be scanned make up a critical web infrastructure generating significant revenues, continuous assessment with daily scans may be the only serious option - although this can only be effective if the resulting updates are actually carried out, the current weak link in the process.
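To put that trade-off in concrete terms, here is a back-of-the-envelope Python sketch. The half-interval average is a simplifying assumption: a flaw disclosed at a random point in the scan cycle waits, on average, half a cycle before the next scan can find it.

    # Rough exposure windows under different scan frequencies.
    # Remediation time after detection is not included.
    scan_intervals_days = {"annual": 365, "quarterly": 91,
                           "monthly": 30, "daily": 1}

    for name, interval in scan_intervals_days.items():
        print(f"{name:>9}: worst case {interval} days, "
              f"average {interval / 2:.1f} days exposed")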

A recent study of a simple internet gateway comprising 17 systems showed that installing every update, upgrade, fix and service pack would require approximately 1,300 patches over a 12-month period. Installing five patches every working day across 17 servers demands almost complete dedication of resources; with 1,700 servers, the task becomes formidable. Vulnerability assessment can at least help prioritize which patches matter from a security perspective.
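The arithmetic behind those figures, as a quick sketch. The 260 working days per year is an assumption, and the linear scaling to 1,700 servers is deliberately crude, since identical builds would share patches:

    # Patch workload from the gateway study quoted above.
    patches_per_year = 1300
    working_days_per_year = 260  # assumed

    per_day = patches_per_year / working_days_per_year
    print(f"{per_day:.0f} patches per working day across 17 servers")

    # Crude linear scaling to a larger estate:
    print(f"~{per_day * 1700 / 17:.0f} patch installations "
          f"per day across 1,700 servers")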

The headlines highlight the risk of leaving systems unpatched. Klez, the number one virus of 2002, continues to be the most commonly detected. It was discovered on April 17, 2002, yet Microsoft had issued a security bulletin on March 29, 2001 (MS01-020: Incorrect MIME header can cause IE to execute email attachment), along with a patch for Internet Explorer - components of which are used by Microsoft's mail clients (Outlook and Outlook Express). The impact could therefore have been reduced significantly. Effective assessment is the answer; whether you do this in-house or via an external agency comes down to the size of your systems and the available budget.
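The gap between patch and worm is easy to quantify from those dates:

    # Patch-to-outbreak gap for Klez, from the dates in the text.
    from datetime import date

    patch_issued = date(2001, 3, 29)     # MS01-020 bulletin
    worm_discovered = date(2002, 4, 17)  # Klez first seen
    print(f"Patch available {(worm_discovered - patch_issued).days} "
          f"days before the worm appeared")  # 384 days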

Recent product tests have shown that no single tool successfully detects all of the vulnerabilities planted within test environments - even when the number of vulnerabilities is relatively small. Multiple tools are needed, at different layers, to create a comprehensive test suite, and the tool set must be maintained and kept current with every vendor update. Custom or user-defined checks will need to be scripted ahead of vendor signature updates, particularly where a new vulnerability is considered a high threat - although not all tools are extensible enough to allow users to write their own checks.
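As an illustration of what such a user-defined check might look like, here is a minimal Python sketch that grabs a TCP service banner and flags a known-vulnerable build. The port, banner string and addresses are hypothetical placeholders, not taken from any real advisory:

    # Minimal user-defined check: flag hosts advertising a banner
    # associated with a vulnerable build. All specifics below are
    # illustrative assumptions.
    import socket

    VULNERABLE_BANNER = b"ExampleFTPd 2.1"  # hypothetical vulnerable build

    def banner_check(host, port=21, timeout=5.0):
        """Return True if the host advertises the vulnerable banner."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                banner = conn.recv(256)
        except OSError:
            return False  # host down or port closed: nothing to flag
        return VULNERABLE_BANNER in banner

    for host in ("192.0.2.10", "192.0.2.11"):  # RFC 5737 documentation addresses
        if banner_check(host):
            print(f"{host}: vulnerable service banner detected")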

Some of the tools available now include automated signature updates - a highly attractive feature. These updates rely on the vendor's resources to research and monitor security intelligence sources and then to author, test and release the checks. The number of advisories a team posts gives a good indication of how much original research it conducts and how much it relies on newsgroups and mailing lists.

As with intrusion detection systems (IDS), false positives and false negatives are a problem for vulnerability assessment tools. Not only will a single tool often fail to identify all vulnerabilities on the systems scanned, but its reports will include confusing and conflicting information, as well as totally misdiagnosed vulnerabilities, all of which wastes administrator time. Certain products are simply not intelligent: testing every vulnerability check against every system scanned leads to Unix-related vulnerabilities being reported on hosts running Windows NT. Comparing the output from multiple tools is key to ensuring accuracy, but again this is a time-consuming exercise.
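One pragmatic way to compare output is to reduce each report to (host, vulnerability) pairs and cross-reference them; a minimal sketch, with the report contents assumed for illustration:

    # Cross-referencing two scanners' findings. Pairs present in
    # both reports are high confidence; pairs in only one need a
    # manual look (possible false positive or false negative).
    tool_a = {("10.0.0.5", "CVE-2002-0649"), ("10.0.0.7", "CVE-2001-0154")}
    tool_b = {("10.0.0.5", "CVE-2002-0649"), ("10.0.0.9", "CVE-2002-0640")}

    print("confirmed by both:", sorted(tool_a & tool_b))
    print("needs manual review:", sorted(tool_a ^ tool_b))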

More sophisticated products build up a profile of the system or device being scanned - including the OS running on the target and the ports that are open - and only run the vulnerability checks relevant to that profile, improving scanning performance and reducing false positives.
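A minimal sketch of that profile-driven selection, with the check library and profile fields assumed for illustration:

    # Only run checks whose target OS and port match the profile
    # built during discovery. Check entries are illustrative.
    checks = [
        {"id": "CVE-2002-0649", "os": "windows", "port": 1434},  # SQL Server
        {"id": "CVE-2002-0640", "os": "unix", "port": 22},       # OpenSSH
    ]

    def relevant_checks(profile):
        """Filter the check library against a scanned host's profile."""
        return [c for c in checks
                if c["os"] == profile["os"] and c["port"] in profile["open_ports"]]

    host_profile = {"os": "windows", "open_ports": {135, 139, 1434}}
    for check in relevant_checks(host_profile):
        print("run:", check["id"])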

In-house or external support

So how can companies effectively manage this growing demand on resources? The decision to carry out the task in-house or externally will depend on budget, time and staffing.

To do the job effectively in-house, companies require full-time skilled personnel to cross-reference the results from multiple reports. Failure to de-duplicate this data effectively is the reason many companies miss the most important patches and updates their systems need.
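The de-duplication itself is mechanical once findings are normalized; a minimal sketch that collapses duplicates across reports into a single worklist, with all data assumed for illustration:

    # Merge findings from several reports, collapsing duplicates,
    # then sort the survivors by severity. All data is illustrative.
    reports = [
        [("10.0.0.5", "CVE-2002-0649", "high")],
        [("10.0.0.5", "CVE-2002-0649", "high"),
         ("10.0.0.9", "CVE-2001-0154", "medium")],
    ]

    merged = {}
    for report in reports:
        for host, vuln, severity in report:
            merged[(host, vuln)] = severity  # duplicate findings collapse

    rank = {"high": 0, "medium": 1, "low": 2}
    for (host, vuln), severity in sorted(merged.items(), key=lambda kv: rank[kv[1]]):
        print(severity, host, vuln)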

If manual checking internally is infeasible, the obvious next step is an automated tool run by an external company. The most sophisticated tools, such as Foundstone's Foundscan product, can scan a network in less than an hour. The benefits of tools like this are a reduction in false positives and reports that keep unpatched applications and servers highlighted until a re-scan confirms the patch has been deployed. There is additional investment involved, of course, but the return - in secure systems and better resource allocation - is easy to demonstrate.

Without doubt, the watchword for enterprise security in 2003 has to be prevention, and vulnerability assessment should lie at the heart of an effective prevention strategy. Companies need to understand that the task facing system administrators is an onerous one, and that administrators cannot shoulder the blame for every system attack. There are several options to keep the window of exposure to new vulnerabilities acceptable, and now is the time for them to be taken more seriously. The days of companies crossing their fingers and hoping that vulnerabilities will pass them by should be a thing of the past.

Richard Walters is head of product management at Integralis (www.integralis.com).