Take the pain out of patching

Television news anchors became the news in August when they had to apologize on-air for computer problems affecting their broadcast. The world watched as ABC and CNN were struck by the Zotob worm.

By far the most striking aspect of the attack was how the worm left journalists scrambling for typewriters when it put servers and computers out of action.

The worm exploited a vulnerability in the Plug and Play (PnP) service for Microsoft Windows 2000 and Windows XP Service Pack 1, as well as some other versions of Windows.

Antivirus company Sophos estimates that malware writers took just five days to create the Zotob worm after the vulnerability was announced.

This rapid turnaround time was partly a result of the virus creator taking code from another virus, the Mytob worm. The writer stripped out the part that spreads the worm through email systems and substituted code that took advantage of the Plug and Play vulnerability.

This modular approach to malware development means writers get code out of the door faster. It also means there is not a lot of time for anyone to properly test patches and roll them out across an organization.

But patching still involves a lot of testing on systems built to mirror actual systems deployed within an organization.

While the firms involved in releasing patches for their vulnerable products also test them against sample configurations of servers, other organizations have to test against their own configurations and, more importantly, the applications that run on top of those systems.

For a large company, this could mean testing against 50,000-plus devices and more than 800 applications.

A patch must not break a bespoke critical application within a company. The trouble is that the timeframe between the announcement of a flaw and the appearance of malicious code designed to exploit it is getting shorter and shorter.

And the patching process isn't cheap.

Dave Ostrowski, product marketing manager at Internet Security Systems, says there are two factors to consider when looking at the cost of patching.

"Once a machine is infected, it typically takes 20 minutes to an hour to remove the infection, apply the patch and reconnect to the network," says Ostrowski.

"One hour of a network administrator's time is typically valued at around $50. Thus, 1,000 infections at a company can typically cost about $50,000." And this is just the cost of fixing the machine. There is also the loss of productivity to account for (see the panel on page 39: Loss of productivity rule of thumb).

"To capture the true cost of the impact, several things should be considered," says Simon Tang, senior manager at Deloitte Security Services in Toronto. He lists the direct costs involved – cost of systems damaged, cost of work, contractors and support in relation to the fixes, overhead costs during downtime or unavailablility of systems (building costs, utilities, administration or managerial costs, and so on).

"Loss of business should also be accounted for, as well as other indirect costs, such as reputation damages," adds Tang. "Loss of future business should also be estimated."

Patching is a major part of an overall security strategy, and should also be considered a basic network management process, according to Rob House, head of business solutions at Siemens Communications. "Vendors produce vast numbers of patches to remedy product vulnerabilities – Microsoft alone has released on average 1.38 patches per week since 2002 to cover vulnerabilities across its product range," he adds.

"Companies are really struggling to keep on top of the numbers of patches being made available. Few firms have an effective patch management solution in place, leaving gaps in corporate defenses and highlighting the need for a comprehensive patching strategy."

So how should administrators make sure they have the right strategy in place to deal with patching, when there are so many patches and so little time to patch before an outbreak occurs?

Mike Murray, director of nCircle's vulnerability exposure research team, suggests prioritizing which patches get looked at first. But before that can happen, you have to get to know your network and what runs on it really well. That way, you have prepared the ground before a flaw goes public.

"If you have knowledge of variables such as applications and their different versions, protocols, OSs, ports, and so on, you can prioritize patch importance properly – and this information gives rise to 'the top ten riskiest applications,' for example," he says.

Once the network has been mapped out, the next stage is to hunt down flaws within it. Testing an organization's network in advance for vulnerabilities and exposures will help in assessing where any damage might be most keenly felt.

Waiting for hackers to find vulnerabilities, and vendors to respond with the appropriate patches, isn't good enough. The best strategy involves finding and addressing security vulnerabilities before they can be exploited. "Organizations need to continually scan their networks, identify vulnerabilities, and provide critical direction for patching holes," says Murray.

Hordes of patches come from different vendors every day. Knowing which systems run on the infrastructure helps initially to identify relevant patches, cutting out a lot of unnecessary work. What is left can be investigated further so that important patches to critical systems can be tested first.
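
As a sketch of that filtering step, the snippet below matches an incoming patch feed against an installed-software inventory and pushes patches for critical systems to the front of the test queue. The feed format and field names are assumptions, not any vendor's actual schema.

```python
# Hypothetical patch triage: keep only patches that apply to software we actually
# run, and test patches for critical systems and critical-severity fixes first.

installed = {
    ("vendor-os", "5.0"): "critical",     # (product, version) -> system criticality
    ("vendor-os", "5.1"): "standard",
    ("mail-server", "3.2"): "critical",
}

patch_feed = [
    {"id": "P-101", "product": "vendor-os", "version": "5.0", "severity": "critical"},
    {"id": "P-102", "product": "db-engine", "version": "9.4", "severity": "critical"},  # not installed
    {"id": "P-103", "product": "mail-server", "version": "3.2", "severity": "moderate"},
]

# Discard patches for products and versions we do not run.
relevant = [p for p in patch_feed if (p["product"], p["version"]) in installed]

# Order the remainder: critical systems first, then critical-severity patches.
queue = sorted(
    relevant,
    key=lambda p: (installed[(p["product"], p["version"])] != "critical",
                   p["severity"] != "critical"),
)

for patch in queue:
    print(patch["id"], patch["product"], patch["severity"])
```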

Once tested, they can be rolled out to a live environment. This is generally done 'out-of-hours' to minimize downtime during normal working hours.

Organizations must also assess the impact of not applying a patch. For example, does the vulnerability allow hackers or virus writers to take control of a network or the resources on it?

"Once the criteria to prioritize patching is clear to everyone involved in patching, the company can execute a fairly rapid turnaround in terms of rolling out the patch to a test environment, and then into production if they have an effective tool to do this," says Alan Bentley, managing director EMEA at PatchLink.

This should give the security administrator the knowledge to draft a written patching policy that can be understood and followed by relevant personnel.

This guide should let the patching team know which systems are critical, which patches are important and how they will be rolled out, how non-critical patches are scheduled for deployment, and what testing needs to be carried out before deployment.
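
One way to keep such a policy unambiguous is to express its rules as data that both people and tooling can read, as in this hypothetical sketch. The categories, time windows and testing requirements are illustrative only, not a recommended standard.

```python
# Hypothetical encoding of a written patching policy: which systems are critical,
# how fast each class of patch must move, and what testing is required first.

PATCHING_POLICY = {
    "critical_systems": ["payment-gateway", "mail-server", "domain-controllers"],
    "rollout_rules": {
        "critical": {"test_within_hours": 24, "deploy_within_hours": 72,
                     "testing": "full regression on mirrored systems"},
        "important": {"test_within_hours": 72, "deploy_within_hours": 168,
                      "testing": "smoke tests on representative builds"},
        "low": {"test_within_hours": 168, "deploy_within_hours": 720,
                "testing": "next scheduled maintenance window"},
    },
    "deployment_window": "out-of-hours, 22:00-05:00 local time",
    "rollback_required": True,
}

def rule_for(severity: str) -> dict:
    """Look up the rollout rule for a patch severity, defaulting to 'low'."""
    rules = PATCHING_POLICY["rollout_rules"]
    return rules.get(severity, rules["low"])

print(rule_for("critical")["deploy_within_hours"], "hours to deploy a critical patch")
```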

Next, a team of people should be assembled to respond to critical patches and formulate a plan of action in accordance with the patching policy to deal with these patches.

This team should monitor relevant security websites, such as scmagazine.com, for information on the latest critical vulnerabilities and their patches.

Finally, a standard process should be in place that IT support staff can follow when rolling out a patch. As a patch installation can itself sometimes cause unforeseen problems that didn't manifest themselves during testing, it is also important to have a plan for rolling back patches. If a patch installation causes a system to malfunction, then it has the same effect as a worm – it will cause an outage and loss of productivity.
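
A skeleton of that roll-out-and-roll-back process might look like the sketch below. The apply_patch, health_check and rollback functions are placeholders for whatever deployment tooling an organization actually uses; the structure simply shows the batch, verify, roll back loop.

```python
# Hypothetical staged rollout: patch a small batch, verify it, and roll back
# automatically if the post-patch health check fails.

from typing import Iterable

def apply_patch(host: str, patch_id: str) -> None:
    """Placeholder for the real deployment step (e.g. a patch-management agent)."""
    print(f"applying {patch_id} to {host}")

def health_check(host: str) -> bool:
    """Placeholder: confirm the host's critical services still respond after patching."""
    return True

def rollback(host: str, patch_id: str) -> None:
    """Placeholder: restore the pre-patch state (uninstall, snapshot revert, etc.)."""
    print(f"rolling back {patch_id} on {host}")

def staged_rollout(hosts: Iterable[str], patch_id: str, batch_size: int = 5) -> None:
    """Apply the patch in small batches, halting the rollout on any failure."""
    hosts = list(hosts)
    for start in range(0, len(hosts), batch_size):
        batch = hosts[start:start + batch_size]
        failed = []
        for host in batch:
            apply_patch(host, patch_id)
            if not health_check(host):
                rollback(host, patch_id)
                failed.append(host)
        if failed:
            # Stop before the problem spreads to the whole estate.
            print(f"halting rollout of {patch_id}; failures on: {failed}")
            return
    print(f"{patch_id} deployed to all {len(hosts)} hosts")

staged_rollout([f"server-{n:02d}" for n in range(1, 11)], "P-101", batch_size=3)
```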

An effective patch strategy means knowing your systems and documenting policies and processes. When this is done, most of the groundwork will have been covered and organizations can respond faster to vulnerabilities. In the final analysis, this means gaining valuable time against worms.
