The time it takes hackers to write malicious code exploiting a known vulnerability is shrinking rapidly. The Sasser worm appeared just 18 days after the disclosure of the Microsoft vulnerability it exploited, the shortest interval yet, marking a new low point in the fight against cyber crime. With this protective window of opportunity getting ever smaller, the task of patching vulnerable software is becoming a never-ending one.
For many corporate IT managers, the second Tuesday of each month (when Microsoft releases its security updates) becomes a race against time to patch new vulnerabilities before a hacker uses one of them to attack company systems. For large enterprises with thousands of desktops to protect, reactive patching is notoriously labour intensive and can have adverse effects on IT systems, such as network downtime. Companies are often forced to choose between patching now, and risking a costly system failure, or patching later, and risking exposure to a damaging exploit.
There is little doubt about the security benefits of patching, but activity in this area is becoming unhealthily convoluted. Vendors pressurised into releasing security fixes sometimes ship patches before all the bugs have been identified. This not only results in further updates, but also raises the possibility that applying the patch will cause a serious problem of its own, such as bringing down a critical server. Because patches are often revised over time, it is very risky to deploy a patch immediately without thorough system testing to ensure it will not damage the computer system.
Testing each patch in a controlled lab lets programmers fine-tune the deployment and adjust configuration settings to suit the particular corporate environment, minimising the risk of the patch causing operational problems on the live corporate network once applied. Most companies use a standard desktop build for all of their PCs, so if the test proves successful the patch can be rolled out to the rest of the company’s systems without testing each individual desktop. It is also crucial that server patches are tested before application: as servers are key to the functionality of the corporate network, any operational glitch could cause downtime.
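The test-then-rollout pattern described above can be sketched in a few lines. This is an illustrative sketch only, not any particular deployment product: the `deploy` function, host names and patch identifier are hypothetical stand-ins for whatever software-distribution mechanism the organisation actually uses.

```python
# Hypothetical sketch of test-then-rollout: the patch goes to a small
# lab group first, and is only pushed fleet-wide (the standard desktop
# build makes per-machine testing unnecessary) if every lab host takes
# it cleanly.

def deploy(patch: str, host: str) -> bool:
    """Placeholder for the real distribution tool; True on success."""
    raise NotImplementedError

def staged_rollout(patch, test_hosts, fleet, deploy=deploy):
    # Stage 1: the controlled lab. Any failure halts everything so the
    # patch can be investigated rather than damage the live network.
    if not all(deploy(patch, h) for h in test_hosts):
        return []
    # Stage 2: automatic rollout across the standard build, no human
    # intervention needed; returns the hosts successfully patched.
    return [h for h in fleet if deploy(patch, h)]
```

Injecting `deploy` keeps the policy (test first, then roll out) separate from the mechanism, so the same logic works whatever tool actually pushes the patch.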
Deciding whether to patch immediately, rather than waiting for the next scheduled patching cycle (ideally fortnightly or monthly), requires in-house expertise and manpower to assess the severity of each threat (the likelihood of its impact on the IT environment), the extent of the vulnerability (which systems could be affected) and the cost of mitigation and/or recovery. Companies that maintain an up-to-date inventory of all production systems and security controls can make effective decisions on whether a patch is critical to the infrastructure, allowing them to prioritise and ultimately decide whether reactive patching is necessary.
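The factors just listed can be folded into a simple triage score. The sketch below is a minimal illustration with hypothetical weights and a hypothetical threshold, not a standard scoring scheme; the exposure figure assumes the kind of production-system inventory described above, and mitigation/recovery cost would also feed a real assessment.

```python
# Illustrative patch triage: combine threat severity and exposure
# (share of inventoried systems affected) into a score, then decide
# between immediate (reactive) and scheduled patching. All weights
# and the 6.0 threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Patch:
    name: str
    severity: int            # 1-5: likelihood/impact of an exploit
    affected_systems: int    # count of exposed systems, from inventory
    remotely_exploitable: bool

def priority(patch: Patch, total_systems: int) -> float:
    exposure = patch.affected_systems / total_systems
    score = patch.severity * (1 + exposure)
    if patch.remotely_exploitable:
        score *= 1.5  # remote, no-user-action holes jump the queue
    return round(score, 2)

def schedule(patch: Patch, total_systems: int) -> str:
    return "reactive" if priority(patch, total_systems) >= 6.0 else "scheduled"
```

For instance, a severity-5 remotely exploitable hole affecting 800 of 1,000 systems scores 13.5 and is patched reactively, while a severity-2 local issue on 100 systems scores 2.2 and waits for the next cycle.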
Companies with good control and an organised patching regime can plan for IT administrators to patch and reboot each server on a schedule, allowing patches to be consolidated and validated before being applied and limiting reactive patching. As most patches require system reboots that interrupt the business, scheduled patching lets the company reduce unnecessary downtime and lower overheads. However, particularly dangerous holes will always need to be dealt with immediately. For example, the PCT hole identified in Microsoft’s security bulletin MS04-011 could be triggered remotely without any action by users on affected machines, leaving some infrastructures vulnerable to remote attack. In such cases, where critical systems within the network infrastructure are threatened and current security defences cannot protect against the threat, reactive patching is essential.
Unfortunately, patching isn’t cheap. The cost of reactive patching can be astronomical: a recent Yankee report found that an organisation keeping itself totally up to date, installing every Microsoft patch, would spend £5,200 a year per desktop. Companies can limit these costs by minimising non-critical patches and by automating the patching process: provided a patch has been tested, it can be rolled out across the organisation automatically, without human intervention. Other costs of reactive patching include rectifying IT problems caused by patches interfering with other functions.
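To put that figure in context, a quick back-of-the-envelope calculation. The per-desktop cost is the Yankee figure quoted above; the fleet size and the fraction of patch effort retained after dropping non-critical patches are hypothetical illustrations.

```python
# Back-of-the-envelope annual patching cost. The £5,200 figure is the
# Yankee per-desktop estimate quoted in the text; everything else here
# is a hypothetical illustration.

COST_PER_DESKTOP = 5_200  # £ per year, installing every Microsoft patch

def annual_patching_cost(desktops: int, effort_fraction: float = 1.0) -> int:
    """Annual cost if only the given fraction of patch effort is kept."""
    return int(desktops * COST_PER_DESKTOP * effort_fraction)
```

On these assumptions, a 1,000-desktop company patching everything reactively faces £5.2m a year; keeping, say, only the 40 per cent of patch effort deemed critical cuts that to roughly £2.08m.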
With costs high and disruption to the company’s main IT infrastructure a significant possibility, reactive patching should be kept to a minimum. A layered security defence, deploying appropriate protection at the network perimeter and at the desktop (or laptop) and monitoring the network for anomalous traffic, significantly improves the security posture and allows the organisation to work mainly from a scheduled patching regime.
David Williamson is UK head of Ubizen