
Rethinking virus protection

It's going to take a village to protect cyberspace.

At least, that's the way it looks now. With the growth of malicious programs currently outpacing that of legitimate applications, and traditional countermeasures proving inadequate, consumers and security vendors may need to join forces to ward off threats.

What is the game changer? Server-side polymorphism. Malware authors use this technique to spread code that evades detection by traditional security mechanisms. An attacker distributes threats through a compromised website, for example, automatically generating a new, mutated copy of the infected file every few minutes. Every time a user visits the compromised website, they're potentially infected with a unique version of the malicious program – each with its own unique signature. The result is a relentless stream of one-off threats that traditional malware signatures, and even behavioral detection approaches, cannot hope to address.
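To see why one-off mutation defeats signature matching, consider how little change it takes to produce a completely different file fingerprint. The snippet below is a minimal sketch, assuming a defense keyed on SHA-256 digests; the payload bytes are a hypothetical stand-in for a real file:

```python
# Minimal sketch: a one-byte change yields an unrelated SHA-256 digest,
# so a signature keyed on the first file never matches the second.
# The "original" bytes are a hypothetical stand-in for a real file.
import hashlib

original = b"...identical malicious logic..."
mutated = original + b"\x01"  # the kind of trivial tweak a polymorphic engine makes per download

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(mutated).hexdigest())
# The two digests share nothing, even though the behavior is unchanged.
```

Multiply that single tweak by a fresh mutation every few minutes, and a signature database can never catch up.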

And there are a lot of server-side polymorphic threats out there. During one week in mid-November 2007, 54,609 unique new files were identified; of those, at least 65 percent (approximately 36,000 files) were malicious – and most of them were either unique or had been reported by only a very small number of users (less than five percent).

Clearly, traditional approaches to protection do not scale in such an environment. Keeping up with every short-lived malware variant is simply not feasible using malware signatures and other conventional technologies.

One way to stem the tide is by using a reputation-based security model. Just as users can rate books at Amazon.com, it may be possible to have them rate applications and automatically derive a catalog of applications and reputation scores. Then, each time a user downloads an application, the security software can present the application's reputation rating, allowing the user to make an informed decision about installing it.
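As a rough illustration, the download-time check might look something like the sketch below. The catalog contents, score scale, and thresholds are all invented for illustration, not any vendor's actual service:

```python
# Hypothetical sketch of consulting a reputation catalog at download time.
# Scores run from 0.0 (bad) to 1.0 (good); the thresholds are assumptions.
import hashlib

# sha256 digest -> (reputation score, number of users who have reported the file)
REPUTATION_CATALOG = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": (0.97, 1_200_000),
}

def advise(file_bytes: bytes) -> str:
    digest = hashlib.sha256(file_bytes).hexdigest()
    entry = REPUTATION_CATALOG.get(digest)
    if entry is None:
        return "unknown file: warn the user before installing"
    score, users = entry
    if score >= 0.8:
        return f"good reputation ({users:,} users): allow"
    if score <= 0.2:
        return "bad reputation: block"
    return "mixed reputation: prompt the user"

print(advise(b"test"))  # the catalog entry above is the SHA-256 of b"test"
```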

The challenge is that while a customer can easily rate a book they've read, the typical home user has no idea whether the applications they use are legitimate or malicious – there's simply no easy way for them to tell. The answer is to automatically derive application reputation scores without prompting or inconveniencing the user.
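One plausible signal set is prevalence and age: a file seen on millions of machines for months is probably legitimate, while a file seen on a single machine today looks exactly like a server-side polymorphic variant. The heuristic below is a hypothetical sketch of that idea; the weights and cutoffs are assumptions made for illustration, and real systems would combine many more signals:

```python
# Hypothetical heuristic for deriving a reputation score with no user input.
from dataclasses import dataclass

@dataclass
class FileTelemetry:
    machines_seen_on: int  # distinct machines reporting this file hash
    days_observed: int     # days since the hash was first seen

def reputation_score(t: FileTelemetry) -> float:
    """Score in [0, 1]; higher means more trustworthy. Weights are illustrative."""
    prevalence = min(t.machines_seen_on / 10_000, 1.0)  # widely seen -> safer
    maturity = min(t.days_observed / 90, 1.0)           # long-lived -> safer
    return 0.7 * prevalence + 0.3 * maturity

# A one-off polymorphic variant scores near zero: one machine, first seen today.
print(reputation_score(FileTelemetry(machines_seen_on=1, days_observed=0)))          # ~0.0
print(reputation_score(FileTelemetry(machines_seen_on=500_000, days_observed=400)))  # 1.0
```

Notice how the math works against the attacker: the very property that defeats signatures (every victim gets a unique file) guarantees the file has near-zero prevalence, and therefore near-zero reputation.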

Longer term, reputation-based systems could provide a path toward delivering a full whitelisting solution. Most whitelisting systems today lock down static, high-end servers whose operating systems and applications change infrequently. The approach is highly effective, but also expensive to implement, since an administrator must manually construct the whitelist for each protected server. Unfortunately, it is nearly impossible to manually build a whitelist comprehensive enough to protect the typical desktop machine, which as a rule runs a far more dynamic set of applications than a typical server (due to user downloads, self-updates, patches, etc.). Thus, automated approaches are needed that can generate a whitelist comprehensive enough to protect even the most dynamic end-user computers.
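Enforcement itself is the easy part; the hard part is populating the list automatically. A minimal sketch, assuming a whitelist keyed on SHA-256 digests and fed by an automated pipeline (for instance, signed vendor releases and files with high reputation scores), might look like this:

```python
# Sketch of whitelist enforcement. Everything here is illustrative:
# real products hook program launch in the OS rather than being called by hand.
import hashlib
from pathlib import Path

# Digests of known-good binaries, populated by an automated feed
# (e.g., signed vendor releases and high-reputation files).
WHITELIST: set[str] = set()

def is_allowed(executable: Path) -> bool:
    """Permit execution only if the file's digest is on the whitelist."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in WHITELIST
```

Because the feed keeps adding digests as vendors ship patches, a dynamic desktop keeps working without an administrator hand-editing the list, while a brand-new polymorphic variant, absent from the list, is blocked by default.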

If the trend continues and bad programs outnumber good ones, then scanning for legitimate applications (whitelisting) makes more sense from both an efficiency and effectiveness perspective. Eventually, a comprehensive whitelist of legitimate software may be as close to a silver bullet as one can hope to find – one that best serves the evolving security needs of the growing cybercommunity.
