Is anti-virus (AV) dead? In a word, no. It's not even reeling; it's just evolving. That's the consensus in the marketplace, at least. And among users, the answer is mainly that it would be foolish to walk away from the technology, given the number of new viruses foisted on the world each day.
“People have been predicting the death of signature-based AV for years,” says Richard Jacobs, CTO at Sophos.
But ask the question: “Would you give up your AV solution right now?” For most people, the answer would be “No!”
In the eyes of many, however, there is no question that the traditional AV model cannot be sustained forever. That model – in which viral code signatures are generated from pieces of identified malware, propagated to end-user machines, and used in local scans to discover infections – is becoming technologically outmoded.
Sheer numbers alone illustrate the problem. “A few years ago, we saw maybe a few hundred new viruses a year,” says Jacobs. “Now we may see 2,500 per day. And in a short time, we may see much more.”
Obviously, with the current model, trying to detect and remediate all the threats would overwhelm the processing power of machines, bringing user productivity to a halt and user ire to a fever pitch.
“With the speed at which things appear, it is not sensible to be purely reactive – it's impossible to keep up,” says Jacobs.
To deal with the emerging and evolving threats, vendors are developing evolutionary countermeasures, albeit with widely divergent approaches.
“Signature-based protection [still] is going to be part of the landscape going forward,” says Steve Orenberg, president of Kaspersky Lab Americas. “[But if] traditional anti-virus was all about detection, now it's all about prevention.”
One of the problems with anti-virus in its pure form is that it is essentially a race to the finish line between the attackers and the signature-writers chasing them.
Prevention requires a number of different approaches. Signature-based detection is extremely complex today – it involves examining a large array of characteristics, relating them to one another, and taking into account the various behaviors of the software.
“We talk about identities,” says Jacobs. “That is, where we used to look for a sequence of bytes, we now look for characteristics and the relationship of those characteristics to others.”
For instance, it is possible to see that a piece of malware will write to a certain part of the registry, or that it installs itself when it runs.
“We build identities based on those characteristics,” says Jacobs. “This is in lieu of looking for a specific machine-code instruction.”
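The shift Jacobs describes, from matching a fixed byte sequence to relating a set of observed characteristics, can be sketched roughly as follows. The characteristic names, weights and threshold here are illustrative assumptions, not any vendor's actual identity format:

```python
# Hypothetical sketch of characteristic-based ("identity") detection,
# as opposed to matching a specific machine-code byte pattern.
WEIGHTS = {
    "writes_autorun_registry_key": 3,   # e.g. modifies a Run key
    "installs_itself_on_execution": 3,
    "injects_into_other_process": 4,
    "packed_executable": 1,
}

THRESHOLD = 6  # combined score at which the sample is flagged


def matches_identity(observed_characteristics):
    """Flag a sample when enough related characteristics co-occur,
    rather than when one exact byte signature is found."""
    score = sum(WEIGHTS.get(c, 0) for c in observed_characteristics)
    return score >= THRESHOLD


# A sample that both self-installs and writes an autorun key
# scores 3 + 3 = 6 and is flagged; a merely packed binary is not.
print(matches_identity({"installs_itself_on_execution",
                        "writes_autorun_registry_key"}))  # True
print(matches_identity({"packed_executable"}))            # False
```

The point of the sketch is the relationship between traits: no single characteristic is conclusive, but a combination of them forms the "identity" Jacobs refers to.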
Though proactive prevention is key, it must be weighed against performance. “It's a balancing act between performance and protection – against false positives and the like,” he says.
To some in the industry, as anti-virus – or traditional blacklisting – fades, it will be replaced, or at least supplemented, by whitelisting: allowing only trusted processes to run. Blacklisting may retain some value in the near term, but in the long term the more promising approach is to permit only trusted software on the user's computer.
“We feel whitelisting is necessary,” says Kaspersky's Orenberg. “There are applications that we know are bad, but there are others that we know are good. Using a whitelisting approach, we can enable the system to not use up a lot of resources scanning for bad files.”
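In its simplest form, the whitelisting approach Orenberg describes amounts to checking an executable against a list of known-good fingerprints before it is allowed to run. This is a minimal sketch, assuming a hash-based allow list; the trusted entry shown is just the SHA-256 digest of empty content, used as a verifiable placeholder:

```python
import hashlib

# Hypothetical allow list of SHA-256 digests of known-good binaries.
# (The single entry is the digest of empty content, for illustration.)
TRUSTED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def is_trusted(executable_bytes: bytes) -> bool:
    """Allow execution only if the binary's hash is on the allow list;
    anything unrecognized is blocked or handed off for scanning."""
    digest = hashlib.sha256(executable_bytes).hexdigest()
    return digest in TRUSTED_HASHES


print(is_trusted(b""))                 # True  (on the list)
print(is_trusted(b"unknown binary"))   # False (not on the list)
```

The resource saving Orenberg mentions follows from the design: a trusted binary is admitted by one hash lookup, with no signature scan needed at all.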
But in the end, the emerging technologies that survive will likely be a combination of blacklisting, whitelisting and reputation-based technologies.
“Signatures are developed only for attacks that are widespread enough to warrant the effort,” says Carey Nachenberg, vice president and fellow at Symantec. “A malicious attack focused only on your organization often operates below the radar of the anti-virus vendors.”
Orenberg adds: “We need the blend of technologies – to seal up attack vectors, and to include vulnerability management. We must identify where bad code is coming from. And more recently, vulnerabilities exist in applications – Flash, QuickTime and Adobe Reader.”
What about computing in the cloud – or virtualization? According to Kurt Roemer, chief security strategist at Citrix, for the cloud, the question becomes, “Do we even need anti-virus tools?”
He believes so. “As long as we have viruses, AV is still relevant, though the degree of relevancy could wane,” he says. “But sole reliance on the negative security model is not helping. There should be more emphasis on security associated with data as it moves up into the cloud.”
In the meantime, the traditional model still plays out for most. “Companies continue to have their corporate-managed PC, with their corporate-managed anti-virus on it,” says Roemer. “But in a virtual environment, the only things going back and forth are keystrokes, screen refreshes, etc. So it may not be necessary to look at file-level security if there are no files involved.”
Some people see the anti-virus evolution as a natural process.
“This is a technology lifecycle issue,” says Don Leatham, senior director of solutions and strategy at Scottsdale, Ariz.-based Lumension Security. “When anti-virus was introduced to the market, people said, ‘OK, when we find out about something bad, we'll block it.' Now that we're way down the maturity curve of this technology, that approach is coming to the end of its lifecycle.”
Lumension's Leatham points out that perhaps the best approach is a combination of blacklisting, whitelisting and greylisting [a greylist holds applications not yet on either the whitelist or the blacklist, which are selectively permitted to run].
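The three-list policy Leatham describes reduces to a simple decision order: known-bad is blocked, known-good runs freely, and everything else falls into the greylist for restricted handling. The sketch below is a hypothetical illustration; the application names and the "restrict" action (e.g. sandboxed execution pending review) are assumptions:

```python
# Hypothetical combined policy: blacklist, whitelist, and greylist.
BLACKLIST = {"keylogger.exe"}              # known bad
WHITELIST = {"notepad.exe", "excel.exe"}   # known good


def decide(app_name: str) -> str:
    if app_name in BLACKLIST:
        return "block"     # known bad: never runs
    if app_name in WHITELIST:
        return "allow"     # known good: runs without scanning
    return "restrict"      # greylisted: selectively permitted,
                           # e.g. run sandboxed pending review


print(decide("keylogger.exe"))  # block
print(decide("notepad.exe"))    # allow
print(decide("unknown.exe"))    # restrict
```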
“This is a battle that we cannot win, but that does not mean that we are going to lose,” he adds. “The main vulnerability is not in the PC, it's what is sitting in front of it – people being tricked into doing the wrong thing.”