Trust today's AV test results?

Michael Kalinichenko

Many organizations still rely on published comparative anti-virus test results to guide their product selection. But do these test results accurately reflect the real-world capabilities of the products? In my view, there's far too much room for different interpretations and outright errors.

All these tests still center on a single detection methodology: viruses from the WildList collection. To score 100 percent, an anti-virus product has to detect every virus on the list while triggering minimal false positives on “clean” files. But this approach has two big problems: first, the WildList includes only viruses and worms that run under Windows; second, the WildList collection is critically small, and new malware is added to it only slowly. Tests built on WildList collections therefore bear little relationship to the real world.
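To make that scoring concrete, here is a minimal Python sketch of how a WildList-style test tallies results; the scan() function is a hypothetical stand-in for the product under test, not any real product's interface.

    def score_product(scan, wildlist_samples, clean_files):
        """Score a product the way a WildList-style test does:
        detection rate on the malware set, false-positive rate on
        the clean set. `scan(path)` returns True when the product
        flags a file as malicious (hypothetical stand-in)."""
        detected = sum(1 for path in wildlist_samples if scan(path))
        false_positives = sum(1 for path in clean_files if scan(path))
        return {
            "detection_rate": detected / len(wildlist_samples),
            "false_positive_rate": false_positives / len(clean_files),
        }

A perfect score means a detection_rate of 1.0 and a false_positive_rate at or near zero; nothing in that arithmetic says anything about how the product copes with threats that never make it onto the list.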

Other testing methodologies measure the quality of detection across different kinds of malware and the speed of vendors' reactions to new outbreaks. The malware collections used in these tests are much larger than the WildList, with more than one million samples. While such collections better reflect the real world in terms of samples, the test methodology is woefully outdated: it measures detection by running on-demand scans of samples already sitting on the hard disk. This stands in dramatic contrast with reality, where the overwhelming majority of infections arrive via the internet.

These tests really only examine the capabilities of traditional signature-based detection and some aspects of heuristic detection, with no regard for zero-day threats and other real-world infection vectors. Technologies such as behavioral analysis, host intrusion prevention systems, whitelisting, sandboxing, intrusion detection, and Squid-based filtering of HTTP traffic are barely even considered.

Effective protection today requires multiple methodologies, and testing organizations must give users a way to compare products across all of them.
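As a rough illustration of what a multi-methodology comparison could look like, the Python sketch below combines per-methodology scores into a single ranking; the methodology names, scores, and weights are invented for the example and are not drawn from any existing test.

    def compare_products(results, weights):
        """Combine per-methodology scores (each 0.0-1.0) into one
        weighted score per product. `results` maps product name to
        {methodology: score}; `weights` maps methodology to weight.
        Both structures are hypothetical."""
        total_weight = sum(weights.values())
        ranked = []
        for product, scores in results.items():
            total = sum(weights[m] * scores.get(m, 0.0) for m in weights)
            ranked.append((product, total / total_weight))
        return sorted(ranked, key=lambda item: item[1], reverse=True)

    # Example weighting in which on-demand scanning is only one
    # dimension among several (values chosen purely for illustration).
    weights = {"on_demand": 0.2, "behavioral": 0.3,
               "web_filtering": 0.3, "zero_day": 0.2}

A real comparison would need agreed definitions for each methodology and transparent weightings, but the underlying point is the same: a single on-demand detection percentage cannot tell the whole story.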
