For those of us in the anti-malware testing industry, this last month has seen a lot of very interesting discussion.
To sum it up simply, there have been a lot of technology changes in the last five or so years due to a sea-change in the motivation of malware authoring, and testers have been changing their methods as well.
Existing testing labs undertook methodology changes based on their own strengths and the need they perceived, and a number of new testers have cropped up to address a need they perceived in the industry.
But depending on the expertise of those in the labs, and their understanding of how security software is intended to function, some testing changes were more useful than others.
At its best, testing helps not just users but vendors as well, as it points out strengths and weaknesses that vendors can then address in their products. Ideally, this is how tests should work — causing minimal risk to the general public during the testing, giving users an accurate picture of how the product will perform, and helping vendors to improve their products.
Without being controversial, I will say I see it as incredibly important that there be some guidance for those who are creating new tests. There are a lot of organizations getting into testing that don't understand the special needs of security software.
One does not put anti-malware through the same sorts of paces as a toaster or a word-processor. It is seldom useful to know the history of the egg-beater when one is evaluating its effectiveness.
Conversely, if you don't know how to safely isolate a network, you can infect your virtual neighbors with those live viruses you're using or — heaven forbid — have created. And if you don't know what an anti-malware program is intended to do, while your test may be completely scientifically valid, your results will tell users nothing about what their day-to-day experience with a product will be.
Those with experience will understand how to create a scientifically valid and meaningful test, but not every group creating a test has the luxury of employing an experienced anti-malware expert.
Publishing tests that are flawed in either area — scientific validity or real-world relevance — can lead to entirely sensationalistic results, and it makes not only the vendors but the whole testing industry look incompetent as the dispute becomes a public feud. The new testers feel unfairly punished because they didn't know better, while the vendors feel unfairly punished because of the flawed testing methodology.
Whatever else AMTSO has done, they have begun publishing papers that allow new testers to “know better”. What they then do with that information is up to them, as AMTSO is in the business of education, not enforcement.
The “watchers” are still essentially under the honor system, but it is certainly a step in the right direction.