Even if you haven't followed my incessant stream of articles on AMTSO and testing, you're probably aware that there's a certain amount of tension between testers and vendors from time to time, even in the rarefied atmosphere of an AMTSO workshop. Some of that tension comes from the fact that the roles of tester and AV vendor are not as distinct as is sometimes assumed. What do you need to know to test anti-malware applications competently?
- How anti-malware really works
- How malware works
- Some other stuff, like statistics and the mechanics of testing that I'm not going to go into in a short article.
A good tester probably knows more of the detail about ESET's product range (the interfaces, at least) than I do, since I'm several steps removed from the development of the products. And good test methodology depends on comprehensive knowledge of what malware does and how it does it.
At the same time, vendors know (or should know) a fair bit about testing. Some of the same considerations that apply in external product testing also apply in development and quality assurance, and, of course, vendors are interested in evaluating the relative performance of their own and competing products. Such internal testing isn't generally made public, and most people would recognize that reports based on it cannot be free of bias. I'm not talking about deliberate falsification of data, but the methodology (and, in a detection test, the choice of samples) is inevitably going to play to the strengths of the home team: a vendor is probably not going to design a test that covers functionality it never intended to include in the product.
Vendor-sponsored tests are a mixed bag: sometimes we see "apples and oranges" tests that seem intended to prove the superiority of a given product or technology, such as a recent report that tested both anti-malware and whitelisting products against attacks that were based on a certain exploit class rather than malware.
“The roles of tester and AV vendor are not as distinct as is sometimes assumed.”
- David Harley, ESET senior research fellow
Inevitably, perceived flaws in methodology can lead to suspicion that the design and choice of samples are over-influenced by a sponsor. There are, of course, testers who build independence of execution and methodological transparency into their comparative testing, even if they don't always get due credit for it. The major certification testers are in a state of ongoing negotiation with the vendors who are their customers, trying to strike a balance between maintaining their own independent vision of good testing and those vendors' business needs. That's not a bad thing: such checks and balances help to keep everyone honest.
Consider, for example, all those tests that indicate that free AV is as good as the for-fee versions. In the abstract, this is often true. After all, free versions often use the same core AV engine as for-fee products. However, once you factor in other aspects of security product functionality, such as availability of support and multi-layering of detection and blocking technologies, it's clear that this approach, rather than being the "whole-product testing" that AMTSO advocates, is another instance of apples versus oranges. But what is a sponsored test?
Here's a surprise: It's not always a report with the sponsor's name somewhere in the credits. Of course, magazine comparative reviews are not always immune to editorializing influenced by which companies contribute a major part of the advertising revenue stream: One particularly grim recent example was a report where even the top contenders were slighted by a "good, but not as good as the Editor's Choice" faint-praise gambit. Even tests carried out by reputable testers on behalf of third-party publishers are only as good as the agreement between publisher and tester allows them to be. At best, this is likely to involve compromise based on financial considerations, and the publisher, as customer, usually has the whip hand.
In some cases, however, affiliations are less obvious. Would you expect to find an independent testing organization run pseudonymously by someone also working for a major AV company? What about an independent testing organization covertly hosted by a company that distributed a certain product? It's the word "covert" that matters here. There are many honorable instances of information resources, at least one with a significant presence in product testing, that are not only open about their association with a particular vendor, but whose independence is nevertheless generally unquestioned, even by the career paranoids of the security research community. However, there are always going to be doubts when a testing organization isn't open about its affiliations (or, come to that, its methodology or its vendor-derived income), even if its tests are really as good as the tester claims.