At the beginning of September, I was at a small but perfectly-formed forensics conference in the UK: the 5th International Conference on Cybercrime Forensics Education & Training (CFET 2011). OK, they can be a little behind in maintaining the website, but there is invariably some good forensic content in the presentations.
I was actually there to talk about a couple of things: social networking, and a more forensics/anti-forensics-oriented talk on the use and misuse of multiscanning (the deployment of multiple AV scanner engines to inspect possibly malicious objects). (I actually put that together with Julio Canto, of the well-respected multiscanning site Virus Total, and it will be publicly available in due course, but that’s not the topic for today.)
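For readers unfamiliar with the mechanics, a multiscanning service essentially runs one sample through several independent engines and reports each verdict plus a detection ratio. Here is a minimal sketch of that idea, with toy signature matchers standing in for real engines (no real AV product or API is used, and the engine names are invented for illustration):

```python
# Toy sketch of multiscanning: run one sample through several independent
# "engines" and aggregate their verdicts. These stand-in engines are simple
# byte-pattern matchers, not real AV products.

EICAR_FRAGMENT = b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"

def engine_a(sample: bytes) -> bool:
    # Flags the EICAR test-string fragment.
    return EICAR_FRAGMENT in sample

def engine_b(sample: bytes) -> bool:
    # Flags a different pattern, so it "misses" the EICAR sample.
    # As the CFET talk noted, a miss here would NOT prove the vendor's
    # full desktop product misses it too.
    return b"MALWARE" in sample

def multiscan(sample: bytes, engines) -> dict:
    """Return each engine's verdict and a simple detection ratio."""
    verdicts = {fn.__name__: fn(sample) for fn in engines}
    detections = sum(verdicts.values())
    return {"verdicts": verdicts, "ratio": (detections, len(engines))}

result = multiscan(b"xx" + EICAR_FRAGMENT + b"xx", [engine_a, engine_b])
print(result["ratio"])  # detection ratio of 1 out of 2 engines
```

The point the talk made about such services is visible even in this toy: the ratio describes how these particular engine builds behaved on this object, not the comparative quality of the products behind them.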
The irony here is that the day after that multiscanning talk, Shadowserver announced its “new anti-virus backend test systems,” also based on the use of multiple engines, but apparently intended specifically to offer comparative analysis. This is a position Shadowserver has stopped short of adopting in the past, acknowledging that limitations in the way the AV engines were implemented compromised their usefulness for comparing performance. There are a number of reasons for this, but a particularly significant one (I’m simplifying a little here) is that the scanner versions used by Shadowserver don’t necessarily have the full range of detection capabilities that a Windows desktop version of the same scanner has. And that’s the “irony”: that’s one of the points we addressed in the CFET presentation with respect to Virus Total. Shadowserver’s announcement indicates that it has gone some way toward addressing this limitation by introducing some Windows scanning. But is it enough?
Kevin Townsend and I don’t always see eye to eye on the merits and demerits of AMTSO (the Anti-Malware Testing Standards Organization), of which I’m a director. But his article today on this topic strikes me as pretty close to the mark. (But then, since he quotes both me and Panda’s well-respected Luis Corrons at some length, I would say that, wouldn’t I?) He suggests “…for the time being at least, don’t use Shadowserver’s statistics to form an opinion on the relative merits of different AV products.”
Shadowserver gathers a great deal of very useful intelligence from “the dark side”: this too is a service with a lot of potential applications, but I have to agree that comparative detection testing is probably not one of them. Kevin believes that it will be at some point. I’m not so sure that this approach can ever be as viable as a well-implemented detection test from a professional testing organization like AV-Test. But since it’s inevitable that people will try to use it that way, I have to agree that, personally, I’d love to see Shadowserver engage in some way with AMTSO.