Many of the requests for comment on security news that I get are actually requests for quantification: how many, how long, how much impact? I have nothing against opinion pieces personally, or I wouldn’t write so many of them, but I’d like to think that they’re of some value, being based on a lot of security experience. But people expect objectivity in reporting, as Byte became all too aware recently, and while an article that was all statistical data and no interpretation would probably attract small audiences, reference to such data does suggest (if not prove) that the writer is interpreting real research, rather than making up data or simply airing unfounded prejudices.
Quality of research is a whole different thing, of course. Having been involved – in one sense or another – with anti-virus testing for many years (hence my current involvement with the Anti-Malware Testing Standards Organization) I’ve seen some pretty dubious data sets based on unconvincing methodology. I’ve even seen instances where different magazines have drawn very different conclusions from the same data sets, so quality of interpretation is also an issue. But that’s a drum I’ll no doubt be banging again, sooner or later.
Dinei Florencio and Cormac Herley of Microsoft Research are taking a slightly different tack in a paper called “Sex, Lies and Cyber-crime Surveys.” In fact, I came across it in a post by security über-guru Bruce Schneier, who has “been complaining about our reliance on self-reported statistics for cyber-crime.” The concern here is that: “Much of the information we have on cyber-crime losses is derived from surveys.” And it’s a justified concern – well, at any rate, it’s one that I share. Florencio and Herley point to a number of issues that can impair the accuracy of statistics based on surveys:
- Results based on an unrepresentative sample (hey, that’s an AV testing issue, too);
- Figures based on “unverified self-reported numbers”: I don’t think this is a concern about someone intentionally cooking figures, so much as the unreliability of “objective” data based on subjective responses;
- Evidence that extremely high estimates by a minority of respondents can skew the figures. That’s partly a function of sample size: the nearer the sample size is to the total population, the more that effect is likely to be mitigated, but most surveys don’t have respondent volumes in the billions. (According to http://www.internetworldstats.com/stats.htm – that’s not a statistic I’ve verified personally, of course, but it’s not one I have reason to doubt – there were 2,095,006,005 internet users as of March 31, 2011.)
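The outlier effect in that last point is easy to demonstrate. Here’s a minimal sketch with entirely made-up numbers (not drawn from any real survey): a thousand hypothetical respondents, of whom one reports a wildly high loss figure. A single extreme self-reported number drags the mean far away from what the typical respondent reported, while the median barely notices.

```python
import statistics

# Hypothetical, illustrative data: 999 respondents each report $100
# in losses; one respondent reports $10,000,000.
losses = [100] * 999 + [10_000_000]

mean_loss = statistics.mean(losses)      # pulled far upward by the single outlier
median_loss = statistics.median(losses)  # robust to the outlier

print(mean_loss)    # 10099.9 -- about 100x the typical response
print(median_loss)  # 100.0
```

A survey that reports only the mean (or a total extrapolated from it) is, in effect, reporting that one respondent’s answer.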
“Ah,” I hear you say, “but didn’t you just put up an article here about a survey commissioned by ESET Ireland?” (You probably didn’t say anything of the sort, but I won’t let that get in the way of a rhetorical device.)
Indeed I did. And I even said that “66 percent of users always reacted appropriately to warnings from their AV product.” Somewhat sloppily expressed and, I’d like to think, an uncharacteristic slip for someone whose earliest reading matter included How to Lie with Statistics, and who spent many years working in medical informatics. Indeed, I’ve just awarded myself a slap on the wrist for this silliness.
Sixty-six percent of a thousand respondents is obviously not the same as 66 percent of all AV users in Ireland, let alone all computer users, or internet users, or whatever. There’s no way I’d try to extrapolate from such a small sample (n=1,000) if I were talking about crime figures, infection statistics and so on. In fact, ESET doesn’t give out absolute numbers as infection statistics for exactly that reason, as a matter of policy.
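It’s worth being clear about what the textbook statistics do and don’t promise here. A minimal sketch using the standard 95 percent margin-of-error formula for a proportion (nothing here is taken from the survey itself beyond the headline 66 percent and n=1,000): the formula yields a deceptively tight interval, but it is only valid under the idealized assumption of a simple random sample from the population of interest. An unrepresentative, self-selected panel of self-reporters – the very concerns listed above – invalidates that assumption, and no formula rescues it.

```python
import math

# Hypothetical calculation: the textbook 95% margin of error for an
# observed proportion, assuming (crucially) a simple random sample.
p, n = 0.66, 1000
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(round(margin * 100, 1))  # 2.9 -- i.e. roughly +/- 2.9 percentage points
```

In other words, the arithmetic would happily bless an extrapolation to two billion internet users; what it cannot certify is that the thousand people surveyed resemble those two billion, or that their self-reports are accurate.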
The difference is that in this case, the exact proportions don’t really matter. What does matter is the message: quite a lot of people (according to this survey) claim to behave “appropriately,” but a significant number (in a non-statistical sense) appear to be prepared to indulge in risky behavior in spite of warnings from their security software. Surveys tell us a lot about attitudes, if they’re well-designed, but they don’t usually generate universally authoritative statistics in the context of populations this large.