This is a high-level (decidedly non-technical) summary of the threat landscape with some statistics from non-specialists – such as the BCS Institute and the Ponemon Institute – as well as high-profile security companies. ESET tends to avoid extrapolated data like this, as it can be so easily misinterpreted. It's true that in the United States, enterprises are often required to supply data relating to breaches, but accurate evaluation of such data remains problematic. Still, it's hard to disagree with the premise that cyber crime is an issue that needs to be addressed. And I suppose government has to try to quantify that before it commits resources to attempting to remediate it.
I haven't noticed any truly horrendous errors, but I have no idea why they would say something like: “Thus the elements of a piece of malware may legitimately be used as software as long as there is no malicious intent.” For example, legitimate software using compressors and obfuscators that are normally used by malware gangs may not have malicious intent, but that doesn't mean they can't cause problems. It seems to me that the report has stumbled across a relatively minor issue and commented on it without realizing the complexities behind it, probably because the issue concerned one of the groups or individuals that provided it with information. That doesn't really matter as long as it doesn't suddenly become the basis for some form of hard policy/guidelines somewhere down the line.
There are good things in this report. For instance, the continuing endorsement of the principle that the government should “enhance the ability of the public to report cyber crime.” That's a bigger deal than it sounds. Historically, governments and law-enforcement agencies have often made it harder to report crime than it needs to be, and one has to suspect that this is (intentionally or otherwise) a filtering mechanism to lessen the risk of being overwhelmed. The question is, will there be resources allocated to verifying and evaluating the likely higher volume of reports? If not, the likelihood is that sheer glut will compromise the ability of agencies to respond quickly and where they are most needed.
Then there's the recommendation for security companies to work with government “to find a way to use the development of a cyber hub to facilitate the detection of malware.” Frankly, I think the already-recognized need among security companies to share high-priority samples and information as quickly as possible in the interests of the wider community would sit uneasily with the traditional governmental approach of glacial development and bureaucratic over-caution (financial and strategic). (I'm talking here and elsewhere in this summary about governments in general, not the U.K. government in particular.) But a government that could really work with the security industry rather than try to direct it could make a real difference in the reduction of cyber crime. And while many of my peers in the security industry won't agree that there's any mileage in educating the end-user, I seem to see some recognition here that helping the public to take some of the responsibility for their own safety by giving them accurate but easily-assimilated information is a remedial measure worth allocating some funds and real effort to. And that strikes a very real chord with me.
While Richard Clayton's dismissive comments in the commentary submitted to the Select Committee are inaccurate in detail – if AV were really always updated “too late to protect anyone” I wouldn't be wasting my time working in this sector of the industry – they are correct in principle that “computer users should learn how to be safe online rather than become reliant on anti-virus software to protect them from cyber crimes.” And Clayton's remarks on the gap between government and commerce, as quoted by Kevin Townsend, make a very good point.
In short, the report is worth the considerable time it takes to download and read, even if you're not a U.K. citizen – but don't assume that it's been scanned in from tablets of stone.