Cybersecurity teams spend the majority of their time trying to figure out exactly what is happening on their networks. They would be better served by focusing on how often things happen on their networks.

That's not to discount the value of being able to point at a system and figure out exactly “what is going on.” But an emerging best practice, frequency analysis, is rooted in a different question: how often do things happen?

Frequency analysis involves an automated network assessment to determine which user processes are run by the many and which are run by the few. Why is this important? Because adversaries are more likely to compromise an enterprise and explore it quietly, limiting themselves to a select few internal machines.
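
The first step is simply building a per-host inventory of what is running. As a minimal sketch, here is what a collection agent might gather on a single machine, assuming the third-party psutil library; the enterprise-wide transport and central storage are left out:

```python
# Illustrative sketch: inventory the processes running on one host.
# Assumes the psutil library; in a real deployment this would run as
# an agent and ship its results to a central collector.
import socket
import psutil

def collect_process_inventory() -> dict:
    """Return this host's name and the set of process names running on it."""
    names = set()
    for proc in psutil.process_iter(attrs=["name"]):
        try:
            names.add(proc.info["name"])
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and inspection
    return {"host": socket.gethostname(), "processes": sorted(names)}

if __name__ == "__main__":
    print(collect_process_inventory())
```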

Sure, there will be massive assaults like the Code Red worm of days past. But, let's face it: if you're hit by something that big, you've probably already read about it in the news, and can immediately evaluate whether you are, indeed, among its victims.

Frequency analysis attempts to hunt down the attackers who prefer a subtle approach. It comes down to simple numbers: you run an automated scan of processes and software installations, among other things, and find out which ones are present on only a very small percentage of computers, maybe three, five, or 10 percent. Then, you ask yourself how many legitimate processes exist within the network on such a small scale. Aren't most programs deployed enterprise-wide, or at least department-wide? Yes, most are (though some exceptions obviously exist). It is therefore more likely that the small-scale programs were introduced by a threat, and they demand a closer look.
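
To make the arithmetic concrete, here is a small, self-contained Python sketch of the prevalence calculation: given per-host inventories like the ones collected above, it flags anything running on fewer than a chosen fraction of machines. The threshold and the toy fleet data are illustrative assumptions, not prescriptions:

```python
# Illustrative sketch: flag processes present on only a small fraction
# of hosts. The threshold and sample data are assumptions.
from collections import Counter

def rare_processes(inventories: dict[str, set[str]], threshold: float = 0.05):
    """Return {process: prevalence} for processes seen on fewer than
    `threshold` (as a fraction) of all hosts in `inventories`."""
    host_count = len(inventories)
    seen_on = Counter()
    for processes in inventories.values():
        seen_on.update(processes)  # count each process once per host
    return {
        proc: count / host_count
        for proc, count in seen_on.items()
        if count / host_count < threshold
    }

if __name__ == "__main__":
    # Toy fleet of four hosts: "implant.exe" runs on only one of them
    # (25%), so this tiny sample needs a higher threshold to flag it.
    fleet = {
        "host-a": {"explorer.exe", "outlook.exe", "implant.exe"},
        "host-b": {"explorer.exe", "outlook.exe"},
        "host-c": {"explorer.exe", "outlook.exe"},
        "host-d": {"explorer.exe", "outlook.exe"},
    }
    print(rare_processes(fleet, threshold=0.30))  # {'implant.exe': 0.25}
```

At enterprise scale the same calculation runs over thousands of hosts, and a three-to-ten-percent cutoff yields a short list of outliers for analysts to triage rather than a fleet of machines to inspect one by one.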

This practice promises to save security teams vast amounts of time. Currently, they spend too many hours checking computers individually, chasing log files and anti-virus warnings. Think about it: if there are 5,000 employees and users, is any business honestly going to hire hundreds of IT pros to ensure every machine is clean once a week? That would be an absurd allocation of the tech budget: a huge price to pay for network security when the investment could further IT innovation instead.

We realize traditional tools aren't up to the job anymore; we've gone through the same routine over and over. You run one anti-virus product on your network, and you use websites that scan potentially malicious files against 50 different anti-virus engines to tell you whether anything is suspicious within your systems. But the bad guys use these services too, so they craft a file that none of the engines has flagged. The file is introduced into the network, where it steals data, shuts down operations and otherwise inflicts damage, and the anti-virus companies play “catch up”: they produce a matching signature and declare that their customers are “cured” of the new malware. Then the bad guys come up with something else, rinse and repeat, in an endless arms race.
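
For reference, the multi-engine lookup that routine relies on is typically a hash query against a service such as VirusTotal. The sketch below assumes the VirusTotal v3 file-report endpoint and uses a placeholder API key; the response field names reflect the v3 format as documented, but treat them as assumptions to verify:

```python
# Illustrative sketch: check a file's hash against a multi-engine
# scanning service (VirusTotal v3 assumed; the API key is a placeholder).
import hashlib
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder, not a real key

def sha256_of(path: str) -> str:
    """Hash the file so we can query by digest instead of uploading it."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lookup(path: str) -> dict:
    """Fetch the per-engine verdict counts for a file, if the service knows it."""
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256_of(path)}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. {'malicious': 0, 'suspicious': 0, 'undetected': 60, ...}
    return resp.json()["data"]["attributes"]["last_analysis_stats"]
```

The catch, of course, is the one described above: a clean result only means no engine has seen the sample yet, which is precisely the gap frequency analysis is meant to close.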