
Missing the big picture in the Sony hack

Are we missing the big picture, again, in the fervor around the Sony hack?

It makes a great scene; you can picture it in a movie: Employees showing up on Monday unable to log in or begin their day because of the flashing skull on their monitors. Frenzied white-collar workers rushing to secure pen and paper and lining up to use the fax machine. An escalation of damage as a puzzlingly worded extortion threat manifests first as leaked sensitive internal information, then deleted and lost data, and finally the loss of the crown jewels themselves: DVD-quality rips of yet-to-be-released movies.

Despite the initial novelty, however, the industry and pundits reacted in a depressingly familiar fashion: victim shaming, an over-focus on malware, and an overall failure to consider the attack process and the attackers' goals rather than the individual events of the attack. And we've failed to reflect on why this keeps happening.

After the last few years of ever-larger data breaches, it is disappointing that victim shaming is still so prevalent. By now, we should realize that this could happen to any of us, and that immediately blaming the target for insufficient security benefits no one. Yet, as an industry and a society, we continue to blame the victim. Maybe it makes it easier to shrug off the risks we face with the standard dodge: “That could never happen to us...”

Meanwhile, the FBI is rushing out flash notices warning U.S. companies of new malware that is more damaging than what we've typically seen before. It isn't as if wiping Master Boot Records (MBRs) or overwriting data for secure deletion is cutting-edge technology.
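To underline how un-exotic that is, here is a minimal, hypothetical Python sketch (illustrative only, not code from any actual malware) that reads a disk's first 512-byte sector and checks the MBR boot signature; a destructive wiper merely has to overwrite those same bytes.

```python
# Minimal, hypothetical sketch (illustrative only): the MBR is simply the
# first 512 bytes of a disk. Reading it and checking the 0x55AA boot
# signature takes a few lines; a wiper only has to overwrite those bytes.
import sys

SECTOR_SIZE = 512  # the MBR occupies the disk's first sector

def read_mbr(device_path: str) -> bytes:
    """Read the first sector of a block device (e.g., /dev/sda; needs root)."""
    with open(device_path, "rb") as disk:
        return disk.read(SECTOR_SIZE)

def has_boot_signature(sector: bytes) -> bool:
    """A valid MBR ends with the two-byte boot signature 0x55 0xAA."""
    return len(sector) == SECTOR_SIZE and sector[510:512] == b"\x55\xaa"

if __name__ == "__main__":
    mbr = read_mbr(sys.argv[1])
    print("boot signature intact" if has_boot_signature(mbr)
          else "MBR missing or damaged")
```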

It is unlikely that adding yet one more signature to our vast list of malicious executables will make a difference. These attackers clearly spent time in the network: they penetrated, spread, stole data, exfiltrated, and infected machines along the way (perhaps even deploying the malware solely to inflict damage rather than as the means of penetration). They probably made sure that whatever damage they planned to inflict wouldn't immediately be flagged and prevented by the resident anti-virus systems. Yet, as an industry, we continue to focus only on the malware itself, the mere technical artifact of the attack.
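For context, signature matching at its simplest is just a hash lookup; the sketch below (with a placeholder hash, since real feeds are vastly larger) shows the idea. Recompile or repack the binary and the hash, and thus the "signature," changes. An attacker who has spent weeks inside a network can test a payload against exactly this kind of check before detonating it.

```python
# Illustrative sketch of hash-based signature matching (the hash below is a
# placeholder, not a real indicator): change one byte of the binary and the
# hash changes, so every new variant needs yet another signature.
import hashlib

# Hypothetical "known bad" feed; real signature databases are far larger.
KNOWN_BAD_SHA256 = {
    "0123456789abcdef" * 4,  # placeholder hash for illustration
}

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path: str) -> bool:
    return sha256_of_file(path) in KNOWN_BAD_SHA256
```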

Did malware identify that employee payment information, severance packages, and even planned terminations were juicy and newsworthy when leaked? Did malware promise that more would be leaked, and ask reporters to request anything of interest? Did malware post Sony's movies to file-sharing servers?

Of course not; the attacker(s) did. In this case, their goal appears to have been to inflict much more direct and immediate damage than is typical of the more common, financially motivated hacks.

So, should we rush out signatures for this latest version of malware, or should we take a step back and figure out how to focus our technology and security operations around identifying attackers that are active in our systems – before they wreak such havoc?

We've seen this movie before.

It is time to realize that the problem is our over-focus on malware and specific technical techniques, and our over-reliance on an impermeable perimeter. The best practice in security has been to attempt to prevent 100 percent of attacks by stopping them from “getting in,” despite the fact that the concept of “in” is increasingly fuzzy. Recently, prevention has been supplemented with the idea of detection, but largely by shoehorning in solutions not actually designed to detect attackers. We've attempted to build intelligence into log aggregation or network monitoring solutions (e.g., SIEM, network behavior anomaly detection), but this has just resulted in a deluge of alerts. Sure, a real incident, or evidence of one, may be buried in the pile, but staffing security operations sufficiently to find the needle in a haystack (or, more aptly, the correct needle in a stack of needles) is beyond the financial reach of most firms. And when it turns out that an alert was missed, that can prove even more embarrassing. Given that the vast majority of attacks are reported by external parties (or, in this case, discovered because the detonation was so damaging and obvious), it is clear that our current detection attempts are insufficient as well.
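A quick back-of-the-envelope calculation makes the "stack of needles" point concrete. All numbers below are assumptions chosen for illustration, not figures from this incident: even a detector with a modest 1 percent false-positive rate buries the handful of real alerts under thousands of spurious ones.

```python
# Back-of-the-envelope base-rate arithmetic (all numbers are assumptions,
# not data from the article): why even a "good" detector buries a SOC.
events_per_day = 1_000_000   # events evaluated daily
attack_events = 10           # events actually tied to an intrusion
false_positive_rate = 0.01   # detector flags 1% of benign events
true_positive_rate = 0.90    # detector catches 90% of attack events

false_alerts = (events_per_day - attack_events) * false_positive_rate
true_alerts = attack_events * true_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts per day: {false_alerts + true_alerts:,.0f}")   # ~10,009
print(f"fraction that matter: {precision:.2%}")               # ~0.09%
```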

So, what can an organization do?

A large organization with a huge security budget can try to throw bodies at the problem: find, train, hire, and retain the security ops folks who can pore through the flood of alerts and sift the relevant from the vast noisy mess produced by current detection tools. That works, barely, for some of the largest financial institutions. But it is expensive and, given the difficulty of finding and retaining such personnel, is best augmented with as much technology and automation as possible.

It's time for a new take.

We think it is time for a more intelligent solution that automatically profiles activity on the internal network, integrates host visibility with cloud expert systems, and provides security ops with a manageable set of true positive alerts. Let's recognize that 100 percent prevention is an impossible goal. Instead, let's focus some effort on effective detection, and make it as automated as possible. Let's reduce the amount of time attackers are in the network (the “breach detection gap”), and try to catch them before damage is done.
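As a flavor of what automated profiling might look like, consider flagging hosts whose outbound data volume deviates sharply from their own historical baseline, one crude signal of staging and exfiltration. The sketch below is a minimal illustration under assumed data and thresholds, not any vendor's actual product.

```python
# Minimal sketch of per-host baseline profiling (hypothetical data and
# thresholds, not a real product): flag hosts whose outbound bytes today
# deviate sharply from their own history, one crude exfiltration signal.
from statistics import mean, stdev

def exfil_alerts(history: dict[str, list[float]],
                 today: dict[str, float],
                 z_threshold: float = 3.0) -> list[str]:
    """Return hosts whose outbound volume today is a z-score outlier."""
    flagged = []
    for host, past_days in history.items():
        mu, sigma = mean(past_days), stdev(past_days)
        if sigma > 0 and (today.get(host, 0.0) - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

# Hypothetical usage: ~a month of outbound MB per host, then one spike.
history = {"hr-laptop-07": [120, 95, 110, 130] * 8}
today = {"hr-laptop-07": 25_000}          # sudden 25 GB outbound
print(exfil_alerts(history, today))       # ['hr-laptop-07']
```

A profile like this produces one alert for one genuinely anomalous host, rather than an undifferentiated flood, which is the difference the paragraph above is after.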

It's probably also a good idea to make sure your insurance policies cover a hack. After all, while we try to prevent fires and install smoke alarms and sprinklers to detect and respond to them, we all still carry fire insurance. It is a well-established risk-mitigation practice to share (or transfer) risk, not solely to attempt to reduce or absorb it. In fact, in the short term, that is probably the smartest thing you can do.
