The fallacy of targeted attacks
It's time to admit that the bad guys can always make a first move, says Damballa's Manos Antonakakis.

Over the last few months, I've had the opportunity to visit, meet and hold extensive discussions with a variety of people across the security landscape. In the process, I've realized there is a great deal of misconception around the detection, or even prevention, of targeted attacks and advanced threats.

First of all, prevention is simply not possible. If that problem were solvable, academia and industry would have solved it many years ago, and other types of threat detection companies simply wouldn't exist. The attacker has the first move, so the infection vector is bounded only by the attacker's skill set. Now, let's try to demystify the problems of detecting advanced and targeted threats, and discuss to what extent we can rely on sandboxing-based technologies to solve those problems for our organizations.

By definition, an advanced threat will try to evade the most basic traditional defenses. In Computer Security 101, you learn that when you want to understand what malicious software does, you run it in a sandbox. The second thing you learn is that dynamic analysis of arbitrary code is an undecidable problem, and that it does not scale. It is undecidable due to some very fundamental computer science problems. Without delving too deep, it suffices to state that even when you have the malware in hand, you cannot automatically tell everything it is capable of doing. Thus, using sandboxing for the detection of potential C&C communication is at best unreliable.
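The blind spot is easy to illustrate: a dynamic trace only covers the paths the code actually takes during the run, not everything it could do. The toy below (a harmless "logic bomb" sketch; the trigger date and hostname are hypothetical names chosen for illustration, not any real sample's behavior) stays dormant unless a condition the sandbox run will likely never satisfy is met:

```python
# Toy illustration, NOT real malware: the harmful path only executes under
# a condition a sandbox run may never satisfy, so the dynamic trace looks
# entirely benign. TRIGGER_DATE and TARGET_HOSTNAME are made-up examples.
import datetime
import socket

TRIGGER_DATE = datetime.date(2030, 1, 1)    # dormant until this date...
TARGET_HOSTNAME = "finance-db.internal"     # ...or until it lands on this host


def payload():
    return "malicious behaviour would happen here"


def run():
    today = datetime.date.today()
    if today >= TRIGGER_DATE or socket.gethostname() == TARGET_HOSTNAME:
        return payload()                    # never reached during the analysis run
    return "benign-looking activity"        # this is all the sandbox observes
```

Executing `run()` in an analysis environment today records only the benign branch; no amount of observation of that run reveals what `payload()` would do.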


On the other hand, sandboxing does not scale, for several reasons: you do not always have access to the binary (due to encryption, packers, etc.); you do not know a priori how long you must execute the malware, so if the malware sleeps, there is nothing you can do; and malware authors have developed sophisticated techniques that detect sandboxing environments based on their network and system properties.
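The environment checks alluded to above are typically cheap heuristics. The sketch below shows two illustrative ones (a suspiciously small CPU count and a virtualisation-vendor MAC prefix); these particular checks are assumptions chosen for demonstration, not a catalogue of any real family's tradecraft:

```python
# Hedged sketch of sandbox-detection heuristics: stay dormant when the
# machine looks like an analysis VM. The specific checks are illustrative.
import os
import uuid

# Well-known virtualisation OUIs (VMware, VirtualBox) as an example signal.
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "08:00:27")


def looks_like_sandbox():
    suspicious = []
    # Analysis VMs are often provisioned with a single CPU core.
    cpus = os.cpu_count()
    suspicious.append(cpus is not None and cpus < 2)
    # Render the primary interface's MAC and compare against VM vendor OUIs.
    mac = uuid.getnode()
    mac_str = ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8))
    suspicious.append(mac_str.startswith(VM_MAC_PREFIXES))
    return any(suspicious)


def run():
    if looks_like_sandbox():
        return "idle"          # dormant: the analyst's report shows nothing
    return "second stage"      # only proceed on a plausible victim machine
```

Real samples layer many more signals (uptime, user activity, registry artifacts) and combine them with long sleeps that outlast the analysis window, which is precisely why a fixed-duration sandbox run is unreliable.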

But enough about advanced and sophisticated threats. Sandbox-based detection fails; we know that objectively and a priori.

Now, let's examine targeted threats, and try to put ourselves in the shoes of an attacker. It is quite reasonable to assume that the adversary will spend a significant amount of time crafting the attack vector. It is also quite reasonable to assume that the adversary will not simply push the targeted malware in plain text, where sandboxing companies could trivially grab it.

The simplest way to achieve this is to sacrifice commodity RAT-like malware before the actual targeted malware drops. From the adversary's standpoint, if you drop the first-stage RAT-like malware and it runs in a sandbox, why would you proceed to use that foothold to deploy your targeted threat? We really have to assume that the adversary is at least that smart; otherwise we are not chasing targeted malware, but rather novice attackers who create malware in their free time from "open-source" malware kits.
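Seen from the operator's side, the staging argument above is just a gating decision: promote a beachhead to the real implant only if the sacrificial first stage shows evidence of a genuine victim. The criteria below (dwell time, interactive logins) are hypothetical, chosen purely to make the logic concrete:

```python
# Sketch of the adversary's staging decision described in the text.
# A first-stage commodity RAT that dies in minutes, or never observes a
# real user, looks like an analysis sandbox; burning the targeted payload
# there would hand it to defenders. Criteria here are illustrative only.
from dataclasses import dataclass


@dataclass
class Beachhead:
    dwell_hours: float        # how long the first stage has survived
    interactive_logins: int   # evidence of a real user at the keyboard


def promote_to_targeted_implant(b: Beachhead) -> bool:
    # Only deploy the expensive targeted stage on a plausible victim.
    return b.dwell_hours > 24 and b.interactive_logins > 0
```

Under this logic the sandbox only ever sees the disposable first stage, which is exactly why catching commodity RATs tells defenders little about the targeted payload behind them.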

Realistically, can sandbox-based detection technologies help us defend against advanced malware or targeted threats? Going after the malware as a "detection" trigger is a battle we will never win; we will always be behind the threat.

By definition, sandbox-based analysis tools are available to any adversary planning an attack, since they are commercial products well within reach of the class of adversary mounting an APT-class assault. Simple malware may well be caught by sandboxes, and they are no doubt useful for that. But in the case of an APT, the malware authors test their attacks against those very tools before releasing them. Thus, it becomes tremendously difficult to detect, classify, and attribute APT threats via sandbox-based methods.