Breach Detection Systems (BDSs) identify patterns of events in order to detect network compromises. Event streams include network activity, host activity, and the analysis of various artifacts that are observed in the network.
One of the goals of BDSs is to provide automated detection with minimal false positives, because excessive false positives cause "alert fatigue" in incident responders. This means that the sensitivity threshold of a BDS must be set so that an alert is generated only when a substantial amount of supporting evidence has been gathered.
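The evidence-gating idea can be sketched in a few lines. This is a hypothetical illustration, not a real BDS API: the event types, the `should_alert` helper, and the threshold of three independent signals are all invented for the example.

```python
# Minimal sketch (hypothetical): suppress an alert until enough
# independent pieces of supporting evidence accumulate for a host.
ALERT_THRESHOLD = 3  # assumed tuning knob: distinct evidence types required


def should_alert(evidence_events, threshold=ALERT_THRESHOLD):
    """Alert only when the number of distinct evidence types meets the threshold."""
    distinct_types = {e["type"] for e in evidence_events}
    return len(distinct_types) >= threshold


events = [
    {"type": "c2_beacon", "host": "ws-17"},
    {"type": "new_admin_tool", "host": "ws-17"},
]
print(should_alert(events))  # two evidence types: below threshold, no alert

events.append({"type": "lateral_movement", "host": "ws-17"})
print(should_alert(events))  # a third independent signal crosses the threshold
```

Raising the threshold trades detection latency for fewer spurious alerts, which is exactly the tension described above.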
However, these systems traditionally operate by modeling what's bad and detecting instances of behavior that match these models. An alternative approach builds models of what's good and flags everything that does not conform to them. The advantage of this approach, called anomaly detection, is that it can detect previously unknown attacks, and, for this reason, it has been studied for more than thirty years.
An anomaly detection system produces a series of observations that are either anomalous per se (according to a pre-established model) or anomalous when put in the context of the historical behavior of a network or a user. For example, a packet with an invalid protocol field is anomalous per se, while a workstation that suddenly uploads gigabytes of data overnight is anomalous only relative to its own history.
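The two kinds of observation can be illustrated with a toy detector. Everything here is an assumption made for the sketch: the event fields, the daily-upload baseline, and the z-score cutoff are invented, and real systems use far richer models.

```python
import statistics


def anomalous_per_se(event):
    """Violates a fixed, pre-established model: here, an impossible port number."""
    return not (0 <= event["dst_port"] <= 65535)


def anomalous_in_context(value, history, z_cutoff=3.0):
    """Deviates from a host's own historical baseline (simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(value - mean) / stdev > z_cutoff


# Hypothetical baseline: a workstation's daily upload volume in MB.
daily_upload_mb = [12, 9, 15, 11, 10, 13, 8]

print(anomalous_per_se({"dst_port": 70000}))            # True: invalid per se
print(anomalous_in_context(900, daily_upload_mb))       # True: far outside baseline
print(anomalous_in_context(14, daily_upload_mb))        # False: within normal range
```

The first check needs no history at all; the second is meaningless without it, which is why contextual detectors must observe a network before they can judge it.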
Anomaly detection works under the assumption that malicious activity results in anomalies in some event stream and, conversely, that anomalies in an event stream are caused by malicious activity. Unfortunately, in the real world, both assumptions are sometimes incorrect, and anomaly detection has been plagued by both false negatives (because malicious activity does not always generate anomalies) and false positives (because benign activity is sometimes anomalous).
Therefore, one cannot take anomalies at face value. Instead, it is important to "ground" the anomaly detection analysis with the detection of compromised hosts.
The events associated with a confirmed compromise can then be compared with related anomalies in order to create patterns of anomalous behavior that are associated with the tools and techniques used by the attackers. Once this anomalous pattern has been established, it is possible to look for similar anomalous patterns across the network, to identify likely compromised hosts that are part of a large-scale breach.
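The pattern-matching step described above can be sketched as a similarity search. This is only an illustration of the idea: the anomaly labels, host names, and the choice of Jaccard similarity as the matching metric are all assumptions made for the example.

```python
# Hypothetical sketch of "magnification": take the anomaly types observed on a
# confirmed-compromised host and rank other hosts by how closely their own
# anomalies match that pattern.


def jaccard(a, b):
    """Set overlap in [0, 1]: 1.0 means identical anomaly patterns."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0


# Pattern extracted from a host confirmed to be compromised.
confirmed_pattern = {"dns_tunneling", "odd_login_hour", "rare_process"}

# Assumed per-host anomaly summaries from the rest of the network.
host_anomalies = {
    "ws-04": {"dns_tunneling", "odd_login_hour", "rare_process"},
    "ws-11": {"dns_tunneling", "rare_process"},
    "ws-23": {"large_print_job"},
}

suspects = sorted(
    ((host, jaccard(anoms, confirmed_pattern))
     for host, anoms in host_anomalies.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for host, score in suspects:
    print(f"{host}: {score:.2f}")  # high-scoring hosts are candidates for triage
```

Hosts whose anomaly patterns closely resemble the confirmed compromise surface at the top of the list, giving the analyst a prioritized set of candidates rather than a flood of unrelated anomalies.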
This process, called magnification, allows an analyst to move from being a trapper to being a hunter. By using a combination of compromise detection techniques, anomaly detection, and machine learning, the analyst can recover the complete blueprint of a complex attack.
If, instead, these techniques are used in isolation, they risk generating false positives, irrelevant detections, and ghost alerts.
Giovanni will be presenting "From Trapping to Hunting: Intelligently Analyzing Anomalies to Detect Network Compromise" at InfoSec World 2018 in Orlando, Florida, on Monday, March 19, 2018, at 11:00 AM ET.