The security information and event management (SIEM) segment has always been one of the most interesting groups that we examine, in part because the definition of a SIEM has evolved over the years. The category grew out of two separate ones: security information management (SIM) and security event management (SEM). Early on, a distinction was made between event and information management; today the two are combined, and have been for some time. While some expand SIEM as security incident and event management, most professionals agree that it stands for security information and event management. Gartner coined the term back in 2005, and it has stuck with us.
This is entirely appropriate, since information is necessary to interpret events. Certainly it is the events that trigger alerts, but it is the information that drives the analysis. So what should we look for in a capable SIEM, given that this is an evolving, if fairly mature, category? SIEMs do a lot of things, but the core reason we need one of these beasts is that they take large volumes of data from many sources and turn it into useful, actionable information.
Let's dig into that a bit. Useful security-related data in a large enterprise comes from many sources. Firewalls generate logs. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) generate logs and alerts. Routers and switches generate NetFlow data. Computers generate system logs. All of this data needs to be aggregated and correlated to be of any use, and for a large enterprise that can mean quite a bit of it. So what does a SIEM give us that helps us analyze and alert?
First, the SIEM must aggregate the incoming data. That means it must know how to read the different log and data formats that feed it. SIEM developers handle this in a variety of ways, but the bottom line is: if the SIEM you are looking at cannot decode most types of security data, it is not of much use to you. The other piece of aggregation is the ability to collect all of the data without dropping packets.
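To make that aggregation step concrete, here is a minimal sketch of normalizing two unrelated feeds into one common event schema. The formats and field names are invented for illustration; they do not reflect any particular product's parsers.

```python
import re

# Hypothetical example: turn a firewall CSV record and a syslog-style
# line into dictionaries that share a common "source" key, the way a
# SIEM's aggregation layer normalizes heterogeneous inputs.

def parse_firewall_csv(line):
    # Assumed format: timestamp,action,src_ip,dst_ip,dst_port
    ts, action, src, dst, port = line.strip().split(",")
    return {"time": ts, "source": "firewall", "action": action,
            "src_ip": src, "dst_ip": dst, "dst_port": int(port)}

def parse_syslog(line):
    # Assumed format: "<host> <program>: <message>"
    m = re.match(r"(\S+) (\S+): (.*)", line)
    host, program, message = m.groups()
    return {"time": None, "source": "syslog", "host": host,
            "program": program, "message": message}

events = [
    parse_firewall_csv("2023-01-01T00:00:00Z,DENY,10.0.0.5,192.0.2.7,445"),
    parse_syslog("web01 sshd: Failed password for root"),
]
```

Once every feed lands in a shared schema like this, later stages (correlation, alerting) can treat firewall denies and authentication failures as comparable records.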
Next, the SIEM needs to be able to correlate the data it has collected. This means distilling it into common events and flows; the analysis cannot begin until the correlation is complete.
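As an illustrative sketch only, correlation can be as simple as grouping normalized records from different feeds by a shared key, here the source IP address. The field names are assumptions for the example, not any vendor's schema.

```python
from collections import defaultdict

def correlate_by_src(events):
    # Group events that share a source IP so related records from
    # different feeds (firewall, IDS, ...) can be analyzed together.
    groups = defaultdict(list)
    for e in events:
        groups[e.get("src_ip", "unknown")].append(e)
    return dict(groups)

events = [
    {"src_ip": "10.0.0.5", "source": "firewall", "action": "DENY"},
    {"src_ip": "10.0.0.5", "source": "ids", "signature": "port-scan"},
    {"src_ip": "10.0.0.9", "source": "firewall", "action": "ALLOW"},
]
grouped = correlate_by_src(events)
# 10.0.0.5 now carries two related records from two different feeds
```

Real correlation engines also match on time windows, sessions, and users, but the principle is the same: bring related records together before analysis starts.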
SIEMs alert as well as analyze, so there must be a sound way of determining alerts. Limiting or eliminating false positives, weighting alerts by criticality, and correlating events with known vulnerabilities are all important aspects of a capable SIEM's alerting function.
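A hypothetical scoring sketch shows how weighting and vulnerability correlation can suppress noise: the weights, threshold, and function names below are invented for illustration.

```python
# Weight an alert by event severity, asset criticality, and whether
# the target is known to be vulnerable. All values here are invented.

THRESHOLD = 10

def alert_score(severity, asset_criticality, target_vulnerable):
    score = severity * asset_criticality
    if target_vulnerable:
        score *= 2  # a hit against a vulnerable asset matters more
    return score

def should_alert(severity, asset_criticality, target_vulnerable):
    # Only raise an alert when the weighted score crosses the threshold,
    # which filters out low-value noise (a crude false-positive limiter).
    return alert_score(severity, asset_criticality, target_vulnerable) >= THRESHOLD
```

With these invented weights, a severity-3 event against a criticality-2 asset scores 12 and alerts if the asset is vulnerable, but scores only 6 and stays quiet if it is not: the same event, different outcome.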
The tools also need a good way of displaying the results of their analysis. That usually means a graphical dashboard, but there is also the need to drill down to the original data, particularly the original source. This leads into today's need for compliance reporting.
Finally, SIEMs process a lot of input, so you need to consider how you will archive the massive amount of data that the sources feeding the SIEM generate daily. There are two sides to this requirement. On one hand, archiving metadata lets you perform credible analysis over time and gain historical perspective on threats and vulnerabilities. However, metadata usually does not let you drill down to the source data, which means you will not be able to reconstruct sessions, including the data payloads of the source packets.
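The metadata-versus-full-capture trade-off can be sketched in a few lines. This is illustrative only; the record fields are invented for the example.

```python
full_record = {
    "time": "2023-01-01T00:00:00Z",
    "src_ip": "10.0.0.5",
    "dst_ip": "192.0.2.7",
    "dst_port": 445,
    "payload": b"...raw packet bytes...",
}

def to_metadata(record):
    # Keep the fields useful for long-term trend analysis and drop the
    # payload. Storage shrinks dramatically, but once the payload is
    # gone, session reconstruction is no longer possible.
    return {k: v for k, v in record.items() if k != "payload"}

archived = to_metadata(full_record)
```

The archived record still supports historical queries (who talked to whom, when, on which port), which is exactly the perspective metadata archiving preserves and payload-dependent analysis loses.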
So, what distinguishes one SIEM from another? The SIEM that you select needs to have the features that you need in your environment. It usually needs to be scalable, and that might mean being able to function in a widely distributed network. SIEMs that do that often have a master device that communicates with subordinate devices.
Don't focus too much on cost. Rather, concentrate on value. For a large-scale SIEM, you might pay a bit more, but you may need its capabilities.
Frank Ohlhorst and Mike Stephenson contributed to this Group Test.