Security operations teams rely on an asset inventory for the specific information they need during investigations. To build an asset inventory that's complete, fully contextual, unique, credible and current, they need a diverse set of data sources.
These data sources can include tools such as endpoint detection and response, endpoint management, mobile device management, identity and cloud tools. They can also include directory services, network routers, switches and firewalls, and other infrastructure tools. They each offer different information about assets. Security pros must connect to these data sources, extract and correlate relevant data, deduplicate the different assets, and then aggregate all the information to get a complete view of an individual asset.
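That extract-correlate-deduplicate-aggregate flow can be sketched in a few lines. This is a minimal illustration with invented names (`AssetRecord`, `merge_records`); it is not any vendor's API, just the aggregation step described above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AssetRecord:
    source: str                      # e.g. "edr", "mdm", "cloud" (illustrative labels)
    mac: Optional[str] = None        # a shared identifier used to correlate sources
    attributes: dict = field(default_factory=dict)

def merge_records(records: list) -> dict:
    """Aggregate per-source records for one asset into a single view."""
    merged = {"sources": [r.source for r in records]}
    for r in records:
        for key, value in r.attributes.items():
            # keep the first value seen; later sources only fill gaps
            merged.setdefault(key, value)
    return merged

edr = AssetRecord("edr", mac="aa:bb:cc:dd:ee:ff",
                  attributes={"os": "Windows 11", "agent_version": "7.2"})
cloud = AssetRecord("cloud", mac="aa:bb:cc:dd:ee:ff",
                    attributes={"os": "Windows 11", "region": "us-east-1"})
view = merge_records([edr, cloud])
```

The merged view carries the EDR tool's agent version alongside the cloud platform's region: neither source alone has the complete picture.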
Unfortunately, in many instances security practitioners rely on traditional asset inventory methodologies, each of which leaves gaps:
- Agents fall short because security pros can't deploy an agent to a device they never discovered or knew about in the first place.
- Scanning tools typically provide a point-in-time snapshot, meaning devices that are unavailable during the scan are missed and the data quickly becomes outdated.
- Network-based discovery tools often aren't deployed to every network segment. Even where deployed, they may only hold data for the devices most recently seen, and they typically lack deep context for the devices they do capture.
Each of these methodologies has its own shortcomings, but in most cases they all lack the diversity that comes from leveraging a wide array of data sources. Here are five reasons why incorporating multiple data sources into asset inventory matters when security practitioners conduct investigations:
- Covers all assets.
Some companies don't account for cloud assets or virtual machines, while others might overlook containers and mobile devices. Security pros need to feel confident that they can find complete information about all devices across the enterprise. Otherwise, they waste valuable time looking in other tools to complete alert triage and investigations. It's inefficient and distracting, a problem compounded by staffing constraints in security operations centers.
Security pros need many different data sources because some devices only exist in a single data source. For example, an ephemeral container in a cloud platform may never get an agent deployed to it. It may not be live during a vulnerability scan cycle. And a visibility tool like a NAC may not extend into the cloud. The only data source that will know about the device is the cloud platform itself; every other traditional methodology would miss it.
- Delivers context.
Incorporating more data sources adds necessary, rich context to the asset inventory. Although some data fields may overlap, each source contains unique elements that offer context. Context around attributes like patches, vulnerabilities, installed software, open ports, and device users is critical. For example, analysts can quickly correlate a network-based sensor alert with this type of target system data. This leads to an important series of determinations, including the criticality and exploitability of the target host and the overall impact and response of the security team.
- Deduplicates assets.
Security practitioners sometimes encounter duplicate asset information, or worse, they find more than one asset assigned to the same IP address. IT Service Management (ITSM) teams don't always have the time to build normalization, deep correlation, and deduplication rules to untangle data sources and arrive at a unique inventory, and security analysts don't want to spend unnecessary time manually comparing device names, MAC addresses, and serial numbers across data sources during alert triage. Having a multitude of data sources means there's sufficient overlap between them to cross-correlate source and device information.
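The cross-correlation just described can be sketched as grouping records that share any stable identifier. This is a simplified illustration with invented field names (a full implementation would merge overlapping groups with union-find and normalize identifiers more carefully):

```python
def dedupe(records: list) -> list:
    """Group raw asset records: records sharing a MAC address, serial
    number, or hostname are treated as the same physical asset."""
    groups = []           # each group is one deduplicated asset
    index = {}            # identifier -> position of its group
    for rec in records:
        # collect whichever identifiers this record actually carries
        ids = {(k, rec[k].lower()) for k in ("mac", "serial", "hostname")
               if rec.get(k)}
        # join the first group that shares an identifier, else start a new one
        hit = next((index[i] for i in ids if i in index), None)
        if hit is None:
            hit = len(groups)
            groups.append([])
        groups[hit].append(rec)
        for i in ids:
            index[i] = hit
    return groups
```

Two records reporting the same MAC address in different casing collapse into one asset, while a record known only by its serial number stays separate.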
- Deconflicts data.
Security teams often find that there's a lack of standardization in how data sources name their fields. For example, one data source may label a field "MAC address" while another calls the same field "network interface MAC address." This conflicting data isn't always accounted for. Security pros need to combine a multitude of data sources to increase the density of field-level data, then apply a series of algorithms to deconflict the small, and sometimes wide, variances found between different sources. Algorithms should account for the data source's proximity to the device and various time elements. Think of a vulnerability scan as a snapshot in time, while agents running on the device report device information to their source far more frequently.
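One way such a deconfliction algorithm can work is to score each source's observation by proximity to the device and by recency, then keep the winner. The source rankings and weights below are invented for illustration, not a prescribed scheme:

```python
from datetime import datetime, timedelta

# Higher value = the source sits closer to the device, so its report
# is presumed more trustworthy (illustrative ranking).
PROXIMITY = {"agent": 3, "mdm": 2, "vuln_scan": 1}

def deconflict(field_name: str, observations: list) -> str:
    """Pick one value for a field from conflicting per-source observations."""
    def score(obs):
        age_hours = (datetime.utcnow() - obs["seen"]).total_seconds() / 3600
        # proximity dominates; recency breaks ties between similar sources
        return (PROXIMITY.get(obs["source"], 0), -age_hours)
    return max(observations, key=score)[field_name]

now = datetime.utcnow()
observations = [
    {"source": "vuln_scan", "seen": now - timedelta(days=7), "os": "Windows 10"},
    {"source": "agent", "seen": now - timedelta(minutes=5), "os": "Windows 11"},
]
winner = deconflict("os", observations)
```

Here the week-old scan snapshot loses to the agent's five-minute-old report, mirroring the snapshot-versus-agent distinction above.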
- Frequently updates data.
Security pros live in a near-real-time world. When they're triaging an alert, they need the most current information about an IP address or device. For example, they need to know the asset's location and type, whether it's known and managed, whether its core software has been updated, and what additional software is installed. This type of information changes all the time. Security pros need frequently aggregated information from data sources so they have a complete, accurate view into all assets.
Security pros need to strengthen their approach to asset management. Up-to-date information for every single device often exists across many data sources in the organization. Frequently polling these data sources will surface all the assets and provide a near-real-time, complete picture of each individual asset, which will help when conducting investigations.
Patrick Kelley, vice president, Axonius