
Time to call out the detectives

When enterprise security became mainstream in the mid-1990s, the philosophy of IT security mirrored that of physical security – the ultimate goal was to protect the perimeter of a corporate or private network from all outsiders. Instead of gates and fences, organizations installed firewalls. Instead of alarms, they implemented intrusion detection systems (IDSs). Instead of security guards, they used security administrators.

However, the turn of the millennium also marked the start of a new era in information technology. As networks and systems grow more complex, cyberattacks are, in parallel, becoming more sophisticated and harder to detect. But protecting the network from external threats is just one piece of the puzzle.

Data gains value when it is shared with those who need to use it. As a result, IT faces the growing challenge of safeguarding the dissemination of important, yet vulnerable, information between customers, business partners, suppliers and internal departments. Rule-based and signature-based systems alone cannot mitigate the risks of inappropriate network use, unauthorized access or the activities of rogue nodes. Already overburdened staff must shift their focus from pure asset protection towards information assurance.

Furthermore, corporations now have to understand how their networks are being used to move critical information, including source code, employee records, financial data, and so on.

According to the 269 respondents who quantified their losses in the 2004 CSI/FBI Computer Crime and Security Survey, theft of proprietary information and insider internet abuse were among the security incidents causing the most financial damage – the first accounting for losses totalling more than $11 million, the second for losses of more than $10 million.

Having all the security solutions in the world will not protect against these threats without effective security management to govern policies, accurately identify threats and implement appropriate escalation procedures.

Network security analysis is a methodology that provides the framework in which security and incident response teams can quickly assess, investigate and inform. These actions are translated into four key processes – identify, correlate, analyze and inform.

Identify

Network security analysis starts with knowledge. Network forensics solutions enable you to identify communications as they relate to data and network assets, and to assess overall architecture and communication patterns, providing a thorough understanding of what, where and how protection and supervision should be implemented.

Organizations using network forensics technology usually begin by collecting a series of activity samples to help establish a baseline of network usage. This baseline serves as a reference point for further investigations and helps an enterprise understand how users communicate within the network.
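As a minimal sketch of this baselining step, the Python fragment below derives a per-host traffic profile from a handful of activity samples and flags observations that fall far outside it. The record format, the sample values and the three-sigma threshold are illustrative assumptions, not a description of any particular product.

    from collections import defaultdict
    from statistics import mean, stdev

    # Hypothetical activity samples: (source host, bytes observed per interval).
    # In practice these would come from passively collected flow records.
    samples = [
        ("10.0.0.5", 4200), ("10.0.0.5", 3900), ("10.0.0.5", 4500),
        ("10.0.0.7", 51000), ("10.0.0.7", 48000), ("10.0.0.7", 52500),
    ]

    # Build the baseline: typical traffic volume per host.
    volumes = defaultdict(list)
    for host, nbytes in samples:
        volumes[host].append(nbytes)

    baseline = {host: (mean(v), stdev(v))
                for host, v in volumes.items() if len(v) > 1}

    def is_anomalous(host, nbytes, threshold=3.0):
        """Flag an observation more than `threshold` standard deviations
        away from the host's established baseline."""
        if host not in baseline:
            return True  # a host never seen while baselining is itself notable
        mu, sigma = baseline[host]
        return sigma > 0 and abs(nbytes - mu) / sigma > threshold

    print(is_anomalous("10.0.0.5", 4300))    # False: within the normal range
    print(is_anomalous("10.0.0.5", 250000))  # True: far outside the baseline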

Correlate

Once network activity surrounding a suspicious event is captured, focus should be placed on other sources that might contain useful data. The ability to import, display and simultaneously correlate data from several sources enables the forensic examination of suspicious events across multiple security platforms and environments. For example, alerts produced by an IDS and status logs from a firewall can be combined with network data to provide a more complete picture of network activity.
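The fragment below sketches this correlation step: each IDS alert is paired with firewall entries that involve the same IP address within a short time window. The record layouts and the 30-second window are assumptions made for illustration.

    from datetime import datetime, timedelta

    # Hypothetical records from two platforms, reduced to the fields
    # needed for correlation: a timestamp and the IP address involved.
    ids_alerts = [
        {"time": datetime(2004, 11, 3, 14, 2, 17), "ip": "192.0.2.44",
         "signature": "SQL injection attempt"},
    ]
    firewall_logs = [
        {"time": datetime(2004, 11, 3, 14, 2, 5), "ip": "192.0.2.44",
         "action": "ACCEPT", "port": 80},
        {"time": datetime(2004, 11, 3, 14, 9, 40), "ip": "198.51.100.9",
         "action": "DROP", "port": 445},
    ]

    def correlate(alerts, logs, window=timedelta(seconds=30)):
        """Pair each IDS alert with the firewall entries for the same IP
        that fall within the given time window."""
        for alert in alerts:
            related = [log for log in logs
                       if log["ip"] == alert["ip"]
                       and abs(log["time"] - alert["time"]) <= window]
            yield alert, related

    for alert, related in correlate(ids_alerts, firewall_logs):
        print(alert["signature"], "->", related)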

Analyze

Phase 1: The examination of packet headers provides information about source and destination addresses (such as IP, MAC or hostname), protocols, port numbers, timing and other pertinent parameters. This analysis reveals communication patterns and how data propagated. Effective network forensics solutions use various intelligence algorithms to determine nodal interdependencies, exposing potentially anomalous behavior.
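A minimal sketch of this header pass, assuming the open-source scapy library is installed and that "incident.pcap" holds a capture of the suspicious window (both assumptions are for illustration only):

    from scapy.all import rdpcap, IP, TCP, UDP

    for pkt in rdpcap("incident.pcap"):
        if IP not in pkt:
            continue
        ip = pkt[IP]
        # Pull transport-layer ports where present.
        sport = dport = None
        if TCP in pkt:
            sport, dport = pkt[TCP].sport, pkt[TCP].dport
        elif UDP in pkt:
            sport, dport = pkt[UDP].sport, pkt[UDP].dport
        # pkt.time is the capture timestamp (seconds since the epoch).
        print(f"{float(pkt.time):.3f} {ip.src}:{sport} -> "
              f"{ip.dst}:{dport} proto={ip.proto}")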

This analysis provides useful information to both security and network operations staff, since it quickly shows how the network is really being used, rather than reporting on physical topology. We call this "virtual topology mapping" because, in essence, we are charting communication patterns regardless of where the senders and recipients are physically located. The added benefit compared to physical mappers is that this process does not flood the network with active scanning, but gathers and processes this information passively. Because collection is passive, probe placement in switched environments must be planned carefully to gain visibility into the data flow.
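Charting those patterns can be as simple as counting who talks to whom. The sketch below builds a weighted, directed communication map from passively observed source/destination pairs (hypothetical values here) and surfaces the nodes with the most distinct peers, one crude way to expose hubs, scanners or rogue nodes.

    from collections import defaultdict

    # Hypothetical (source, destination) pairs gathered passively,
    # e.g. from the header pass above rather than from active scanning.
    observed = [
        ("10.0.0.5", "10.0.0.9"), ("10.0.0.5", "10.0.0.9"),
        ("10.0.0.7", "192.0.2.10"), ("10.0.0.9", "10.0.0.7"),
        ("10.0.0.9", "192.0.2.10"),
    ]

    # talks_to[src][dst] counts how often src contacted dst.
    talks_to = defaultdict(lambda: defaultdict(int))
    for src, dst in observed:
        talks_to[src][dst] += 1

    # Nodes with unusually many distinct peers deserve a closer look.
    for node, peers in sorted(talks_to.items(),
                              key=lambda kv: len(kv[1]), reverse=True):
        print(node, "->", dict(peers))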

Phase 2: Next, the analysis concentrates on reference content – a reference can be an email, a web session, a Word document, an Excel spreadsheet, and so on. It provides insight into the content of communications traversing the network, such as the data that triggered an IDS alert, or a document attached to an email. The best network forensics solutions are not limited to keyword matching or rules based on commonly used phrases when examining reference content. Rather, they use statistical algorithms, such as n-gram analysis, to determine content relationships. Such statistical processes greatly reduce the risk of missing important matches because of aliases or linguistic misinterpretations.

Consider a simple case: searching for the word "computer" with keyword matching yields either a 100 percent or a zero percent match. What if someone replaces certain letters to avoid detection? If they type "com%uter" instead of "computer," a keyword search engine will fail to identify the similarity.

The same applies to phrases: if a user does not type the phrase or sentence exactly as entered in the inspecting system, that system will fail to find a match. A statistical match, by contrast, is more lenient, because it will identify similarities that fall short of 100 percent.
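To illustrate, the sketch below implements one simple statistical measure of this kind: Jaccard similarity over character trigrams. It is a stand-in for the more sophisticated n-gram algorithms described above, not any vendor's actual method, but it shows why "com%uter" still registers as a partial match where a keyword search scores zero.

    def trigrams(text):
        """Character trigrams of a string, lower-cased."""
        text = text.lower()
        return {text[i:i + 3] for i in range(len(text) - 2)}

    def similarity(a, b):
        """Jaccard similarity of two strings' trigram sets (0.0 to 1.0)."""
        ga, gb = trigrams(a), trigrams(b)
        if not ga or not gb:
            return 0.0
        return len(ga & gb) / len(ga | gb)

    print(similarity("computer", "computer"))  # 1.0  - exact match
    print(similarity("computer", "com%uter"))  # 0.33 - partial match survives
    print(similarity("computer", "firewall"))  # 0.0  - unrelated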

This approach produces reference clusters with similarity scores, segmenting thousands of references into groups of related content.

There are two modes of operation. The first is blind clustering, which performs the analysis without a predefined topic (or keyword). All references are compared against one another and clustered according to a relativity coefficient, so that documents with similar characteristics are grouped together.

Then there is topic analysis, which performs the analysis against a given topic (such as a phrase, formula or section of a document) to determine whether it appeared in, or was transmitted over, the network during the investigation period.
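Reusing the similarity() helper from the previous sketch, both modes can be outlined in a few lines. The greedy single-pass grouping and the thresholds are illustrative assumptions rather than a specific product's algorithm.

    def blind_clusters(references, threshold=0.3):
        """Blind clustering: group references by mutual similarity,
        with no predefined topic."""
        clusters = []
        for ref in references:
            for cluster in clusters:
                if similarity(ref, cluster[0]) >= threshold:
                    cluster.append(ref)
                    break
            else:
                clusters.append([ref])
        return clusters

    def topic_analysis(references, topic, threshold=0.2):
        """Topic analysis: score each reference against a supplied topic
        and keep those above the threshold, highest first."""
        scored = [(similarity(ref, topic), ref) for ref in references]
        return sorted([sr for sr in scored if sr[0] >= threshold],
                      reverse=True)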

Note that this analysis is language-independent, meaning that it works equally well if the references were all in English, German, Greek or even Japanese (double-byte characters).

Inform

Once the analysis is complete, the information must be presented in a logical, sequential manner. To address this challenge, network forensics solutions should provide a variety of reporting capabilities.

In a complex enterprise network, portraying findings in a way that management and non-technical staff can readily understand can be tricky. Alongside textual, tabular reports, data visualization is becoming a popular requirement. Forensics solutions should provide powerful, interactive graphical reports that let users display information and turn it into actionable knowledge.

Proprietary data delivers the most value when it is appropriately shared with customers, suppliers and business partners. Bound by government regulations on electronic communications, enterprises must be able to identify security breaches and inappropriate activity quickly across increasingly complex network and system infrastructures. Although no single solution will ever address every IT security concern, sound security management will help mitigate security exposures.

The right network forensics solution provides the methodology for extensive forensic investigations, empowering security and incident response teams to assess, investigate and inform. It takes security management to the next level by grouping traffic patterns into behavioral clusters that quickly provide a graphical depiction of nodal communications and dependencies. Furthermore, advanced analytics and centralized data repositories enable collaboration between security and network management by providing the information needed both to deploy and to maintain network security.

Yiannis Vassiliades is product manager for CA's eTrust brand security management solutions
