
Regulations can provide much-needed relief for security professionals

Achieving a reasonable state of IT security has been an unattainable objective for many organizations, primarily due to the corporate environment in which business units, IT groups and other teams operate. With management focused on driving shareholder value while increasing revenue and profits, IT security often takes a back seat to financial issues. Many top-level executives aren't interested in investing enough in security until there are consequences, such as losing customer trust, paying a hefty fine or going to jail. Even a company with extensive security defenses can find it is unable to answer basic questions regarding the safety of its most important assets and information, as well as whether its investments in technology are indeed improving security.

A 2008 Verizon Business Risk Team report on data breach investigations found that data compromises are considerably more likely to result from external attacks than from any other source. In August 2008, the Identity Theft Resource Center of San Diego revealed that more data breaches involving the loss or theft of consumer information had been reported in the first eight months of 2008 than in all of 2007.

More than ever, businesses are beginning to understand the need for risk management practices that help identify and close the door on vulnerabilities. Regulatory compliance requirements have emerged to nudge, and in some cases push, management toward better practices. Many companies face multiple external regulations. Responding to these requirements, though, often conjures up thoughts of the time and concentrated attention needed to define new practices, document processes and procedures, and assess all the defensive components involved in protecting critical corporate assets.

Compliance regulations, however, should be viewed as a good thing by IT security professionals. Communicating what needs to be done and why in terms of meeting compliance requirements provides a great first step toward approaching the subject of IT infrastructure security with executives and top-level business managers. When security needs are stated in terms of achieving a compliance requirement, management is far more likely to listen and understand, and resources are far more likely to be assigned.
 
Security objectives and compliance objectives don't just align themselves automatically. There are significant strategic benefits that IT security teams can extract by engaging seriously in compliance. Today's best security shops get out in front of the wave, since it certainly can't be stopped. The trick is making it work to benefit the company. There are a number of techniques and approaches that enable a security team to make compliance assessment an ally, not an enemy.
 
The diversity of compliance standards

Over the last 12 years, a number of new compliance requirements defined by specific industry groups have been released. The incessant march of acronyms includes many now-familiar names. NERC wrote a set of compliance regulations for protecting the nation's power grid, approved by FERC (the Federal Energy Regulatory Commission) in 2007. PCI-DSS was created in 2006 and continues to be updated today. SOX, released in 2002, outlines a set of compliance regulations for protecting corporate records. And HIPAA, released in 1996, defines protection of patient data.
 
In every context, the rules are different. The way a power grid is protected differs from how patient records or credit cards are safeguarded. There is often significant flexibility in how one determines compliance. For example, many regulations allow a company to effectively “set its own exam.” Typically, the regulation requires an organization to define its internal procedures and policies, then provide proof it is following those procedures and policies. Often, there is no judgment on the quality of the processes. Instead, it is simply mandated that the processes be repeatable.  
 
There are IT security managers who are in continuous assessment mode and face almost all regulatory sets at once. Additionally, business partners are becoming more demanding and expect an ability to review compliance practices before committing. But there is a major pitfall if each compliance standard is treated in isolation. It is hard enough to be on a continuous compliance treadmill, but it is worse if the rules and scope of audits keep changing. Because of this, it can be a real mistake to write separate IT policies for each regulatory set. Instead, a better approach is to look for commonalities among the regulations as they apply to IT systems and infrastructure, then write one set of rules that can function as the meta-set supporting all IT compliance requirements.
 
Looking across the various compliance sets from each industry, there is one that stands out as a prime candidate for serving in this meta-role: the PCI-DSS. It is the most prescriptive standard regarding IT and network security. Clued-in IT security managers are looking at these requirements to see what they can leverage. The good news is quite a bit can be leveraged. IT security professionals often say that while they are not required to follow this standard, they are adopting PCI as a benchmark because their board and management understand that PCI compliance translates to applying “due diligence for our important stuff.”
 
There's a clear advantage in converging the goals of all the various compliance objectives, and it centers on scope. Anyone manually measuring compliance today will tend to reduce the scope of the project as much as possible. This changes greatly when automation is applied. Automating the analysis of firewall and router configurations makes it practical to shift from a narrow scope to a unified one. The ideal target is a single set of tools and processes for compliance, evaluated against a single infrastructure, with a single set of rules for what's compliant. Reaching this ideal isn't trivial, but in a world where compliance burdens are continuously increasing, unifying and automating this work as much as possible is a critical survival strategy. The strongest organizations are well along this path, finding commonality across regulation sets and applying fixed standards in a turnkey fashion. The contrast is stark between the efficiency of such teams and those in reactive mode, struggling to clear each regulatory milestone as an isolated project.
 
The untenable complexity


There are numerous horror stories about the various approaches that have been tried and failed. One example involves a company that decided to get serious about aligning its security practice with its assessment practice. Starting from the premise that firewalls are key, the company wrote a procedure into its compliance practice stating that all firewall rules must be audited.
 
The company then built a database of every rule in every firewall and identified an owner for each line. A procedure was also defined requiring that, once every 90 days, every owner reapprove every rule. Shortly after the practice was introduced, it was found to be untenable. While it appears logical and follows the letter of the regulations, it is simply not scalable. Humans can't reliably review many thousands of complex firewall policy statements. Even if they could, it takes too long for the business benefit to be extracted. Something went wrong – a reasonable audit requirement turned into a monster.
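To see why, consider the arithmetic. The sketch below, in Python, uses purely hypothetical figures (the rule count and review time are invented for illustration, not taken from the company in question):

    # Back-of-the-envelope cost of the 90-day reapproval procedure
    # described above. All figures are hypothetical, chosen only to
    # show the scale of the problem.

    rules = 5_000                # firewall rules across the estate (invented)
    reviews_per_year = 4         # one reapproval every 90 days
    minutes_per_review = 5       # optimistic time to read and sign off a rule

    hours_per_year = rules * reviews_per_year * minutes_per_review / 60
    print(f"Annual review effort: {hours_per_year:,.0f} hours")  # -> 1,667 hours

At those rates, the procedure consumes close to a full person-year of skilled reviewer time annually, before accounting for the errors that fatigue introduces.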
 
There's actually a deeper issue in this device-by-device approach. Validating every rule, separately, in every firewall does not mean the IT organization understands the network's defensive posture. The surrounding network context can make a world of difference. A set of firewall rules taken out of context hardly provides the whole picture. Traffic moving through a network involves many complex devices. Knowing which rules, spread across devices, are involved in this flow of traffic is a hard task in even medium-sized networks. Understanding all the interactions between devices in a multi-layer fabric with load balancers, address translators, traditional and application-level firewalls is too detail-oriented a task for all but the most skilled and dedicated personnel. If an audit finds a company has the right protections, but the protections are in the wrong place, they achieve nothing.
 
For example, bandwidth economics has transformed many retail networks from predominantly leased-line networks to VPNs running over public infrastructure. The complexity can be staggering – thousands of remote sites, each with its own firewalls and VPN equipment. The number of interactions that must be checked to determine whether such a network is working as designed is enormous. Technically, it is a graph-theoretic problem – a very hard one with an awful lot of edges. It is humanly impossible to keep track of the interactions that make up a network by simply printing out configurations and reading through them.
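A rough sense of the scale, again with invented figures: in a full mesh, the number of potential site-to-site paths grows with the square of the site count.

    # Why reading configurations by hand cannot work at retail-network
    # scale. Site and rule counts are invented for illustration.

    sites = 3_000                             # remote sites with firewall/VPN gear
    site_pairs = sites * (sites - 1) // 2     # potential site-to-site paths

    rules_per_firewall = 500
    total_rules = sites * rules_per_firewall

    print(f"Potential site-to-site paths: {site_pairs:,}")   # -> 4,498,500
    print(f"Firewall rules in the estate: {total_rules:,}")  # -> 1,500,000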
 
Taming the assessment process


Identifying what the network defenses are doing is key to knowing whether sensitive data is secure. Understanding this on an isolated, device-by-device basis, however, is neither scalable nor effective. Networks are complex entities with complex moving parts. Auditing the setting of each knob and dial on every low-level network device may sound sensible, but it is the equivalent of assessing patterns in the bark in order to understand the forest. To assess defensive posture, it is necessary to know what data traffic is permitted and what is blocked. This is inherently an end-to-end question, not a device configuration question.
 
This level of complexity, compounded by the constant business demands for change, is a bad recipe for achieving even a reasonable security stance. Even in the best run networks, defects are introduced through mistakes and omissions, which lead to security exposures. This is the norm, not the exception, in most operations today.
 
Leading IT security managers are stepping back from the idea of auditing every piece of bark on the tree. Instead of managing every rule in every firewall, a better approach involves managing groups or zones of activity. One large software company broke its network into 12 groups including internet, extranet, customer database, ERP system and wireless areas. It then created a 12 by 12 matrix, with each group represented across and down the matrix. Within each cell was a description of every legal type of traffic between the two zones, providing a zone-based rather than a rule-based approach to security management.
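In code, such a zone matrix is little more than a lookup table. The Python sketch below shows the idea; the zone names and permitted services are invented stand-ins, not the software company's actual twelve zones:

    # A minimal sketch of a zone-based policy matrix. Zones and
    # services are invented examples.
    #
    # allowed[(source_zone, dest_zone)] -> permitted services;
    # a missing key means "deny all" between that pair of zones.
    allowed = {
        ("extranet", "customer_db"): {"https"},
        ("erp", "customer_db"): {"sql"},
        ("wireless", "internet"): {"http", "https"},
    }

    def is_permitted(src: str, dst: str, service: str) -> bool:
        """Check a single traffic type against the zone matrix."""
        return service in allowed.get((src, dst), set())

    print(is_permitted("extranet", "customer_db", "https"))  # True
    print(is_permitted("internet", "customer_db", "https"))  # False

Twelve zones yield only 144 cells to review and discuss, where the underlying firewalls may hold thousands of individual rules.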
 
The wise IT security manager does not make this matrix 1,000 by 1,000, but boils it down to something more manageable. PCI Requirement 1, for example, uses a four-by-four matrix. If a business doesn't run on credit card transactions, the cardholder heading may be changed to another asset that security professionals are charged to protect. This approach begins to tame the complexity of a network.
 
Once the matrix is defined and allowable traffic is described within each cell, a business must verify that the inter-relationships are accurate. This reduces complexity enormously. Still, most organizations do not have an efficient, accurate method for determining whether the entire network is operating as defined. The organization that defined the 12 x 12 matrix assigned its best and brightest network security professionals to the task of poring through the details of the firewall rules and creating a manual network “map” validating what the zone chart defines. This approach is much better than the rule-based one (extracting sign-off from business units for every line in every firewall), but still consumes a tremendous amount of time from well-trained, experienced talent. It is not scalable and often not accurate.
 
The zone matrix gives security professionals a common language with which to communicate issues and exposures to other units and upper management, as long as the zone-to-zone relationships make sense to other people within the organization who aren't security geeks. With such a tool, the IT team is well on its way to getting management's attention, getting the necessary resources assigned and, ultimately, getting the security hole fixed.
 
For example, a matrix presented to corporate executives may contain two red boxes indicating non-compliance. One of those boxes is the fault of a business unit manager who has never responded to security's repeated requests to tighten up an access point between his network in Poland and the corporate ERP network in the U.S. The CIO and the CEO can then turn to the senior manager over this unit and ask when the issue will be fixed.
 
After taming network complexity through zone management, it is time to automate the compliance assessment process. There are products available that can automatically analyze firewall and router configurations across the entire network and produce end-to-end traffic flow diagrams. This is a great job for computers, but a terrible job for people: it is detail-oriented, technically demanding, and any small error can be magnified in unexpected ways. Through automation, security teams are finding forgotten servers attached to networks with access connections to other servers that create security holes. These teams are finding mistakes and omissions within router access lists or firewall rules that create problems multiple hops down the line. Through automation, valuable time is recovered and accuracy is increased.
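Conceptually, what these products automate reduces to a simple comparison: compute which end-to-end flows the device configurations actually permit, then check each flow against the zone matrix. Continuing the toy sketch above (the "observed" flows here are hard-coded; a real tool would derive them from the firewall and router configurations):

    # Toy compliance check against the zone matrix sketched earlier,
    # reusing is_permitted(). A real product computes observed_flows
    # from device configurations rather than a hard-coded list.

    observed_flows = [
        ("extranet", "customer_db", "https"),  # matches the matrix
        ("wireless", "erp", "sql"),            # a forgotten path
    ]

    for src, dst, service in observed_flows:
        if not is_permitted(src, dst, service):
            print(f"NON-COMPLIANT: {src} -> {dst} ({service})")

The second flow is exactly the kind of forgotten connection that manual review misses and an exhaustive automated sweep catches.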
 
This approach allows for network growth, the easy management of compliance assessments and the continual improvement of a company's security posture within a complex, rapidly changing IT infrastructure. The chance of being the subject of a forensic study by the Verizon Business Risk Team decreases, even with that recently acquired foreign division now connecting into the corporate network.



Mike Lloyd, chief scientist for RedSeal Systems, has more than 20 years experience in the modeling and simulation of dynamic systems. He holds a degree in mathematics from Trinity College, Dublin, and a Ph.D. in epidemic modeling from Heriot-Watt University, Edinburgh.
