
Assessing Cryptographic Systems

By Ed Moyle

It’s a truism that there are things in life whose visibility is out of proportion to their importance. Anyone who has ever had a plumbing issue (say, a backed-up drain or a leak) has experienced this firsthand. We all rely on functional plumbing, but it usually operates seamlessly and outside our awareness; in fact, it operates so seamlessly that unless there’s a catastrophic failure we tend not to notice it at all.

This can be equally true in a technology context. There are technologies operating in our environments that we tend not to pay attention to unless there’s a major problem. One such area is the operation of cryptographic systems. These systems protect our information by keeping data confidential at rest and in transit; they also underpin other security technologies such as authentication, functioning as the invisible “plumbing” through which security rules are enforced. Cryptographic systems are important, but they very often go unnoticed.

The need to assess

To see what I mean, take a technology like TLS. Occasionally a situation causes you to notice TLS in operation: an “expired certificate” warning in your browser, for example, which highlights the fact that somewhere, someone neglected to anticipate an upcoming certificate expiration. Or maybe you were impacted by an SSL/TLS vulnerability such as DROWN or POODLE, brought to light when a vulnerability scanning report shows that an application or service is affected. In the normal course of business, however, potentially hundreds of thousands or even millions of TLS sessions are negotiated every day without any of them coming into the practitioner’s direct line of sight.
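One way to get ahead of the “expired certificate” surprise is to check expiry dates proactively rather than waiting for a browser warning. As a minimal sketch (using Python’s standard ssl module; the host list and the 30-day threshold are illustrative assumptions, not recommendations), something like the following could run on a schedule against a known inventory of endpoints:

```python
# Minimal sketch: flag TLS certificates nearing expiration.
# Host list and 30-day threshold are illustrative assumptions.
import socket
import ssl
import time

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Handshake with a host and return days until its certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

for host in ["www.example.com"]:  # hypothetical inventory of endpoints
    remaining = days_until_expiry(host)
    status = "WARNING" if remaining < 30 else "OK"
    print(f"{status}: {host} certificate expires in {remaining:.0f} days")
```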

This lack of awareness can be dangerous in the right circumstances. Why? Because operation can be undermined in ways that are not immediately obvious: for instance, when a legacy protocol or algorithm is allowed to persist in active use long after it has been demonstrated to be vulnerable to attack, or when the underlying implementation has a flaw (such as a coding issue) that needs to be remediated. If we’re not actively looking for this type of issue, we can operate under the false impression that everything is “hunky dory” when in reality it is anything but. To execute on our mission of keeping the organization protected, then, it behooves us to maintain a proactive awareness of cryptographic technologies: to ensure that their use is appropriate, that we stay ahead of issues that might impact their operation, and that implementations are robust.

This can, understandably, seem daunting to many security practitioners. Understanding the lowest levels of operation (i.e., the cryptographic primitives that make up a cryptosystem) involves a specialized field of mathematics with which most security pros have only a modest familiarity, and assessing the software engineering behind an implementation requires coding skills likewise outside what the industry expects security pros to have. That said, there are some very useful, proactive steps we can keep in our back pockets to help ensure that these technologies are operating effectively and continue to do so over time.

Listed below are a few strategies that security or audit professionals can leverage to evaluate the use of cryptographic technologies and help maintain this proactive posture. These are by no means the only strategies that exist, but they provide immediate and lasting value that any practitioner (regardless of mathematical inclination or software development chops) can realize with minimal financial investment. In other words, these are things you can do right now to proactively find, flag, and remediate potential areas of concern.

Strategy 1: Establish governance

The first area where an organization can reap immediate value is in approaching the governance of cryptographic systems and tools systematically: understanding the organization’s risk tolerances, defining risk mitigation expectations for cryptographic components (protocols, tools, primitives, etc.) in light of those tolerances, and building out the methods by which those constraints will be evaluated and measured over time.

Note that none of these steps relies on any special understanding of the mathematics involved or of the implementations at the software engineering level; they rely instead on tools that almost every security practitioner already has access to: risk management, an understanding of the internal environment and the technologies in use, and the ability to create and publish policies and procedures that communicate expectation and intent throughout the organization.

This isn’t “rocket science.” Much like you might create governance structures (e.g., policies and procedures) around the adoption and use of any other technology, employ those same methods for the governance of cryptographic tools, protocols, and techniques. Develop policies that clearly articulate intent about how these things may be used (e.g., a cryptographic key management policy that defines expectations for key storage and protection). Not only does this give you a benchmark to evaluate usage against, but it also helps you discover usage outside those parameters as the policy is socialized.
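Some policy expectations can even be expressed as automated checks, which makes the benchmark concrete. As a hedged illustration only (it assumes the third-party Python cryptography package, and the 2048-bit RSA minimum is an assumed policy value your organization may set differently), a key management policy check might look like this:

```python
# Illustrative sketch: flag PEM private keys below an assumed policy
# minimum. The 2048-bit threshold is an example value, not a mandate.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

POLICY_MIN_RSA_BITS = 2048  # assumed value from the key management policy

def check_rsa_key(pem_bytes: bytes) -> list[str]:
    """Return policy findings for an unencrypted PEM-encoded private key."""
    key = serialization.load_pem_private_key(pem_bytes, password=None)
    findings = []
    if isinstance(key, rsa.RSAPrivateKey) and key.key_size < POLICY_MIN_RSA_BITS:
        findings.append(
            f"RSA key is {key.key_size} bits; policy requires "
            f">= {POLICY_MIN_RSA_BITS}"
        )
    return findings
```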

Strategy 2: Inventory usage

Related to the first strategy, the second is systematically building a detailed understanding of the cryptographic tools in use throughout the organization: in other words, gathering information about usage, about the context in which that usage operates, and, as far as possible, about the manner in which that usage is implemented.

This is an important step, as it forms the foundation for many of the other actions we might choose to undertake. However, it is harder to accomplish than it sounds. Keep in mind that cryptography operates at pretty much every level of the stack: applications might make use of libraries or underlying services, operating systems might employ encrypted volumes or otherwise use cryptographic services as a normal part of their operation, transport layer communication might take place over cryptographically protected channels, and so on. Even the very lowest levels of the OSI stack can be involved. The diversity of services and the varying levels at which they operate is one of the key reasons a systematic approach works best.
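Parts of this inventory lend themselves to automation. For the transport layer specifically, a rough sketch like the one below (hostnames are hypothetical, and this covers only one slice of the stack) probes which TLS protocol versions a server will actually negotiate, using Python’s standard ssl module:

```python
# Rough sketch: probe which TLS versions a server will negotiate.
# Covers only the transport layer; hostnames are hypothetical.
import socket
import ssl

VERSIONS = [ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
            ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3]

def supported_versions(hostname: str, port: int = 443) -> list[str]:
    """Attempt a handshake pinned to each TLS version in turn."""
    accepted = []
    for version in VERSIONS:
        ctx = ssl.create_default_context()
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((hostname, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=hostname):
                    accepted.append(version.name)
        except (ssl.SSLError, OSError):
            pass  # version refused (or connection failed)
    return accepted

print(supported_versions("www.example.com"))
```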

Having an understanding of what tools, techniques, and protocols are already in use is helpful for two reasons. First, it helps us separate usage we might “take on faith” from usage we might wish to evaluate more closely. For example, we might discover that applications in our environment rely heavily on TLS implemented via known, reliable implementations (e.g., implementations built into the operating systems we use or supplied via ubiquitous open source libraries like OpenSSL). While we might wish to place parameters around some of the specifics of how these tools are used (for example, to stipulate that they use only current protocol versions or that they maintain a certain patch level), their use might cause less heartburn (and thereby require less scrutiny) than a one-off, custom-developed, non-standard implementation a developer “whipped up” for a single situation. One thing to consider along these lines is to formally define a list of “known good” implementations based on some criteria: for example, cryptographic modules that have undergone FIPS 140 evaluation or those the security team has already vetted thoroughly. When you discover usage identical to what is on the list, you may choose to forgo additional intensive scrutiny as a time-saving measure.
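Once such a “known good” list exists, checking discovered usage against it can be as simple as a lookup. The sketch below is purely illustrative; the allowlist entries and component names are hypothetical placeholders for whatever criteria your organization adopts:

```python
# Illustrative sketch: triage discovered crypto components against an
# allowlist of "known good" implementations. Entries are hypothetical.
KNOWN_GOOD = {
    ("openssl", "3.0"),        # e.g., the vetted platform OpenSSL line
    ("os-native-tls", "any"),  # e.g., the OS-supplied TLS stack
}

def needs_review(component: str, version_line: str) -> bool:
    """True if a discovered component falls outside the allowlist."""
    return ((component, version_line) not in KNOWN_GOOD
            and (component, "any") not in KNOWN_GOOD)

print(needs_review("openssl", "3.0"))         # False: on the list
print(needs_review("homegrown-cipher", "1"))  # True: flag for scrutiny
```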

The second reason a thorough understanding of your tools, techniques, and protocols is important is that it gives us a jumping-off point for tracking the status of implementations, protocols, and primitives. As research that might impact these things comes to light, we are positioned to respond accordingly. For example, if we know we have SSL 3 in the environment and an issue like POODLE surfaces, we can put two and two together, realize that the environment has been impacted, and build out a response plan.
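With an inventory in hand, that “two and two together” step can itself be partly automated. The sketch below cross-references a hypothetical inventory against a small table of protocols with known weaknesses; both data structures are illustrative assumptions:

```python
# Illustrative sketch: cross-reference an inventory against protocols
# with known weaknesses. Both tables are hypothetical examples.
VULNERABLE = {"SSLv2": "DROWN", "SSLv3": "POODLE"}

inventory = {  # hypothetical output of the inventory exercise
    "payments-api": ["TLSv1_2", "TLSv1_3"],
    "legacy-portal": ["SSLv3", "TLSv1"],
}

for system, protocols in inventory.items():
    for proto in protocols:
        if proto in VULNERABLE:
            print(f"{system}: {proto} affected by {VULNERABLE[proto]}; "
                  "plan remediation")
```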

Strategy 3: Establish a review mechanism

Likewise, establish a mechanism to evaluate usage and implementations that fall outside the list of “orthodox” situations defined above. This will, of course, be more or less detailed depending on the skills a particular organization has in-house: an organization with a development team already performing engineering tasks involving cryptography (for example, implementing special-purpose authentication or encryption tools) might choose to perform a more intensive review of a given usage or implementation than one without. Either way, determine what review criteria are practical and assign accountability to ensure that some evaluation occurs.

The methods you might wish to employ will likely be dictated by the strengths of the team chartered with this responsibility, but a number of approaches are possible. A formalized threat modeling approach, for example, can be adapted to assess a given situation. Alternatively, you might lean on third-party validation (e.g., evaluation under the Cryptographic Module Validation Program, or CMVP, for FIPS 140).

The goal here is not to inflict an onerous process on every situation where cryptography is used or to “boil the ocean.” Instead, the objective is to develop a practice that allows the organization to apply a consistent level of scrutiny across the board. Will it be perfect? Probably not. Will it give you a well-understood baseline, help you gather information, and help you understand your current posture? Absolutely.


Ed Moyle is Director of Thought Leadership and Research for ISACA. Prior to joining ISACA, Ed was Senior Security Strategist with Savvis, and a founding partner of the analyst firm Security Curve. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the Information Security industry as author, public speaker, and analyst.  
