The National Institute of Standards and Technology (NIST) recently published the second draft of the proposed update to the Framework for Improving Critical Infrastructure Cybersecurity, which was originally released in 2014. This second draft aims to make the framework easier to use and helpful to organizations of all types. However, I see more words being added without an actionable strategic plan. The updates, from my point of view, are tactical at best and do little to improve the security posture and resiliency of our critical infrastructure.
The first change refines the section on how to outline and communicate cybersecurity requirements within supply chains. In addition, I would like to see a continuous aspect to that validation and communication. As currently written, it reads more like infrequent, periodic testing; given the high rate of technological change at every juncture, the framework needs to call for continuous validation instead.
The next change involves self-assessment of cybersecurity risk. This seems far too static in nature. Risk has increasingly become an elastic entity, and to correlate it with business tolerances and outcomes, the self-assessment process also needs to be continuous. Current approaches to security testing are failing: it is often assumed there are no vulnerabilities until one is discovered, and that discovery is unfortunately often made by a malicious actor exploiting it. I don’t believe that incremental changes will be nearly enough to increase the cybersecurity resiliency of our critical infrastructure. We need to step back, take a broader and deeper view, and make some wholesale changes.
Zero Trust Approach
I’ve been a proponent of the Zero Trust Model for network security for a few years now. The core thesis of Zero Trust is that instead of relying on a well-defined perimeter protected by a firewall, you assume there is no perimeter and use other solutions and techniques to detect anomalies and take swift corrective action. Here I’m proposing that we take a similar approach to code and application security for critical infrastructure: perform static and dynamic testing as early in the software development life cycle as possible, under the assumption that potential security vulnerabilities are always being introduced.
The Zero Trust model shifts security controls from a well-defined perimeter to the endpoints, both users and devices. I often describe it as: “Assume that you have no network security in your environment. What steps would you take to secure your users, devices, and data?” Applied to software, start by assuming that every code commit could introduce a security vulnerability, and the same for every artifact build and every application deployment to production. Instead of “trust and verify”, the Zero Trust model needs to “verify, then trust”. This calls for “shifting left” and bringing security testing into the early phases of the Software Development Life Cycle (SDLC).
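As a minimal sketch of what “verify, then trust” could look like at the commit level, consider a gate that refuses to trust a commit until every configured scan reports clean. The function and the scanner names here are my own illustration, not part of the framework or of any specific tool:

```python
def verify_then_trust(scan_results):
    """Zero Trust gate: a commit stays untrusted until every scan is clean.

    scan_results maps a scanner name (e.g. static analysis, dependency
    audit) to its list of findings for the commit under review.
    Returns (trusted, names_of_failing_scans).
    """
    failing = sorted(name for name, findings in scan_results.items() if findings)
    return (not failing, failing)


# A clean run across all scans: the commit may be trusted.
ok, blocked_by = verify_then_trust({"static-analysis": [], "dependency-audit": []})
print(ok, blocked_by)  # True []

# A single finding is enough to withhold trust.
ok, blocked_by = verify_then_trust({"static-analysis": ["hardcoded credential"],
                                    "dependency-audit": []})
print(ok, blocked_by)  # False ['static-analysis']
```

In a real pipeline the findings would come from whatever SAST and software composition tools the organization runs on each commit; the point of the sketch is only the default-deny posture.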
The term and concept of “shifting left” isn’t new; it started with traditional QA testing, to ensure that software tests occurred as early as possible in the SDLC. As previously stated, the current approaches to code and application security testing aren’t working, and both the number and severity of breaches continue to increase, with the recent Equifax event being the unfortunate poster child of poor security hygiene. In my experience, code and application security testing is performed on a periodic basis and often not until the end of the development cycle. There is little to no collaboration between security engineers and developers to improve overall security resiliency.
The Zero Trust approach is a complete paradigm shift in which security testing becomes continuous and runs in line with the various development, integration, and deployment processes, in order to surface and remediate vulnerabilities before they are ever delivered to the production environment. Included in this shift is a cultural transformation in which the security team collaborates during all phases of the SDLC instead of only being involved at the very end of the cycle or, worse yet, after updates are delivered to production.
Measurement, one of the core tenets of a DevOps culture, is also paramount to the effectiveness of a Zero Trust approach. The initial security tests (static code analysis, software composition analysis, and application security testing) establish a baseline of security resiliency. Continuous testing in the SDLC then allows monitoring and measuring of both the number of vulnerabilities detected and how rapidly they are remediated. Resiliency is greatest when the delta between detection and remediation is as small as possible, especially when it comes to critical infrastructure.
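To make the detection-to-remediation delta concrete, here is a small illustrative calculation. The record format, with a pair of timestamps per finding, is my own assumption for the sketch, not anything prescribed by the framework:

```python
from datetime import datetime


def mean_time_to_remediate_hours(records):
    """Average hours between detection and remediation.

    records: iterable of (detected_at, remediated_at) pairs, where
    remediated_at is None while the vulnerability is still open.
    Open findings are excluded from the average; returns None if no
    finding has been remediated yet.
    """
    deltas = [(fixed - found).total_seconds()
              for found, fixed in records
              if fixed is not None]
    if not deltas:
        return None
    return sum(deltas) / len(deltas) / 3600.0


findings = [
    (datetime(2017, 11, 1, 9, 0), datetime(2017, 11, 2, 9, 0)),  # fixed in 24 h
    (datetime(2017, 11, 1, 9, 0), datetime(2017, 11, 3, 9, 0)),  # fixed in 48 h
    (datetime(2017, 11, 4, 9, 0), None),                         # still open
]
print(mean_time_to_remediate_hours(findings))  # 36.0
```

Tracking this number over time, per team or per system, is one simple way to see whether the continuous-testing investment is actually shrinking the window of exposure.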
In closing, one of the challenges with any strategic plan is how to get started. The transformation to Zero Trust application security for critical infrastructure, and the cultural shift to DevSecOps, is a journey, not a destination. The integration and automation of security testing can begin at any point in the SDLC, and additional steps can be added as the organization becomes more familiar with the processes. Security assurance will increase as additional testing is layered in over time.