DevOps is a cultural and professional movement seeking to break down silos and improve collaboration between those involved in software delivery processes. We now know that information security departments must also be involved in the development lifecycle, giving rise to the term DevSecOps. But collaboration isn't merely facilitated by tools or processes. Development, information security & compliance, and operations teams are composed of people. If those people don't use the same words to mean the same things, they risk squabbling in situations where they actually agree. It's why mission-critical industries, like aviation, have very specific meanings for terms like “mayday” or “pan-pan” and why those two signify different levels of emergency.
Vulnerability, risk, policy, compliance and governance are words that get lost in translation between development, security and operations and cause confusion. Let's dig into why:
Vulnerability

Let's start with a term that seems reasonably straightforward: vulnerability. Security vulnerabilities sound unquestionably bad, needing to be addressed with the highest possible urgency. Yet discussion of known vulnerabilities (and how and when they should be remediated) is useless without weighing their impact and likelihood. Nobody wants to be like John, the CISO in The Phoenix Project; armed with a binder full of security vulnerabilities, he tries to correct them as though they were all of equal priority and is dismissed by the business as both alarmist and obstructionist.
As a DevSecOps engineer, you can reduce confusion by using tools that help classify vulnerabilities based on their holistic business impact in your situation: your operating environment, whether a system or application is customer-facing, whether mitigating or compensating controls are in place, and the like. Pre-made vulnerability profiles from vendors are useful as a starting point, but be wary of any product that keeps ringing alarm bells for vulnerabilities you know aren't high-risk. Alarm bells that ring all the time get ignored, masking real problems.
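To make this concrete, here is a minimal sketch of contextual scoring. The field names, weights, and adjustment factors are illustrative choices of mine, not any standard formula; the point is only that a raw severity score gets re-weighted by business context before anyone rings an alarm bell.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    base_score: float      # e.g. a CVSS-style base severity, 0.0-10.0
    customer_facing: bool  # is the affected system exposed to customers?
    mitigated: bool        # is a compensating control (WAF, isolation) in place?

def contextual_score(f: Finding) -> float:
    """Adjust a raw severity score for business context.

    The multipliers below are illustrative, not a standard.
    """
    score = f.base_score
    if f.customer_facing:
        score *= 1.25      # exposed systems deserve more urgency
    if f.mitigated:
        score *= 0.5       # compensating controls reduce effective risk
    return min(score, 10.0)

def triage(findings):
    """Sort findings so the loudest alarms are the contextually riskiest ones."""
    return sorted(findings, key=contextual_score, reverse=True)
```

With this kind of weighting, a critical-rated CVE behind strong compensating controls can legitimately rank below a medium-rated one on a customer-facing system.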
Risk

This one is related to “vulnerability”, but while everyone generally agrees that vulnerabilities need to be fixed (at least eventually, in priority order), not everyone in an organization views risk the same way. For example, while security teams are likely to view risk solely through the lens of breaches and hackers, product and engineering teams weigh the risk of using less-mature (and potentially more vulnerable) tools against the business risk of missing a market window. A concrete example is serverless technology and the prevalence of Node.js as a runtime for event-driven programming. While Node.js has had some high-profile ecosystem issues around the governance of its libraries, the platform is still a huge accelerant for software engineers. Advice for security teams: find ways to help product engineers gauge which risks are worth taking in their choice of tooling, deployment platforms, and languages, rather than being seen as the department of “no” by blocking all use of innovative technology. On the flip side, engineering teams should be fully accountable for the risks of using less-mature technology and should be prepared to respond quickly in the event of serious flaws, security-related or otherwise.
Policy

The next three terms often strike fear into the hearts of product engineering teams because security has used them as cudgels in the past to block creativity. Policy, in particular, is like censorship: sometimes the mere existence of a policy is enough to stop engineering teams from experimenting, to the overall detriment of a company trying to create unique digital experiences.
Policy is often created to establish good governance across a company's assets and development processes, based on known best practices and an evaluation, again, of probability and impact. Where policy goes wrong is when it gets weaponized as a policing instrument. To be effective, policies must be living documents, with a way to negotiate changes with the policy owner, or even to gain a quick exemption in situations where the policy clearly doesn't apply.
In his book The Startup Way, Eric Ries describes a team at GE so intimidated by a legal policy about risk that they are afraid even to talk to the legal team that originated it. When, at Ries's urging, they do muster the courage to call counsel, they are immediately advised that the policy clearly doesn't apply to such a small experiment and that they can proceed. This story highlights two lessons. First, policies that aren't the basis for an ongoing discussion are not useful to the business and block innovation. Second, DevSecOps isn't just about collaboration between three groups; it's about making and keeping relationships with all areas of the business.
Compliance

This is another hot-button word for software development and operations teams, bringing to mind painful quarterly audit cycles or security teams citing “compliance” as a vague reason why certain goals cannot be accomplished. Yet development teams agree that their code should be evaluated for correctness, and they write unit, functional, and integration tests for that purpose. If we start to think about compliance with security or regulatory rules as just another aspect of code correctness, it becomes less scary. It also opens the door for security and development teams to collaborate on prioritizing these rules, rather than compliance being an all-or-nothing exercise.
In general, engineers want to follow rules when the rules make sense; they are just not compliance experts. Security engineers who position themselves as collaborators rather than the police are more likely to achieve results for the business.
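As a sketch of the “compliance as code correctness” idea, the checks below treat a handful of security rules exactly like unit tests on a deployment config. The config keys and rules are invented for illustration; in practice they might come from a YAML manifest or a cloud provider's API.

```python
# A hypothetical deployment config we want to validate.
config = {
    "tls_min_version": "1.2",
    "public_buckets": [],
    "admin_mfa_required": True,
}

def check_tls(cfg):
    # Rule: require TLS 1.2 or newer on all endpoints.
    assert cfg["tls_min_version"] in ("1.2", "1.3"), "weak TLS version"

def check_no_public_buckets(cfg):
    # Rule: no storage buckets open to the world.
    assert not cfg["public_buckets"], "public buckets found"

def check_admin_mfa(cfg):
    # Rule: administrators must use multi-factor auth.
    assert cfg["admin_mfa_required"], "admin MFA not enforced"

def run_compliance_suite(cfg):
    """Run every check and collect failures, instead of stopping at the first."""
    failures = []
    for check in (check_tls, check_no_public_buckets, check_admin_mfa):
        try:
            check(cfg)
        except AssertionError as e:
            failures.append(str(e))
    return failures
```

Framed this way, each rule has a name, a clear pass/fail condition, and a place in the same CI pipeline as the rest of the test suite, which also makes the rules individually negotiable and prioritizable rather than all-or-nothing.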
Governance

Many of the previous terms roll up under the umbrella of “governance”, and again, implemented poorly, a governance framework can stifle innovation. Governance overkill occurs when security and operations teams build “default deny” systems around the resources developers need to do their jobs: complex service-catalog workflows requiring managerial approval to request virtual infrastructure, or locked-down web proxies that block access to resources engineers need. Software developers know that the most modern companies have “default allow” policies in place, trusting engineers to do what's right for the company, while verification systems ensure safety and catch bad behavior. Engineering, security, and operations leadership should get together ahead of time to establish the right level of governance to enable innovation, rather than creating a low-trust environment for employees.
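The shape of “default allow with verification” can be sketched in a few lines. The audit-log structure and the list of sensitive actions below are hypothetical; the point is that actions proceed without up-front gatekeeping, while everything is recorded and the riskiest actions are flagged for after-the-fact review.

```python
import datetime

AUDIT_LOG = []
SENSITIVE_ACTIONS = {"delete_database", "modify_iam_policy"}  # illustrative

def perform(user: str, action: str) -> bool:
    """Default-allow: every action proceeds, but everything is recorded,
    and sensitive actions are flagged for review rather than blocked."""
    AUDIT_LOG.append({
        "user": user,
        "action": action,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "needs_review": action in SENSITIVE_ACTIONS,
    })
    return True  # trust by default; verification happens via the log

def pending_reviews():
    """Events a security team would look at after the fact."""
    return [e for e in AUDIT_LOG if e["needs_review"]]
```

Contrast this with a default-deny catalog workflow, where `perform` would return `False` until a manager approved the request; here the cost of oversight is paid asynchronously by reviewers instead of synchronously by every engineer.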
As established companies are challenged to become software-driven organizations, it's imperative that all groups involved are aligned toward the goal of delighting customers and delivering continuous value. This demands a new level of communication and collaboration. Information security can no longer see itself as a gatekeeper or the police, saving the business from nebulous, ill-defined “risk”, and software engineers must be accountable for the operability and security of their software all the way to production. These approaches already exist, and they are what let companies like Facebook, Google, and Amazon be so successful.
All the tools and processes in the world, however, don't matter if we aren't all speaking the same language. We must shift some of the terminology traditionally used in security and compliance toward the language of “yes, and” rather than “no”. Only then can we focus on customer value instead of arguing over semantics that are irrelevant to our users.