The incident model has dominated the security community since the start. It’s at the heart of security products and training, shaping how we talk about our work. The model gives us practical ways to define an incident, to describe how the team comes together to resolve it, and to measure the number of incidents and time to resolution consistently. It’s rooted in history and works across disciplines: watching firefighters respond to a fire alarm reveals a similar approach.
But just as fire alarms don’t prevent fires—they only limit the damage and risk to life after a fire starts—a security model built around incidents will not prevent security incidents. We need something better.
Incident response across industries
In over 130 years of research and effort to improve industrial safety, one of the most ubiquitous results to emerge is the “x days since an accident” sign, which enjoys a pop culture life in media and memes.
The signs and the memes invert the typical incident reporting statistics in that they emphasize the positive case—a count of time without incident—rather than the equally valid cumulative statistic of total incidents over time. Though many organizations no doubt also reported total incidents over time, as well as the time to recover from them, the signs reflect a focus on improving safety outcomes and reducing total incidents.
Shortcomings of today’s incident response model
Despite a decade of effort to “shift left,” incident response in the software security industry falls far short. A security model built around incidents won’t prevent security incidents. Moreover, the standard model through which incidents are tracked is itself flawed. Here’s why:
- The standard incident model graphs don’t show whether developer activity has been increasing or decreasing.
- Charts don’t highlight whether the incident rate has accelerated or decelerated relative to the activity rate.
- Graphs do not show how many tracked incidents have been fixed or are in the process of being fixed.
- Incident models don’t show how quickly incidents are resolved or the mean time to repair (MTTR).
And what about incidents included in these charts that were false positives? Incident model charts don’t measure actual security outcomes.
By not including variables such as these, we’re left with a largely incomplete picture of how effective a company’s current security measures are. That’s problematic, since we need to see the whole picture to understand where we can make improvements. While part of seeing the full picture involves tracking insightful metrics, it also requires adjusting the language we use.
How we define when an incident matters
Some software refers to these findings as “incidents,” while other software calls them “violations.” We could just as easily refer to them as “opportunities for improvement.” Labeling everything a “violation” escalates common development activities into punishable offenses and ignores the likelihood of false positives. It’s also not a far jump to think that if a violation exists, we must eliminate it, but that’s not always the case. And when software delivers only negative feedback, it ignores a century of research on improving learning outcomes.
In most cases, detecting a security issue has been fairly easy relative to getting detected issues fixed. Research published by the ACM shows that how security issues are reported to developers can make a big difference in the outcome:
We have gravitated toward a “diff time” deployment, where analyzers participate as bots in code review, making automatic comments when an engineer submits a code modification. Issues reported at diff time saw a 70% fix rate, whereas a more traditional “offline” or “batch” review, where bug lists are presented to engineers outside their workflow, saw a 0% fix rate.
The ACM paper later explains that the poor outcomes for fixing issues presented as a batch to developers were despite a false positive rate below 5%.
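The mechanics of diff-time reporting can be sketched simply: instead of handing developers the full backlog, surface only the findings that touch lines changed in the review at hand. This is an illustrative sketch, not code from any particular analyzer; the function and data shapes are assumptions.

```python
def findings_for_review(all_findings, changed_lines):
    """Filter analyzer output down to the current diff.

    all_findings: iterable of (path, line, message) tuples from an analyzer.
    changed_lines: {path: set of line numbers} modified in this code review.
    Returns only the findings on lines the author just touched.
    """
    return [
        (path, line, msg)
        for path, line, msg in all_findings
        if line in changed_lines.get(path, set())
    ]
```

A review bot would post each returned finding as an inline comment on the diff, so feedback arrives inside the developer’s existing workflow while the change is still fresh.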
Transform the security model for better outcomes
There’s also the unfortunate assumption that developers don’t care about security, when that’s far from the truth. Developers are charged with turning business goals into valuable outputs that drive growth, often with too little time or other resources to do the job “right.” Developers know they ship imperfect solutions, but they’re also trained that “done is better than perfect.” For developers, “done” means something the customer can use and (hopefully) drive business growth. Though it pains them to make the compromises necessary to meet schedules, every effective developer also cares deeply about the correctness, quality, and security of their code.
As professionals responsible for balancing business demands and delivering value, developers are also passionate about the quality and effectiveness of the tools they use to achieve those goals. Unfortunately, most security tools designed to solve security problems don’t respect or solve the challenges developers face. To continuously improve upon security outcomes, security processes need to work with developers, not against them.
After all, most people wouldn’t feel motivated if other departments consistently critiqued their work or imposed unfamiliar tools that kept them from working efficiently. But this isn’t the only problem with the way most organizations think about security.
As security professionals, we need to acknowledge the lesson here: even the most accurate code security detection system is worthless without a workflow that presents feedback to developers in a way that leads to fixes. Security outcomes aren’t improved by detection alone.
Security models must highlight ways to improve and make positive changes. These changes are often as simple as incorporating new ways to represent data within security models, setting security teams up to continuously improve and achieve better outcomes.
Casey Bisson, head of product and developer relations, BluBracket