Cloud Security

What Shouldn’t Be Automated, Really?

By Ben Tomhave

In preparing for my Cloud Security World 2016 talk, "Automagic! Shifting Trust Paradigms Through Security Automation," I've been thinking a lot about what can be automated, how to automate, and how to demonstrate and measure value around all that jazz. It has, however, occurred to me recently that perhaps I've been looking at this question all wrong; it's not so much a question of whether something should be automated as a question of what shouldn't be automated.

At first blush this may seem like a silly way of thinking about automation. After all, it's probably still too early to talk about automating, well, just about everything, right? As it turns out, this isn't the case. Not even close. There are so many ways to automate many of our standard development, operational, and security responsibilities that I'm actually surprised we're still hearing complaints about inadequate hirable resources rather than complaints about too much automation stealing jobs.

That said, there are certainly several places where human involvement is still required, either as a fail-safe or as a manual process. Here are a few of those categories and a little information on why full automation is at least premature, if not an outright bad idea.

Forensics and Incident Response

Several security automation and orchestration vendors offer capabilities in support of forensics and incident response, but these functions tend to center on enrichment rather than automated response. Increasing opportunities to automate some responses will start to emerge, and we have, in fact, already seen this sort of action around certain classes of attacks (for example, DDoS protection and brute force login attack response).
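To make that concrete, here's a rough Python sketch of the enrichment-first pattern. It's purely illustrative: the lookup, block, and queue helpers are hypothetical placeholders for whatever threat-intel, asset-inventory, firewall, and ticketing integrations are actually in place. The point is that context gathering is automated, only a narrow, well-understood class of attack gets an automatic response, and everything else lands in front of an analyst.

    # Sketch only: enrichment is automated; response mostly stays with a human.
    # Every helper below is a placeholder, not a real product integration.

    AUTO_RESPONSE_CLASSES = {"brute_force_login"}  # narrow, well-understood cases only

    def lookup_reputation(ip):             # placeholder threat-intel lookup
        return "known_bad" if ip.startswith("203.0.113.") else "unknown"

    def lookup_asset_owner(host):          # placeholder CMDB/asset lookup
        return {"owner": "ops-team", "criticality": "medium"}

    def block_source(ip):                  # placeholder firewall/WAF action
        print(f"[auto] blocking {ip}")

    def queue_for_analyst(alert):          # placeholder SOC queue / ticket
        print(f"[human] review needed: {alert['class']} from {alert['source_ip']}")

    def handle_alert(alert):
        # Enrichment is always automated -- it saves the responder time.
        alert["reputation"] = lookup_reputation(alert["source_ip"])
        alert["asset"] = lookup_asset_owner(alert["target_host"])
        # Only a narrow class of attacks gets an automatic response;
        # everything else goes to the human fail-safe.
        if alert["class"] in AUTO_RESPONSE_CLASSES and alert["reputation"] == "known_bad":
            block_source(alert["source_ip"])
            return "auto-contained"
        queue_for_analyst(alert)
        return "escalated"

    handle_alert({"class": "brute_force_login",
                  "source_ip": "203.0.113.7",
                  "target_host": "vpn01"})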

That said, for the foreseeable future there will be a continuing need for humans in the loop for forensics and incident response. Skilled forensic investigators and incident responders provide value that we cannot yet easily automate. Oftentimes it's necessary to have a human fail-safe making decisions about which response to take. Moreover, limitations in underlying IT architecture (network, endpoint, cloud) add complexity such that it's not trivial to automate responses. This, however, will change.

Patch Management

Much of the vulnerability management process can and should be automated. Vulnerability scans can be automated, feeding data into GRC and/or ticketing systems and/or CMDBs. These systems can then further automate mapping identified vulnerabilities to system and application owners, and in turn automatically generate tickets (work orders) for resolution. However, that is typically where the human fail-safe has to intervene to ensure that the patch is safe to deploy and approved for deployment, and then to schedule the deployment. Automation can take over again once a change is reviewed, approved, and scheduled, but a human is still needed at that gate.
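As a rough illustration of that hand-off, here's a minimal Python sketch, assuming hypothetical fetch_scan_results, lookup_owner, and create_ticket helpers in place of the real scanner, CMDB, and ticketing integrations. Everything up to ticket creation is automated; deployment is blocked until a human has approved and scheduled the change.

    # Sketch only: the scan-to-ticket path is automated, but deployment waits
    # on a human. All helpers are placeholders for real integrations.

    def fetch_scan_results():              # placeholder scanner export
        return [{"cve": "CVE-2016-0000", "host": "web01", "severity": "high"}]

    def lookup_owner(host):                # placeholder CMDB lookup
        return "app-team-a"

    def create_ticket(owner, finding):     # placeholder ticketing call
        print(f"ticket for {owner}: patch {finding['cve']} on {finding['host']}")
        return {"id": 1234, "approved": False, "scheduled": None}

    def process_findings():
        return [create_ticket(lookup_owner(f["host"]), f) for f in fetch_scan_results()]

    def deploy_patch(ticket):
        # Human fail-safe: automation stops here until someone has confirmed
        # the patch is safe, approved it, and scheduled the change window.
        if not (ticket["approved"] and ticket["scheduled"]):
            print(f"ticket {ticket['id']} waiting on human review and scheduling")
            return
        print(f"deploying change for ticket {ticket['id']} at {ticket['scheduled']}")

    for t in process_findings():
        deploy_patch(t)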

Now, that said, the DevOps world reduces the need for human involvement. In fact, in a heavily automated continuous integration/continuous deployment (CI/CD) pipeline where real-world A/B testing is always going on, it is more than feasible to automate patching, push out updated images with the patches applied, and then watch the deployment to ensure it lands successfully. Old images can then scale down as new ones scale up, dramatically reducing the need for human involvement. In fact, human involvement is then only necessary as a break/fix fail-safe.
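A rolling image replacement of that kind might look something like the following Python sketch. The launch_instance, is_healthy, and terminate functions are stand-ins for whatever orchestration layer is in use; the shape to notice is that patched instances scale up, old ones scale down, and a human only gets pulled in if a health check fails.

    # Sketch only: rolling replacement of an old image with a patched one,
    # halting for the human break/fix fail-safe if health checks fail.
    import random

    def launch_instance(image):            # placeholder orchestration call
        return {"image": image, "id": random.randint(1000, 9999)}

    def is_healthy(instance):              # placeholder health / A-B check
        return True

    def terminate(instance):               # placeholder scale-down call
        print(f"terminating {instance['id']} ({instance['image']})")

    def rolling_update(fleet, patched_image):
        for old in list(fleet):
            new = launch_instance(patched_image)   # scale up the patched image
            if not is_healthy(new):
                terminate(new)                     # stop the rollout here;
                print("health check failed -- human fail-safe takes over")
                return fleet
            fleet.append(new)
            fleet.remove(old)
            terminate(old)                         # scale down the old image
        return fleet

    fleet = [launch_instance("app:1.0") for _ in range(2)]
    fleet = rolling_update(fleet, "app:1.0-patched")
    print([i["image"] for i in fleet])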

Break/Fix

Standard break/fix scenarios are largely the domain of humans, and seem likely to continue that way for the foreseeable future. After all, break/fix is the very scenario where you want the human fail-safe to be involved. When bad things happen, recovery can be automated, but root cause analysis is still highly valuable, and typically necessitates human interaction.
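A small Python sketch of that split, with hypothetical restart_service, snapshot_diagnostics, and open_rca_ticket helpers standing in for real tooling: recovery is automated, but the diagnostics are captured and handed to an engineer for the root cause analysis.

    # Sketch only: automated recovery, human root cause analysis.
    import time

    def snapshot_diagnostics(name):        # placeholder log/metric capture
        return {"service": name, "captured_at": time.time(), "logs": "..."}

    def restart_service(name):             # placeholder automated recovery
        print(f"restarting {name}")
        return True

    def open_rca_ticket(bundle):           # placeholder ticketing call
        print(f"RCA ticket opened for {bundle['service']} -- human analysis required")

    def handle_failure(service):
        bundle = snapshot_diagnostics(service)   # grab context *before* recovering
        if restart_service(service):
            print(f"{service} recovered automatically")
        open_rca_ticket(bundle)                  # the 'why' stays with an engineer

    handle_failure("payments-api")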

As artificial intelligence (AI) and machine learning (ML) continue to advance, it is conceivable that much of the break/fix context and enrichment data can be collected and pre-analyzed, but I suspect that 100 years from now we'll still have engineers in the loop to review and analyze an event to determine what needs to be done to prevent it from occurring in the future.

Coding

Even though an AI has apparently written a passable short-form novel, that's not to say computers are ready to write their own software just yet. This will undoubtedly change in the future, but until then we will have humans in the loop, and that means we'll almost certainly continue to have problems with application and software security. However, fear not! Future generations of languages are likely to move toward natural language and abstract constructs that can then be more easily coded and manipulated. We're undoubtedly far closer to a brave new world and self-writing software than we may care to admit.

Baseline Builds

When referring to "builds" in this context, I'm not talking about software builds that are frequently already automated. Instead, I'm talking about the underlying components that need to be assembled in order to reach that desired CI/CD pipeline state. Components include the CI/CD pipeline itself, standard images, language and IDE customizations, and choosing and deploying various components of preference such as repositories, builders/packers, QA/testing tools, appsec testing tools, orchestration and automation tools, and so on.

Once all of these pieces are in place and properly chained together, humans will carry a decreasing set of responsibilities. But the role doesn't completely cease to exist. Human involvement in standardizing images (at least overseeing updates, if not conducting them manually) and in building additional tooling (like Netflix's Simian Army) will still be necessary.
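For a flavor of what "additional tooling" can mean, here's a minimal sketch in the spirit of Netflix's Chaos Monkey; it is not Netflix's code, and list_instances and terminate are placeholders for whatever cloud API is actually in play. The idea is to kill an instance at random, during business hours, to prove that automated recovery works while humans are around to step in if it doesn't.

    # Sketch only: a Chaos Monkey-style exercise using placeholder cloud calls.
    import random
    from datetime import datetime

    def list_instances(group):             # placeholder cloud inventory call
        return [{"id": f"i-{n:04d}", "group": group} for n in range(3)]

    def terminate(instance):               # placeholder cloud API call
        print(f"chaos: terminating {instance['id']}")

    def unleash_monkey(group, probability=0.2):
        if not (9 <= datetime.now().hour < 17):   # only when humans are around
            return
        if random.random() < probability:
            victim = random.choice(list_instances(group))
            terminate(victim)                     # recovery should be automatic;
                                                  # if it isn't, a human steps in

    unleash_monkey("web-tier", probability=1.0)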

Approvals and Authorizations

Humans will continue to need to review and approve certain requests. However, over time we will see standard patterns emerge that will reduce the amount of human involvement necessary to authorize/re-authorize things like access. On the flip side, automation may lead to an increased need for human interaction to review and approve certain activities. For example, out-of-band approval methods for code commits or code deployments could provide a useful check that unauthorized persons aren't submitting code into the CI/CD pipeline. Additionally, there may be cases where exfiltration of sensitive/restricted data triggers an out-of-band fail-safe response for a human to review and approve the action (thus helping limit data exfiltration by attackers). The future state for approvals and authorizations will be a nice example of human fail-safes in action.
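An out-of-band approval gate of that sort could be sketched as follows; send_approval_request and wait_for_decision are hypothetical stand-ins for a real second channel (a chat bot, a push notification, a signed email), and the automatic "yes" here exists only so the sketch runs.

    # Sketch only: deployment blocks on an out-of-band human approval.
    import uuid

    def send_approval_request(action, requester):   # placeholder second channel
        token = str(uuid.uuid4())
        print(f"approval requested for '{action}' by {requester} (token {token[:8]})")
        return token

    def wait_for_decision(token):                   # placeholder: a human answers
        return True                                 # pretend the approver said yes

    def deploy(artifact, requester):
        token = send_approval_request(f"deploy {artifact}", requester)
        if not wait_for_decision(token):
            print("deployment rejected by human approver")
            return
        print(f"deploying {artifact}")

    deploy("webapp-2.3.1", "alice")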

Policy and Process

As automation increases, the necessity for humans to define policies and processes for governance and oversight will also increase. We can think of this as building policy and process guardrails into automated activities and workflows, not dissimilar from codifying Asimov's Three Laws of Robotics. The focus of most human interaction will quickly shift toward these governance responsibilities within a heavily automated world, with the remaining emphasis on architecting and building solutions becoming more abstract in nature (and less hands-on). Clearly articulating limitations for automation and specifying where a human fail-safe must be involved will be an incredibly important task.
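As a minimal sketch of what those guardrails might look like in code (the rules themselves are invented for illustration, not a real policy language), automation checks each proposed action against human-defined policy and escalates to a person whenever it falls outside the lines:

    # Sketch only: policy-as-code guardrails evaluated before automation acts.

    POLICY = {
        "max_auto_blast_radius": 5,       # automated changes may touch at most 5 hosts
        "human_required_envs": {"prod"},  # production changes need a human approver
    }

    def guardrail_check(action):
        """Return (allowed, reason); automation halts and escalates on False."""
        if len(action["targets"]) > POLICY["max_auto_blast_radius"]:
            return False, "blast radius exceeds policy limit"
        if action["env"] in POLICY["human_required_envs"] and not action.get("approver"):
            return False, "production change requires a human approver"
        return True, "within guardrails"

    for action in [{"env": "staging", "targets": ["a", "b"]},
                   {"env": "prod", "targets": ["a"]}]:
        allowed, reason = guardrail_check(action)
        print(f"{action['env']}: {'proceed' if allowed else 'escalate to human'} ({reason})")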

Wrap Up

To conclude these thoughts, humans will continue to be involved in many areas for the foreseeable future. However, what's also clear is that these positions will be increasingly specialized and require a high degree of training and/or experience. This (not-so-distant) future state will pose interesting challenges to new workers as they enter various fields.

It will be interesting to watch as automation, AI, and ML continue to evolve, mature, and impact the industry. The hope is that they will lead to greater efficiency and effectiveness, as well as more inherently secure environments. Many of the traditional problems will continue to exist, for example, application and software security, identity and access management, and ensuring that adequate audit trails are maintained for various activities and authorizations.

We indeed live in interesting times, midway through the digital industrial revolution!
 



Ben Tomhave is a recognized and respected leader in the security community, currently serving as a Manager of Information Security Architecture at Ellucian. He holds a Master of Science in Engineering Management (Information Security Management concentration) from The George Washington University, and is a member and former co-chair of the American Bar Association Information Security Committee, senior member of ISSA, former board member of the Northern Virginia OWASP chapter, and member and former board member for the Society of Information Risk Analysts. His talk, “Automagic! Shifting Trust Paradigms Through Security Automation” will be featured on June 14th at Cloud Security World 2016.
