Adapting the classical art of penetration testing to the cubist world of cloud

Many technical practitioners believe that, at the end of the day, penetration testing is penetration testing. Proper penetration testing, however, is an art that must adapt over time. As an artist’s tools, materials, and media evolve, art evolves. With this evolution in technology, techniques and approaches must change as well.

Our “IT medium” has changed. With IT models shifting from “on-prem with a little cloud thrown in” to cloud-dominant implementations, penetration testing approaches must fundamentally evolve, just as approaches to fine art have evolved through the influences of culture, tastes, and emerging tools of the trade.

A mere 50 years ago, our industry began to shift from a single-user to a multi-user computing model. Multi-user systems were considered a risk, and penetration testing saw its roots in military operations, testing the security of the new multi-user, time-shared computing systems. Thus began the cave-painting era of penetration testing: relatively brutal techniques against systems that were simple compared to today’s.

Thirty years ago, ARPANET gave way to the Internet, and by the mid-nineties, we had interconnected and exposed systems across the globe. These changes exposed vulnerable systems (remember Nimda and Code Red?), which altered the course of security testing and added a new pigment to the proverbial security palette: “patch management.” Penetration testing evolved to focus on evaluating exposed services on server operating systems in an attempt to find unpatched systems.

The Internet brought with it web applications, which broadened the focus again. As the advent of oil painting provided a new depth and breadth of color and expression, so did the addition of application attacks to the network attacks already in the penetration tester’s arsenal. SQL injection, then cross-site scripting and other high-profile attack techniques emerged, bringing application security to the forefront of security concerns.

In 2006, Amazon launched Amazon Web Services (AWS), and Google launched Google Docs, commercializing the concept of cloud computing. This milestone didn’t significantly change the way penetration testing had been conducted since the days of the managed service provider; it simply broadened it. While a traditional managed service provider was rather unlikely to permit penetration testing, the major cloud providers embraced it to gain a deeper understanding of their own shortcomings. The most important aspect of these technology launches was that they were the leading edge of the next phase of evolution: serverless architectures.

Some say that serverless architecture represents the biggest change to the IT ecosystem we’ve seen in decades. This change moves us past virtual machines and containers and into a code-only world, taking the “ops” back out of “DevOps” (and baking “sec” back into “dev”). It has the potential to break many existing practices and processes, from challenges with monitoring and debugging to securing a much larger attack surface, along with the inevitable vendor lock-in that comes with developing on a specific cloud provider’s platform.

How will our current incarnation of the art of penetration testing evolve to meet this oddly shaped, abstract world? To start with, it’s no longer nearly as simple as traditional “network” attacks versus “application” attacks – we must adapt and address the entire attack surface of the solution. And it’s quite difficult to identify that attack surface without collaborating with the DevOps team responsible for it. How is one to tell which serverless infrastructure components are in play? Even in the simplest solution I can imagine – a simple website – it’s exceedingly difficult. I’ll use a quick example and reference AWS components (simply because that’s what I know best). Let’s say we’re building a simple website (a sketch of how a tester might enumerate these components follows the list):

  • Static content could be hosted in S3 OR on server-based infrastructure.
  • Static content could be distributed by Amazon CloudFront OR through server-based infrastructure.
  • Dynamic content could come via Amazon API Gateway, OR through a web service hosted on EC2.
  • API Gateway calls could invoke Lambda functions. Those Lambda functions are likely in a VPC, but not necessarily.
  • Those Lambda functions might query and return data from RDS, ElastiCache, S3, DynamoDB, Redshift, or EC2. The possibilities are almost endless!
  • Authentication? A safe guess is IAM, but it might not be.
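
To make the challenge concrete, here is a minimal sketch of how an authorized tester might begin enumerating those components with Python and boto3, assuming the DevOps team has granted read-only credentials to the account. The region and the handful of services queried are illustrative assumptions, not an exhaustive inventory.

```python
# Minimal attack-surface enumeration sketch using boto3 (read-only access
# assumed). The region is a hypothetical choice for illustration.
import boto3

session = boto3.Session(region_name="us-east-1")

# Which S3 buckets might serve static content?
for bucket in session.client("s3").list_buckets()["Buckets"]:
    print("S3 bucket:", bucket["Name"])

# Which CloudFront distributions front the application?
dists = session.client("cloudfront").list_distributions()
for dist in dists.get("DistributionList", {}).get("Items", []):
    print("CloudFront:", dist["DomainName"])

# Which REST APIs expose dynamic content?
for api in session.client("apigateway").get_rest_apis()["items"]:
    print("API Gateway:", api["name"])

# Which Lambda functions sit behind those APIs, and are they in a VPC?
for fn in session.client("lambda").list_functions()["Functions"]:
    in_vpc = bool(fn.get("VpcConfig", {}).get("VpcId"))
    print("Lambda:", fn["FunctionName"], "(in VPC)" if in_vpc else "(no VPC)")
```

Even this short script only scratches the surface, saying nothing about DynamoDB tables, Redshift clusters, or the IAM policies tying everything together, which is exactly why collaboration beats blind discovery.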

Given the vast array of services around which even this simple app could be built, black-box application testing is eliminated from consideration as an option.* The only way to comprehensively address the components is to understand the architecture and the attack surface of each component before embarking on a penetration test. This requires a collaborative approach between the tester and the DevOps team, developing a threat model for the application before engaging in any kind of testing. Working from that threat model, attack scenarios can be established and prioritized for execution.

A cloud architecture-focused threat model assembled for penetration testing purposes should describe the architecture, identify the entry points within each architecture component, and identify dataflows between components. The threat model must be used to inform the penetration test and, for best results, inform the remediation recommendations as well. Without it, the test may fail to address risks across the solution.
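
As an illustration only, here is a minimal sketch of such a threat model for the hypothetical website above, expressed as plain Python data structures so the tester and the DevOps team can review it together. The component names, entry points, and dataflows are assumptions; a real model would be assembled collaboratively.

```python
# A minimal threat-model sketch for the hypothetical website above.
# All components, entry points, and dataflows are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    entry_points: list[str] = field(default_factory=list)

@dataclass
class Dataflow:
    source: str
    target: str
    data: str

components = [
    Component("CloudFront", entry_points=["public HTTPS"]),
    Component("S3 static bucket", entry_points=["CloudFront origin access"]),
    Component("API Gateway", entry_points=["public HTTPS /api/*"]),
    Component("Lambda handlers", entry_points=["API Gateway invocations"]),
    Component("DynamoDB table", entry_points=["Lambda IAM role"]),
]

dataflows = [
    Dataflow("API Gateway", "Lambda handlers", "user-supplied JSON"),
    Dataflow("Lambda handlers", "DynamoDB table", "queries built from user input"),
]

# Attack scenarios fall out of the model: every entry point and every
# dataflow carrying user input becomes a candidate test case to prioritize.
for flow in dataflows:
    print(f"Scenario: can {flow.data} from {flow.source} abuse {flow.target}?")
```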

Is serverless computing a revolution in the cloud? Will it be a revolution in application penetration testing? It could be akin to how the development of the camera drove the beginning of modern art. Artists questioned the purpose of painting, and the Picassos and Pollocks ultimately drove the cubism and abstract expressionism movements in response. Just as painting evolved from watercolors on limestone and plaster to the “pop art” we see today, our penetration testing process must move from an “offensive security testing” approach to a collaborative “threat modeling” project.

* Not that black-box testing was ever a viable option; it is too generic an approach. Bob Ross had happy little trees, but they pale next to a Van Gogh.
