Deconstructing PCI 6.6

Organizations that handle credit cards feel the pressure building as the June 30, 2008 deadline for compliance with PCI Requirement 6.6 approaches. Most are still evaluating how to ensure compliance with this requirement strategically while maintaining a strong security posture.

The addition of stringent industry guidelines for web application security is long overdue. With the escalating threat of web attacks, organizations must remain vigilant. Web applications are a special breed of living code -- always online, always accessible, always being modified, and always subject to attack. Diligent web application security demands frequent assessment; attack research and findings targeting specific web applications are posted daily.

Requirement 6.6 is currently the subject of debate: its terminology is confusing, and its objective has been veiled by clever vendor marketing campaigns promoting specific solutions.

What does PCI Requirement 6.6 really say?
Requirement 6 is about “developing and maintaining secure applications and systems.” Requirement 6.1 requires that vendor-supplied security patches be applied within one month of release. Securing and fixing custom application code is not quite as easy as downloading a patch from your favorite software vendor: web application vulnerabilities must be identified, fixes developed, tested, and deployed. In short, you're on your own for the entire process.

Specifically, PCI Requirement 6.6 mandates the following:

PCI DSS version 1.1 Requirement 6.6: Ensure that web-facing applications are protected against known attacks by applying either of the following methods:
  • Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security.
  • Installing an application layer firewall in front of web facing applications.
PCI DSS version 1.1 Requirement 6.6 Testing Procedure: For web-based applications, ensure that one of the following methods are in place as follows:
  • Verify that custom application code is periodically reviewed by an organization that specializes in application security; that all coding vulnerabilities were corrected; and that the application was re-evaluated after the corrections.
  • Verify that an application-layer firewall is in place in front of web-facing applications to detect and prevent web-based attacks.
The confusion stems from the interpretation of the requirement. First, let's clear up some high-level misconceptions:
  • Requirement 6.6 is not just for “level ones.”
  • It does not specify service providers or merchants.
  • It does not specify either source code reviews or web-application firewalls.

What does PCI 6.6 really want?
The spirit of 6.6 can be met objectively and systematically. The ultimate goal is to ensure secure web applications. For applications developed or customized in-house, the following process must be continually performed: Identify vulnerabilities (find), correct them (fix), and test to confirm that the correction is effective (prove). Find, fix, prove, find, fix, prove.
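The find-fix-prove cycle described above can be sketched as a minimal vulnerability-lifecycle tracker. This is only an illustration of the record-keeping idea -- the state names and fields below are hypothetical, not anything prescribed by the PCI DSS:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative lifecycle states for the find -> fix -> prove cycle.
FOUND, FIXED, PROVEN = "found", "fixed", "proven"

@dataclass
class Finding:
    name: str
    status: str = FOUND
    history: list = field(default_factory=list)

    def record(self, status: str, when: date) -> None:
        """Advance the finding and keep an audit trail for the assessor."""
        self.history.append((status, when.isoformat()))
        self.status = status

# A finding moves through the cycle, leaving dated evidence behind.
f = Finding("SQL injection in /login")
f.record(FIXED, date(2008, 5, 1))    # remediation deployed
f.record(PROVEN, date(2008, 5, 8))   # retest confirmed the fix
```

The dated history is the point: each transition doubles as the documentation an auditor will later ask for.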

Some security vendors have marketed PCI Requirement 6.6 as satisfiable simply by installing a web application firewall or outsourcing an expensive source code review. This marketing distracts from what the PCI Council is actually seeking in Requirement 6.6.

The intended outcome of Requirement 6.6 is the establishment of a web application vulnerability lifecycle -- leading to the effective elimination of risk. Vulnerabilities must be detected, communicated, and corrected.

This can be done through various measures such as:
  • Black box testing (run-time assessment)
  • White box testing (source code review)
  • Binary analysis
  • Static analysis
  • Remediation by developers
  • Web application firewalls
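To make the first of these measures concrete: a black-box (run-time) test injects a distinctive marker payload into application inputs and inspects the response. The helper below is a hedged sketch of only the response-side check for unencoded reflection (a common XSS indicator); the marker string and function name are illustrative, not from any particular tool:

```python
import html

# A distinctive marker unlikely to appear in normal page content (hypothetical).
MARKER = "'\"<zz6xss>"

def reflects_unencoded(response_body: str, payload: str = MARKER) -> bool:
    """Flag a potential XSS issue: the raw payload came back verbatim.

    A real scanner would send the payload in each parameter and fetch the
    page over HTTP; here we model only the check applied to the response.
    """
    return payload in response_body and payload != html.escape(payload)

# A vulnerable page echoes input verbatim; a safe page HTML-encodes it.
vulnerable = f"<p>You searched for {MARKER}</p>"
safe = f"<p>You searched for {html.escape(MARKER)}</p>"
```

Even this toy check hints at why human validation matters: a verbatim echo is a strong signal, but context (inside a script block, an attribute, a comment) determines whether it is actually exploitable.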

Requirement 6.6 also requires separation between the developers and the security testing team. Clarification released by the PCI Security Standards Council states that the testing must be objective.

Application security testing is complex and resource-intensive. The tools and expertise required to perform safe and accurate testing can be costly. Keep in mind both the hard and soft costs of finding vulnerabilities -- hard costs for tools, training, consulting, and employees; soft costs such as resource outages, development meetings, production outages, staff required to work outside normal hours, and the manual validation and elimination of false findings, among others.

How to comply with Requirement 6.6
Requirement 6.6 is about protecting web applications, plain and simple. Given our modern threat landscape, it is no wonder that PCI Requirement 11.3.2 dictates “application penetration tests” be performed after every “significant change.” Meaningful web application security management requires frequent assessments as code and threats evolve continually. Requirement 6.6 is about developing a repeatable methodology that connects the “Find” (the vulnerability detection) process to the “Fix” process for the systematic, efficient elimination of vulnerabilities from web applications.

1) Find vulnerabilities in web-facing applications

Regardless of your classification as a Merchant or Service Provider, if you have a web-facing application, it must be assessed. This will be far more exhaustive than a network vulnerability scan, and will require authentication to access the majority of application functionality. This testing requires human expertise to exercise the application, validate findings, and test for logical vulnerabilities and other threats a testing tool cannot identify.

The PCI Council has not asserted itself as an authority on application security; it leaves the verification of compliance to approved auditors. What the PCI Auditors seek is evidence of due care.

Demonstration of due care in testing requires:
  • Thorough coverage. Automated tools alone cover only roughly half of the Web Application Security Consortium's threat classifications. If an application is worth protecting, test it thoroughly with both automated and human means.
  • Frequency of coverage. Web applications are continually changing, as is the threat landscape. Test the application, in production, as frequently as is meaningful -- for example, with each code change.
  • Efficient communication. Vulnerabilities identified become a known liability and must be managed. Vulnerabilities must be communicated clearly and effectively to groups tasked with remediation.
  • Precision in testing. Testing custom application code must be done methodically, and retesting must follow the same processes where possible. Patch development, validation of remediation, and corrections will be simplified if you follow a consistent methodology.
Vulnerabilities in custom application code can be found in a variety of ways. The Web Application Security Consortium has classified 24 different types of attacks targeting web applications. Roughly half of those threats (13 technical vulnerability classes) can be identified with some level of effectiveness through automated means, including run-time code testing as well as source code analysis. As with any detection technology, there is a certain signal-to-noise ratio; human validation is required to separate true vulnerabilities from false findings. There are many variables in application security testing, so your mileage will vary. (The 24 threat classifications are accompanied by two current appendices -- HTTP Response Splitting and Cross-Site Request Forgery -- which have not yet been formally ratified into the WASC Threat Classification document.)
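The signal-to-noise problem is why mature programs run every automated finding through a human triage step before it enters the remediation queue. A minimal sketch of that split (the record fields and verdict labels here are illustrative):

```python
# Each automated finding carries a reviewer verdict after manual validation.
findings = [
    {"id": 1, "class": "SQL Injection",  "verdict": "confirmed"},
    {"id": 2, "class": "XSS",            "verdict": "false_positive"},
    {"id": 3, "class": "Path Traversal", "verdict": "confirmed"},
]

def triage(findings):
    """Split raw scanner output into real vulnerabilities and noise."""
    confirmed = [f for f in findings if f["verdict"] == "confirmed"]
    noise = [f for f in findings if f["verdict"] == "false_positive"]
    return confirmed, noise

confirmed, noise = triage(findings)
```

Only the confirmed list is communicated to developers, which keeps the "efficient communication" requirement above from drowning remediation teams in false positives.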

Runtime assessments (referred to as “black box” testing), source code reviews (“white box” testing), binary and static analysis, etc., are all effective methods to find vulnerabilities in web applications. There is a misconception that these detection techniques all pursue the same end goal and compete for the same budgetary dollars. In fact, each testing ideology brings different benefits to the table at different price points; almost all are complementary and help paint a complete picture of application weaknesses.

2) Fix vulnerabilities

PCI Requirements 11.3.2 and 6.6 require this. For context, reread PCI requirement 6.1. Proving you have installed a patch to commercial applications and operating systems is easy. Proving you have corrected a weakness in custom application code is a little more complicated. This is where having a consistent testing and reporting methodology will come in handy. There are two approaches to code fixes:
  • If you own the web application code -- fix it.
  • If you do not own the code, or a valid business case or cost restrictions prevent fixing the raw code, correct the vulnerability through other methods (e.g., a web application firewall).
Be aware that simply buying expensive WAF hardware does not meet this requirement. Configuring an application-layer firewall to fix known vulnerabilities is complex; it carries the risk of misconfiguration and of blocking legitimate traffic to your website. Even so, you must run the WAF in blocking mode to satisfy the Requirement 6.6 expectation that the vulnerability has been corrected.
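As an illustration of what "configured to correct the vulnerability" means in practice, a WAF "virtual patch" for a known SQL injection flaw in one specific parameter might look like the following ModSecurity-style rule. The rule ID, URL path, and parameter name are hypothetical; the syntax follows the ModSecurity 2.x convention of placing disruptive actions on the first rule of a chain:

```apache
# Run the engine in blocking mode, not detection-only, so the
# vulnerability is actually corrected as 6.6 intends.
SecRuleEngine On

# Hypothetical virtual patch: reject SQL metacharacters in the
# "id" parameter of the known-vulnerable /account/view page.
SecRule REQUEST_URI "@beginsWith /account/view" \
    "chain,phase:2,deny,status:403,id:900101,log,msg:'Virtual patch: SQLi in id'"
    SecRule ARGS:id "['\";]"
```

Note how narrow the patch is: scoping it to one page and one parameter is what keeps blocking mode from disrupting legitimate traffic elsewhere on the site.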

3) Prove it

After significant investment in managing the web application vulnerability lifecycle, an auditor (SOX, PCI, or any other) needs documentation proving that the fix worked. Ensure the mitigation applied does in fact correct the vulnerability, both in practice and in writing.

The PCI 6.6 compliance process of “Find, Fix, Prove” can be simplified further. If the “Find” process is done with sufficient precision and creates proper documentation, the “Find” process can be done in a continual or ongoing manner -- and will in turn document proof of the “Fix” actions as they occur. Auditors like to see trends, especially when they involve continual detection and removal of vulnerabilities -- this makes proving due care very easy.
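When the "Find" process is precise and well documented, retest evidence falls out of it naturally: replaying the original probe after remediation produces exactly the record an auditor wants to see. A hedged sketch of such a record (the field names and probe description are illustrative):

```python
from datetime import date

def retest_record(finding_id: int, test_name: str, passed: bool, when: date) -> dict:
    """Produce an audit-ready record tying a retest back to the original finding."""
    return {
        "finding_id": finding_id,
        "test": test_name,
        "result": "fixed" if passed else "still vulnerable",
        "retested_on": when.isoformat(),
    }

# Replay the same probe that originally found the issue; the probe itself
# is stubbed here -- in practice it would re-run the original test case.
record = retest_record(1, "SQLi probe on /login", passed=True,
                       when=date(2008, 5, 8))
```

Accumulating these records over time yields exactly the trend data -- continual detection and removal -- that makes demonstrating due care straightforward.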

Find, fix, prove(n)
With a clear understanding of PCI Requirement 6.6, compliance is not only achievable, but can provide great value to web application owners and users. This requirement creates a need for visibility into the lifecycle for vulnerability detection and correction, and will serve to mature web application security. Applying metrics to the efficiency of detection, the cost of producing vulnerable code, and the associated costs of correction will only serve to advance the goal of total web application security.

Trey Ford serves as director of solutions architecture at WhiteHat Security, a leading provider of website security services.
