
No reason to keep application security in the backseat

When it comes to IT security, it is painfully obvious that both government and industry focus too heavily on perimeter and endpoint protection — network security, host security, virus protection, trusted internet connections, core configurations, firewalls and identity management.

Meanwhile, the applications that protect vital information and automate critical processes remain too vulnerable. 

Why is this true when US-CERT research and a 2009 government study both found that 79 percent or more of the attacks that led to data loss in 2009 targeted applications?

Overlooking vulnerabilities within software that already has been deployed puts government agencies and industry at tremendous risk for attacks, data loss and process interruption. 

One need look no further than the attack on Google by hackers in China earlier this year, enabled by a zero-day vulnerability in Microsoft's Internet Explorer, for a sobering reminder of how even the biggest software companies with the best processes can produce insecure code. 

Is the software purchased and developed by government agencies better? No. According to code-level analysis performed by Veracode's automated static and dynamic security testing on nearly 1,600 applications over the 18 months prior to February 2009, half of all government applications failed to demonstrate acceptable security, compared with slightly more than half of applications overall.

No one is doing a good job on application security.

What justification could there be for not pursuing application security far more aggressively? How can government agencies and industry break the habit of overdependence on perimeter defense? What would it mean to cybersecurity if applications behind the firewall were at least as safe as the firewall? 

To answer these questions, let us examine some of the common myths of application security in the government.

Myth #1: If I protect the network, I am adequately protecting my organization.

Reality #1: This is a necessary but insufficient defense. Data breaches are shifting more to the application layer, putting software at the root of federal cyber vulnerabilities.

Economic and time-to-value imperatives have driven agencies to reuse code and purchase software wherever possible. Vulnerabilities in any piece of software can be a door that bypasses network and endpoint controls and gives an attacker access to everything. Until government agencies and industry secure both their application development efforts and their software supply chain, they're vulnerable. 

We patch software with known vulnerabilities because we know our perimeter and endpoint security cannot protect against many software vulnerabilities. The only solution is to fix the root cause in the software with a patch.

We need a solution for the unknown software vulnerabilities that is equivalent to patching. We need an application security process.

At Computershare, a leading provider of financial-related software and services, perimeter security was well established when the security team began to look into strengthening application security. At first, the software development organization questioned the need. In a short time the evidence revealed that even the best perimeter would not be sufficient if vulnerabilities in applications remained unidentified and unresolved. 

Today, security and development professionals have embraced application risk management as an essential component of their comprehensive security efforts.

Myth #2: It is not possible to know the security state of all my critical software. 

Reality #2: The idea that application risk management is time-consuming, complicated and disruptive is anchored in an outdated understanding of what is possible. 

With cloud-based security testing and revolutionary technical innovations that enable automated testing on software in its final form rather than in source code, it is possible to assess hundreds of applications within one year or even a few months.

By prioritizing applications based on each one's level of business criticality, government agencies can quickly test the most mission-critical applications.
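As a purely hypothetical illustration of that triage step (the inventory fields, tier values and application names below are invented for this sketch, not drawn from any agency's actual process), prioritization can be as simple as sorting an application inventory by criticality:

    # Minimal Python sketch of criticality-based triage.
    # Field names and tier values are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Application:
        name: str
        criticality: int           # 1 = mission-critical ... 4 = low business impact
        handles_sensitive_data: bool

    def assessment_order(inventory):
        # Most critical first; among equal tiers, sensitive-data apps come first.
        return sorted(inventory, key=lambda a: (a.criticality, not a.handles_sensitive_data))

    inventory = [
        Application("public-website", 3, False),
        Application("benefits-processing", 1, True),
        Application("internal-wiki", 4, False),
    ]

    for app in assessment_order(inventory):
        print(app.name)   # benefits-processing, public-website, internal-wiki

In practice, an ordering like this would simply feed the testing queue, so the applications that matter most are assessed in the first weeks of a program.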

The right application risk management solution can fit easily into current internal certification and accreditation processes and integrate into the many different software development lifecycles (SDLCs) used across the enterprise, without causing disruptions.

Third-party applications and code can be assessed, too, so agencies can cost-effectively evaluate the security of every application behind the firewall. Application risk management solutions that are delivered through a cloud-based model and can evaluate every application regardless of its supplier scale globally across teams and geographies without any additional hardware or software, leading to lower operational expenditures, more complete coverage and a more accurate understanding of risk and compliance.

The Federal Aviation Administration, for example, realized benefits quickly, with an enterprise program running within a few weeks and without hardware, software or consultants. eLearning initiatives educated FAA security and development personnel on how to use the solution to perform assessments of their applications. Within a month, offices from across the FAA were submitting applications for code review and remediating vulnerabilities.

Myth #3: If my developers write secure code, my applications will be secure.

Reality #3: This myth rides on the back of an even bigger one: that software is written from scratch. 

The reality is that nearly all software contains code written by someone else, and every organization relies significantly on commercial, open-source and outsourced software providers. 

It is simply not economical or smart to ignore all of the high-quality, reusable code that makes the powerful software we all rely on possible. However, this is also why it doesn't matter if your developers write perfectly secure code: Your risk is in the security quality of the code they didn't write. 

Recent innovations that can evaluate the security of both internally developed and third-party code make it possible to address both the reality that your developers don't write all the code in the applications you rely on and the possibility that the code they do write isn't perfectly secure.

For the U.S. Army's Total Ammunition Management Information System (TAMIS) program, developers were equipped with on-premises source code security testing early in the project.

As the tools began to be used, it quickly became clear that the pace of adoption, required tool expertise and the issue of third-party code coverage necessitated a different strategy. Recognizing that source-code testing was good but insufficient, the Army adopted a cloud-based independent verification and validation application risk management services platform to ensure more complete, accurate and cost-effective coverage.

**

Without a change in the way government agencies and industry protect themselves from the exploitation of software vulnerabilities, progress can't be made.

Patching more quickly and updating anti-virus and IDS/IPS signatures faster are not stemming the tide. The threat space moves too quickly.

And, while no software will ever be perfectly secure, understanding the nature of software vulnerabilities across your entire portfolio of critical applications and how they contribute to enterprise security risk is crucial for protecting your organization.

Chris Wysopal

Chris Wysopal is Chief Technology Officer and co-founder at Veracode. He oversees technology strategy and information security. Prior to co-founding Veracode in 2006, Chris was vice president of research and development at security consultancy @stake, which was acquired by Symantec. In the 1990s, Chris was one of the original vulnerability researchers at The L0pht, a hacker think tank, where he was one of the first to publicize the risks of insecure software. He has testified to the US Congress on the subjects of government security and how vulnerabilities are discovered in software. Chris received a BS in computer and systems engineering from Rensselaer Polytechnic Institute. He is the author of The Art of Software Security Testing.
