The ramifications of bringing to market an application that is not secure have become painfully clear for far too many organizations.
In fact, a recent Ponemon Institute survey of 45 breached companies put a number on it, calculating that the average cost of a data breach in 2009 was $6.75 million.
With vulnerable applications serving as the No. 1 cause of data breaches, organizations clearly need to take a proactive approach to security. Not only will this reduce costs, it can also spare a company the reputational damage a security breach can cause.
To get there, organizations must ensure applications are designed securely from the initial stages of development, with security built in rather than added as an afterthought.
Clearly, the traditional "bolt-on" approach of adding security after systems are developed or implemented is no longer effective or safe. After all, what business wants to explain to consumers and regulators that code defects allowed attackers to steal sensitive and perhaps regulated information, especially when this could have easily been prevented? Not to mention that identifying and repairing a security vulnerability in a product already in consumers' hands can cost thousands of dollars, versus the minimal cost of finding the vulnerability earlier in the lifecycle.
The urgency to create secure code has never been greater given the rapid rise in interconnected applications flooding the marketplace.
This creates enormous opportunity, but with it come new complexities and risks. There is also the need to ensure the integrity of existing, legacy and mid-development applications in a network-oriented world. Companies must ensure that code is secure from the start to protect data privacy, preserve customer loyalty, safeguard sensitive information and maintain operational integrity.
With this in mind, it is imperative that any organization serious about designing secure software begins by measuring its source code review process against the following three criteria:
- Does it create consistency? Developers must follow consistent processes and policies and build a culture of improved security.
- Does it provide the whole security picture? When it comes to dangerous vulnerabilities, large-scale design flaws typically pose greater risk than individual coding errors. Fixing individual vulnerabilities has little effect if data is not encrypted, authentication is weak or an application contains open backdoors.
- Does it prioritize remediation? When reviewing existing code, developers must identify all vulnerabilities in the code and remediate the greatest risks first.
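The third criterion, prioritized remediation, can be sketched as a simple risk-ordering step. The `Finding` type, severity scores and sample findings below are purely illustrative (a real program might feed in CVSS scores from a scanner), not the output of any particular tool:

```python
# Minimal sketch of risk-based remediation ordering; all data is hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    location: str   # file and line where the issue was found
    category: str   # kind of vulnerability
    severity: float # e.g., a CVSS-style base score from 0.0 to 10.0

def prioritize(findings):
    """Return findings ordered so the greatest risks are remediated first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)

findings = [
    Finding("auth.c:88", "weak authentication", 8.1),
    Finding("util.c:12", "unused variable", 2.0),
    Finding("db.c:45", "SQL injection", 9.8),
]
for f in prioritize(findings):
    print(f"{f.severity:>4}  {f.category}  ({f.location})")
```

Encoding the ordering explicitly keeps the "greatest risks first" rule from being applied ad hoc, finding by finding.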
Once the source code review process is in place, there are four steps that must be taken to ensure that code is secure at the early stages of development and to protect that code from future vulnerabilities:
Target the potential places vulnerabilities may exist: To effectively measure the risk posed by a given application, IT managers can identify the locations of vulnerabilities by searching for two types of errors.
The first, implementation errors, are generally caused by poor programming practices and can typically be identified and remediated in isolation. The second, design errors, include the failure to use, or to adequately implement, security-related functions such as authentication and encryption, as well as the use of insecure external code.
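The contrast between the two error types can be illustrated with a short sketch. The table layout, function names and parameters here are hypothetical examples, not a prescribed fix:

```python
import hashlib
import os
import sqlite3

# Implementation error: building SQL by string concatenation invites
# injection. It stands alone and can be remediated in isolation.
def find_user_unsafe(conn, name):
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'")

# Localized fix: a parameterized query.
def find_user_safe(conn, name):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,))

# Design error: no security-related function at all -- the password is
# stored in plaintext. A salted hash (PBKDF2 here) must be designed in,
# changing both the code and the stored schema.
def store_password_hashed(conn, name, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    conn.execute(
        "INSERT INTO users VALUES (?, ?, ?)",
        (name, salt.hex(), digest.hex()),
    )
```

The first flaw is fixed by changing one call site; the second cannot be patched line by line, which is why design errors dominate the risk picture.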
Understand how to actively seek out vulnerabilities: While manual code reviews and ethical hacking are common approaches to actively seeking vulnerabilities, both methods can be time consuming, costly and provide an incomplete picture of the overall application security.
To effectively seek out flawed source code, IT managers must employ automated software vulnerability detection tools that spot potential flaws and trace each source code vulnerability to its root cause. Only advanced source code vulnerability testing tools, embedded in the software development lifecycle, can efficiently and effectively ensure that code is secure.
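As a toy illustration of what such automated scanning does, the sketch below walks a Python syntax tree and flags calls on a deny-list. Real analyzers perform far deeper dataflow and taint analysis; the deny-list and sample code here are assumptions for demonstration only:

```python
# Toy static checker built on Python's ast module; illustrative only.
import ast

RISKY_CALLS = {"eval", "exec"}  # hypothetical deny-list

def scan(source: str):
    """Return (line, name) for each call to a function on the deny-list."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                hits.append((node.lineno, node.func.id))
    return hits

sample = "x = eval(input())\nprint(x)\n"
print(scan(sample))  # flags the eval call on line 1
```

Unlike a manual review, this kind of check runs identically on every build, which is where the consistency and completeness benefits come from.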
Evaluate existing applications as well as code under development: To ensure secure code, the most efficient and effective measure is to test the applications and code in the development stage against the five broadest types of code vulnerabilities that represent the likeliest and most dangerous risks contained in current and legacy code.
These types of code vulnerabilities include security-related functions, input/output validation and encoding errors, error handling and logging vulnerabilities, insecure components and coding errors.
Following security-related issues through the entire source code of a given application significantly reduces the vulnerability of the application and the critical data it processes and protects.
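Two of the vulnerability categories above, input validation and output encoding, can be shown in a few lines. The allow-list pattern and function names are illustrative assumptions, not a complete defense:

```python
# Hedged sketch of input validation and output encoding; examples only.
import html
import re

USERNAME_RE = re.compile(r"[A-Za-z0-9_]{1,32}")  # allow-list, not deny-list

def validate_username(raw: str) -> str:
    """Reject any input that does not match the allow-list exactly."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_greeting(name: str) -> str:
    # Output encoding: escape user data before embedding it in HTML.
    return "<p>Hello, " + html.escape(name) + "</p>"

print(render_greeting("<script>alert(1)</script>"))
```

Validating on the way in and encoding on the way out addresses the category at both ends, rather than patching individual sinks.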
Apply a source code checklist: Assume applications guilty until proven innocent.
Given the myriad risks posed by the wide range of vulnerabilities, a comprehensive review process must be put in place to effectively test potentially vulnerable source code.
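One way to keep such a checklist from being skipped is to encode it as data and gate the review on it. The items and pass rule below are hypothetical, sketched from the vulnerability categories discussed above:

```python
# Illustrative review checklist; items and gating rule are assumptions.
CHECKLIST = [
    "All inputs validated against an allow-list",
    "All outputs encoded for their destination context",
    "Authentication and session handling reviewed",
    "Sensitive data encrypted at rest and in transit",
    "Error handling and logging avoid leaking internal detail",
    "Third-party components checked for known vulnerabilities",
]

def review(results: dict) -> bool:
    """Guilty until proven innocent: pass only if every item is checked."""
    return all(results.get(item, False) for item in CHECKLIST)

print(review({item: True for item in CHECKLIST}))  # True
print(review({CHECKLIST[0]: True}))                # False
```

Because missing items default to failure, an application is treated as insecure until every check has explicitly passed.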
That said, source code vulnerability testing tools alone do not ensure secure software. Developers and IT managers must work hand-in-hand with the tools to assess and measure potential risks while evaluating code.
Maintaining secure code needs to be regarded as a key concern for organizations interested in strengthening their overall security posture.
By following the steps outlined above, IT professionals can ensure that they are well positioned to protect their applications from potentially devastating vulnerabilities.