Opponents of this argument say we cannot have perfect code and therefore should not bother striving for good code. But the software industry has never made a concerted effort to write better code, so it is far too early to surrender. It makes no sense to give up when we apparently have plenty of time to invent new technologies that solve no meaningful problems.
We develop software to solve problems nobody knows they have, yet we claim not to have time to build software right the first time (or sometimes the second, third and fourth times). Are secure coding practices really less important than refrigerators that know when the milk is past its expiration date and can send a message to the grocery store?
Most customers have little to no information on the security-worthiness of the products they buy, and some risks cannot be mitigated. The single best thing the industry can do to mitigate customer risk is to write better software. Software development must change for the better because it has become part of our critical infrastructure. As such, software development needs to be held to the same standards as other facets of critical infrastructure.
Imagine if civil engineers built bridges with the same disregard for fundamental engineering practice as many software developers. It would not be acceptable to hear: "I can't be bothered to figure out how to make the bridge safe from failure. I am only interested in using the latest building materials and having a sexy facade," or, "It's not my fault the bridge failed. I didn't expect so many heavy trucks on it."
There are no perfect bridges, but engineers understand what is at stake when they fail to observe sound engineering practice. If civil engineers were as unschooled in secure design practice as the average software developer, failing bridges would cause a severe loss of life.
We all pay dearly for bad software, and the only reason there has been no customer revolt is that nobody knows the true cost of securing an organization's IT infrastructure. The industry cannot expect customers to make risk or cost trade-offs when they have imperfect information on that cost, and no information at all on how secure the code is or what it will cost to own.
All vendors claim their products are secure, because nobody would sell a product by stating: "The license fee is $100 per user. You will spend ten times that in the first year patching your systems. You may contract a virus that causes millions of dollars' worth of damage."
For some, the justification for writing poor software is that shipping immediately is faster and cheaper: it pleases your boss and helps the company make its quarterly numbers. The end result, however, is self-defeating.
Secure coding is simply good coding practice coupled with a good development process – the coding equivalent of a straight line – and it actually gets a product out the door faster and with higher quality.
Developing and implementing secure coding does not have to be difficult. Virtually every industry faces "time-to-market" pressures, yet automobile manufacturers do not decide to add another row of seats and put the air conditioner in the trunk while the car is on the assembly line. Mature companies have a development process with release milestones. Properly integrating security into the development process is the only way to achieve secure software, and security can be woven into a development process you already follow.
There are ways to motivate parties to ensure secure programming. Customers can demand secure software. For example, the Department of Defense requires formal, third-party security evaluations for products used in national security systems. Such evaluations force vendors to adopt a secure development process.
Customers can demand that vulnerability testing be done on the software they buy. The desire for vulnerability analysis is behind the National Security Agency's push for "higher assurance" security evaluations – Evaluation Assurance Level 4 (EAL4) and above.
If more tools were available to do vulnerability analysis (of source as well as object code, in multiple languages), it would be in every vendor's interest to use them. Detecting and fixing vulnerabilities before release quickly pays for itself on almost any licensed product, where a single security fault typically occurs in all product versions on all platforms.
I present the industry with a modest proposal. Since more than 50 percent of security faults result from buffer overflows, eliminating those faults over the next two years would cut security faults roughly in half. The goal is measurable and achievable, and "checking boundary conditions" is something every developer should have learned in a first programming class.