"You're never going to get everything, ever," says Michael Howard, senior security program manager for Microsoft, Redmond, Wash., and the author of several books on secure coding practices.
Long ago, this inability to achieve coding perfection wasn't a huge concern for software creators or end-users. But as clever hackers began exploiting developer miscues for fun and profit, it became increasingly clear that in order to root out security risks, developers had to improve their coding practices.
Over the past several years, many of the software industry's dominant vendors have acknowledged this situation and have put tools in place to minimize the number and severity of flaws.
"I think if you look at what's been going on in the industry, you see a lot more recognition that secure development practices are just as important as the features and functions you offer," says John Heimann, director, security product management, Oracle, Redwood Shores, Calif.
Recognizing the problem is an important step, but it is only the first on the path to shoring up vulnerable code. Though the idea of secure coding is gaining traction, for every Oracle or Microsoft there are many more software vendors with few, if any, development processes in place to ensure the security of their software.
"Software development practices today don't create secure software," Howard says. "If they did, then we would have far fewer security vulnerabilities across the industry. We need to change the process."
Software vendors, however, aren't the only ones that must worry about coding practices. There's also the growing number of in-house development shops that are pumping out web applications with little regard for security testing.
According to David Grant, vice president, marketing, at Waltham, Mass.-based Watchfire, as these shops have grown in size and number they have been unable to keep pace with security best practices. In the past, the most an in-house shop would do is hire one or two security auditors to check the web application during the final stages of the development cycle, he says.
"And that was fine when you're talking about one or two applications over a month, or potentially even a year," he says. "Now what we're seeing is a lot more applications being published and changed. Big companies have hundreds, if not thousands, of these applications going through their shops on a yearly basis. So having two to three specialists doesn't cut it anymore."
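The kind of flaw those overstretched auditors hunt for is often depressingly simple. As a toy illustration (not drawn from any vendor quoted here), consider SQL injection, one of the most common web-application bugs, sketched with Python's built-in sqlite3 module:

```python
# Illustrative sketch of a SQL injection flaw and its fix.
# The table and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    query = "SELECT role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Fixed: a parameterized query keeps the input out of the SQL grammar.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload returns every row from the unsafe version...
print(find_user_unsafe("x' OR '1'='1"))   # [('admin',)]
# ...but matches nothing when handled as plain data.
print(find_user_safe("x' OR '1'='1"))     # []
```

A single auditor can spot this in one application; at the scale Grant describes, catching it reliably requires developers who know the pattern and tools that flag it automatically.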
These experts agree that in order to successfully build secure code, companies must do several things. First, they have to persuade their bosses that this is a priority. Second, they must provide developers with the right education and tools to understand what secure coding is — and to test for secure coding along the way. And third, they have to change their processes to get developers to write better code earlier in the cycle.
Educating execs and developers
The ultimate goal, say experts, is to have your developers thinking about security throughout the development lifecycle — not just when you get to testing. But before secure code best practices can be put in place, the security crusaders must battle ignorance within the organization — both at the top and the bottom.
The first group that must be educated about the ramifications of building flawed applications is the execs.
Once you have executive buy-in, then it is a question of educating the worker bees. The only way to effectively eliminate dangerous code is if the developers know that they're making mistakes.
When Microsoft and Oracle began their secure coding initiatives years ago, one of the first things both organizations implemented was mandatory security education for all developers.
Now all Oracle developers must go through mandatory classes online, and take tests to prove their proficiency in secure coding principles. Similarly, Microsoft employees are required to take a combination of classes taught online and in the classroom.
Once the developers gain a fundamental understanding of application security, there is still the matter of getting them to change their work habits. But even more important than the right code analysis and testing tools are the right processes that tell the developers when and where to use them, Heimann says.
"What you need to do is develop some standards for what being secure means," Howard says. "That may mean writing your own coding standards or using something put together by The SANS Institute or someone else. You can fix code, but there's more to it than that. It starts tactical and becomes strategic."
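One way a coding standard becomes enforceable is through automated checks. As a toy sketch (not any vendor's actual tool), here is a scanner that flags calls a standard has banned, in the spirit of the banned-API lists used in secure development programs; the function names and regex-based matching are simplifications for illustration:

```python
# Toy illustration of enforcing a coding standard with a source scanner.
# Real analysis tools parse code properly; this regex approach is a sketch.
import re

# Example banned list: classic unsafe C string functions.
BANNED_CALLS = {"strcpy", "gets", "sprintf"}

def scan(source):
    """Return (line_number, call) pairs for every banned call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call in BANNED_CALLS:
            if re.search(r"\b%s\s*\(" % call, line):
                findings.append((lineno, call))
    return sorted(findings)

snippet = """
char buf[16];
gets(buf);
strcpy(buf, user_input);
"""
print(scan(snippet))  # [(3, 'gets'), (4, 'strcpy')]
```

Wiring a check like this into the build is what turns a written standard from a document developers may ignore into a gate they cannot skip.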
One of the most valuable strategic results of Microsoft's Trustworthy Computing initiative, he says, was the development of the Microsoft Security Development Lifecycle (SDL). This internal policy guide is the company's strategic roadmap for implementing secure software from the bottom up.
Because code can't be perfect, Howard says Microsoft focuses on two main objectives with SDL: reducing the number of vulnerabilities in the code and reducing the severity of those that slip through.
Though the company is still publicly slammed for newly found vulnerabilities, Howard says Microsoft has made tremendous strides in securing its code since implementing SDL four years ago.
Howard is so confident that these SDL strategies work that he's spreading the word about Microsoft's internal secure coding practices with a new book, co-written with his boss, Steve Lipner, The Security Development Lifecycle.
According to Howard, the strategies should translate well to any business that develops software. "When we wrote this book we had this overarching goal that it was applicable outside of Microsoft."
Visit www.scmagazine.com/us/podcasts to listen to our podcast with Dr. Brian Chess, chief scientist for the coding analysis company Fortify.