We have all heard such predictions and they have yet to materialize. Such is the nature of statements that promise a “total” solution to a problem — they usually turn out to be spurious and we dismiss them as over-enthusiastic marketing, though often they fundamentally point in the right direction.
Such is the case with recent talk about software that will be able to defend itself, echoing the “self-defending networks” idea of a couple of years back. The logic is that if you could embed security into every level of code, you would be establishing many small checkpoints inside the application instead of a few large ones outside it. It sounds good in theory, and it probably is a good idea if implemented judiciously, but it is not a panacea.
It all starts, innocently enough, with secure coding. There has been growing recognition that security must be embedded into the fabric of applications during development, rather than as an afterthought left to security bolt-on features or third-party tools. This is not new, and it is probably the easiest way to create a barrier against some of the more common attack vectors. It is an objective worth pursuing, and I regularly recommend it as a first step for securing applications.
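As a concrete illustration of what secure coding means in practice, here is a minimal sketch of one such practice: using parameterized queries instead of string concatenation to close off SQL injection, one of the more common attack vectors mentioned above. The `sqlite3` module and the `users` table are illustrative choices of my own, not anything prescribed by a particular product.

```python
import sqlite3

# Set up a throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so a crafted value can change the meaning of the query.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data,
    # never as part of the SQL statement itself.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row leaks: the injection succeeds
print(find_user_safe(payload))    # empty result: the payload is just a literal
```

The point of the example is the one made in the paragraph above: a small checkpoint embedded at the point of coding blocks a whole class of attacks, yet it says nothing about the new threats and changed requirements discussed next.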
It is quite a leap, though, to extend secure coding to the point where software could purport to “defend itself,” and then to suggest that dedicated security products could become obsolete as a result.
There will always be dedicated security products. Many real-world obstacles make “self-defense” in software exceedingly difficult: tradeoffs between functionality and security, the diminishing returns of bullet-proofing applications, and plain human error, to name a few.
Even if we assume that we could rid ourselves of the practical limitations, is it even conceivable that software could be written to defend itself completely? Not really. There are inherent, structural reasons why this would never be possible:
Ever-changing threats: There is simply no way for applications with only “embedded” security to catch up quickly enough to cope with new threats, whereas this is a primary function of many dedicated security products.
Changing requirements: Here is a familiar scenario — an enterprise builds a database for internal use, and a few years later finds itself opening it via the web to its customers. A company implements a CRM solution for its call center, only to open it up to web self-service later on. Can anyone seriously claim that such scenarios could be predicted at the time of coding, or even implementation? We are talking about step changes in the level and class of threats, which would not only require dedicated security products, but probably a new class of such products.
Layered security: We all know that a key principle of any security policy is having several security layers of different kinds. Even if a bank has the most advanced safe in the world, they will not give up on their alarm system or let their guards go. Software may be able to defend itself, but when it comes to sensitive data, we will always need additional external measures to truly defend it.
Improving the security of software is always a welcome move. It is also inevitable that vendors of infrastructure — networks, databases, operating systems — will gradually build more security into their products.
All that means, however, is that security vendors need to keep on their toes, focus on the next emerging threat, and plug unforeseen holes in applications — secure as they may become. As long as applications have inputs, outputs, users, administrators, integrations and interfaces, they will be vulnerable. It is a never-ending chase.
– Slavik Markovich is CTO and co-founder of Sentrigo Inc., a provider of database security software. His blog, “Musings on Database Security,” can be found at http://www.slaviks-blog.com.