To Hide or Not to Hide?

Security through obscurity is always the subject of heated debate, although the relevant parties often wish it wasn’t, since no one likes their dirty laundry being aired in public.

The debate takes many forms, but the general idea is the same: is it enough just to hide what you're doing, and hope a potential attacker doesn't figure it out? Of particular relevance at the moment: is it sufficient to hide security flaws and hope people won't find out until the vulnerability is fixed?

Efforts by large vendors (the usual suspects) and the IETF to coordinate a bug-reporting process which would then impose a gag of about a month (giving the vendor time to fix the problem and deliver a patch) have met with opposition. Detractors claim that this encourages slow turnaround from the developers, assumes that hackers won't figure it out themselves (a laughable supposition), and does nothing to address the underlying problem of users not applying patches anyway.

There are many sides to this which I won't go into right now, but one aspect strikes me as particularly noteworthy, and it revolves around reverse engineering: the assumption that attackers won't work out the bug for themselves and release an exploit before you've had a chance to fix it.

Being peripherally involved in open source software development, I have to wonder if these people have any clue at all. Reverse engineering has moved from a science to something approaching a blend of art and sport. It's seen as a challenge; something fun to do in your spare time that is likely to yield results quickly, rather than a really difficult task best left to white-coated experts in a lab who may or may not get results despite massive investment of resources.

Witness the efforts of the community to circumvent the security mechanisms in Microsoft's X-Box gaming console. The console has been on the market only a few months, and already an anonymous benefactor has offered a $200,000 prize to the first coders to successfully run Linux on the device. Considering the sheer engineering overhead involved, it's astounding not only that this is even possible, but that members of the community genuinely believe it to be an achievable goal. There's a massive industry built around "modding" game consoles of all sorts; why even bother with the obfuscation in the first place?

Let's also look at the recent efforts of the Honeynet project, which managed to capture a binary installed by a cracker on a compromised host. A challenge was issued to reverse engineer the binary and document its activities. The winner managed not only to completely decompile the binary (a denial-of-service and remote-control agent), but also to deduce the operating environment used to create it, and thus suggest a host of possibilities about the author's demographics. A great job, and my point is that if this can be done for this binary, it can be done for a commercial product just as easily.

Hoping a cracker will not guess your secret password-hashing algorithm is thus completely pointless, and just inviting abuse. I note that SQL Server has suffered just such a compromise this very week.
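To make the point concrete, here is a minimal sketch (in Python, using an entirely hypothetical home-grown scheme and key, not the SQL Server mechanism mentioned above) of why a secret obfuscation routine offers no protection once it has been reverse engineered, while a published, salted, iterated hash does not depend on secrecy at all.

```python
import hashlib
import os

# Hypothetical "secret" home-grown scheme of the kind the column warns about:
# XOR each password byte with a fixed key and hex-encode the result. Once an
# attacker recovers the key (one decompiled function away), every stored
# value is instantly reversible.
SECRET_KEY = 0x5A  # illustrative only

def homegrown_obfuscate(password: str) -> str:
    return bytes(b ^ SECRET_KEY for b in password.encode()).hex()

def homegrown_recover(stored: str) -> str:
    # The "attack" is identical to the forward transform; no guessing required.
    return bytes(b ^ SECRET_KEY for b in bytes.fromhex(stored)).decode()

# A published, salted, iterated hash (PBKDF2): knowing the algorithm gains the
# attacker nothing beyond a brute-force search of the password space.
def salted_hash(password: str, salt: bytes = None) -> tuple:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

if __name__ == "__main__":
    stored = homegrown_obfuscate("hunter2")
    print("recovered from 'secret' scheme:", homegrown_recover(stored))
    salt, digest = salted_hash("hunter2")
    print("PBKDF2 digest (no secret algorithm needed):", digest.hex())
```

The design lesson is the same one the column draws: the strength of the second approach lies in the key material and the work factor, not in keeping the algorithm hidden.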

I mentioned open source software. I have to say that the open source versus closed source argument so often raised around this issue actually doesn't hold much water. The argument is that open source software is subject to better peer review, and thus bugs are spotted and fixed sooner. Maybe so, but whether or not you release your source code, there are people who can and will take the product apart and discover the holes. A significant percentage will then publish their findings, release tools which make exploiting the vulnerability that much easier, and continue to be a thorn in the side of the commercial developer.

So what's the answer? Not the ostrich solution, that's for sure. If anything, telling the market there's a bug but refusing to describe it only serves to fuel a frenzy of experimentation by curious hackers.

Perhaps an acceptable compromise could be publishing a synopsis of the vulnerability plus a workaround ("Malformed requests can root IE via Gopher; disable it. Fix due this week."), and keeping the technical details out of the public eye until it's fixed. The key here must be to satisfy the need of the user (immediately protect against the threat) without pandering to the attackers. At present, the user is completely vulnerable to any attacker ahead of the developer.

Lastly, let's talk about what this means to the development process. If you're an independent software vendor (ISV) or a developer of in-house software, not paying close attention to possible vulnerabilities is inviting disaster. The assumption that no one will ever find out simply because your home-grown protocol isn't documented was once short-sighted; now it's just dumb. Stick to best practice, use the right tools and shun developers who try to convince you anything else is acceptable.

Not that obscurity never works. I did, after all, write about steganography last month. Obscurity raises the bar to the attacker, and can gain you time for detection and reaction. But that's no substitute for proven technologies and practices. Implement these, and then obfuscate to your heart's content.

If any component of your security relies on hoping your attackers don't unravel your puzzles, you've probably already lost the fight. It's just a matter of waiting for the bad guys to realize it, that's all.

Jon Tullett is U.K. and online editor for SC Magazine (www.scmagazine.com).