
Lessons of the Honeypot II: Expect the Unexpected

Observation and monitoring of computer intruders have long been performed in a haphazard fashion, if at all.

But the domination of the prevention mindset in computer security, still manifested in the infamous "But we have a firewall!", is slowly subsiding.

The need for more intelligence information about the blackhat community has been emphasized by many in the security industry. However, information security still tends to be defensive. Security vendors might argue over whether defense should be proactive or reactive, but security is still mostly based on a castle metaphor. And whether the defenders choose to rely more on stone walls (passive defense) or on the damage done by their ballistas and burning oil (active defense), they still have very little idea what is going on outside the walls. Looking at the intrusion detection and demilitarized-zone firewall logs and trying to make an educated statement about the blackhat community based on them is akin to measuring the attacking army's strength by peering into the darkness from the top of the castle bastion. Enemies in the open (a.k.a. "script kiddies") will be seen, while those hiding in the forest will remain undiscovered.

While known to security professionals for a long time, honeypots have recently become a hot topic in information security. However, technical information on their setup, configuration and maintenance remains sparse, as do qualified people able to run them.

The term "honeynet," used in this article, originated in the Honeynet Project and describes a network of computer systems with fairly standard configurations connected to the Internet. The only difference is that all communication is recorded and analyzed and no attacks can escape the network. The systems are never "weakened" for easier hacking, but are often deployed in default configuration with minimum patches (as unfortunately, are so many others on the Internet). See my previous "Lessons of the Honeypot" article (www.infosecnews.com/opinion/2002/06/19_04.htm) or the Honeynet Project web site (project.honeynet.org) for more details.

After running a honeynet (www.netforensics.com/honeynet1.html) for several months, another important lesson emerged: to run a honeypot successfully, one has to expect the unexpected.

One might argue that the whole security space is the realm of the unexpected, but in honeypots this uncertainty is dramatically higher.

In most cases, intrusions into production networks are handled so as to minimize the network's exposure to hostile elements. Returning the system to production and "putting it back the way it was" (hopefully, better secured) is the focus of the recovery effort. In a honeypot, there is prolonged contact between the attacker and the target system, leading to a much higher chance that Murphy's Law will rear its ugly head and something will go wrong. The good thing is that for research honeypots, deployed to study the intruders, the impact of such failures is less dramatic than for production security systems, even though the chances of failure are greater. It should be noted that production honeypots need to be deployed with much greater care so that they do not actually lead to increased risk.

Removing the uncertainty from computer security appears to be impossible, even if all of the components - people, process and technology - are taken care of. Some believe that an audit approach to security, in which all the weak spots are ironed out by careful design and audit of the critical applications and system configurations, will reduce the amount of the unexpected. However, it is commonly known that "there is no 100 per cent security," and some recent cases (such as this one: www.wired.com/news/infostructure/0,1377,54400,00.html) only emphasize that nobody is unassailable. The people referenced in that article know how to run secure systems better than almost anybody, yet they still ended up in it.

As for what might happen - some stories follow.

Setting up a honeynet involves configuring security software, which can be complicated at times, and mistakes happen. That is why having at least two of everything helps a great deal. Honeynet Project guidelines (project.honeynet.org/alliance/requirements.html) specify a degree of redundancy for both data control (preventing attacks from the honeypot) and data collection (keeping the evidence of the compromise) to mitigate both human and software errors. Security software is often pushed to its limits in the honeynet. For example, during a one-hour compromise, the attacker tried to push more than 5GB of packet data out to flood a victim. Data control functionality prevented the flood of traffic from reaching the victim, but one of the data collection systems failed due to the overflow.
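
A minimal sketch of the "two of everything" idea for data collection follows; it is an illustration rather than the Honeynet Project's actual tooling. Each captured frame is logged to two independent files, so one collector failing under load does not lose the evidence. The log file locations are placeholders.

#!/usr/bin/env python3
# Sketch of redundant data collection (not the Honeynet Project's tooling):
# every captured frame is recorded to two independent logs, so a single
# collector failing under load does not lose the evidence.
import socket
import time

ETH_P_ALL = 0x0003            # capture every protocol on the interface

def main(log_paths=("/tmp/capture-primary.log", "/tmp/capture-backup.log")):
    # AF_PACKET raw socket sees all frames (Linux, needs root)
    sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    collectors = [open(path, "a", buffering=1) for path in log_paths]
    while True:
        frame, meta = sniffer.recvfrom(65535)
        record = f"{time.time():.6f} iface={meta[0]} length={len(frame)}\n"
        for collector in collectors:
            try:
                collector.write(record)         # each collector written independently
            except OSError:
                pass                            # a failed collector must not stop the rest

if __name__ == "__main__":
    main()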

In addition, even popular and commonly used software packages have bugs. These critters have a tendency to crawl out under stress, causing a defense component to fail at the least desirable moment. Bugs in firewalls are especially nasty, as they may let attacks out of the honeypot, leading to liability concerns.

The intruders also bring an incessant stream of surprises to the honeynet. New exploits, tools and tactics never cease to amaze the honeypot operator. On January 8, 2002, one of the honeynets operated by the Project was compromised via a then-unknown Solaris exploit. While there were rumors about this vulnerability in the dtspcd daemon on Solaris, the actual exploit code did not surface until the honeypot compromise. The analysis of the compromise made possible a CERT advisory (www.cert.org/advisories/CA-2002-01.html) and made the attack known to the security community.
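
For those who want to look for similar probes in their own records, a small, purely hypothetical illustration follows: the dtspcd service from the CERT advisory listens on 6112/tcp, and the sketch scans a simple connection log for inbound traffic to that port. The whitespace-separated "timestamp source destination destination-port" format is assumed for the example and does not match any particular product's log layout.

#!/usr/bin/env python3
# Hypothetical illustration: scan a simple connection log for inbound traffic
# to 6112/tcp, the dtspcd port named in CERT CA-2002-01. The whitespace-
# separated "timestamp source destination destination-port" format is an
# assumption for the example, not any particular product's log layout.
import sys

DTSPCD_PORT = "6112"

def suspicious_entries(log_path):
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 4 and fields[3] == DTSPCD_PORT:
                yield line.rstrip()

if __name__ == "__main__":
    for entry in suspicious_entries(sys.argv[1]):
        print("possible dtspcd probe:", entry)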

New tools were also captured on the honeypots. The now-famous story of "the-binary," featured in two of the Honeynet Challenge contests (project.honeynet.org/reverse/ and project.honeynet.org/scans/scan22/), shows that the development of new hacker technology never ceases. Communicating over the rarely used NVP protocol, this integrated remote control and denial-of-service tool used encryption (albeit simple to crack), password protection, reverse-engineering defenses and spoofing-based evasion techniques. It was capable of performing several different DoS attacks, including the insidious reflective denial-of-service, in addition to giving the attacker full control over the compromised system. After the incident with the binary, several Honeynet guidelines were updated to handle similar cases in the future.
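
The NVP channel is a good reminder that a honeynet operator cannot afford to watch only TCP and UDP. The sketch below, an illustration rather than the Project's actual detection code, sniffs IPv4 traffic and flags any packet whose protocol number is not ICMP, TCP or UDP; the-binary's protocol 11 traffic would stand out immediately in such a report.

#!/usr/bin/env python3
# Illustration, not the Project's detection code: flag IPv4 packets whose
# protocol number is not ICMP, TCP or UDP. The-binary's NVP control channel
# (IP protocol 11) would stand out immediately in such a report.
import socket

COMMON_PROTOCOLS = {1, 6, 17}   # ICMP, TCP, UDP
ETH_P_IP = 0x0800               # capture IPv4 frames only

def main():
    # AF_PACKET raw socket sees every IPv4 frame on the wire (Linux, needs root)
    sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_IP))
    while True:
        frame, _ = sniffer.recvfrom(65535)
        ip_packet = frame[14:]                  # strip the 14-byte Ethernet header
        protocol = ip_packet[9]                 # IP protocol field at offset 9
        if protocol not in COMMON_PROTOCOLS:
            source = ".".join(str(b) for b in ip_packet[12:16])
            destination = ".".join(str(b) for b in ip_packet[16:20])
            print(f"unusual IP protocol {protocol}: {source} -> {destination}")

if __name__ == "__main__":
    main()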

The "SucKit" rootkit, first described in Phrack magazine as a curiosity, appeared in the wild, to the chagrin of system admins. The tool uses the runtime kernel patching technique, allowing it to modify the running UNIX kernel even without module support and grant the attacker remote access to the system.

The attackers seen by the Honeynet Project have yet to develop the capability to detect the presence of a honeypot, but such development is rumored to be taking place in the underground. For example, there are tricks for detecting the presence of VMware, which is often used to deploy virtual honeynets (project.honeynet.org/papers/virtual/).
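
One widely known trick is trivial to reproduce: VMware virtual network cards carry MAC addresses from VMware's assigned vendor prefixes, which an intruder with a shell on the honeypot can simply look up. The sketch below checks a Linux system's interfaces for those prefixes; the list covers the common VMware prefixes and is not exhaustive.

#!/usr/bin/env python3
# Sketch of one widely known trick, not an exhaustive check: VMware virtual
# network cards carry MAC addresses from VMware's assigned vendor prefixes,
# which an intruder with a shell on the honeypot can simply look up.
import glob

VMWARE_PREFIXES = {"00:05:69", "00:0c:29", "00:50:56"}   # common VMware prefixes

def vmware_interface():
    for path in glob.glob("/sys/class/net/*/address"):    # Linux-specific layout
        with open(path) as address_file:
            mac = address_file.read().strip().lower()
        if mac[:8] in VMWARE_PREFIXES:
            return path.split("/")[4], mac                 # interface name, MAC
    return None

if __name__ == "__main__":
    match = vmware_interface()
    if match:
        print(f"possible VMware guest: {match[0]} has MAC {match[1]}")
    else:
        print("no VMware MAC prefixes found")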

Some of these unexpected cases later serve as examples to educate others on security issues, such as those featured in the Honeynet Project Challenges (project.honeynet.org/misc/chall.html). Learning from past experience, however, does not seem to slow the appearance of new surprises - research honeypots continue to bring forth a stream of new security insights.

Anton Chuvakin (www.chuvakin.org) is a senior security analyst with a major information security company. His areas of infosec expertise include intrusion detection, UNIX security, honeypots, etc. In his spare time he maintains his security portal www.info-secure.org.
