The Politics of Vulnerabilities

In the last few months, debate over the ethics of disclosing details of vulnerabilities has been rekindled.

As one might expect, the interests of the parties in the debate shape the discussion to a large extent. Vulnerability researchers, commercial and amateur, claim that full public discussion of software flaws is the only effective way to get necessary information into the hands of those responsible for securing systems on the Internet and to advance the science of software engineering. Software vendors, on the other hand, believe that specific information should be limited so it is not readily available to those inclined to exploit the vulnerabilities for malicious purposes.

Government is starting to weigh in, too. The Council of Europe signed a cybercrime convention about a year ago and in the last month several other countries, including the United States and Japan, also signed on. This convention lays out guidelines for the harmonization of cybercrime laws and includes a provision calling for signatory countries to make the possession of hacker tools a crime (see Chapter II, Section 1, Article 6 of the Convention: https://conventions.coe.int/Treaty/en/Treaties/Html/185.htm).

The substance of the debate about information related to vulnerabilities revolves around two key issues: the availability of code to exploit or demonstrate the existence and nature of the vulnerabilities, and the level of detail that security advisories should include. At one extreme are those who believe security researchers are duty-bound to release any and all information they have about a vulnerability, including any code that exploits or demonstrates the problem. At the other end of the spectrum are those who hold that no information should ever be made public, especially not code, because such disclosures essentially put a loaded gun into the hands of the hackers. Of course, most opinions fall somewhere in the middle.

In November 2001, at the Trusted Computing Forum, several companies (@Stake, BindView, Guardent, Foundstone, ISS and Microsoft) announced that they had banded together to start driving toward standards for both the content of security advisories and the handling of vulnerability reports not yet made public. This group ignited significant controversy by declaring three principles that it believes represent the middle ground and should drive the standards process:

  • First, that someone who discovers a vulnerability should work with the vendor to create a patch or workaround before announcing the discovery to the public (and, as a corollary, that the software vendor should repair the flaw expeditiously).
  • Second, that code to exploit the flaw should never be publicly released.
  • And, finally, that technical details about the vulnerability should be suppressed for thirty days following the availability of a patch to repair the problem.

This group (along with other companies that have since joined the effort) is currently drafting two documents that it intends to submit to the Internet Engineering Task Force for approval and issuance as a best current practice (BCP). The first relates to vulnerability handling procedures, laying out the responsibilities of a discoverer of a vulnerability and of a company whose software is affected. The document will spell out specific timelines vendors must meet as well as communication procedures a discoverer must follow. The second document covers guidelines for the content of public vulnerability announcements.

The vast majority of independent security researchers subscribe to an ideology called full disclosure. The central tenet of full disclosure is that the most effective way to improve security is to engage in public discussion of the flaws. In this way, software engineering practices can be improved, and academic research into security, software flaws, and engineering practices can proceed unfettered. Users of the software can also be made aware of the flaws and take appropriate measures to protect themselves. Full disclosure often includes small programs that can be used to locate vulnerable systems, assisting system administrators in protecting their networks.

As with any public discussion, however, not everyone listening has good intentions. Full disclosure is criticized as allowing hackers to gain information and programs that help them attack systems and generally wreak havoc.

The most widely accepted compromise position, adhered to by most commercial researchers, is sometimes called responsible disclosure. In an attempt to answer criticisms about informing the 'bad guys' about security problems, this ideology holds that releasing full information about vulnerabilities is necessary to further the state of the art of security and to assist system administrators in protecting their systems, but that programs to exploit flaws cause more harm than good. Where possible, non-invasive programs to test for the presence of a vulnerability are released, but sometimes the only way to test for a vulnerability is to actually exploit it. In those cases, adherents of responsible disclosure generally will not release the code.
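To make the distinction concrete, below is a minimal sketch of what such a non-invasive check can look like: it reads a service banner and compares the reported version against a fixed-in version, rather than attempting the exploit itself. The product name "ExampleHTTPd", the version threshold, and the target host are hypothetical placeholders, not drawn from any real advisory.

```python
# Hypothetical sketch of a non-invasive vulnerability check: it infers exposure
# from the service banner rather than attempting an actual exploit.
# The product name and version threshold are illustrative only.
import socket

VULNERABLE_PRODUCT = "ExampleHTTPd"   # assumed product name (hypothetical)
FIXED_IN_VERSION = (2, 4)             # versions below this are assumed vulnerable

def banner_check(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    """Return True if the server's banner suggests a vulnerable version."""
    request = b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        response = sock.recv(4096).decode("latin-1", errors="replace")

    # Look for a header such as "Server: ExampleHTTPd/2.3.1" and parse the version.
    for line in response.splitlines():
        if line.lower().startswith("server:") and VULNERABLE_PRODUCT in line:
            version_text = line.split("/", 1)[-1].strip()
            parts = version_text.split(".")
            try:
                version = (int(parts[0]), int(parts[1]))
            except (ValueError, IndexError):
                return False  # unparseable banner: do not report as vulnerable
            return version < FIXED_IN_VERSION
    return False

if __name__ == "__main__":
    host = "www.example.com"  # placeholder; scan only systems you administer
    if banner_check(host):
        print(f"{host}: banner indicates a potentially vulnerable version")
    else:
        print(f"{host}: no vulnerable banner detected")
```

A check of this kind can produce false negatives, for example when an administrator has patched a system without the banner changing, which is part of why adherents of responsible disclosure accept that some vulnerabilities cannot be confirmed without actually exploiting them.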

Historically, disclosure of vulnerabilities has been a lever required to get vendors to fix problems. Many have taken the view that hiding vulnerability information will make it more difficult for the 'bad guys' to take advantage of the problems. Vendors also find public disclosure of flaws embarrassing, and they have used many tactics, including lawsuits, to prevent researchers from releasing their results. Over time, most have come to realize that there is some benefit in researchers discovering flaws and notifying the vendor, giving the vendor time to address the issue before the public is notified. This practice is certainly preferable to flaws being discovered only once they are being actively exploited.

Much was made of the fact that the vulnerabilities exploited by Code Red and Nimda had been fully disclosed. Certain critics leveled the charge that full disclosure enabled the creation and rampage of the worms. Others responded that worms have been discovered that were based on vulnerabilities that were not publicly known, and that those worms were less virulent only because the vulnerabilities they exploited were not as widespread.

So far, no consensus has emerged about the best way to handle vulnerabilities. Some researchers regularly notify vendors before releasing advisories and limit the capabilities of any code they release. Other researchers refuse to give advance warning to vendors and release everything they have, including tools to exploit the problems. Vendors are continually improving their software engineering practices, but the demand for bigger, better, faster features often offsets the security improvements by introducing new, complicated programs that are vulnerable to attack. Policy makers are starting to get into the act: following the worms of 2001, public pressure is mounting for this exposure to be reduced. The battle lines for a significant debate in 2002 are clearly drawn.

Scott S. Blake, CISSP, is vice president, information security, BindView Corporation (www.bindview.com).
