Nation-states are stockpiling software exploits to compromise and spy on their rivals. But do their gains represent a loss for manufacturers, developers and the public? Bradley Barth reports.
For all of the hype surrounding the release by WikiLeaks of the Vault 7 documents revealing CIA hacking tools and digital surveillance techniques, the revelations by and large didn't surprise most privacy pundits. If anything, cyberespionage experts were more shocked that the CIA was careless enough with its secrets to allow an individual to exfiltrate them from a secure facility.
Let's not be naïve: It is a given that nation-states around the world are stockpiling vulnerabilities in hardware devices, software programs and websites that they can exploit at will against their targets. But that does not mean that there aren't costs to such actions, or that these tactics aren't met with opposition from certain developers, manufacturers and cybersecurity advocates. Indeed, some experts contend that by hoarding bugs instead of responsibly disclosing them, governments actually make society less safe, as these flaws remain discoverable to malicious actors.
Still, with Russia, China and other rival nations playing spy games against the U.S. and Europe, is it practical for federal agencies like the CIA to stop secretly gathering zero-days that could allow them to eavesdrop on an enemy? Or would such a philosophy prove dangerous, placing them at a distinct disadvantage?
Somewhere between full disclosure and total secrecy lies the right balance. But it's a grey area.
“As a society, we haven't had an honest conversation about where our government's responsibility for public safety begins and ends, which would color where the line needs to be drawn with regard to exploitation of our communications equipment,” says Chester Wisniewski, principal research scientist at Sophos.
“Spy agencies, police and others need the ability to continue to intercept communications, and more often than not in the 21st century that means exploiting vulnerable software,” Wisniewski continues. On the other hand, “I believe they should work with vendors and disclose vulnerabilities responsibly. These flaws are also being exploited by our adversaries, and keeping them a secret ultimately weakens our own defenses more than it helps us undermine our adversaries.”
Eric O'Neill, national security strategist at Carbon Black and a former FBI counterintelligence operative, says one feasible approach might be for government agencies to publicly disclose only those vulnerabilities that they know other countries are already aware of or actively employing.
“We cannot place ourselves at a disadvantage by opening our entire espionage bag of tricks, but we can tactically neutralize vulnerabilities when we know rival nations are also exploiting them,” says O'Neill. “If the vulnerability is widely known among intelligence agencies, it's better to reveal the vulnerability, protect our domestic citizens, and make it more difficult for foreign intelligence agencies to function than to needlessly hoard a vulnerability.”
A recent study from the RAND Corporation, a global policy think tank, determined that within any given entity's stockpile of zero-day vulnerabilities, only 5.7 percent will be discovered and publicly disclosed by a second party within a year's time. (Note that the study cannot account for additional groups that may also find some of the same bugs but decide to secretly hoard them.) Moreover, the study found that exploits and their corresponding vulnerabilities have an average life expectancy of 6.9 years before they are uncovered and patched.
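Those two figures can be combined into a rough back-of-the-envelope estimate. The calculation below assumes that rediscovery by a second party is an independent event with a constant 5.7 percent probability in any given year; that simplifying assumption is ours for illustration, not a claim the RAND study itself makes:

```python
# Back-of-the-envelope estimate built on the RAND study's two figures.
# Illustrative assumption (ours, not the study's): rediscovery is an
# independent event with a constant 5.7% chance in any given year.

ANNUAL_REDISCOVERY_RATE = 0.057  # RAND: share rediscovered and disclosed within a year
AVERAGE_LIFESPAN_YEARS = 6.9     # RAND: average exploit life expectancy

# Probability a zero-day survives its average lifespan without rediscovery
p_survives = (1 - ANNUAL_REDISCOVERY_RATE) ** AVERAGE_LIFESPAN_YEARS
p_rediscovered = 1 - p_survives

print(f"Chance of rediscovery over {AVERAGE_LIFESPAN_YEARS} years: {p_rediscovered:.0%}")
# Prints roughly 33%
```

Under that simplified model, roughly a third of stockpiled zero-days would be rediscovered within their average 6.9-year lifespan, leaving the remaining two-thirds proprietary for their entire useful life.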
This means that an agency's zero-days tend to remain both proprietary and useful for long periods of time. For that very reason, “There's not a huge security advantage to be had via the disclosure of one organization's zero-days,” says Ryan Kalember, SVP of cybersecurity strategy at Proofpoint. “A notable exception to that would be a really major, widespread vulnerability like Heartbleed,” he adds, referring to the critical security bug in the OpenSSL cryptography library that was disclosed in 2014. “One could argue that the policy should be different for vulnerabilities of that magnitude.”
To that end, some intel agencies might perform a risk assessment to determine whether or not to disclose a newly discovered zero-day. This assessment might, for instance, weigh the popularity and ubiquity of a product against how difficult the vulnerability is to uncover, explains Corey Nachreiner, CTO at WatchGuard Technologies.
In some cases, it's possible the assessment might determine that disclosing a bug would actually be more harmful than keeping it under wraps. After all, just because a software vulnerability is responsibly disclosed doesn't mean that users will diligently apply the patch that fixes it.
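To make the idea concrete, such an assessment could be pictured as a simple weighted score. Everything below is a hypothetical sketch: the factor names, weights and threshold are illustrative assumptions, not any agency's actual criteria.

```python
# Hypothetical sketch of a disclosure risk assessment. The factors,
# weights and threshold are illustrative assumptions, not a real process.

# Each factor is scored from 0.0 (low) to 1.0 (high).
WEIGHTS = {
    "product_ubiquity": 0.35,     # how widely deployed the affected product is
    "ease_of_discovery": 0.25,    # how likely others are to find the same bug
    "rival_use_suspected": 0.30,  # evidence adversaries already exploit it
    "patch_uptake_likely": 0.10,  # whether users would actually apply a fix
}

def disclosure_score(factors: dict) -> float:
    """Weighted sum; higher scores favor disclosing the bug to the vendor."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def should_disclose(factors: dict, threshold: float = 0.5) -> bool:
    return disclosure_score(factors) >= threshold

# Example: a ubiquitous product, a bug that is easy to find, and signs
# that rival agencies are probably already exploiting it.
example = {
    "product_ubiquity": 0.9,
    "ease_of_discovery": 0.7,
    "rival_use_suspected": 0.8,
    "patch_uptake_likely": 0.5,
}
print(should_disclose(example))  # score 0.78, above threshold: True
```

An obscure, hard-to-find bug in a niche product would score low under the same weights, which matches the reasoning in the text: disclosure of such a bug buys little public safety while spending an intelligence capability.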
“Our cybersecurity services team sees this all the time when software companies provide patches,” says Mike Buratowski, SVP of cybersecurity services at Fidelis. “Malicious actors then either find an available exploit or develop their own, knowing that there will be a window of opportunity to go after people who aren't diligent in updating or upgrading.”
Case in point: Many of the exploits revealed in the Vault 7 documents weren't zero-days at all, but rather bugs that were already well known to the cybersecurity community. On the other hand, disclosing vulnerabilities can serve as an effective counterintelligence tactic, diminishing the power and destructiveness of an exploit before it can be used against potentially millions of users, Buratowski acknowledges.