Nation-states are stockpiling software exploits to compromise and spy on their rivals. But do their gains represent a loss for manufacturers, developers and the public? Bradley Barth reports.

For all of the hype surrounding the release by WikiLeaks of the Vault 7 documents revealing CIA hacking tools and digital surveillance techniques, the revelations by and large didn’t surprise most privacy pundits. If anything, cyberespionage experts were more shocked that the CIA was careless enough with its secrets to allow an individual to exfiltrate them from a secure facility.

OUR EXPERTS

Mike Buratowski, SVP of cybersecurity services, Fidelis
Ryan Kalember, SVP of cybersecurity strategy, Proofpoint
Edward McAndrew, partner, Ballard Spahr
Corey Nachreiner, CTO, WatchGuard Technologies
Eric O’Neill, national security strategist, Carbon Black; former FBI counterterrorism operative
Chester Wisniewski, principal research scientist, Sophos

Let’s not be naïve: It is a given that nation-states around the world are stockpiling vulnerabilities in hardware devices, software programs and websites that they can exploit at will against their targets. But that does not mean that there aren’t costs to such actions, or that these tactics aren’t met with opposition from certain developers, manufacturers and cybersecurity advocates. Indeed, some experts contend that by hoarding bugs instead of responsibly disclosing them, governments actually make society less safe, as these flaws remain discoverable to malicious actors.

Still, with Russia, China and other rival nations playing spy games against the U.S. and Europe, is it practical for federal agencies like the CIA to stop secretly gathering zero-days that could allow them to eavesdrop on an enemy? Or would such a philosophy prove dangerous, placing them at a distinct disadvantage?

Somewhere between full disclosure and total secrecy exists the right balance. But it’s a grey, fuzzy area.

“As a society, we haven’t had an honest conversation about where our government’s responsibility for public safety begins and ends, which would color where the line needs to be drawn with regard to exploitation of our communications equipment,” says Chester Wisniewski, principal research scientist at Sophos.

“Spy agencies, police and others need the ability to continue to intercept communications and more often than not in the 21st century that means exploiting vulnerable software,” Wisniewski continues. On the other hand, “I believe they should work with vendors and disclose vulnerabilities responsibly. These flaws are also being exploited by our adversaries and keeping them a secret ultimately weakens our own defenses more than it helps us undermine our adversaries.”

Eric O’Neill, national security strategist at Carbon Black and a former FBI counterterrorism operative, says one feasible approach might be for government agencies to publicly disclose only those vulnerabilities that they know other countries are already aware of or actively employing.

“We cannot place ourselves at a disadvantage by opening our entire espionage bag of tricks, but we can tactically neutralize vulnerabilities when we know rival nations are also exploiting them,” says O’Neill. “If the vulnerability is widely known among intelligence agencies, it’s better to reveal the vulnerability, protect our domestic citizens, and make it more difficult for foreign intelligence agencies to function than to needlessly hoard a vulnerability.”

A recent study from the RAND Corporation, a global policy think tank, determined that among any given entity’s stockpile of zero-day vulnerabilities, only 5.7 percent of these bugs will be discovered and publicly disclosed by a second party within a year’s time. (Note that the study does account for additional groups that may also find some of the same bugs but decide to secretly hoard them.) Moreover, the study found that exploits and their corresponding vulnerabilities have an average life expectancy of 6.9 years before they are uncovered and patched.
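A rough back-of-the-envelope calculation illustrates how those two RAND figures fit together. This sketch treats the 5.7 percent rediscovery rate as a constant, independent annual probability, which is a simplifying assumption of the illustration, not a claim the study itself makes:

```python
# Illustration of the RAND figures: if roughly 5.7% of stockpiled
# zero-days are independently found and disclosed each year, what share
# of a stockpile would likely remain private over the 6.9-year average
# lifespan the study reports? (Assumes a constant, independent annual
# rate -- a simplification for illustration only.)

annual_rediscovery_rate = 0.057   # RAND: share rediscovered within a year
average_lifespan_years = 6.9      # RAND: average exploit life expectancy

still_private = (1 - annual_rediscovery_rate) ** average_lifespan_years
print(f"Share of a stockpile likely still private after "
      f"{average_lifespan_years} years: {still_private:.0%}")
```

Under these simplifying assumptions, roughly two-thirds of a stockpile would still be unknown to outsiders after nearly seven years.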

This means that an agency’s zero-days tend to remain both proprietary and useful for long periods of time. For that very reason, “There’s not a huge security advantage to be had via the disclosure of one organization’s zero-days,” says Ryan Kalember, SVP of cybersecurity strategy at Proofpoint. “A notable exception to that would be a really major, widespread vulnerability like Heartbleed,” he adds, referring to the critical security bug in the OpenSSL cryptography library that was disclosed in 2014. “One could argue that the policy should be different for vulnerabilities of that magnitude.”

To that end, some intel agencies might perform a risk assessment to determine whether or not to disclose a newly discovered zero-day. This assessment might, for instance, weigh the popularity and ubiquity of a product against how difficult the vulnerability is to uncover, explains Corey Nachreiner, CTO at WatchGuard Technologies.
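The kind of weighing Nachreiner describes could be sketched as a simple weighted score. Everything below is hypothetical: the function name, weights, and inputs are illustrative assumptions, not any agency's actual methodology:

```python
# Hypothetical disclosure-decision sketch: weigh how widely deployed the
# affected product is against how hard the flaw would be for others to
# rediscover. All weights are invented for illustration only.

def disclosure_score(product_ubiquity: float,
                     rediscovery_difficulty: float) -> float:
    """Both inputs on a 0-1 scale; a higher score argues for disclosure.

    A widely deployed product raises the cost of leaving users exposed,
    while a flaw that is hard for others to find lowers the urgency.
    """
    return 0.7 * product_ubiquity + 0.3 * (1.0 - rediscovery_difficulty)

# A bug in a ubiquitous product that others could plausibly find too:
print(disclosure_score(0.9, 0.2))   # leans toward disclosure
# An obscure product with a flaw that is very hard to rediscover:
print(disclosure_score(0.2, 0.9))   # leans toward retention
```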

In some cases, it’s possible the assessment might determine that disclosing a bug would actually be more harmful than keeping it under wraps. After all, just because a software vulnerability is responsibly disclosed doesn’t mean that users will diligently apply the patch that fixes it. 

“Our cybersecurity services team sees this all the time when software companies provide patches,” says Mike Buratowski, SVP of cybersecurity services at Fidelis. “Malicious actors then either find an available exploit or develop their own, knowing that there will be a window of opportunity to go after people who aren’t diligent in updating or upgrading.”

Case in point: Many of the exploits revealed in the Vault 7 documents weren’t zero-days at all, but rather bugs that were already well known to the cybersecurity community. On the other hand, disclosing vulnerabilities can serve as an effective counterintelligence tactic, diminishing the power and destructiveness of an exploit before it can be used against potentially millions of users, Buratowski acknowledges.

From Nachreiner’s point of view, hoarding bugs is never worth the gamble. “It makes much more sense to me to aggressively repair all vulnerabilities so your adversaries have minimal technological issues to use against you and your citizens,” he says. “A software vulnerability is like a buried landmine waiting for anyone to step on it. If you find one in a well-traveled area and decide to leave it in hopes your enemy steps on it, you risk letting your allies step on it too.” That is especially true, he adds, because even the friendliest of nations tend not to share exploits with one another.

Another concern for Nachreiner is that an agency’s zero-day stockpile can itself become the target of outsiders, as evidenced by WikiLeaks’ release of the Vault 7 documents. “If the entities hoarding zero-days can’t keep them safe, this puts everyone at higher risk,” he says.

Of course, one way to mitigate this danger would be to ensure that no one specific department or agency is responsible for a country’s entire cache of cyber weaponry.

“If this incident shows us anything, it’s that the aggregation of tools in any one place creates major security concerns,” says Edward McAndrew, a partner at Ballard Spahr, who co-leads the law firm’s Privacy and Data Security Group. “Keeping the crown jewels or all of the tools in one place is a recipe for disaster,” he adds.

Government agencies must also consider how stockpiling zero-days might foment distrust among the makers of the very products they are exploiting – especially after they are publicly caught in the act. Consequently, intel-sharing arrangements between the public and private sectors could suffer.

Buratowski says that private-sector companies will be less likely to collaborate with the government in the wake of the latest WikiLeaks reveal – “not necessarily because they don’t want to help the government, but because of the potential backlash from their consumers,” he says, citing as an example Apple’s refusal last year to help the FBI circumvent iPhone security features in the case of the San Bernardino shooter.

But not everyone is convinced the impact will be so dire. “I think the effect is minimal,” says Wisniewski at Sophos. “The industry suspected this was happening and there hasn’t been much progress on the public-private cooperation front. It is nearly always a one-way street – information goes to the government, but useful information rarely comes back out.”

But even if relations between business organizations and the government don’t erode, the same may not necessarily be said for the general public’s trust of consumer products. “When the public learns about a flaw in a popular product that one government was hiding, it makes customers in other nations distrustful,” Nachreiner explains. “Did the vendor know about this? Are they cooperating with the government in question? These questions, even if unfounded, can affect product sales.”

He adds that even if intel agencies were to give up all of their zero-day exploits, they could still ably conduct spy operations using other proven tools and techniques. “Spear phishing, where you trick people into making a human mistake, is a much more common way of infiltrating a network than exploiting a zero-day,” he says.

Still, an agency that forgoes some of its most valuable weapons would almost certainly weaken its position against less ethical actors. “It’s often hard for us to recognize that to succeed against malicious actors who don’t play by our rules, our government officials have to do some things that may be distasteful to us,” says Buratowski, who stresses that the U.S. must never violate founding principles, such as protection from unreasonable search and seizure.

“It is a double-edged sword,” he continues. “We expect our government to protect us from malicious nation-states and we will not accept failure in their mission. However, we also expect them to do it in a way that is socially acceptable to us.”

It’s a fine line that perhaps no agency can walk these days without occasionally falling into the murky abyss.