New federal regulations designed to constrain the sale of surveillance tools and “intrusion software” abroad are drawing confusion from private sector companies, with some claiming they are overly complex, may inhibit some forms of proactive threat hunting and will cost more to implement than the government estimates.
The interim rule, issued by the Commerce Department's Bureau of Industry and Security (BIS) in October and set to take effect Jan. 19, 2022, is part of the United States' delayed implementation of the Wassenaar Arrangement. It lays out a complex set of new guidelines meant to limit the sale and export of "intrusion software" and hacking tools to geopolitical rivals with troubling human rights records, such as China and Russia.
Restricting the proliferation of such tools has become a heightened priority for the U.S. in recent years, after media organizations and non-profits revealed numerous abuses of NSO Group's Pegasus spyware against journalists, human rights activists and even U.S. government employees. While many cite or allude to NSO Group abuses when referencing the restrictions, the Israel-based company would not actually be subject to U.S. industry regulations, though BIS later added NSO Group and other spyware vendors, such as Candiru, to its Entity List, restricting U.S. companies from doing business with them in the future.
Companies that would be captured under the new rules were able to submit public comments before the rule takes effect, and many took the opportunity to call for changes and clarifications to make compliance more targeted, cheaper and easier to understand.
The Information Technology Industry Council raised questions about whether the definitions BIS uses for "vulnerability disclosure" and "incident response" are too narrow, excluding the sharing of technical information that may not directly relate to an incident or disclosure but nevertheless adds important context to ongoing threats.
The group cites technical data on threat actors' "tactics, tools, techniques, and behaviors," as well as certain vulnerability handling activities, as examples of information that could fall outside the defined scope of the new rule. It also worries that the rules as currently constructed could make it harder for security researchers to share threat information across borders in regions like the Middle East, where such sharing may be vital to protecting local populations from repressive governments.
“It is the process of sharing and analyzing cyber-threat information and ‘Indicators of Compromise’ broadly – characteristics of adversary behavior, preferred targets or methods of intrusion that can include exploit information or meta-analysis of the exploit – that are necessary to arm cybersecurity professionals with the knowledge necessary to make risk-based decisions about how to calibrate their defenses,” wrote John Miller, ITI senior vice president of policy, and Mike Flynn, senior director for government affairs.
Microsoft, a member of the ITI, submitted its own comment that echoes many of the same points, saying the lack of clarity “could have unintended consequences that frustrate legitimate cybersecurity activities within Microsoft and the broader cybersecurity community.”
In an unsigned comment, the company described how the rule may inhibit its ability to share threat information that is not directly related to an incident or vulnerability disclosure process across borders in real time, a key component of how it tracks and finds previously undetected hacking techniques.
“Importantly, these activities are often proactive versus reactive in nature – intended to identify new vulnerabilities before a vulnerability or incident occurs. This could include, for example, technology involving tactics and techniques of bad actors, or preventative information sharing related to vulnerabilities in systems and software,” the company wrote.
Microsoft's chief complaint is that the regulatory language does not make clear whether the exceptions, which permit sharing only information that is "necessary" to vulnerability disclosure or incident response, would end up constraining its broader threat hunting work.
“We remain concerned that the Rule will have unintended consequences on companies like Microsoft and their ability to continue to engage in routine and legitimate cybersecurity work across borders, given confusion as to what is and is not allowed,” Microsoft wrote.
Others are asking Commerce for more time to study the lengthy and detailed proposed regulation. Law firm Akin Gump, which advises clients subject to BIS export controls, requested a one-month extension of the public comment process because "our clients have expressed concern that they have not had sufficient time in the current rulemaking process to fully evaluate the effect of these rules on their cybersecurity operations."
Dave Aitel, a former NSA research scientist and current partner at Cordyceps Systems, called for Commerce to delay the new rule, calling it “highly complex” and “a tangled web of words” that would only serve to badly confuse security researchers about where the legal boundaries are when it comes to discussing the threat landscape.
“One question that comes up over and over in the community is around sitting at an information security conference dinner table and discussing exploitation techniques. If it’s a private dinner, are people supposed to examine an [Authorized Cybersecurity Exports] flowchart to figure out if they are OK to attend the dinner and have a frank discussion?” Aitel mused.
Charges of overcomplexity have been a focal point for critics, who object to how the rule is written, the types of technology it covers, and where and under what circumstances the restrictions would apply.
Google threat researcher Winnona DeSombre warned that "While NSO or other similar firms do run amok in the international sphere," the rule as currently constructed would sweep whole new industries and entities into a costly compliance regime, burdening smaller businesses with no experience navigating export laws and regulations.
She also called the total compliance cost of $2,250 estimated by BIS "a gross underestimation," saying many smaller companies will likely need to hire third-party legal experts, pushing the true cost to anywhere from $3,000 to $10,000.
“The rule is incredibly complex, and smaller firms [with] little compliance exposure will struggle,” DeSombre wrote. “The complexity is recognized even by experts in the space, and many smaller firms are starting from a weaker playing field than the companies BIS is used to dealing with – those with larger compliance budgets and experts on retainer.”
Some believe the restrictions don’t go far enough and could be further tightened to reduce abuse. Peter Micek, General Counsel for Access Now, argued for the inclusion of biometric surveillance technologies, electromagnetic surveillance systems or equipment and drones in Commerce’s definition of covered cybersecurity items.
He also argued that the scope of cybersecurity products and intrusion software should be further expanded to include certain dual use commercial technologies that aren’t explicitly marketed as intelligence or surveillance tools but can be used as such by repressive governments.
This would include IP network surveillance tools like Deep Packet Inspection technology, which has been used by China, Egypt, Russia and other authoritarian governments to surveil their own citizenry, as well as remote hacking and device forensics tools like the kind sold by Cellebrite, Grayshift and other companies.
“We understand the definition of ‘intrusion software,’ which are specially designed ‘to avoid detection by security monitoring tools or to defeat protective countermeasures,’ covers not only remote hacking tools, e.g. Pegasus spyware by NSO Group, but also device forensics tools, which overcome the security system of the device,” Micek wrote.