The cybersecurity executive order issued by President Joe Biden in May covered a lot of ground, moving the needle on issues like breach reporting, zero trust architecture, and software insecurity.
One part of the order requires the director of the National Institute of Standards and Technology and the director of the NSA to publish minimum standards for how vendors doing business with the government test their source code for security vulnerabilities, or for dependencies on other software applications or interfaces that may introduce risk.
In a world interconnected by software and shared risk through the supply chain, one idea that has popped up in recent years is pushing or requiring companies to submit their code for review by a third party, which would oversee the work and decide where in the software development process to focus.
While the order doesn’t mandate it, some industry groups are already warning the U.S. government that such third-party testing or review would be overly intrusive and might not add much benefit, especially if the focus is on source code or earlier stages of the development process.
Alexa Lee, a senior manager of policy at the Information Technology Industry Council, said looking at source code alone is just a snapshot in time, one that often comes well before other security processes in the software development lifecycle take hold.
“Source code testing is not a panacea, or a holistic approach to ensure software security,” Lee said during a June 3 software security workshop hosted by the National Institute of Standards and Technology. “While these tools may identify issues, they do not indicate whether any of the issues identified are in fact exploitable, as there could be a check elsewhere in the code that prevents exploitation.”
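Lee's point about non-exploitable findings can be illustrated with a hypothetical sketch (all names here are invented for illustration): a source-level scanner might flag a function that interpolates user input into a command string, even though every caller validates that input against a fixed allowlist first, so the flagged sink is never reachable with attacker-controlled data.

```python
# Hypothetical example: a static analyzer reviewing build_command() in
# isolation would likely flag it, because untrusted input flows into a
# command string (a classic injection pattern).
ALLOWED_REPORTS = {"daily", "weekly", "monthly"}

def build_command(report_name: str) -> str:
    # Flagged at the source level: input reaches a command string.
    return f"generate-report --type {report_name}"

def run_report(report_name: str) -> str:
    # The "check elsewhere in the code" Lee describes: callers only reach
    # build_command() after validating against a fixed allowlist, so the
    # flagged finding is not exploitable in practice.
    if report_name not in ALLOWED_REPORTS:
        raise ValueError(f"unknown report type: {report_name}")
    return build_command(report_name)
```

Whether a reviewer looking only at `build_command()` would ever see the guard in `run_report()` is exactly the kind of context a point-in-time source snapshot can miss.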
Lee also expressed concerns that any efforts by the U.S. government to mandate such third-party testing, however well-meaning, would only embolden efforts by other countries to do the same. Already, authoritarian governments in Russia and China have implemented laws or policies over the past five years that require outside companies to submit their code for review before they are able to access those markets.
“From a global perspective, the [US government] should be careful in setting any requirements on source code testing because it will set an example for other governments around the world,” said Lee. “Consider that other countries would likely ask the same requirements of U.S. companies and…in certain jurisdictions it could do more harm than good.”
Broad skepticism of mandates
While any regulations that come out of the executive order would only legally apply to federal agencies and companies that contract with the government, its impact could be felt beyond those two groups. As Tim Mackey of Synopsys’ Cybersecurity Research Center and Adam Isles from the Chertoff Group wrote shortly after the order was released, the White House is “leveraging the government’s procurement process and contractual language to drive compliance [and create] a model that could be adopted in the commercial sector.”
The federal government’s contracting footprint is huge, composed of hundreds of thousands of companies (including nearly all of the large, recognizable brands in most industrial sectors) and millions of individuals. But those incentives would also translate to any company that may one day want to do business with the government, as they would need to gear their products, business and security strategy to be eligible. That combined cohort alone could be enough to shift market standards.
Others expressed similar skepticism about the utility of such reviews, pointing to other ideas they felt could be more effective or cautioning that reviews would only be useful if deployed under certain conditions and in tandem with other solutions.
Sandy Carielli, a principal analyst at technology research firm Forrester who focuses on application security, pointed to other ideas found in Section Four of the executive order, like shifting to more secure software development environments, using automated remediation tooling, and implementing software bills of materials and vulnerability disclosure programs across industry, saying all of them would probably do more for collective code security.
“These items are going to have a greater impact on software security than mandating third party testing because they will help organizations find and fix security flaws earlier in the lifecycle,” said Carielli in an email. “There’s nothing wrong with doing third party testing – many organizations do this through penetration testing services or through bug bounty programs. However, third party testing is a later stage check that cannot replace the more left-shifting initiatives proposed by the [order].”
Paul Anderson, vice president of engineering at GrammaTech, which offers software security testing services, said any third-party code review requirements would be “unlikely to make a dent in the problem.”
He too said he would prefer to see ideas like a software bill of materials or static and dynamic testing encouraged and implemented first, while any requirements around third-party testing should be narrowly scoped, even for government contractors.
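To make the SBOM alternative concrete, here is a minimal, illustrative sketch of what such an inventory conveys, using a CycloneDX-style JSON structure (field names follow the public CycloneDX schema; the component shown is made up). The point is that an agency can check a vendor's dependency list against known-vulnerability feeds without ever seeing the vendor's source code.

```python
import json

# A minimal, made-up SBOM in the style of CycloneDX. "bomFormat",
# "specVersion", "components", and "purl" are real CycloneDX field names;
# the library itself is hypothetical.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "library",
            "name": "example-logging-lib",  # hypothetical dependency
            "version": "2.14.1",
            "purl": "pkg:maven/com.example/example-logging-lib@2.14.1",
        }
    ],
}

# A procuring agency could diff this inventory against vulnerability
# advisories, rather than auditing the vendor's code directly.
print(json.dumps(sbom, indent=2))
```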
“If the government is buying some software that is super high criticality, then there’s an argument for yes, having third parties come in and test that software independently. But for most of the software they procure, I don’t see that there’s a good argument for mandating it,” said Anderson.
Who watches the Watchmen (and ladies)
Several experts reached by SC Media questioned how effective some third-party code reviews would be, at the source code level or otherwise, given that the companies who built and own the software being tested often struggle to identify and remediate vulnerabilities, despite having far more institutional and contextual information.
“The problem with mandating third party testing is that the quality of outputs varies so greatly – if a third-party test reveals nothing, is that because the product is secure or because the third party lacked the skill to discover key issues?” asked Carielli.
Others told SC Media that using source code as a testing base often doesn’t give you insight into how that code might perform at the production and end user stages, where many software vulnerabilities are ultimately exploited by threat actors.
“We must not lose sight of the biggest attack surface – the hundreds of thousands if not millions of application instances in production. These applications need to be tested in production to find and mitigate vulnerabilities that are current and exploitable,” said Setu Kulkarni, vice president of strategy at WhiteHat Security.
Chris Wysopal, founder and chief technology officer at Veracode, concurred with that assessment, saying he doesn’t think third-party testing is a practical replacement for the more proactive approaches that companies should be doing already to better bake security into the software development process.
But that doesn’t mean those businesses should be entirely off the hook or immune from third-party scrutiny either.
He suggested that rather than reviewing code, third-party auditors could instead test the process that companies use to determine whether a particular vulnerability can be exploited in their software. If a software development team believes that a particular bug can’t be exploited in real-world conditions, they should document their justification for why, allowing outside auditors to do targeted spot checks of those claims, something that can reveal much about how shoddy or thorough the process was to reach those conclusions.
That would require more documentation on the part of software developers, but would also allow them to get some form of external security validation without other parties sifting through their source code or large parts of their development environment.
“In order to have some assurance, there needs to be governance and oversight,” said Wysopal. “So there has to be questions asked like what tools were used, what findings were there…what types of security bugs had to be fixed, which ones were deemed acceptable, which ones were deemed to be not exploitable and why.”