As companies continue to peel back the layers of the SolarWinds compromise and investigate its impact, some are seeing security strategies implemented years ago put to the test.
In the course of investigating the impacts of its own breach, Microsoft security specialists discovered “unusual activity” within a number of internal accounts, including one that was used to view the company’s internal source code.
For many companies, this would be cause for alarm. Source code can contain API or encryption keys, as well as intellectual property such as unique algorithms or other sensitive business assets. It also theoretically allows an attacker to spot or infer weak points in a company’s network and system security that can be exploited.
Thus far, however, Microsoft believes the impact was extremely limited.
“At Microsoft, we have an inner source approach – the use of open source software development best practices and an open source-like culture – to making source code viewable within Microsoft,” wrote Microsoft’s Security Response Center in a Dec. 31 blog post announcing the findings. “This means we do not rely on the secrecy of source code for the security of products, and our threat models assume that attackers have knowledge of source code. So viewing source code isn’t tied to elevation of risk.”
The accounts did not have the ability to change code or re-engineer any systems, and the company claimed the accounts did not have any impact on services or customer data. That is partly due to the way Microsoft does – or rather doesn’t – engineer its software and security.
Some were surprised to learn of Microsoft’s seemingly cavalier attitude toward protecting its own source code, but it is not terribly different from what many open source advocates have argued for years: basing security on the secrecy of source code is foolhardy, and the more access the public has to a program’s code, the easier it is to crowdsource the discovery of security vulnerabilities and other software flaws.
Last year Yemi Oshinnaiye, then-deputy chief information security officer for the U.S. Citizenship and Immigration Services, recalled the reaction he received from colleagues when he suggested they use GitHub to keep tabs on different aspects of an ongoing IT project.
“‘You can’t use GitHub, that’s a public tool! You can’t do it, it has no security!’” he recalled. “Really? It’s a public tool with people that work on it [all the time], it has more security than the things that we’re using internally.”
Rob McLeod, senior director of eSentire’s Threat Response Unit, said that no matter which approach an organization takes, many best practices around security architecture don’t rely on keeping code secret.
“Secure architecture and code review is fundamental to securing the software development lifecycle, however, history has shown that solely relying on this strategy is not 100% effective in either closed or open source,” said McLeod. “Finding bugs and vulnerabilities through code inspection can be difficult, and it only represents a fraction of the threat surface. A secure SDLC includes threat modeling, secure design and architecture, an automated code quality, static/dynamic analysis and testing pipeline, reproducible and automated deployment and configuration.”
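The automated code inspection McLeod describes can be sketched as a toy static-analysis check. This is a minimal, illustrative example (not any specific vendor tool): it uses Python’s standard `ast` module to flag calls to `eval`, a construct many real analyzers treat as risky; in a pipeline, hits like these would fail the build.

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers where eval() is called in the given source."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # Match direct calls to a bare name `eval(...)`
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

# Hypothetical snippet under review: line 1 is risky, line 2 is not.
snippet = "x = eval(user_input)\ny = 1 + 2\n"
print(find_eval_calls(snippet))  # → [1]
```

Real pipelines chain many such checks (linters, SAST scanners, dynamic tests) and gate deployment on the results, which is the point of McLeod’s broader-than-code-review argument.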
To be clear, there are no silver bullets in cybersecurity, and using open source software is no guarantee against compromise: the 2014 Heartbleed vulnerability was traced back to OpenSSL, an open source code library, and the Equifax hack was facilitated in part by exploiting a vulnerability in the open source Apache Struts framework. For every advocate of open source security, you can find other information security specialists who are skeptical.
Additionally, Microsoft’s embrace of inner source is restricted to employees and is not the same as an open source approach, which would be equivalent to posting its code on GitHub or to an open source library on the internet.
Rather, the approach suggests that security through obscurity rarely works, and any protections a company puts in place for its systems and products should rely on other principles, namely ones that implicitly assume your code will eventually wind up in the hands of bad actors.
“Sure, it makes reverse-engineering a bit easier, which is probably why the hackers went for it, but hackers can and already do reverse-engineer Microsoft products to look for bugs,” said Matt Tait, an independent security researcher and a former information security specialist at the United Kingdom’s Government Communications Headquarters, on Twitter.
Indeed, as Tait pointed out in a follow-up, Microsoft already shares controlled access to parts of its source code with dozens of countries around the world through its transparency centers as a means of building increased trust and security into its products.
Switching to an open or inner source approach can also lead to other, indirect security improvements. A report released last year by Snyk suggests that using open source can foster an improved security mindset and culture within organizations, reduce the number of new vulnerabilities, and shift those that are reported toward lower-impact software.
“The open source landscape more than doubled in some ecosystems, yet the growth of vulnerabilities is not showing matching growth,” wrote authors Alyssa Miller and Sharone Zitzman. “This is certainly something worth paying attention to for the future.”