Today’s columnists, Pascal Geenens and Daniel Smith of Radware, say that while the SolarWinds case brought supply-chain attacks into the limelight, they are not new and security teams must finally manage them more effectively.

The recent news about the SolarWinds hack has put software supply-chain attacks back in the limelight. But these types of attacks on commercial products aren’t new. In the past few years alone, at least four others come to mind.

Security pros may recall the 2017 NotPetya attack, delivered through an update to M.E. Doc’s tax accounting software, which crippled Ukraine and disrupted computer operations in other parts of the world. Later that same year, researchers found an advanced backdoor embedded in one of the code libraries of NetSarang’s server management software. Then, hackers broke into Piriform’s servers and inserted malware into CCleaner releases. And in Operation ShadowHammer, malicious actors compromised the Asus Live Update Utility to push a backdoored version to more than one million users.

The term “shift left” refers to a practice in which DevOps teams focus on quality and security earlier in the development process. Shifting left emphasizes secure coding practices and should result in less vulnerable production code. But it does not resolve all security issues, nor does it remove the need to secure the right side of the DevOps chain, where code runs in a production environment. A good security strategy covers both sides of the chain.

Supply-chain attacks often prey upon the open source community and its collaborative projects because of the popularity of Python and JavaScript and the fact that developers don’t always control the components they pull into their software. That’s a problem because hackers can easily backdoor libraries or steal SSH credentials. At least one Python package was compromised when someone inserted a backdoor that collected SSH credentials and sent them off to an external website. Hackers have also been known to replace cryptocurrency addresses in legitimate software to hijack funds.

Bad actors also increasingly use tactics such as typosquatting (name hijacking). One example involves Mongoose for NPM, a popular object data modeling module for MongoDB. A malicious threat actor copied the original code, added a backdoor, and published it as Mongose, an easy typo anyone could make that would result in downloading and installing the backdoored code. The typosquatted module behaved exactly like the original Mongoose module, but with an added malicious backdoor. Similar attempts have included colorama (colourama), jellyfish (jeIlyfIsh, with a capital I in place of the lowercase l), and urllib (urlib). Everyone from beginners to experienced developers has at some point installed the wrong library, something to consider for coders who use NPM and Python’s PyPI often.
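The typosquats above all sit one keystroke away from the legitimate name, which makes them detectable with a simple similarity check. As a minimal sketch, the standard-library `difflib` module can flag a requested package name that closely resembles, but does not exactly match, a known popular package (the `POPULAR_PACKAGES` list here is a small hypothetical sample; a real check would use a maintained list such as the most-downloaded PyPI or NPM packages):

```python
import difflib

# Hypothetical allowlist of well-known package names; a real deployment
# would load a maintained list of top downloads from the registry.
POPULAR_PACKAGES = ["mongoose", "colorama", "jellyfish", "urllib3", "requests"]

def possible_typosquat(name, threshold=0.85):
    """Return popular packages that `name` closely resembles without matching.

    A non-empty result suggests the requested name may be a typosquat.
    """
    name = name.lower()
    matches = difflib.get_close_matches(name, POPULAR_PACKAGES, n=3, cutoff=threshold)
    # An exact match is the legitimate package itself, not a squat.
    return [m for m in matches if m != name]

print(possible_typosquat("mongose"))    # → ['mongoose']
print(possible_typosquat("colourama"))  # → ['colorama']
print(possible_typosquat("mongoose"))   # → [] (exact match, nothing to flag)
```

Such a check could run in a pre-install hook or a CI step, warning a developer before a one-character typo pulls in hostile code.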

Here are five tips to consider that can protect organizations against supply-chain attacks:

  • Limit the use of third-party modules.

In reality, it’s impractical to eliminate third-party modules completely, because they contribute to the efficiency of the development process. Developers thrive on, and are often judged by, velocity, so third-party modules are often the only way to keep up with demand. If the team has to use them, make sure it pulls the right module from the appropriate repository, and always double-check the package before downloading.
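One concrete way to double-check a package is to verify its checksum against the digest the project publishes before installing it. A minimal sketch using only the standard library (the file path and digest in any real use would come from your own download and the project’s release page):

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Compare a downloaded package file against a digest published by the project.

    Returns True only if the file's SHA-256 matches the expected value.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

For Python teams, pip automates this: pinning requirements with `--hash=sha256:...` entries and installing with `pip install --require-hashes -r requirements.txt` makes pip refuse any artifact whose digest doesn’t match the lockfile.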

  • Watch for threats when using modules by unknown authors.

Avoid copying and pasting code from Stack Overflow, other popular forums, or even tutorials. Always try to verify code against multiple independent sources. Developers may never know when a malicious author has copied a well-meaning author’s blog post, changed the code to import a typosquatted malicious module, and then used SEO tactics to rank the malicious tutorial higher in search results.

  • Perform automated scans of code submitted in repositories.

Follow the results of those scans closely so the team can take action as soon as it detects something in the modules it uses. Maintain a list or map of imports and automate checks against known compromised modules.
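Mapping imports and checking them against a blocklist can be automated with the standard-library `ast` module. A minimal sketch for a Python codebase (the `COMPROMISED` set here is a hypothetical blocklist; a real one would be fed from advisories and known typosquat reports):

```python
import ast
from pathlib import Path

# Hypothetical blocklist of known-compromised or typosquatted module names.
COMPROMISED = {"mongose", "colourama", "jeIlyfIsh", "urlib"}

def imported_modules(source):
    """Collect the top-level module names imported by a Python source string."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def scan_repo(root):
    """Map each .py file under `root` to any blocklisted imports it contains."""
    findings = {}
    for path in Path(root).rglob("*.py"):
        bad = imported_modules(path.read_text()) & COMPROMISED
        if bad:
            findings[str(path)] = sorted(bad)
    return findings
```

Run as a CI step, a non-empty `scan_repo` result can fail the build before a compromised dependency reaches production.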

  • Have a plan for external services.

Backdoors rely on “command and control channels” or “phone home” features to receive commands from, or exfiltrate sensitive data to, external services. Good visibility into traffic patterns and the ability to detect irregularities let the team spot malicious attempts early. Of course, the SolarWinds case shows that these irregularities aren’t always easy to detect.

  • Develop an on-premises and cloud strategy.

When working on-premises, use enterprise EDR and application-level gateways, centralize logs and run anomaly detection across all collected data and events.

In the cloud, use automated systems that track activity and detect anomalies, and look for workload protection solutions that can analyze logs. The major public cloud providers offer facilities for event and data collection without agents; it’s just a matter of enabling the tools to gain visibility into irregular behavior. This matters especially for agile DevOps teams that continually create and destroy cloud environments for testing, research and development. Agents and strict controls do not work well in that realm, so use an agentless, turnkey solution to secure those environments and keep them from growing out of the security team’s control.
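Once events are centralized, even a crude frequency analysis surfaces candidates for review: event types that almost never appear in the stream are exactly the ones worth a second look. A minimal sketch (the event names are hypothetical examples, not any provider’s actual log schema):

```python
from collections import Counter

def rare_events(events, max_share=0.01):
    """Return event types whose share of the centralized log stream is at or
    below `max_share` -- a crude but useful triage signal, not a verdict.
    """
    counts = Counter(events)
    total = len(events)
    return [event for event, count in counts.items() if count / total <= max_share]
```

For example, ninety-nine routine `login` events and a single `disable_audit_logging` event would surface only the latter for an analyst to investigate.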

Remember that any backdoor or malware will eventually exhibit anomalous behavior and expose itself, so stay attentive and keep the number of false positives low so detection remains usable. For now, there’s little we can do to stop bad actors from attacking the left or right side of DevOps, so organizations must pay attention and secure the entire DevOps process. It takes only one mistake to put the company’s data and organization at risk.

Pascal Geenens, director, threat intelligence; Daniel Smith, head of security research, emergency response team, Radware; co-hosts of Threat Researchers Live