Time to make it tougher for attackers to use social engineering to enter DevOps environments


When LastPass disclosed late last year that an “unauthorized party” had found a way to access customer data stored in a third-party cloud service shared by LastPass and its parent company, the breach set off alarm bells in the popular press.

That was predictable. After all, if a company whose product purports to safeguard customer passwords can't protect itself against attackers, it's a bad sign.

But something like this was bound to happen. The breach revealed a little-discussed problem that has become the "soft underbelly" of the software development world.

Most of the conversations about the LastPass breach have focused on the company's security posture and what passwords and other information might have been compromised. But we're leaving out an even more important question: how did the attackers get that far in the first place? They got there by compromising targeted individuals through social engineering and then waltzing into a development environment, where they could pick up valuable technical information and keys at their leisure.

The “soft side” of collaboration

Every development organization aims to build a collaborative environment. But collaboration also carries risks that many organizations have failed to grasp adequately.

And this challenge isn't confined to LastPass. Organizations that develop their own applications often bring in other entities. But as they set up large collaborative environments, they also invite risk, because most organizations have little idea who is accessing those environments.

The collaborative nature of software development also requires a healthy amount of open source. Younger developers are comfortable harvesting publicly available code to complete their tasks. But do they fully understand what they're working with? While the code in question may well execute and deliver the needed functionality, there's a chance it also carries a surreptitious cyber time bomb.

Consider, too, that developers entering the workforce have grown up in an environment where many of their friendships are online, often with people they've never met in person. They simply share similar interests or may have worked together on previous projects.

For the most part, these are innocuous relationships. But when it comes to work that matters, where they're freely conversing with other engineers, they can become targets of socially engineered attacks designed to elicit information about their work.

What's more, data often winds up being shared much more freely than a security-minded CISO might like. I'm not talking only about designs; I'm also referring to source code and keys that wind up sitting in the development environment. The live operational environment always gets the most attention, but development environments, where coveted information may lie around so it can be shared and used, rarely receive the same scrutiny.
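One practical response to keys sitting around a development environment is to scan it routinely. Below is a minimal sketch of such a scan; the regex patterns are illustrative only, and real secret scanners such as gitleaks or trufflehog ship far more comprehensive rule sets.

```python
import re
from pathlib import Path

# Illustrative patterns only -- a production scanner would use a much
# larger, maintained rule set with entropy checks and allowlists.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_for_secrets(root: str) -> list:
    """Walk a directory tree and flag lines matching known secret patterns.

    Returns a list of (file path, line number, pattern name) tuples.
    """
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file: skip rather than crash the sweep
        for lineno, line in enumerate(lines, start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings
```

Running a sweep like this in CI, and treating any finding as a build failure, turns "keys lying around" from an unknown into a measurable condition.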

Finally, many development organizations source their test data from live data, loading it into test systems without sufficient anonymization. The argument is that nothing vets an application better than seeing what happens in the real world. But that doesn't mean we should use customers' actual live data to put it to the test.
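A middle ground is to pseudonymize live records before they reach a test environment. The sketch below uses keyed hashing so that values stay consistent across the data set (joins and uniqueness checks still behave realistically) without exposing the underlying customer data. The field names and salt are hypothetical; adapt them to your own schema, and manage the key like any other secret.

```python
import hashlib
import hmac

# Hypothetical field names and salt -- adapt to your own schema.
PII_FIELDS = {"name", "email", "phone"}
SALT = b"rotate-me-per-environment"  # keep out of source control in practice

def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with PII fields replaced by keyed hashes.

    HMAC keeps the mapping deterministic within one data set, so the same
    customer pseudonymizes to the same token, preserving referential
    integrity for testing.
    """
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS and value is not None:
            digest = hmac.new(SALT, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
        else:
            out[field] = value
    return out
```

Note that pseudonymization is weaker than full anonymization: with the key, the mapping is reproducible, so the salt itself must never leave the test environment.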

So what can CISOs do to mitigate the problem?

For starters, inject a red team into the organization's development environment to scout around with an attacker's mindset: examine access points, see what information they can find, and determine what they could do to attack the organization.

This is a strategic challenge more than a tactical one. No matter which group supplies the innovation in a co-development project, organizations need to apply the same rigor in vetting security privileges. Adopt the Security 101 rule of least-privilege access for data and information. Supplying only a username and a password no longer cuts it.
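In code terms, least privilege usually means deny-by-default: a role gets nothing unless it is explicitly granted. A minimal sketch, with hypothetical role and permission names:

```python
# Deny-by-default access check. Role names and permission strings are
# hypothetical -- real systems would back this with an IAM service or
# policy engine rather than an in-memory dict.
ROLE_PERMISSIONS = {
    "developer": {"repo:read", "repo:write"},
    "contractor": {"repo:read"},
    "release-bot": {"repo:read", "artifact:publish"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Unknown roles and unlisted permissions are denied by default."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The important property is the default: an unrecognized role resolves to the empty set, so nothing is granted by accident.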

Don't do co-development work with organizations that don't take security as seriously as the company does. Since partners essentially function as an extension of the organization, security teams are entitled to demand sound recruitment policies, including background checks and processes for onboarding and off-boarding employees. Also, make sure these partners aren't lax about granting privileges, allowing their employees to amass unjustified permissions that could compromise operations.

Think carefully about securing the confidentiality and integrity of the code base. Confirm and maintain its integrity while making sure it isn't getting polluted with vulnerable or malicious code.

To boil all this down into a single sentence: security teams must ensure quality control over anything deployed and used in a live environment. For those who might consider this new information, it's time to wake up and smell the coffee.

Steve Benton, vice president, threat research, Anomali.
