Cloud Security

Six common developer security fails that lead to cloud risks

Baking security controls in early and throughout the development process – a.k.a. shifting security left – has led to systemic improvements in enterprise security.

Even so, cloud security has become a mind-bogglingly complex endeavor that’s created new risks and let attackers put a new spin on old tactics, techniques, and procedures. Between the unprecedented scale of cloud resources and security teams’ heavy dependence on developers to implement security policies in the cloud, complexity at scale has emerged as the basic nature of cloud security.

While most cloud developers are security conscious, even the best of them are sure to do something that leads to an exposure they could never have anticipated. Here are some of the more common security mistakes developers make:

  • Social engineering: I’m constantly amazed at how clever, tenacious and strategic attackers are when it comes to social engineering. For example, the wild story behind the recent discovery of a backdoor in the popular open source compression utility XZ Utils reveals how frighteningly close attackers came to unleashing a potentially devastating software supply-chain attack. That the stealthy, slow-to-unfold campaign came so close to escaping detection adds drama that makes it the stuff of bingeable Netflix documentaries.
  • Burnout/human error: While less dramatic and interesting than an elaborate social engineering campaign, ordinary human error creates cyber risk and exposure all the time. Something as simple as a typo can create a broken dependency that, buried in a sea of dependencies, is extremely hard to spot and therefore easy for an attacker to exploit.
  • Secrets in code, in a file, or on a disk: If a company has ever had a single developer who reused passwords and/or keys instead of using a vault, there are exposed secrets somewhere in its environment. Even if developers know better now, most companies have legacy secrets exposed somewhere in their cloud. The task of “cleansing” exposed secrets and making sure such bad practices do not creep back in is daunting (the first sketch after this list shows the difference between the two habits).
  • It’s too easy to break the assumptions the system was designed on: Applications are ever-evolving, and new developers come and go. It’s very easy for a new developer to push a piece of code that looks innocent and passes every code-review checkpoint, but quietly breaks a hidden assumption the system was built on, rendering the entire system insecure. Tracking every single assumption is hard, time-consuming and impractical. The smarter play: actively scan for risks, including those caused by broken assumptions, and remediate them as soon as they are detected (the second sketch after this list shows one such break).
  • Lack of fine-grained access controls: Fine-grained access controls are a powerful security lever, but maintaining and updating them over the course of building an application is another labor-intensive task. Managing them over time has become so impractical that developers often fall back on coarse-grained controls. It’s easy to understand why a developer might go that route, but coarse-grained controls almost always result in too many people having too much access (the third sketch after this list contrasts the two approaches). AI could help here, but AI has its own set of cloud security risks.
  • AI-assisted code generation: As developers become more dependent on AI code-generation assistants like Copilot, a new kind of threat emerges: rogue AI. Developers come to trust their AI assistants and will often accept any piece of code suggested to them without fully understanding the implications. A threat actor who gains control of such an assistant could easily introduce malicious code into codebases all over the world; it would look entirely legitimate, pushed to production by real developers, yet compromise countless services.
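
To make the secrets point concrete, here is a minimal Python sketch contrasting a hardcoded credential with one resolved at runtime; the variable and environment names are hypothetical, and a dedicated vault or secrets-manager client would typically stand in for the simple environment lookup shown here.

```python
import os

# Anti-pattern: a credential committed to source control lives on in git
# history, forks, backups and CI logs, even after it is later "removed."
DB_PASSWORD = "s3cr3t-reused-everywhere"  # hypothetical exposed secret

# Better: resolve the secret at runtime from an injected environment variable
# (or a vault / secrets-manager client), so it never lands in the repository.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not set; check your vault or CI configuration")
    return password
```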
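
The broken-assumption problem is easier to see in code. In this hedged, hypothetical sketch, the original helper assumes callers pass an already-validated file name; a later change that feeds it raw user input quietly turns it into a path-traversal bug, even though each piece looks innocent in isolation.

```python
from pathlib import Path

UPLOAD_DIR = Path("/srv/app/uploads")

def read_upload(filename: str) -> bytes:
    # Hidden assumption: the caller has already validated 'filename'.
    return (UPLOAD_DIR / filename).read_bytes()

# Years later, a new endpoint calls read_upload(request_param) with raw user
# input such as "../../etc/passwd", silently breaking that assumption.

def read_upload_defensively(filename: str) -> bytes:
    # Safer: re-validate instead of trusting callers, rejecting any path
    # that escapes the upload directory.
    target = (UPLOAD_DIR / filename).resolve()
    if not target.is_relative_to(UPLOAD_DIR):  # Python 3.9+
        raise ValueError(f"illegal path: {filename!r}")
    return target.read_bytes()
```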
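
To illustrate the access-control trade-off, here is a hedged sketch of coarse- versus fine-grained policies written as Python dictionaries in AWS IAM style; the bucket name and prefix are illustrative assumptions, not a recommendation.

```python
# Coarse-grained: easy to write and maintain, but grants every S3 action on
# every bucket, far more than most workloads need.
coarse_policy = {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*",
}

# Fine-grained: scoped to the two actions and the single prefix this service
# actually uses. More work to keep current, but a much smaller blast radius
# if the workload's credentials are ever compromised.
fine_policy = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::example-app-bucket/reports/*",
}
```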

Moving forward, security leaders must assume a breach will happen and be ready to respond. The irony of mitigating cloud risks created by humans is that even though we know the attack vectors, the specifics are random – there are few, if any, patterns. If there were, we could easily automate incident response with AI.

There’s plenty of room for automation to make cloud detection and response better, faster and more targeted, but the underlying challenge remains: cloud risks are created by humans, yet present as machine anomalies. It’s a scenario many security teams are still acclimating to, but once those anomalies are found, security teams can fix them.

It’s not often we see “cloud security” and “good news” in the same sentence. Let’s take the win!

Tomer Filiba, chief technology officer, Sweet Security
