- 1. Autodiscovering the Great Leak
In a series of unfortunate events, researchers discovered how to harvest plaintext credentials from Exchange's Autodiscover mechanism. Autodiscover starts from a design intended to improve usability, and its documentation offers familiar cautions against misuse or mistakes, yet the protocol still suffers from dangerous failure modes and lacks controls that would make misuse harder. As a consequence, insecure clients end up connecting to attacker-controlled domains and following downgrade attacks that expose credentials instead of OAuth tokens.
It's a good lesson for appsec teams who rely solely on written guidance or standards to elevate security -- you need something that inspects code and configurations to ensure that guidance is being met. As an industry, we have to do better than just saying, "Before you send a request to a candidate, make sure it is trustworthy."
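The dangerous failure mode here comes from how clients build their list of candidate Autodiscover hosts. The sketch below is a hypothetical reconstruction of the "back-off" behavior the researchers described, not any vendor's actual code: after the user's own domain fails, the client strips a label and retries, eventually landing on a bare `autodiscover.<tld>` host that anyone can register.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the Autodiscover "back-off" behavior: for a user
// at corp.example.com, a naive client tries autodiscover.corp.example.com,
// then autodiscover.example.com, and finally autodiscover.com -- a domain
// an attacker can simply register and listen on.
std::vector<std::string> candidate_hosts(const std::string& email_domain) {
    std::vector<std::string> hosts;
    std::string domain = email_domain;
    while (true) {
        hosts.push_back("autodiscover." + domain);
        std::size_t dot = domain.find('.');
        if (dot == std::string::npos) break;  // nothing left to strip
        domain = domain.substr(dot + 1);      // drop one label and retry
    }
    return hosts;
}
```

A safer client would stop once it leaves the organization's own DNS suffix, rather than walking all the way down to a registrable apex.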
- 2. RCE is back: VMware details file upload vulnerability in vCenter Server
This article covers vulns in two different apps: VMware's vCenter Server and Nagios. What stands out is how simple they are, being well-known bug classes with high impacts. vCenter Server suffers from a file upload vuln, path traversal (yesss!), and even a DoS via XXE. On the Nagios side the list includes command injection and, shockingly for a modern app design, a SQL injection as well. A challenge here would be identifying which part of a secure SDLC failed, from tools to identify these kinds of vulns, to a design phase that makes them harder to introduce, to frameworks that make them near-impossible to introduce.
More details at:
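For the path traversal case, the classic mistake is joining a client-supplied filename onto an upload directory without any validation. The check below is a minimal, deliberately conservative sketch of the defense (not VMware's actual code); the hypothetical `is_safe_upload_name` rejects anything that could escape the upload directory.

```cpp
#include <string>

// Hypothetical sketch: reject upload filenames that could traverse out of
// the destination directory. Conservative on purpose -- it also rejects
// harmless names containing "..", which is a fine trade-off for uploads.
bool is_safe_upload_name(const std::string& name) {
    if (name.empty()) return false;
    if (name[0] == '/') return false;                       // absolute path
    if (name.find("..") != std::string::npos) return false; // parent segment
    return true;
}
```

The more robust fix, of course, is canonicalizing the joined path and verifying it still lives under the upload root -- the check above is just the first layer.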
- 3. 100M IoT Devices Exposed By Zero-Day Bug
Here's a dead-simple bug, triggered by a relatively simple payload, in a software component likely present in millions of IoT devices. While the potential impact may not be as dramatic as the article states, a flaw this easy to exploit in devices with notoriously poor patching practices is likely to be around for a long time. It's the kind of thing that will show up on a CISA top-vulns list in a few years.
On the technical appsec side, it's a simple example of a signed vs. unsigned type mismatch. It's also the type of flaw we'd hope a compiler would warn about or a fuzzer would be able to discover (after all, user-influenceable payload lengths are a fruitful attack vector). Of those two tools, seeing compiler warnings about this type of flaw and being able to correctly identify it as a potential vuln would be a huge time saver. Fuzzing would be a great next step, but it requires more time investment to set up and maintain, whereas DevOps teams work with compilers on a daily basis.
- https://github.com/nanomq/nanomq/issues/203 -- for the brief writeup
- https://mqtt.org/ -- for background on MQTT
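The signed vs. unsigned mismatch mentioned above boils down to a few lines. This is a generic sketch of the bug class, not NanoMQ's actual code: a length field parsed from the wire as a signed integer passes a naive bounds check when negative, then gets implicitly converted to a huge unsigned value at the copy site.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical vulnerable check: a negative claimed_len (e.g. -1) compares
// as less than the buffer size, so it is accepted -- but if it later flows
// into memcpy's size_t parameter, -1 becomes SIZE_MAX.
bool vulnerable_check(std::int32_t claimed_len, std::size_t dst_len) {
    return claimed_len <= static_cast<std::int32_t>(dst_len);
}

// Fixed check: reject negative lengths before any unsigned comparison.
bool fixed_check(std::int32_t claimed_len, std::size_t dst_len) {
    return claimed_len >= 0 &&
           static_cast<std::size_t>(claimed_len) <= dst_len;
}
```

This is exactly the pattern that `-Wsign-compare`/`-Wsign-conversion` style warnings exist to catch, which is why treating those warnings as signal rather than noise pays off.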
- 4. Developers fix multitude of vulnerabilities in Apache HTTP Server
Nothing too exciting about this article other than how uncommon it's been to see high-risk vulns in Apache HTTP Server. The point releases continue to carry handfuls of low to moderate items and demonstrate the kinds of memory safety flaws you'd expect from a C-based project. What might be interesting is an analysis of how the risk has declined over time, or of whether these vulns are appearing in newly written code (as in, are developers still making mistakes?) or being newly discovered in old code (as in, are analysis tools getting better?).
It's also cool to see ClusterFuzz show up in the acknowledgements. Even if the vuln it identified was low risk, it's nice to know that automation is demonstrating value. You can find more details from the security release notes at https://httpd.apache.org/security/vulnerabilities_24.html.
- 5. An update on Memory Safety in Chrome
A clear theme this week is compiled code, its consequences, and its chances for better controls. And, of course, that means we drag out the magic phrase of "memory safety" -- the bane of C and C++. The Chrome developers know this, having seen memory safety problems in roughly 70% of their severe security bugs last year. This article shows how they're thinking about addressing the class of bugs that falls under the memory safety umbrella. They've settled on two options: further harden how raw pointers are handled throughout the code base, and re-implement parts of the code base in another language like Rust.
For a code base as large and complex as Chrome, neither approach is trivial and neither comes without costs. But the cost of insecure software can be even higher, especially for software as ubiquitous as Chrome. So it's also educational to see how they're approaching both the performance costs of hardened pointers and the operational costs to developers of dealing with even more complexity or a completely new programming language.
Check out more details at:
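To make the hardened-pointer idea concrete, here's a toy sketch -- nothing like Chrome's real implementation, which works at the allocator level -- of the general shape: a wrapper around a raw pointer that turns a silent use-after-free into a loud, deterministic failure.

```cpp
#include <cassert>

// Toy sketch of a hardened pointer (the CheckedPtr name and design are
// hypothetical): dereferencing after the pointee has been released trips
// an assertion instead of reading freed memory.
template <typename T>
class CheckedPtr {
 public:
  explicit CheckedPtr(T* p) : p_(p) {}
  T& operator*() const {
    assert(p_ != nullptr && "dereference after release");
    return *p_;
  }
  void clear() { p_ = nullptr; }  // call when the pointee is freed
  bool valid() const { return p_ != nullptr; }
 private:
  T* p_;
};
```

The real engineering challenge, as the article discusses, is doing something like this across millions of existing raw pointers without an unacceptable performance hit.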
- 6. 2021 Accelerate State of DevOps report addresses burnout, team performance
Here's another article from Google, this time about this year's DevOps Research and Assessment (DORA) report. It's a review of the maturity of DevOps practices within orgs and how that maturity positively impacts software quality -- and therefore security. There's a section on security, which involves security reviews across many SDLC phases. It also states that "teams with high-quality documentation were 3.8 times as likely to integrate security throughout their development process." So that's a win for documentation, but it likely also requires follow-up practices and controls like linters, scanners, or other means of automating security recommendations rather than relying on manual reviews alone.
- 7. OWASP’s 20th Anniversary Celebration
OWASP turned 20 this month and celebrated with a free streaming conference. While recordings aren't yet available, it's always a good time to check out their dozens of projects and cheatsheets -- find one you'd like to participate in!
Find them at
- 8. HackerOne expands Internet Bug Bounty project to tackle open source bugs
Open source projects can always benefit from attention, participation, and budgets for security. Seeing more opportunities to reward researchers for bugs discovered in open source software is good, but it also brings us back to the discussion of where to prioritize security investments. Sometimes we don't need to be reminded how prevalent software flaws are, sometimes we need more assistance in designing and rearchitecting software so those flaws are harder to introduce or less impactful overall.