Application security, Cloud security, DevOps

CWE Top 25, Bugs in Inconsistencies, Sequoia Vuln, Twitter Transparency, & Cloud Risks – ASW #159

This week in the AppSec News: CWE releases the top 25 weaknesses for 2021, finding bugs in similar code, the Sequoia vuln in the Linux kernel, Twitter transparency for account security, a future for cloud security, & more!

Full episode and show notes

Announcements

  • In an overabundance of caution, we have decided to flip this year’s SW Unlocked to a virtual format. The safety of our listeners and hosts is our number one priority. We will miss seeing you all in person, but we hope you can still join us at Security Weekly Unlocked Virtual! The event will now take place on Thursday, Dec 16 from 9am-6pm ET. You can still register for free at https://securityweekly.com/unlocked.

  • Join us June 29th for a webcast with Tyler Robinson and Beau Bullock to learn how to pivot into the world of Crypto security. Visit https://securityweekly.com/webcasts to register with only your name and email! Don't forget to check out our library of on-demand webcasts & technical trainings at securityweekly.com/ondemand.

Hosts

Mike Shema
Security Partner at Square
  1. Finding Bugs Using Your Own Code: Detecting Functionally-similar yet Inconsistent Code - The 30th USENIX Security Symposium is coming this August (details at https://www.usenix.org/conference/usenixsecurity21). One of the papers has an appealing premise and results, albeit with the clunky name of Detecting Functionally-similar yet Inconsistent Code (FICS). The idea is to apply machine learning techniques that rely only on your own code to identify inconsistencies that can indicate bugs like memory leaks and mishandled pointers. The appeal is that the ML training relies on the codebase itself -- no need for a massive corpus that may be unavailable, unwieldy, or unconnected to your apps. Even if you don't have the means to dive into the underlying techniques, the basic principle is something every appsec team should strive for: when you identify a vuln, such as through a scanner or a bug bounty, don't stop at the fix -- look for similar patterns in your code. Of course, the catch is being able to express what "similar" means, having an effective tool to search for that similarity, and producing hits that are meaningful bugs.
     Some other USENIX presentations with an interesting appsec angle:
     - "Android SmartTVs Vulnerability Discovery via Log-Guided Fuzzing", https://www.usenix.org/conference/usenixsecurity21/presentation/aafer
     - "Saphire: Sandboxing PHP Applications with Tailored System Call Allowlists", https://www.usenix.org/conference/usenixsecurity21/presentation/bulekov
     - "ALPACA: Application Layer Protocol Confusion - Analyzing and Mitigating Cracks in TLS Authentication", https://www.usenix.org/conference/usenixsecurity21/presentation/brinkmann
     - "Why Eve and Mallory Still Love Android: Revisiting TLS (In)Security in Android Applications", https://www.usenix.org/conference/usenixsecurity21/presentation/oltrogge
  2. CWE – 2021 CWE Top 25 Most Dangerous Software Weaknesses - Ok. So, the "Common Weakness Enumeration" (aka CWE) has some of the most unexciting, snooze-inducing naming conventions. After all, there's a lot of "improper neutralization of special elements used" for blah, blah, blah -- when shorthand like XSS, command injection, and SQL injection is easier to remember. However, it's a useful way to classify software weaknesses so that tools can reference a shared language, and when CWEs are tracked over time we gain some basic insight into the trends where appsec teams may want to focus. The latest list of the top weaknesses for 2021 hits very familiar items in compiled code (all those out-of-bounds reads and writes, use after free) and the boringly repetitive offenders of the web, from XSS to SQL injection to CSRF. And, if you look closely, you'll find ASW's favorite type of vuln coming in at number 8.
     A terrible way to use this list is adding CWE numbers to your secure coding training -- no one cares and it's not useful. Instead, pick a weakness or two that's relevant (and hopefully not prevalent) to your code base and set that as a vuln class to eliminate. Add a framework to prevent it, switch to a different coding pattern that makes the weakness harder to implement and easier to spot, or remove potentially vulnerable code that's not even used anymore. Find something more effective than just grepping for the problem.
     The Root Cause Analysis page from Project Zero has a good list of vulns that fall into the top items on this list. You can find it at https://googleprojectzero.github.io/0days-in-the-wild/rca.html. For example, they describe an out-of-bounds write in IE that can be triggered by three simple lines of JavaScript (https://googleprojectzero.github.io/0days-in-the-wild//0day-RCAs/2021/CVE-2021-33742.html). You can also find a slew of use-after-free bugs that they've documented.
If you're looking for examples of high-impact vulns with good explanations, any of these could be part of a secure coding discussion.
  3. Unpatched iPhone Bug Allows Remote Device Takeover - Here's a quick followup on the format string bug that hit iPhone's Wi-Fi connections. It was a fun bug with a dead simple trigger -- throw a few format identifiers (like "%s") into the SSID -- that led to a gnarly DoS that prevented the device from rejoining any Wi-Fi. Apparently, the DoS is now an RCE. It's a good example of the appsec (and cryptography) premise that "attacks only get better". Fortunately, another appsec premise is helpful here: "patch your stuff".
  4. Sequoia: A Local Privilege Escalation Vulnerability in Linux’s Filesystem Layer (CVE-2021-33909) - Here's another vuln disclosure from Qualys, this time a size_t-to-int type conversion in the Linux kernel's filesystem layer. What's curious about this (or frustrating, depending on your point of view) is that this type of unsigned-to-signed integer conversion is known to have potential security side effects, and there are known patterns for correctly handling these kinds of comparisons. It's something compilers can warn on and is relatively straightforward to identify with semantically aware code analysis tools. And don't write off this type of vuln as something specific to Linux or the C and C++ languages. This isn't a memory-safety issue (although it could be a precursor to one), and it's a type of vuln that even Rust could fall prey to. Fortunately, Rust's compiler (along with modern C/C++ compilers) can loudly warn you about these issues. So, before you assume that values will never be large enough to trigger integer overflows, pay attention to what the compiler is suggesting and take the time to harden those areas of code.
     Here are some more links if you're interested in integer overflows:
     - https://doc.rust-lang.org/book/ch03-02-data-types.html
     - https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#silencing-unsigned-integer-overflow
     - https://cwe.mitre.org/data/definitions/190.html -- since we're talking about CWE this episode
     - https://github.com/dcleblanc/SafeInt -- Dave LeBlanc did a lot of early work in this area to harden Microsoft code
     - http://phrack.org/issues/60/10.html#article -- an article in Phrack from 2002
  5. Twitter Transparency Report on Account Security - We like to highlight efforts at transparency, whether through postmortems, open source tools, or reports on what works well (or doesn't!). While it's good to keep a skeptical eye out for information masquerading as marketing, it's just as good to read docs like this to see what can be learned from them. In this case, a takeaway for the appsec community is that only 2.3% of Twitter users have enabled any form of 2FA, and of that cohort 80% are relying on SMS. So, next time your advice to a product team is "make 2FA available", take the time to plan out how to measure and drive that adoption -- aim for a userbase where it's closer to 2.3% who *haven't* enabled 2FA yet. And for bonus points, make sure the account recovery process doesn't become the new weak link once you have mass adoption of 2FA.
  6. Google Cloud CISO Phil Venables on the future of cloud security - In the past few episodes we've been talking about the cloud and risk in terms of which risks the cloud makes easier to manage and which risks it might introduce. Unsurprisingly, we'd say most of the risk correlates with the maturity of your DevOps team and appsec program rather than with "the cloud" itself. Here we have Phil Venables talking about paving a way forward for cloud security that leverages modern tools and approaches to address risk in more automated, consistent fashions. As an aside to last episode's discussion about banks, clouds, and availability, Phil comes from a banking background, having served as CISO for Goldman Sachs. Read the original article at https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-july-2021. We covered another article from Phil in episode 151 about his views on complexity, security, and using good design to keep complex systems secure. You can find the episode and article at https://securityweekly.com/asw151.
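The "functionally similar yet inconsistent" idea from the USENIX paper in item 1 is easiest to see in code. Here's a hypothetical C sketch (the function names and record format are invented, and the paper's actual technique is ML-based clustering of code fragments, not anything this simple) showing the kind of sibling functions such a tool would group together and then flag:

```c
#include <stddef.h>
#include <string.h>

/* Two near-identical helpers that copy a length-prefixed value out of a
 * packet. An analysis that clusters functionally similar code would put
 * these in one group and flag the sibling whose checks diverge. */

/* Sibling A: validates the claimed length against both buffers. */
int copy_value_checked(char *dst, size_t dst_len,
                       const char *src, size_t src_len, size_t claimed) {
    if (claimed > src_len || claimed >= dst_len)
        return -1;                 /* reject inconsistent lengths */
    memcpy(dst, src, claimed);
    dst[claimed] = '\0';
    return 0;
}

/* Sibling B: same shape, but the bounds check was forgotten -- this is
 * the inconsistency (a potential out-of-bounds copy) worth flagging. */
int copy_value_unchecked(char *dst, size_t dst_len,
                         const char *src, size_t src_len, size_t claimed) {
    (void)dst_len; (void)src_len;  /* missing checks: CWE-787 territory */
    memcpy(dst, src, claimed);
    dst[claimed] = '\0';
    return 0;
}
```

The point isn't the bug itself; it's that sibling A gives you the "what does similar-but-correct look like" baseline that a scanner signature alone doesn't.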
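Item 2's advice to "switch to a coding pattern that makes the weakness harder to implement" can be sketched for CWE-787 (out-of-bounds write), the number one entry on the 2021 list. This is an illustrative example, not code from any of the linked reports -- the struct and function names are invented: instead of scattering raw strcpy/strcat calls over char arrays, route every write through one bounded, truncating helper.

```c
#include <string.h>

/* A small buffer type whose only write path is bounded and always
 * NUL-terminates, so an out-of-bounds write can't be expressed. */
struct sbuf {
    char   data[64];
    size_t used;
};

/* Append at most what fits, always NUL-terminate, report truncation. */
int sbuf_append(struct sbuf *b, const char *s) {
    size_t room = sizeof b->data - b->used - 1;
    size_t n = strnlen(s, room + 1);   /* never scan past what we need */
    int truncated = n > room;
    if (truncated)
        n = room;
    memcpy(b->data + b->used, s, n);
    b->used += n;
    b->data[b->used] = '\0';
    return truncated ? -1 : 0;
}
```

Eliminating the vuln class then becomes a mechanical review question ("does any code write to data except through sbuf_append?") instead of hunting for individual off-by-one bugs.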
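The Wi-Fi bug in item 3 boils down to a classic shape: attacker-controlled text (the SSID) passed as a printf *format* rather than as an *argument*. A minimal C sketch of the unsafe and safe shapes -- the function names are illustrative, not Apple's code:

```c
#include <stdio.h>

/* BUG: "%s" or "%n" inside ssid is interpreted as a format directive,
 * reading (or, with %n, writing) memory the caller never intended. */
void log_ssid_unsafe(char *out, size_t len, const char *ssid) {
    snprintf(out, len, ssid);
}

/* Fix: a constant format string; ssid is only ever treated as data. */
void log_ssid_safe(char *out, size_t len, const char *ssid) {
    snprintf(out, len, "%s", ssid);
}
```

Compilers flag the first shape with warnings like -Wformat-security, which is one more reason to build with warnings turned on (and ideally treated as errors).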
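The Sequoia item hinges on a size_t value being squeezed into a signed int. A hedged sketch of the hazard and the hardened pattern -- these names are illustrative, not the kernel's actual code: once size exceeds INT_MAX, the implicit conversion yields a negative int, and any later bounds check written against that int passes when it shouldn't.

```c
#include <limits.h>
#include <stddef.h>

/* Hazardous shape: silently narrows. For size > INT_MAX the result is
 * implementation-defined and typically negative on two's complement. */
int narrow_unchecked(size_t size) {
    int len = (int)size;
    return len;
}

/* Hardened shape: refuse values that don't fit, as compiler warnings
 * (-Wconversion and friends) nudge you to do. */
int narrow_checked(size_t size, int *out) {
    if (size > (size_t)INT_MAX)
        return -1;           /* reject instead of wrapping negative */
    *out = (int)size;
    return 0;
}
```

Libraries like SafeInt (linked above) wrap this pattern so the check can't be forgotten at each call site.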