Patched doesn’t always mean secure
Keeping software patched has long been a fundamental best practice in all areas of cybersecurity. When you are running third-party software, promptly patching to the latest versions is the only way to stay protected against known security flaws. While prompt patching is always recommended (though not always realistic due to limited resources or compatibility issues), it is no secret that security patches don’t always work as expected, especially for high-profile vulnerabilities that need to be patched quickly.
There are usually multiple ways to exploit a single vulnerability, and a buggy or incomplete patch might only mitigate some attack paths while leaving the application open to others. Because of the sudden attention they get from the security community, high-profile vulnerabilities also tend to attract follow-up disclosures, requiring yet more patches in a short time.
Log4j itself was a case in point: version 2.15.0, which fixed the original RCE vulnerability, was quickly found to contain a different one. Hot on its heels came 2.16.0, which fixed the previous issue but had two other vulnerabilities. As of this writing, 2.17.1 is the recommended safe version – see our post on Log4Shell for a technical analysis of the original flaw. To give one more example from recent months, in October 2021, a path traversal vulnerability was discovered in Apache web server 2.4.49. The fix rushed out in version 2.4.50 proved to be incomplete, and it took 2.4.51 to finally address the root cause.
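To illustrate how a path traversal fix can be incomplete in exactly this way, here is a minimal sketch (not the actual httpd code, and the function names are our own): a naive filter that rejects the literal `../` sequence is defeated by percent-encoded input, while a filter that decodes the path until it stabilizes catches the same payload.

```python
from urllib.parse import unquote

def naive_is_safe(path: str) -> bool:
    # Incomplete check: rejects the literal "../" sequence but never
    # decodes the path, so percent-encoded traversal slips through.
    return "../" not in path

def stricter_is_safe(path: str) -> bool:
    # Decode repeatedly until the string stabilizes, then check.
    # (A real server should canonicalize the path instead.)
    prev = None
    while path != prev:
        prev, path = path, unquote(path)
    return "../" not in path

# A percent-encoded traversal attempt in the style of CVE-2021-41773:
payload = "/cgi-bin/%2e%2e/%2e%2e/etc/passwd"
print(naive_is_safe(payload))     # True  -- the naive filter is fooled
print(stricter_is_safe(payload))  # False -- decoding reveals the traversal
```

The point is not this particular check but the pattern: a patch that blocks one encoding of an attack while other encodings of the same attack still get through.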
Far from being rare, incomplete security patches are a basic fact of life in cybersecurity.
Fixing security defects is never easy
Faced with a vulnerable third-party product or component, most organizations have no alternative but to wait for a patch, install it, and hope that it works as advertised. But what about web applications developed in-house? You have your own developers on hand, so surely fixing a security issue in your own application is quicker, easier, and more effective than waiting for a patch? It certainly should be, but fully remediating security issues and doing so on schedule and without affecting other aspects of the application is always a tricky balancing act.
There are at least a dozen reasons why developers can struggle to nail the right solution to a security defect, including skill gaps, inefficient workflows, immature tools, and time pressures. One overarching theme is that, all too often, security issues are reported and handled separately from non-security bugs. Each security ticket pulls developers out of their streamlined work environments without providing the guidance they need to identify and remediate the root cause. And even assuming you have all the right skills, resources, and tools, security is never easy, and there will always be vulnerabilities that simply need time and hard work to investigate and fix.
Whatever the specific reasons, it is common for vulnerability fixes to require more than one attempt, though with the right tooling and workflows, you can at least iterate through that process far faster when working in-house. Even so, implementing a fix is one thing, but quickly yet thoroughly testing whether it truly addresses the vulnerability is a challenge in its own right.
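One practical way to make that testing repeatable is to turn every known exploit payload into a regression check that runs on each build. The sketch below assumes a hypothetical in-house validator, `is_safe_path` (our own illustrative name, not a real API), and replays the payloads that worked before the fix, including the encoded variants that defeated an earlier, incomplete patch:

```python
from urllib.parse import unquote

def is_safe_path(path: str) -> bool:
    # Stand-in for the application's patched validator: decode until
    # the string stabilizes, then reject any traversal sequence.
    prev = None
    while path != prev:
        prev, path = path, unquote(path)
    return ".." not in path

# Replay the exact payloads that worked before the fix, including
# the encoded variants that bypassed the first patch attempt.
REGRESSION_PAYLOADS = [
    "/icons/../../etc/passwd",          # plain traversal
    "/icons/%2e%2e/%2e%2e/etc/passwd",  # percent-encoded
    "/icons/%252e%252e/etc/passwd",     # double-encoded
]

def run_regression_suite() -> bool:
    # True only if every known exploit payload is now rejected.
    return all(not is_safe_path(p) for p in REGRESSION_PAYLOADS)
```

Wired into CI, a suite like this turns "we think the patch works" into an automatic build failure whenever a fix regresses or proves incomplete.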
Applying the zero-trust mentality to web application security
In the wider scheme of things, security patches and vulnerability fixes are merely special cases of changes to the application environment. From a security standpoint, every single change, be it a minor patch, a configuration tweak, or a major new release, could potentially introduce a new vulnerability or fail to fix an existing one. You cannot afford to blindly trust that you are still secure – the only way to be sure is to test everything, and test it often.
The concept of zero-trust is gaining traction with organizations worldwide, especially with CISA pushing for the adoption of zero-trust architecture (ZTA) in US federal agencies over the past year. While ZTA relates specifically to authenticating and authorizing all access to computer networks, systems, and resources, the basic idea of zero-trust is as old as cybersecurity itself: trust nothing, suspect everything. Applied to web application security, this means not only distrusting every access attempt and HTTP request but also distrusting every part of your application environment until it has passed your security testing process – and treating every change as insecure until proven otherwise.