Attendees at HackerOne at Collision 2017 in New Orleans on May 2, 2017. (Photo by Diarmuid Greene / Collision / Sportsfile | "DG2_4955" by collision.conf is licensed under CC BY 2.0)

The Log4j vulnerability is going to be a persistent problem, with experts expecting new discoveries of the bug to keep occurring for years. Log4j is so pervasive within Java programming that, in organizations with imperfect asset management, undocumented instances will be burrowed deep within many enterprises' systems, waiting for future penetration testers to discover.

Alex Rice is the chief technology officer of HackerOne, which connects crowdsourced hackers to companies to audit security. And while he said he expects those hackers to find the bug plenty of times in the future, he thinks many enterprises have created problems for themselves that could have been avoided before HackerOne ever came in.

SC Media spoke with Rice about how changing development practices could reduce the next Log4j's woes.

SC Media: We're anticipating Log4j being around for a while. What does that kind of look like to the crowdsourced hacker community?

Alex Rice: Let me comment on the broader security industry here first. The answer to this is great vulnerability management backed by a really solid asset inventory. It's as simple as that, right? Have a good asset inventory, and then just put a patch cadence together to get it all updated. Right?

The reality is, that program, no matter how advanced it is, is always going to have gaps across a number of different areas.

We don't see bounty programs as the total solution to finding Log4j in your environment. You should have a vulnerability management program and scanners in place to identify hopefully 99% or more of the vulnerable instances, and you're already hopefully starting remediation activities on them. You don't need any help from hackers to do that. But in practice, no enterprise of any reasonable consequence has a vulnerability management program or an asset inventory that they're confident or comfortable with. And that's where the hacker community really fills in the gap.

It's one of those things that has a temporal value to it. In the first day of response, having a bunch of hackers incentivized with bounties to go find a newly released CVE is not useful. And the vast majority of our programs won't reward or incentivize that behavior. But once you've patched everything that you're aware of, maybe that's 24 hours later, maybe it's seven days later, maybe it's 30 days later, suddenly incentivizing hackers to tell you about instances that you missed is insanely valuable.

I think we've already started to see the first bounty programs get through that window and say, "Hey, we're done with our response. Now we'll reward for anything that we've missed." HackerOne's own bounty program did this about 24 hours after the response started. We've had other customers like Grammarly that have now published that they finished their response and now want you to go look for anything that they've missed. That timeline will be a little bit different for every program.

But that's how I expect to see this play out in the bounty industry: once you're done with your vulnerability management program, ask hackers to find what you've missed.

You mentioned the need for asset management. For enterprises that have not already set it up, that's not always the easiest lift —

AR: I don't have any sympathy for folks like that. It is clear that getting a handle on what technology is being developed and what assets you own should be at the top of every security program's list. And I think where we've gone wrong here as an industry is we've outsourced that task to the CISO instead of the R&D budgets that are building the software to start with. In the organizations that do asset inventory and vulnerability management well, it's a core responsibility of the technology or IT organizations that are building those assets.

I do have a lot of sympathy for CISOs that had this dumped on them today. But if I'm speaking more broadly to enterprises and non-security executives that are paying attention to this, there are countless examples of enterprises that have done this well. And the commonality across all of them is that it's the responsibility of the folks building software to keep track of what they've built, what dependencies they use, and where they've deployed it. It's not a security team following them around after the fact, trying to reverse engineer what they built, why, and where it's running.

I don't think, in 2021, it's excusable for an enterprise CIO or head of engineering to not have a reliable asset database.

That actually ties in to the question I was about to ask. Will the process of remediating, and all the pain that comes with it, convince organizations to invest in asset management?

AR: The remediation puts too much focus on infosec. The people I would love to get to pay attention to it are not the security team, not the CISOs, not anyone who has security in their job title. They have known this was coming. Everyone who's in that role was just kind of waiting for something like this to happen again.

The people we need to really pay attention to this are the folks that put Java there in the first place, which was almost never the security team. And so if I could give one wake-up call to the industry, it's this: let's pretend you didn't have a CISO, you didn't have a security organization. How would engineering organizations and digital teams respond to this? The right place to tackle it is in every enterprise's multi-year digital transformation initiative and its massive investments in software development. This needs to make its way onto those roadmaps and those budgets, because that's the solution.

One of the comments I've heard a lot is the need for [software bills of materials], but those sort of presume that asset management is already in place.

AR: It's a very useful tool, but it assumes you know what you've built. It is impossible to ask a security team after the fact to catalogue everything that's getting built across an organization. And so one of the flaws in the SBOM effort is how much of it is driven by the infosec community instead of the DevOps or software engineering community. If you look at it through that lens, it's not a tool designed with developers in mind.

How do we get to developer adoption then? Is it a matter of education or does it mean reimagining the tool entirely?

AR: I don't think it needs to be reimagined. I think we need to involve builders and developers more heavily in that process. "Reimagined" is a bit too strong. It's a question of who are we building the tool for? If we're building the tool for developers to manage and update their dependencies, there are different design decisions that we would take that don't involve starting from scratch.

If you look at an example of this done well, look at GitHub Dependabot, look at Snyk. That's not an enterprise-wide, comprehensive solution. But that's the type of thing that shifts the responsibility to developers, so they can quickly get an assessment of everywhere in their infrastructure that Log4j is running and get it up to date.
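Tools like Dependabot work from declared manifests; the undocumented instances Rice describes also call for sweeping deployed systems directly. As a rough illustration of that kind of assessment (not any vendor's actual scanner), the sketch below walks a directory tree and flags log4j-core jars older than a fixed release. The filename pattern and the 2.17.1 threshold are simplifying assumptions; real scanners also inspect shaded and nested jars, which this does not.

```python
import os
import re

# Illustrative sketch only: flag log4j-core jars by filename version.
# Real scanners also look inside fat/shaded jars and check runtime use.
VULN_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
FIXED = (2, 17, 1)  # assumed threshold covering the late-2021 Log4j CVEs

def find_vulnerable_log4j(root):
    """Return paths of log4j-core jars with a version below FIXED."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = VULN_PATTERN.search(name)
            if match and tuple(int(g) for g in match.groups()) < FIXED:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Running this across build artifacts or deployment images gives the kind of "where is Log4j running" inventory Rice argues developers, not security teams, should own.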