If ransomware and data exfiltration attacks that targeted hospitals and vaccine researchers during the pandemic signaled a cyber hygiene crisis in health care, the SolarWinds supply chain attack demonstrated just how deep the problem goes.
After all, health care facilities are especially reliant upon third-party software and medical devices not only to operate on a day-to-day basis, but also to save lives. Yet the more partners a facility uses, the greater the risk of a system breach or attack.
A new report issued this week by the CyberPeace Institute seeks to illustrate the human impact that relentless cyberattacks have on health care staffers, patients and society. Featuring a compilation of interviews, outside research and recent news stories, the report offers key recommendations for various stakeholders. Among them: “Develop certification and labeling schemes across the sector to enhance trust and security in products and services, thereby protecting the complex health care supply chain which relies heavily on third-party vendors for its day-to-day operations.”
In the meantime, however, Tony Cook, head of threat intelligence on GuidePoint Security’s consulting team, sees another approach growing in popularity. In the wake of the SolarWinds incident, an increasing number of health care institutions are embarking on threat-hunting missions to seek and destroy exploitable vulnerabilities across third-party applications.
A former Navy Cyber Operations Command division officer, Cook has collaborated and consulted with health care companies along several fronts during his career, including as director of incident response firm the Crypsis Group, as principal security consultant at RSA Security, and now at GuidePoint. Cook spoke to SC Media recently about this burgeoning threat-hunting trend.
As a frame of reference, give me a timeline in terms of when you started seeing this major uptick of threat hunting among health care institutions.
Cook: I can almost 100 percent – with some variability – point towards the SolarWinds breach as the number one driving factor of why people want to get their networks looked at, especially in the health care industry, [which prior to SolarWinds saw] a lot of ransomware hits.
So when we sit down and talk to the CISOs, most of the time in these organizations they’re worried about one, getting hit with ransomware and what the effect will be, and then two, the supply chain attacks.
So it's been a huge pivot from trying to get the basics done… getting your network hygiene right, making sure that you're segmenting things the right way, to asking: “What on my network can I not trust?”
We're definitely now going down the path of leading them towards zero trust models and trying to get them to understand that; it’s been a big switch from just getting the basics down to understanding… how these third parties fit into my processes, and how can I get the type of logging I need to know if something bad were to happen.
So the ransomware attacks weren’t even as much of an incentive to initiate threat hunting as the SolarWinds supply chain incident? What about other breaches that were enabled via a third party?
Cook: There have been a couple of follow-up [incidents], like Accellion… There was just a vulnerability, and now people are getting data exfiltrated inherently. A couple of those have hit the health care stuff that unfortunately we’ve had to work on. But yeah, [there’s now] this sliding trend of not being able to trust anything that you haven't been able to build yourself.
That has to be a significant pain point particularly for hospitals and health care organizations, when you think about the countless third-party systems and medical IoT devices, all of which represent third-party risk.
Cook: And that’s the part that’s tough for a lot of people to even wrap their mind around. A lot of organizations just struggle with visibility in their environment anyway, whether there's dark IT going out or shadow IT. And you don't even know the servers that are in your environment or the IoT devices – something as simple as a TV that is just open to the environment.
It's really making sure that you have complete visibility... And that involves a lot of things like making sure that you can sweep across the environment and there are no outliers. What are on these systems? What are the vulnerabilities? What are the services that they're offering? And, by and large, what artifacts can we pull off of these to see if something bad has already happened?
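The sweep Cook describes can be sketched in miniature. Everything below — the hostnames, the inventory, the list of risky services — is hypothetical, and a real sweep would draw on scanner or EDR output rather than hard-coded data; the point is only to show the shape of the check: compare what you discover against what you think you own, and flag the outliers.

```python
# Minimal sketch of an asset-visibility sweep: compare hosts discovered on the
# network against a known inventory and flag outliers and exposed risky services.
# All hostnames, service names, and the inventory itself are illustrative.

KNOWN_INVENTORY = {"ehr-app-01", "lab-db-01", "ws-nurse-12"}
RISKY_SERVICES = {"telnet", "smbv1", "rdp"}

def sweep(discovered):
    """discovered: dict mapping host -> set of services seen listening."""
    findings = []
    for host, services in discovered.items():
        if host not in KNOWN_INVENTORY:
            findings.append((host, "unknown device - not in inventory"))
        for svc in services & RISKY_SERVICES:
            findings.append((host, f"risky service exposed: {svc}"))
    return findings

if __name__ == "__main__":
    scan = {
        "ehr-app-01": {"https"},
        "lobby-tv-03": {"telnet", "http"},  # e.g. a TV nobody was tracking
    }
    for host, issue in sweep(scan):
        print(host, "->", issue)
```

In practice the "discovered" side would be fed by periodic scans or agent telemetry, which is what makes the "no outliers" claim something you can actually verify rather than assume.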
This kind of threat hunting is something that organizations across many verticals and sectors are doing. Aside from the aforementioned wealth of devices and systems in a hospital setting, can you explain to me what else is unique about the challenges of threat hunting within a health care environment?
Cook: There are… the regulations that may come along with [using] a medical device: You have to have certain approvals from certain agencies to even put an endpoint detection and response capability on one of these hosts. That could take up to six months to a year just to be able to get visibility. And that goes for even making the slightest changes for Windows logging. Obviously, sometimes people go rogue and they just do their own thing there, but there comes a lot of scrutiny when you get to medical devices, about even making the smallest configuration changes.
Now hopefully these things are segmented off the normal network and they've done the right things to make it hard for attackers. That being said, with the interconnectedness of most of these devices nowadays – whether it's Bluetooth or there's some other network connectivity – you could pivot within a lot of these environments relatively easily if there hasn't been network hygiene done first.
What does this uptick in threat hunting look like? What form is it taking?
Cook: To answer your question I'll go back to what we used to see. We used to see a lot of risk management frameworks that would come in and essentially try to wrap around every risk… in an organization, prioritize it and get everything right. It was such a complicated process – this report that these people would be given – you'd need to have three full-time employees reading this report and trying to relay it to the right organization or the right entities inside the organization to get movement. Even something as easy as “You need to have a password reset policy” [was complicated].
That was the big emphasis: trying to make sure you have a risk management framework on everything. Don't get me wrong, that should still be a thing. But what we’ve seen is prioritizing doing actual threat hunting, where you're taking in those indicators of compromise that we've seen in the past, and making the right hypothesis in your environment. “Here are the threats that would be [found in] it.” And really getting that threat modeling down to an exact science, so that you can do the right threat hunts in your environment and not just waste your time thinking that you’re secure, because you enabled some threat feed from some random organization.
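The hypothesis-driven hunting Cook contrasts with passive threat feeds can be illustrated with a toy example. The hypothesis here ("a known-bad domain was contacted from inside the network") and all of the IOCs and log entries are made up for the sketch; a real hunt would pull from DNS or proxy telemetry:

```python
# Minimal sketch of a hypothesis-driven hunt: take indicators of compromise
# seen in past incidents and test one concrete hypothesis against your own
# logs, rather than passively trusting a threat feed. IOCs and logs are
# illustrative, not real.

IOC_DOMAINS = {"evil-updates.example", "c2-beacon.example"}

def hunt(dns_logs):
    """dns_logs: iterable of (source_host, queried_domain) tuples."""
    return [(host, domain) for host, domain in dns_logs
            if domain in IOC_DOMAINS]

if __name__ == "__main__":
    logs = [
        ("ws-nurse-12", "intranet.hospital.example"),
        ("lab-db-01", "c2-beacon.example"),
    ]
    # Each hit is a lead to investigate, not proof of compromise.
    print(hunt(logs))
```

The value is less in the matching itself than in the framing: each hunt starts from an explicit hypothesis, and an empty result tells you something specific was checked, not that you are secure in general.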
Would you be able to give me a specific example of a health care organization you’ve worked with recently that wanted to initiate more red teaming or threat hunting to root out threats illustrated by the recent ransomware and SolarWinds incidents?
Cook: I definitely have a recent case study… It was a ransomware hit. We found the dwell time to be about two-and-a-half months, where they were able to move around in the environment, get the credentials that they needed, and then just move around laterally, grabbing a couple of key things that they wanted.
We actually believe that... initial access to the environment was brokered. And then after that, they sold it to a ransomware actor. Luckily, this health care organization had air gaps on everything that would be potentially horrible to have [knocked] out, like [electronic health record] systems. Most of their entire lab was not online, or at least had a gap in between.
So, once we got through all of the analysis and showed them what had occurred, we came back with an entire recommendation portion. [We said:] "Even your IR plan isn't looking up to snuff. Let's start there and start working through some of these scenarios... This was ransomware, but what happens if this was a SolarWinds?"
Where we're at with them right now is trying to get them to understand that repetitive, consistent testing -- whether it's pen-testing, purple teaming, things of that nature -- needs to get done in your environment so that you keep a constant thumb on the pulse of your entire environment, understanding when new things are introduced into your system.
What would you say is the maturity level of most health care organizations' threat hunting programs?
Cook: I would guess most of them are at level one. It's above just alerting. They probably have bought a threat-hunting feed and it's being put into a SIEM of some kind that maybe people look at, maybe they don't.
Trying to get them to understand how to get to two, three, four, up from where they're at – the biggest issue is showing them that they don't have the proper visibility. That gap analysis of, "You wouldn't even be able to detect this if you saw this in your environment, because you don't have these tools in place, or this logging in place, or there's just no segmentation here."
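The gap analysis Cook describes can be reduced to a simple mapping exercise: for each hunt an organization wants to run, list the telemetry it requires and report what is missing. The hunt names and log-source labels below are hypothetical placeholders, not a standard taxonomy:

```python
# Minimal sketch of a detection-coverage gap analysis: each hunt is mapped to
# the telemetry it needs; hunts whose requirements are not met are reported
# with what's missing. Names are illustrative only.

REQUIRED = {
    "lateral-movement hunt": {"windows-security-logs", "netflow"},
    "c2-beacon hunt": {"dns-logs", "proxy-logs"},
}

def gap_analysis(collected):
    """collected: set of log sources actually being gathered."""
    return {hunt: sorted(needed - collected)
            for hunt, needed in REQUIRED.items()
            if needed - collected}

if __name__ == "__main__":
    gaps = gap_analysis({"windows-security-logs", "dns-logs"})
    for hunt, missing in gaps.items():
        print(hunt, "blocked by missing:", missing)
```

Framing maturity this way makes the "you couldn't detect this even if it happened" conversation concrete: each missing log source or segmentation control is tied to a specific hunt it blocks.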
Are you at least seeing signs that this newfound interest in threat hunting will pay dividends down the line?
Cook: What I've seen so far this year is a lot of buy-in, from the beginning of this year moving forward, after the SolarWinds stuff. A lot of buy-in from the C-level down, where before they might have just been like, "We don't have the budget," or "there's just no way that we're going to be able to do these things because of staffing."
Almost all the organizations that we're working with right now have really put their money where their mouth is – whether it's been hiring new people to help out, whether it's buying new products, or even just trying to get a deeper understanding of how operations work in the health care organization.
I have high hopes for it right now, mainly because I think there could be some kind of action taken against C-level management if people come to find out that there was a lot of lax security [that led to a successful attack].
What about the third-party device manufacturers and software vendors working with these health care institutions?
Cook: There's a ton of communication back and forth, where a lot of vendors are trying to be as transparent as possible right now, letting people know: "We're working on all of our processes and making sure that there are no SolarWinds issues in our environment." But what I see the future coming down to is specific industries having some sort of a framework that says that they meet this level of security checks before it can be enabled in our environment.
Now is that really going to fix the long-term issue? Are you always going to do a complete review of every piece of code that comes into your appliance? Probably not, but I think that the idea of... making sure that you even have the level of baselining in your environment to see if that appliance is doing something a little bit weird... [and] really locking down that idea of zero trust will be the future.