
AI in cybersecurity: How defenders are prepping for the future


As generative AI rapidly becomes an important tool in cybersecurity, industry leaders believe use cases for the technology within their own businesses will spur positive change and drive new opportunities.

Santiago Bassett, founder and CEO of Wazuh, said generative AI’s ability to create content is the foundation for the technology to become a vital part of the infosec world. The natural language capabilities of AI allow it to parse dense data into cogent, actionable analysis that humans can use to save time and money.

“We are a SIEM [security information and event management] platform, an open source SIEM solution, and we’re trying to use AI to better generate threat detection content that can help protect our users. I’ve actually gone through that process with GPT-4 (ChatGPT Plus’ language model), and it works surprisingly well,” he said.

(Editor's Note: This feature is part of SC Media's special 2023 SC Awards coverage. You can view the full list of winners here.) 

“You need to validate the content, because it’s like a junior engineer that’s trying to create threat detection rules. But if the quality of the training data is good enough, you can get really good content out of the AI models,” Bassett said.

An AI model could be taught to create threat detection content by feeding it threat telemetry data from multiple sources, he said.
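As a rough illustration of that workflow, the sketch below asks a GPT-4 model to draft a detection rule from a handful of Windows logon events and leaves the result for human review. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the sample events, prompt wording, and Wazuh-style output are illustrative, not Wazuh’s actual pipeline.

```python
# A rough sketch of drafting a SIEM detection rule from sample telemetry with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the sample events and Wazuh-style XML output are illustrative only.
from openai import OpenAI

client = OpenAI()

sample_events = """
4625: An account failed to log on. Account Name: admin  Source IP: 203.0.113.7
4625: An account failed to log on. Account Name: admin  Source IP: 203.0.113.7
4624: An account was successfully logged on. Account Name: admin  Source IP: 203.0.113.7
"""

prompt = (
    "You are helping write SIEM threat detection content. Given these Windows "
    "Security event samples, draft one detection rule in Wazuh-style XML that flags "
    "a successful logon preceded by repeated failures from the same source IP. "
    "Add a short comment explaining the logic.\n\nSample events:\n" + sample_events
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the draft deterministic so it is easier to review
)

draft_rule = resp.choices[0].message.content
print(draft_rule)

# Per Bassett's "junior engineer" caveat: a human still reviews the draft and tests
# it against known-good and known-bad events before it ships as detection content.
```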

“We are developing a new engine for our SIEM we’re planning to release next year, and we’re actually testing all these new capabilities that we’re implementing, very much relying on AI models that can help us test.”

BlackCloak founder and CEO Chris Pierson said he sees generative AI providing a boost for anomaly detection.

“Feed it in a bunch of known knowns and see how it addresses the unknown unknowns to identify when an anomaly occurs.”
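One way to picture that baseline idea, using a classic anomaly detector rather than a generative model, is sketched below with scikit-learn’s IsolationForest. The features (login hour, data transferred, failed attempts) are invented purely for illustration and do not describe BlackCloak’s product.

```python
# A sketch of "known knowns vs. unknown unknowns": fit a model to normal behavior,
# then flag activity that does not fit it. Uses scikit-learn's IsolationForest, a
# classic anomaly detector, with invented login telemetry for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline ("known knowns"): typical daytime logins, modest transfers, rare failures.
baseline = np.column_stack([
    rng.normal(13, 2, 1000),   # login hour of day
    rng.normal(50, 15, 1000),  # MB transferred per session
    rng.poisson(0.2, 1000),    # failed attempts before success
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New activity, including one event that looks nothing like the baseline.
new_events = np.array([
    [14.0, 55.0, 0.0],   # ordinary session
    [3.0, 900.0, 6.0],   # 3 a.m. login, huge transfer, repeated failures
])

scores = model.decision_function(new_events)  # lower score = more anomalous
labels = model.predict(new_events)            # -1 marks an outlier
for event, score, label in zip(new_events, scores, labels):
    print(event, round(float(score), 3), "ANOMALY" if label == -1 else "ok")
```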

Given the huge volume of alerts and data that security teams had to deal with, AI would help transform the role of the analyst, he said.

“This is the exciting area of anomaly detection and fraud and anti-money laundering … that’s going to be the Holy Grail. Not replacing analysts, but making them more effective, based off of the deep knowledge that those individuals, that company, that product, that solution has. That’s going to be revolutionary for our industry,” Pierson said.

Greg Elin, senior principal engineer and evangelist at RegScale, also sees major AI-based transformations ahead for his organization, a governance, risk and compliance company that is exploring AI-based approaches to compliance and cybersecurity.

“I’ve always thought that there were applications for machine learning, when it comes to producing these large compliance documents we need for the federal government and different organizations. But effectively, we haven’t been able to do machine learning because nobody wants to share their compliance information with anyone outside of the regulators,” Elin said.

“You can’t go get a curated collection of system security plans, and have big data, to do machine learning. So, we haven't been able to apply these tools in our space. But suddenly, we now have generative AI that’s capable of creating content.”
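A hedged sketch of the kind of drafting Elin describes appears below: asking a GPT-4 model (via the OpenAI Python SDK) for first-draft control implementation statements from a short system description. The system details and control list are invented for illustration and do not reflect RegScale’s product.

```python
# A sketch of compliance drafting with an LLM: first-draft control implementation
# statements for a system security plan. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY; the system description and controls are illustrative only.
from openai import OpenAI

client = OpenAI()

system_description = (
    "A cloud-hosted web application on AWS using IAM roles, CloudTrail logging, "
    "and KMS-encrypted S3 storage."
)

for control in ["AC-2 Account Management", "AU-2 Event Logging"]:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Draft a one-paragraph implementation statement for NIST 800-53 "
                f"control {control} for this system: {system_description} "
                "Flag anything you had to assume so a compliance analyst can verify it."
            ),
        }],
        temperature=0,
    )
    print(f"--- {control} ---")
    print(resp.choices[0].message.content, "\n")

# The drafts still go through human review; the gain is a faster first pass,
# not machine-certified compliance.
```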

Bassett said AI also has an important role to play in testing, for example with threat detection.

“I’ve personally used ChatGPT to actually test threat detection rules that I was preparing for analyzing Windows event data. And surprisingly, it works really well to test your own work.”
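A minimal sketch of that kind of check follows, assuming the OpenAI Python SDK and an invented rule and sample events rather than real Wazuh content: hand the model the rule plus labeled events and compare its verdicts with what you expect.

```python
# A sketch of LLM-assisted rule testing: give the model a detection rule plus labeled
# sample events and compare its verdicts with the expected outcome. Assumes the
# OpenAI Python SDK; the rule text and Windows events are invented for illustration.
from openai import OpenAI

client = OpenAI()

rule = ("Alert when Windows Event ID 4688 shows powershell.exe launched with an "
        "encoded command (-enc).")

# (sample event, should the rule fire?)
test_cases = [
    ("4688 New Process: powershell.exe -enc SQBFAFgA...", True),
    ("4688 New Process: notepad.exe C:\\notes.txt", False),
]

for event, expected in test_cases:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Detection rule: {rule}\nEvent: {event}\n"
                "Answer with exactly YES or NO: should this rule fire on this event?"
            ),
        }],
        temperature=0,
    )
    verdict = resp.choices[0].message.content.strip().upper().startswith("YES")
    print(f"expected={expected} model={verdict} :: {event[:45]}")
```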

As an open-source solution, Wazuh was set to benefit from its customers’ use of AI, Bassett said.

“I see our users’ community developing use cases around AI, mostly related to enrichment of security alerts, providing context to security analysts around certain threats.

“That’s actually going to help security analysts understand better what the meaning of the alerts that we generate are. With security tools, where you can get thousands of alerts, it’s really hard for the users to consume that amount of data, and to understand the meaning [of particular vulnerabilities].”
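A hedged sketch of that enrichment step is shown below, again assuming the OpenAI Python SDK and an invented alert format rather than Wazuh’s actual schema: the model’s plain-language explanation is attached to the alert so it travels with it to the analyst.

```python
# A sketch of alert enrichment: pass a raw SIEM alert to an LLM and attach a
# plain-language explanation for the analyst. Assumes the OpenAI Python SDK; the
# alert fields are invented and are not Wazuh's actual alert schema.
import json
from openai import OpenAI

client = OpenAI()

alert = {
    "rule_description": "Multiple failed SSH logins followed by a success",
    "source_ip": "198.51.100.23",
    "user": "deploy",
    "count": 14,
}

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Explain this SIEM alert for a security analyst in three short bullets: "
            "what likely happened, why it matters, and one recommended next step.\n"
            + json.dumps(alert, indent=2)
        ),
    }],
    temperature=0,
)

alert["analyst_context"] = resp.choices[0].message.content  # enrichment travels with the alert
print(json.dumps(alert, indent=2))
```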

Elin said he and the developers he worked with had incorporated generative AI into their daily work stream. “[It can] help me figure out some hard bit of code that I know how to do, but it would take me two hours to get it right. Suddenly, I’m being much more productive.”

How does the industry prepare for an AI future?

As the technology matures, the question becomes: what risks and challenges do organizations need to consider as they integrate AI into their business processes?

It’s a topic the National Institute of Standards and Technology (NIST) has been contemplating. In January it issued the Artificial Intelligence Risk Management Framework to help organizations manage different AI technology risks.

Pierson said one of the first considerations for organizations was to establish an ethical framework for dealing with AI.

“Let’s make sure that we’re ethically figuring out who’s going to be doing the designing, what data sets [will go] in and out. How can we go ahead and have some ethical guidelines and principles here that we all try to abide by? And that there is situational awareness within the companies that are doing it, so we can build it right from the ground up.”

In a similar vein, it was important for organizations to set ground rules for transparency around AI use, he said.

“Let’s actually get a little more granular in how we make this a little more transparent. We’re not talking about intellectual property, trade secrets. We don’t have to do that. But how can we describe, transparently, what the algorithms are designed to do, or are doing?”

Elin agreed: “When you really come down to compliance, transparency is a key thing. Where is the data coming from? How do we know how decisions are being made? I think one of the reasons that ChatGPT has been very successful is good limiters. Any system needs good governance and limiters to perform well. A fast car without good brakes is generally a problem.”

Security teams were not the only ones excited about the potential benefits of AI. Their adversaries were also intent on making the most of it.

“This technology is also accessible to malicious actors, and malicious actors are also going to figure out use cases for generative AI to be applied for compromising or attacking a company. At the same time, we are going to use this technology to protect our assets in our companies,” Bassett said.

The growing presence of AI meant people needed to adjust their mindset around the outputs they get from technology, Elin said.

“The relationship that humans have had with machines for a while now is that machines can be, in many situations, trusted to reproduce the same results each time, a consistency that can be hard for a human to achieve. And so we have an expectation — whether it’s with our calculators, our computers, or our factory lines — that they’re going to be fairly consistent, if they’re working.

“And suddenly, we have a technology which is actually designed to be not necessarily consistent, but a bit creative. Right now, a lot of us treat these LLMs (large language models) as junior team members. They do work, but their work needs to be checked, just like other people’s work needs to be checked. If you suddenly say, well, I don’t need to check it, I trust it the same way I trust my calculator, you’re going to be in a world of pain.”

Pierson said effort needed to be put into educating the public about this new reality.

“How do we educate, make the population that’s using this [AI technology] a little bit more aware around [its] strengths and weaknesses, pitfalls, and that it isn’t 100% truthful, and you have to take a little bit of a grain of salt and examine it more?”

How should businesses respond?

With these challenges in mind, how should organizations manage AI deployments in order to gain benefits from the technology, while maintaining ethical standards and the support of their employees, many of whom would be fearful of its impacts?

“It starts with open communication. Let people know what you’re thinking about using and how you’re thinking about doing it. Let people know so they can be part of it,” Pierson said.

“What we should be doing is communicating up to the board, to shareholders, across the executives, and all the rest, and down to every single person at the company, so everyone knows what are we doing. What does the plan look like? Where are we at in that journey? What are some successes we’ve had, some failures we’ve had? And thinking about how does this actually help our employees or customers. It’s all really about transparency and trust.”

Elin said clear company communication should also include setting policies as well as being transparent about deployment plans.

“Part of what to communicate is really clear policies of how people can use it within the organization, what they can do, what they shouldn’t do.”

Pierson said an organization’s clear and transparent communication strategy would pay off by ensuring staff were enthusiastic about the AI journey ahead.

“What you’ll probably find is that a bunch of people are going to want to really smartly raise their hand and say, 'I want to be involved in that, it seems like that’s going to be something important over the next 10 to 20 years. Put me on that project, I want to help mold it and shape it and take the concerns of other people with me as we go through it.'”


Simon Hendery

Simon Hendery is a freelance IT consultant specializing in security, compliance, and enterprise workflows. With a background in technology journalism and marketing, he is a passionate storyteller who loves researching and sharing the latest industry developments.
