The continual increase in security threats combined with an overwhelming amount of data and false positives is creating major headaches for IT security teams. Additionally, the cybersecurity industry faces a colossal shortage of talent, making it nearly impossible to stay on top of the latest threats.
Enter artificial intelligence (AI).
According to data from ESG research, 12 percent of enterprise organizations have already deployed AI-based security analytics extensively, while another 27 percent have deployed AI-based security analytics on a more limited basis.
The relationship between AI and machine learning (ML) is often poorly articulated. Artificial intelligence is concerned with making machines perform tasks characteristic of human intelligence, while ML is one way of achieving it. Because ML lets systems learn without being explicitly programmed, we now have a chance to achieve AI with the enormously wide, high-fidelity data sets on which modern security systems must operate if they are to be effective.
In this sense, AI can simplify the work of the security operations center (SOC) by aiding with the coordination of many different forms of analysis. It can clarify the intelligence landscape and help weed out noise and false positives. It also holds promise to alleviate cybersecurity staffing woes by automating many routine tasks. But there's much to consider before AI can become the cornerstone of your IT security framework.
The Challenges of AI
At present, the state of the art in AI is all about performing very narrow and specific tasks. Sophisticated, advanced attacks, however, cross many different surfaces and knowledge areas, some technological and many simply organizational. Countering them requires a highly generalized intelligence, a goal that remains largely unrealized in AI.
For example, one of the most misleading claims in the market is all the hype around AI transforming the way threat actors work. Realistically, AI is not being used much on the offensive side, for very simple reasons: the most sophisticated attacks are deeply human, drawing on strong organizational knowledge gained through existing employees, social engineering, rogue actors and the like. Coupled with knowledge of the most effective communication patterns, these human-led attacks are more likely to succeed. And although AI may play a role in automating attacks as well as defense in the future, most major risks will come from non-AI approaches. AI is simply not yet advanced enough, nor does it have easy access to all the required data, to outperform humans on this front.
On the defensive side, AI has become a marketing buzzword—often used interchangeably with ML—causing considerable confusion, especially in early adoption.
Although we're nowhere close to the point where AI solutions have total autonomy and can replace highly skilled security staff, there are aspects of AI and ML that can be used to enhance the humans who use this technology. For example, the same ESG study notes that 29 percent of respondents were interested in using AI-based cybersecurity to accelerate detection: curating, correlating and enriching security alerts to create a more complete detection story across various expert systems. Additionally, 27 percent see value in using AI-based cybersecurity technology to improve and speed up incident response, prioritizing serious incidents and even automating remediation tasks.
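The "curating, correlating and enriching" step can be made concrete with a minimal sketch. The code below is illustrative only, not any vendor's implementation: the alert fields (`src_ip`, `signature`, `sensor`) are hypothetical, and real SOC pipelines would correlate on many more entities than a single IP address.

```python
from collections import defaultdict

def correlate_alerts(alerts):
    """Group raw alerts by a shared entity (here, source IP),
    producing one enriched incident per entity."""
    grouped = defaultdict(lambda: {"signatures": [], "sensors": set()})
    for alert in alerts:
        entry = grouped[alert["src_ip"]]
        entry["signatures"].append(alert["signature"])
        entry["sensors"].add(alert["sensor"])
    # Prioritize incidents flagged by multiple independent sensors:
    # corroborated activity is less likely to be a false positive.
    return sorted(
        (
            {
                "entity": ip,
                "signatures": data["signatures"],
                "corroboration": len(data["sensors"]),
            }
            for ip, data in grouped.items()
        ),
        key=lambda incident: incident["corroboration"],
        reverse=True,
    )

alerts = [
    {"src_ip": "10.0.0.5", "signature": "port-scan", "sensor": "ids"},
    {"src_ip": "10.0.0.5", "signature": "brute-force", "sensor": "auth-log"},
    {"src_ip": "10.0.0.9", "signature": "port-scan", "sensor": "ids"},
]
incidents = correlate_alerts(alerts)
# incidents[0] is the entity seen by the most sensors: 10.0.0.5
```

Three raw alerts collapse into two incidents, with the corroborated one ranked first; an analyst then reviews two prioritized stories instead of three disconnected alerts.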
Another significant role for AI in security is advancing threat research. Intelligence is still largely a human research effort: it combines knowledge of current threat actors and their tactics, techniques and procedures with a sense for how attacks leverage vulnerabilities across numerous surfaces, and is ideally augmented by information sharing within working groups. AI can play a very serious role in accelerating research, automatically generating new indicators of compromise, and identifying future research opportunities. But only if it has the data.
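To ground the "generating new indicators of compromise" idea, here is a deliberately simple, non-ML sketch of the extraction step: pulling candidate network IOCs out of unstructured report text. The regexes and the `extract_iocs` helper are assumptions for illustration; a production pipeline would validate candidates, screen out benign infrastructure and feed the results into the learning system rather than publish them directly.

```python
import re

# Deliberately narrow patterns for illustration; real IOC extraction
# handles defanged notation (e.g. "198.51.100[.]7"), more TLDs, hashes, etc.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN = re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b")

def extract_iocs(report: str) -> dict:
    """Return deduplicated candidate IOCs found in free-form report text."""
    return {
        "ips": sorted(set(IPV4.findall(report))),
        "domains": sorted(set(DOMAIN.findall(report))),
    }

report = "Beaconing to 198.51.100.7 and evil-cdn.net was observed."
iocs = extract_iocs(report)
```

Even this crude pass shows why data access is the bottleneck: the quality of the generated indicators is bounded entirely by the reports the system can read.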
So at the end of the day, AI really is not about replacing humans. It’s about serving them better and helping them focus on the things at which they are best: being creative, executing on high-level reasoning, managing for context, adapting quickly, and sorting through what does and does not matter. Machines are great at speed, repetition, automation and scale: things for which humans would be really inefficient.
Therefore, when it comes to AI, truly successful solutions will be human focused and will blend AI and ML techniques with the skills of expert analysts. Taking this approach, security teams can create “machine-accelerated” humans—cybersecurity professionals who work in conjunction with AI and ML to proactively identify and mitigate threats faster and more reliably, primarily through freeing up humans to focus on strategic initiatives.
Where Do We Go From Here?
All this is to say that we should emphatically embrace AI. If we employ it toward ends that are focused on the success of the people involved, we will go very far indeed.