IBM has been warning about the cybersecurity skills gap for several years, and it recently released a report on the lack of artificial intelligence (AI) skills across Europe.

In a Friday email to SC Media, the company said cybersecurity has been experiencing a significant global workforce and skills shortage, and that AI offers a crucial technological path toward solving it.

“Given that AI skillsets are not yet widespread, embedding AI into existing toolsets that security teams are already using in their daily processes will be key to overcoming this barrier,” IBM stated in the email. “AI has great potential to solve some of the biggest challenges facing security teams — from analyzing the massive amounts of security data that exists to helping resource-strapped security teams prioritize threats that pose the greatest risk, or even recommending and automating parts of the response process.”

Oliver Tavakoli, CTO at Vectra, said the potential for machine learning (ML) and AI to materially help solve a broad set of problems across many industries has created an acute imbalance between the supply of and demand for AI talent. Cybersecurity companies must contend with this shortage, Tavakoli said, while competing for talent with Google (search), Netflix (recommendations) and financial institutions (algorithmic trading).

“What this has meant is that many cybersecurity companies have resorted to AI-as-a-sidecar, solving a small number of peripheral problems through the application of AI, rather than AI-as-the-engine: building the core of their offerings around AI and solving peripheral problems with conventional techniques,” Tavakoli said. “Predictably, the former approach has resulted in a large gap in what they deliver versus the value customers think the AI should deliver.”

Christopher Prewitt, chief technology officer at MRK Technologies, said new products often contain elements of ML or AI, and existing products are being redeveloped with this in mind. In his view, the small pool of talent isn’t affecting the development of new products so much as their quality and efficacy. Prewitt added that the industry should treat AI as a precision tool, but it’s sometimes swung like a hammer, and the results aren’t always effective. While AI can identify anomalous behavior, he said, a poorly trained or immature model will not deliver the expected outcomes.

“It would seem that the AI model needs more training prior to being employed, making it more mature out of the gate,” Prewitt said. “If the industry had better AI people working on security products, they could focus on the maturity of use cases, pinpoint accuracy, and noise reduction. These are all important outcomes in the security world, but with poorly developed engines, the outcomes would likely have Type I and Type II failures. Many of the market-leading security products have AI/ML already and are continuing to mature. ... Better refined AI should result in better visibility, reduced labor efforts and improved response times.”
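Prewitt’s point about Type I (false alarm) and Type II (missed detection) failures can be illustrated with a toy, hypothetical sketch: the same naive threshold-based anomaly detector, fit on too little training data, produces a threshold that drifts and inflates one error type or the other. The data, thresholds and scenario below are invented for illustration and do not come from any product IBM or Prewitt describes.

```python
import random

random.seed(0)

def fit_threshold(train):
    """Fit a naive anomaly threshold: mean + 3 standard deviations of the training data."""
    n = len(train)
    mean = sum(train) / n
    var = sum((x - mean) ** 2 for x in train) / n
    return mean + 3 * var ** 0.5

def evaluate(threshold, benign, malicious):
    """Return (Type I rate, Type II rate):
    Type I  = benign events flagged as anomalous (false positives)
    Type II = malicious events missed (false negatives)"""
    fp = sum(1 for x in benign if x > threshold)
    fn = sum(1 for x in malicious if x <= threshold)
    return fp / len(benign), fn / len(malicious)

# Hypothetical event scores: benign activity clusters near 10, attacks near 25.
benign = [random.gauss(10, 3) for _ in range(1000)]
malicious = [random.gauss(25, 3) for _ in range(1000)]

# An "immature" model fit on only 5 samples vs. one fit on the full set.
immature_t = fit_threshold(benign[:5])
mature_t = fit_threshold(benign)

print("immature:", evaluate(immature_t, benign, malicious))
print("mature:  ", evaluate(mature_t, benign, malicious))
```

The well-trained threshold keeps both error rates low; the undertrained one inherits whatever quirks its five samples happened to have, which is the "maturity out of the gate" problem in miniature.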

Andrew Hay, COO at LARES Consulting, said AI operates as a specialized sub-discipline of software engineering, and as such, security expertise in the area will lag behind, as it has in other new disciplines. Hay noted that this is a recurring pattern the security industry has dealt with for decades.

“As a short-term solution, AI engineers will likely be tasked with taking on security responsibilities for their or their peers' code until such time as a dedicated resource is required,” Hay said. “By that time, we'll likely have a small cadre of security specialists that focus almost exclusively on secure AI engineering principles and testing — but we're a long way off from that.”

John Bambenek, principal threat hunter at Netenrich, said the shortage of AI/ML talent has certainly become a problem in many fields, but there are unique challenges in cybersecurity.

“We don’t need more data scientists, per se, because the ML/AI libraries do much of the heavy lifting — we need cybersecurity researchers who have a basic knowledge of AI/ML,” Bambenek said. “The linear algebra isn’t what protects you. It’s the deep knowledge and experience needed to create training data and define features. With some few exceptions, AI/ML security tools simply aren’t working due to this lack of talent. Fundamentally, we are also making automated decisions based on data created by criminals who are decades ahead of us on fooling automated systems.”
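Bambenek’s distinction can be sketched with a small hypothetical example: the ML math ("the linear algebra") is handled by libraries, but deciding which features to compute is where security knowledge lives. The features below (length, character entropy, digit ratio of a DNS domain, traits analysts associate with algorithmically generated malware domains) are illustrative assumptions, not taken from any tool mentioned in this article.

```python
import math

def extract_features(domain: str) -> dict:
    """Turn a DNS domain name into numeric features a classifier could consume.
    The choice of features encodes analyst knowledge: machine-generated domains
    tend to be longer, higher-entropy, and digit-heavy compared to human-chosen ones."""
    counts = {}
    for ch in domain:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(domain)
    # Shannon entropy of the character distribution, in bits.
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "length": n,
        "entropy": round(entropy, 3),
        "digit_ratio": sum(ch.isdigit() for ch in domain) / n,
    }

print(extract_features("google"))         # human-chosen name: low entropy, no digits
print(extract_features("x9k2qw7zr41pd"))  # DGA-like string: higher entropy, digit-heavy
```

Any off-the-shelf classifier could sit on top of these features; the security value comes from the feature definitions and from labeled training data built by researchers who understand how attackers actually behave.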