HiddenLayer started its Synaptic Adversarial Intelligence team to increase awareness of threats facing machine learning and artificial intelligence systems. (Photo by Andrea Verdelli/Getty Images)

HiddenLayer on Tuesday formed its new Synaptic Adversarial Intelligence (SAI) team to raise awareness surrounding the threats facing machine learning (ML) and artificial intelligence (AI) systems.

The SAI team aims to educate data scientists, MLDevOps teams, and cybersecurity pros on how to evaluate the vulnerabilities and risks associated with ML/AI so they can make more security-conscious decisions about implementation and deployment.

Tom Bonner, senior director of adversarial machine learning research at HiddenLayer, pointed out that until recently, most adversarial ML/AI research has focused on the mathematical side: making algorithms more robust against malicious input. Now, security researchers are also exploring how models are developed, maintained, packaged, and deployed, hunting for weaknesses and vulnerabilities across the broader software ecosystem.
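To make the "malicious input" side of this concrete, the sketch below shows a fast-gradient-sign-style evasion attack against a toy logistic-regression classifier. All weights, inputs, and labels here are invented for illustration; real attacks of this kind target trained production models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """FGSM-style attack: nudge x in the direction that increases
    the log-loss for the true label y, bounded by step size eps."""
    p = predict(w, b, x)
    grad_x = (p - y) * w          # gradient of log-loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Hypothetical "trained" weights and a correctly classified input.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.8, -0.6, 0.3])
y = 1.0                           # true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x), predict(w, b, x_adv))  # model confidence drops after the attack
```

With a larger `eps` (or an iterated attack), the perturbation can push the model across its decision boundary entirely; robustness research of the kind Bonner describes aims to keep such small input changes from swinging a model's output.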

“Alongside our commitment to increasing awareness of ML security, we will also actively assist in the development of countermeasures to thwart ML adversaries through the monitoring of deployed models, as well as providing mechanisms to allow defenders to respond to attacks,” said Bonner. “There has been a tremendous effort from several organizations, such as MITRE and NIST, to better understand and quantify the risks associated with ML/AI. We look forward to working alongside these industry leaders to broaden the pool of knowledge, define threat models, drive policy and regulation, and most critically, prevent attacks.”

Mike Parkin, senior technical engineer at Vulcan Cyber, added that AI in its many forms has become ubiquitous in everything from entertainment to security. Unfortunately, Parkin said, until fairly recently security for the AI systems themselves has been an afterthought at best.

“But there are a whole range of attacks against AI that are often overlooked, from manipulating the AI’s results to attacking the host systems,” Parkin said. “That’s not even counting cases where threat actors use machine learning algorithms of their own to develop their attacks. This is a welcome development and, used in conjunction with other tools and a solid risk management program, should help tighten security in the space.”