
Three best practices for AI/ML security


Corporations, governments, and academic institutions all understand the immense opportunity that artificial intelligence (AI) and machine learning (ML) bring to their constituents, and they are increasing their investments accordingly. PwC expects the AI market to grow to just under $16 trillion by 2030, or about 12% of global GDP. Given the size of the market and the intellectual property involved, one would assume commensurate investments had been made to secure these assets. They have not.

AI and ML have become the largest cybersecurity attack vector. The Adversarial AI Incident Database documents thousands of AI attacks across multiple industries and corporations, including Tesla, Facebook, and Microsoft. Yet the cybersecurity industry lags behind the attackers: few dedicated protections for AI and ML exist.

Gartner scoped the magnitude of the problem in its October 2022 report, “AI in Organizations: Managing AI Risk Leads to Improved AI Results.” Among the leading findings: this year alone, 30% of AI cyberattacks will have used training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems. Two in five organizations will have experienced an AI security or privacy breach, with one in four of those breaches being adversarial attacks.

Warnings fall on deaf ears

Security industry watchdogs have been ringing alarm bells for years. MITRE’s 2020 Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) framework identified 12 tactics (the “why” behind an attack) and more than 60 specific attack techniques. It’s important to note that the Gartner and MITRE research is specific to adversarial AI and ML, not to what has commonly received most of the attention in this space: model bias, drift, and integrity. While those concerns remain very real, Gartner and MITRE call specific attention to the cybersecurity risks associated with AI and ML.

Adding to the potential scope and scale of the cyber risk to AI and ML is the current availability of more than 20 free attack tools, including Microsoft’s Counterfit and Meta’s AugLy. These tools are to ML what Metasploit has been to servers and networks, and they are just as powerful. ML attacks that took over a month to complete in 2019 take 10 to 15 seconds today.
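To make that speed claim concrete, consider a single-step evasion attack such as the well-known Fast Gradient Sign Method (FGSM), which runs in well under a second on commodity hardware. The sketch below is illustrative only; the toy model and inputs are hypothetical stand-ins, and the code is not drawn from Counterfit or AugLy themselves.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x so the model is more likely to misclassify it."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical usage: any differentiable classifier is vulnerable in principle.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in
x = torch.rand(1, 1, 28, 28)   # a single "image"
label = torch.tensor([3])      # its true class
adversarial_x = fgsm_attack(model, x, label)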

Hardening AI/ML security

The implications of continued investment in and deployment of machine learning, an accelerating regulatory environment, and easy-to-use attack tools mean that now is the time to understand the organization’s risk and determine what it will take to protect the environment. The MITRE ATLAS framework referenced above maps the techniques attackers are using today, helping organizations define pre-release testing methodologies for their AI and ML systems.
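As one illustration of what such a pre-release methodology might include, the hedged sketch below gates a candidate model on a crude robustness check: accuracy must not collapse under small input perturbations, a rough proxy for the evasion techniques ATLAS catalogues. The threshold, model, and data loader are hypothetical assumptions, not part of the ATLAS framework itself.

import torch

def perturbation_robustness(model, loader, epsilon: float = 0.03,
                            floor: float = 0.70) -> bool:
    """Pass only if accuracy on lightly perturbed inputs stays above a floor."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            # Add small, bounded noise to each input before scoring it.
            noisy = (x + epsilon * torch.randn_like(x).sign()).clamp(0, 1)
            correct += (model(noisy).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total >= floor

# Hypothetical release gate in a CI pipeline:
# assert perturbation_robustness(candidate_model, validation_loader)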

Additionally, the U.S. government’s Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights in October 2022 to offer guidance on hardening AI/ML security. The guidance says systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate that they are safe and effective for their intended use, that unsafe outcomes (including those beyond the intended use) are mitigated, and that domain-specific standards are followed.
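A minimal sketch of the “ongoing monitoring” the guidance calls for might compare the live input distribution against a training-time baseline and alert on drift. The statistical test and threshold below are illustrative assumptions, not part of the OSTP guidance; production systems would use a purpose-built monitoring stack.

import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, live: np.ndarray,
                p_floor: float = 0.01) -> bool:
    """Flag drift when a Kolmogorov-Smirnov test rejects 'same distribution'."""
    _, p_value = ks_2samp(baseline, live)
    return p_value < p_floor

baseline = np.random.normal(0.0, 1.0, size=5_000)  # stand-in for training data
live = np.random.normal(0.8, 1.0, size=500)        # shifted production traffic
if drift_alert(baseline, live):
    print("input drift detected: trigger review and possible retraining")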

Best practices

Here are some simple steps companies can take today to assess the organization’s risk profile:

  • Proactive threat discovery: Investigate pre-trained and in-house models ahead of deployment for evidence of tampering, hijacking, or abuse.
  • Securely evaluate model behavior: Models are software. If the team doesn’t know where a model came from, don’t run it in the enterprise environment. Carefully inspect models, especially pre-trained ones, inside a secure virtual machine before considering them for deployment (see the sketch after this list).
  • External security assessment: Understand the organization’s risk level, address blind spots, and identify what the team could improve. Given the sensitive data that ML models handle, it makes sense to commission an external security assessment of the ML pipeline.
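One concrete way to act on the first two bullets (a minimal sketch, not a full scanner): many model formats are pickle-based, and a pickle file can execute arbitrary code the moment it is loaded. The check below lists a pickle’s potentially dangerous opcodes without ever deserializing it. Benign models also use some of these opcodes, so a finding means “inspect in an isolated VM,” not “malicious.”

import pickletools

# Opcodes that let a pickle import and invoke arbitrary Python callables.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return suspicious opcodes found in a pickle file, without loading it."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPS:
                findings.append(f"{opcode.name}: {arg!r}")
    return findings

# Hypothetical usage before ever calling pickle.load() or torch.load():
# findings = scan_pickle("downloaded_model.pkl")
# if findings: quarantine the file and review it inside the secure VM.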

Heading into 2023, now’s a good time to evaluate whether investments in zero trust and defense in depth are undermined by the risk that unsecured ML models present. By taking a proactive stance, organizations can more effectively leverage the potential of AI/ML.

Abigail Maines, chief revenue officer, HiddenLayer
