AI/ML, Supply chain, Third-party code

4 key takeaways from new global AI security guidelines

AI(Artificial intelligence) concept.

AI security guidelines developed by the United States' Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) were published Monday with endorsements from 16 other nations. The 20-page document was written in cooperation with experts from Google, Amazon, OpenAI, Microsoft and more, and is the first of its kind to receive global agreement, according to the NCSC.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said NCSC CEO Lindy Cameron, in a public statement. “These guidelines mark a significant step in shaping a truly global, common understanding of cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

Here are four key takeaways from the publication:

1. "Secure-by-design" and "secure-by-default" take priority

Emphasized throughout the document are the principles of “secure-by-design” and “secure-by-default” — proactive approaches to protect AI products from attack. The authors urge AI developers to prioritize security alongside function and performance throughout their decision-making, such as when choosing a model architecture or training dataset. It is also recommended that products have the most secure options set by default, with the risks of alternative configurations clearly communicated to users. Ultimately, developers should assume accountability for downstream results and not rely on customers to take the reins on security, according to the guidelines.

Key excerpt: “Users (whether ‘end users,’ or providers incorporating an external AI component) do not typically have sufficient visibility and/or expertise to fully understand, evaluate or address risks associated with systems they are using. As such, in line with ‘secure by design’ principles, providers of AI components should take responsibility for the security outcomes of users further down the supply chain.”

2. Complex supply chains require greater diligence

AI tool developers frequently rely on third-party components like base models, training datasets and APIs when designing their own product. An extensive network of suppliers creates a greater attack surface where one “weak link” can negatively impact the product’s security. The global AI guidelines recommend developers assess these risks when deciding whether to acquire components from third parties or produce them in-house. When working with third parties, developers should vet and monitor the security posture of suppliers, hold suppliers to the same security standards as one’s own organization and implement scanning and isolation of imported third-party code, the guidelines state.  

Key excerpt: “You are ready to failover to alternate solutions for mission-critical systems, if security criteria are not met. You use resources like the NCSC’s Supply Chain Guidance and frameworks such as Supply Chain Levels for Software Artifacts (SLSA) for tracking attestations of the supply chain and software development life cycles.”

3. AI faces unique risks

AI-specific threats such as prompt injection attacks and data poisoning call for unique security considerations, some of which are highlighted by CISA and NCSC in their guidelines. A component of the “secure-by-design” approach includes integrating guardrails around model outputs to prevent leaking of sensitive data and restricting the actions of AI components used for tasks such file editing. Developers should incorporate AI-specific threat scenarios into testing and monitor user inputs for attempts to exploit the system.

Key excerpt: “The term ‘adversarial machine learning’ (AML), is used to describe the exploitation of fundamental vulnerabilities in ML components, including hardware, software, workflows and supply chains. AML enables attackers to cause unintended behaviors in ML systems which can include:

  • Affecting the model’s classification or regression performance
  • Allowing users to perform unauthorized actions
  • Extracting sensitive model information”

4. AI security must be continuous and collaborative

The guideline document outlines best practices throughout four life cycle stages: design, development, deployment, and operation and maintenance. The fourth stage spotlights the importance of continuous monitoring of deployed AI systems for changes in model behavior and suspicious user inputs. The “secure-by-design” principle remains key as a component of any software updates made, and the guidelines recommend automated updates by default. Lastly, CISA and the NCSC recommend developers leverage feedback and information-sharing with the greater AI community to continuously improve their systems.

Key excerpt: “When needed, you escalate issues to the wider community, for example publishing bulletins responding to vulnerability disclosures, including detailed and complete common vulnerability enumeration. You take action to mitigate and remediate issues quickly and appropriately.”

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms and Conditions and Privacy Policy.