NIST risk management framework for AI issued

The National Institute of Standards and Technology has issued the Artificial Intelligence Risk Management Framework, along with a companion playbook, to help organizations manage the risks posed by AI technologies, VentureBeat reports. The framework, developed pursuant to the National Artificial Intelligence Initiative Act of 2020, details the traits of trustworthy AI systems as well as how organizations can govern, map, measure, and manage AI risks. "Congress clearly recognized the need for this voluntary guidance and assigned it to NIST as a high priority," said NIST Director Laurie Locascio.

While Courtney Lang, the Information Technology Industry Council's senior director of policy for trust, data, and technology, said the AI RMF offers a "holistic" approach to managing AI risk, Epstein Becker Green's Bradley Merrill Thompson criticized the framework as generic. "It is so high-level and generic that it really only serves as a starting point for even thinking about a risk management framework to be applied to a specific product. This is the problem with trying to quasi-regulate all of AI. The applications are so vastly different with vastly different risks," Thompson said.
