The National Institute of Standards and Technology has issued the Artificial Intelligence Risk Management Framework, along with a companion playbook, to help organizations manage the risks posed by AI technologies, VentureBeat reports.
The framework, developed pursuant to the National Artificial Intelligence Initiative Act of 2020, details not only the traits of trustworthy AI systems but also how organizations can govern, map, measure, and manage AI risks.
"Congress clearly recognized the need for this voluntary guidance and assigned it to NIST as a high priority," said NIST Director Laurie Locascio. While Courtney Lang, senior director of policy, trust, data, and technology at the Information Technology Industry Council, said the AI RMF provides a "holistic" approach to managing AI risk, Epstein Becker Green's Bradley Merrill Thompson called the framework generic.
"It is so high-level and generic that it really only serves as a starting point for even thinking about a risk management framework to be applied to a specific product. This is the problem with trying to quasi-regulate all of AI. The applications are so vastly different with vastly different risks," Thompson said.
T-Mobile has denied being impacted by a cyberattack in April that allegedly compromised employee information, according to The Record, a news site by cybersecurity firm Recorded Future. VX-Underground reported that threat actors had notified it of the attack, which was said to have occurred immediately after the telecommunications provider was breached in March.