
New Five Eyes AI security guidelines unveiled


Mounting artificial intelligence-related cybersecurity risks have prompted cybersecurity agencies from the Five Eyes countries, which include the U.S., to release new joint guidelines on the secure deployment and operation of AI systems, according to SecurityWeek.

Organizations looking to deploy AI systems have been urged to manage deployment environment governance, harden their architecture and configurations, and defend their deployment networks against a range of threats. The guidance also recommends validating AI systems before and during use, securing exposed APIs, and monitoring model behavior.
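To illustrate what two of those recommendations can look like in practice, the sketch below (not taken from the guidance itself) shows a model-serving endpoint that authenticates callers and logs each model interaction for behavior monitoring. All names, including API_TOKEN, ModelServer, and handle_request, are hypothetical placeholders.

```python
# Minimal sketch of API authentication and model behavior logging.
# Illustrative only; all identifiers are hypothetical.
import hmac
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

API_TOKEN = "replace-with-a-secret-from-a-vault"  # never hard-code secrets in production


class ModelServer:
    def predict(self, prompt: str) -> str:
        # Stand-in for a real model call.
        return f"echo: {prompt}"


def handle_request(token: str, prompt: str, model: ModelServer) -> str:
    # API security: constant-time comparison of the caller's token.
    if not hmac.compare_digest(token, API_TOKEN):
        audit_log.warning("rejected request: bad token")
        raise PermissionError("unauthorized")

    # Basic input validation before the prompt reaches the model.
    if len(prompt) > 4096:
        raise ValueError("prompt too long")

    start = time.monotonic()
    output = model.predict(prompt)

    # Model behavior monitoring: record input size, output size, and latency
    # so anomalous responses can be flagged and audited later.
    audit_log.info(
        "prompt_len=%d output_len=%d latency_ms=%.1f",
        len(prompt), len(output), (time.monotonic() - start) * 1000,
    )
    return output


if __name__ == "__main__":
    print(handle_request(API_TOKEN, "hello", ModelServer()))
```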

Moreover, proper operation of AI systems requires stringent access controls, regular penetration testing and audits, robust logging and monitoring processes, timely patching, extensive user training, and disaster recovery preparations, the guidelines noted.
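As a rough sketch of how access controls and audit logging might be paired in such an environment, the example below checks a role-based policy before a sensitive operation and records every decision. The roles, operations, and authorize function are illustrative assumptions, not part of the guidance.

```python
# Minimal sketch of role-based access control with audit logging.
# Illustrative only; the policy and identifiers are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ops_audit")

# Which roles may perform which operations (illustrative policy only).
PERMISSIONS = {
    "ml_engineer": {"deploy_model", "rollback_model"},
    "analyst": {"query_model"},
}


def authorize(user: str, role: str, operation: str) -> bool:
    allowed = operation in PERMISSIONS.get(role, set())
    # Robust logging: record every authorization decision, allowed or denied,
    # so audits and incident response have a complete trail.
    audit_log.info("user=%s role=%s op=%s allowed=%s", user, role, operation, allowed)
    return allowed


if __name__ == "__main__":
    assert authorize("alice", "ml_engineer", "deploy_model")
    assert not authorize("bob", "analyst", "deploy_model")
```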

"AI systems are software systems. As such, deploying organizations should prefer systems that are secure by design, where the designer and developer of the AI system takes an active interest in the positive security outcomes for the system once in operation," said the guidance.
