Organizations concerned about offensive AI

The threat of offensive artificial intelligence is a growing concern for organizations, VentureBeat reports.

Threat actors leverage AI mainly for its coverage, speed, and success rate, with the technology facilitating credential theft and machine-learning model poisoning, according to researchers from Microsoft, Ben-Gurion University and Purdue.

Surveyed organizations, including IBM, Huawei and Airbus, regarded exploit development, information gathering and social engineering as the most dangerous offensive AI technologies. They expressed particular concern over the use of AI for spoofing in phishing attacks, as well as reverse engineering to steal proprietary algorithms. Researchers also said that bots' improving ability to execute convincing deepfake phishing calls is poised to fuel more phishing campaigns.

"[As] adversaries begin to use AI-enabled bots, defenders will be forced to automate their defenses with bots as well. Keeping humans in the loop to control and determine high-level strategies is a practical and ethical requirement. However, further discussion and research is necessary to form safe and agreeable policies," said researchers.
