
Adversaries may be poisoning your machine learning engine. Here’s how to undo the damage. 


Machine learning has many useful cybersecurity applications, provided the technology behind it isn't unduly influenced or tampered with by malicious actors out to sabotage the integrity of its data.

Donnie Wendt, principal security researcher at Mastercard, calls this the “Uncle Ronnie” effect: “When my son was little, before he’d go over to visit my brother — his Uncle Ronnie — I’d say, ‘Please, please, don’t learn anything new from him.’ Because I know he’s going to teach my son something bad,” Wendt told SC Media in an interview at the CyberRisk Alliance’s 2022 InfoSec World Conference in Orlando, Florida.

Likewise, an adversary can compromise a machine learning system, teaching it bad habits that its operators must then undo, if those operators can even detect the breach in the first place.

On the cyber front, properly trained machine-learning systems can help with such tasks as classifying malware, identifying phishing attempts, detecting intrusions, analyzing user behavior, and predicting if and when a vulnerability will be exploited. But there are ways to skew the results.
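For illustration, here is a minimal sketch of the kind of classifier such systems rely on, assuming Python and scikit-learn (neither is named in the article): a toy model that flags phishing URLs based on character patterns.

```python
# A minimal, hypothetical sketch of an ML security classifier: labeling
# URLs as phishing or benign from character n-gram features. A production
# system would use far richer features and much more training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (illustrative only).
urls = [
    "http://paypa1-secure-login.example.ru/verify",    # phishing (1)
    "http://account-update.bank0famerica.example.cn",  # phishing (1)
    "https://www.wikipedia.org/wiki/Machine_learning", # benign   (0)
    "https://github.com/scikit-learn/scikit-learn",    # benign   (0)
]
labels = [1, 1, 0, 0]

# Character n-grams catch look-alike tricks such as "paypa1" for "paypal".
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)

print(model.predict(["http://secure-paypa1.example.com/login"]))  # likely [1]
```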

“Our adversaries will try to figure out how to circumvent [machine learning] classification, oftentimes by injecting adversarial samples that will poison the training,” explained Wendt, who presented on this very topic earlier this week at the InfoSec World conference. Alternatively, bad actors could launch an inference attack to gain unauthorized access to the data used to train the machine learning system.
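The article doesn't specify a poisoning technique, but one common variant is label flipping. The hypothetical sketch below, again assuming scikit-learn, shows how corrupting a fraction of the training labels degrades a classifier:

```python
# A minimal sketch of training-data poisoning via label flipping: an
# adversary who can inject or alter labeled samples teaches the model
# the wrong decision boundary. Dataset and model are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```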

To protect against attacks launched on machine learning models, Wendt recommended conducting proper data sanitization and ensuring that you have “proper version control [and] access control around your data… so that if there is an attack… you can go back to prior versions of that data and rerun that model and look for drift.” If you find evidence of tampering, then you can at least undo whatever it is that troublemaking “Uncle Ronnie” taught your machine learning system.
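Wendt doesn't prescribe tooling, but his advice could look something like the following hypothetical sketch: hash each dataset snapshot for tamper-evident versioning, then measure prediction drift between a model retrained on an archived snapshot and one trained on current (here, deliberately corrupted) data.

```python
# A minimal sketch of versioning training data and checking for drift.
# The snapshot scheme and drift metric are illustrative assumptions.
import hashlib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def fingerprint(X, y):
    """Content hash of a dataset snapshot, recorded alongside the archive."""
    h = hashlib.sha256()
    h.update(X.tobytes())
    h.update(y.tobytes())
    return h.hexdigest()

# v1: the archived, known-good snapshot.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
v1_hash = fingerprint(X, y)
model_v1 = LogisticRegression(max_iter=1000).fit(X, y)

# "Current" data: stand-in for a suspected poisoning attack (flipped labels).
rng = np.random.default_rng(0)
y_now = y.copy()
flip = rng.choice(len(y_now), size=50, replace=False)
y_now[flip] = 1 - y_now[flip]
model_now = LogisticRegression(max_iter=1000).fit(X, y_now)

# Drift: how often the two models disagree on a probe set.
X_probe, _ = make_classification(n_samples=200, n_features=10, random_state=2)
drift = np.mean(model_v1.predict(X_probe) != model_now.predict(X_probe))
print(f"snapshot sha256: {v1_hash[:16]}...  prediction drift: {drift:.1%}")
```

A nonzero drift score flags that the current model behaves differently from the one trained on the archived snapshot, which is the cue to roll the data back and retrain.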

For more insight on how machine learning can be maliciously influenced, watch the embedded video below.

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts and video/multimedia projects, often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, staff writer at New York Sportscene and freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
