
The dark side of AI

For all the good that machine learning can accomplish in cybersecurity, it's important to remember that the technology is also accessible to bad actors.

While writers and futurists dream up nightmarish scenarios of artificial intelligence turning on its creators and exterminating mankind like Terminators and Cylons – heck, Stephen Hawking and Elon Musk have warned AI is dangerous – the more pressing concern today is that machines can be intentionally programmed to abet cybercriminal operations.

Could we one day see the benevolent AIs of the world matching wits with malicious machines, with the fate of our IT systems at stake? Here's what experts had to say…

Derek Manky, global security strategist, Fortinet

“In the future we will have attacker/defender AI scenarios play out. At first, they will employ simple mechanics. Later, they will play out intricate scenarios with millions of data points to analyze and action. However, at the end of the day – there is only one output, a compromise or not.”

“In the coming year we expect to see malware designed with adaptive, success-based learning to improve the success and efficacy of attacks. This new generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next. In many ways, it will begin to behave like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection.”

“Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed… [This] malware, as with intelligent defensive solutions, is guided by the collection and analysis of offensive intelligence, such as types of devices deployed in a network segment, traffic flow, applications being used, transaction details, time of day transactions occur, etc.”

“We will also see the growth of cross-platform autonomous malware designed to operate on and between a variety of mobile devices. These cross-platform tools, or ‘transformers,’ include a variety of exploit and payload tools that can operate across different environments. This new variant of autonomous malware includes a learning component that gathers offensive intelligence about where it has been deployed, including the platform on which it has been loaded, then selects, assembles, and executes an attack against its target using the appropriate payload.”

Ryan Permeh, founder and chief cyber scientist, Cylance

“Bad guys will use AI… not just to create new types of attacks, but to find the limits in existing defensive approaches… Having information on the limits of a defender's defense is useful to an attacker, even if it isn't an automatic break of the defenses.”

Justin Fier, director of cyber intelligence and analysis, Darktrace

“I think we're going to start to see in the next probably 12-18 months… AI moving into the other side. You're already starting to see polymorphic malware that [infects a] network and then changes itself, or… automatically deletes itself and disappears. So in its simplest form it's already there.”

“Where I think it could potentially head is where it actually sits dormant on a system and learns the user and then finds the most opportune time to take an action.”

Diana Kelley, global executive security adviser, IBM

“Malware is getting very, very situationally aware. There's some malware, for example… that can get onto the system and figure out, ‘Is there AV on here? Is there other malware on here?’ and shut it down so they’re the only malware. Or even, ‘Oh look, I’ve landed on a point-of-sale system rather than on a server, so I’m just going to shut down all of my functions that would work on a regular server and just have my RAM scraper going, because that’s what I want on the point of sale.’”

Staffan Truve, co-founder and CTO of Recorded Future

Truve said that AI will be used to automatically craft effective spear-phishing emails that contain victims' personal information, leveraging powerful data resources and natural-language generation capabilities to sound convincing.

“I'm sure it will be… very hard to identify phishing emails in the future.”

Additionally, “We'll definitely be seeing AI that can analyze code and figure out ways to find vulnerabilities.”

“It's going to be an arms race between the good and bad guys… The good side is a bit ahead right now, and mostly I think the reason for that is that the bad guys are successful enough with old methods… You can find enough targets who are unsophisticated enough to be vulnerable to current technologies.”

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts, and video/multimedia projects, often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, staff writer at New York Sportscene and freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
