
Weaponised AI. Davey Winder asks the industry – is that a thing yet?

According to research announced during the recent Black Hat conference in Vegas, some 62 per cent of infosec pros reckon weaponised AI will be in use by threat actors within 12 months. 

That artificial intelligence was on the agenda at Black Hat should come as no surprise. The promise of AI in cyber security, from machine learning through to automation, has become a major marketing tool amongst vendors.

The good guys are clearly investing heavily in AI-defence research, but what about the bad guys? 

Itsik Mantin, director of research at Imperva, points to a demonstration at Defcon last week showing how "AI can be used to design malware utilising the results of thousands of hide and seek attempts by malware to sneak past anti-virus solutions by changing itself until it finds the right fit that allows it to sneak below the anti-virus radar."
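To make that "hide and seek" idea concrete, here is a minimal Python sketch of a mutate-and-test loop. It assumes nothing about the actual Defcon demonstration: the "scanner", the signature it matches and the mutation step are all invented stand-ins, not real anti-virus behaviour.

```python
# Illustrative toy only: a stand-in "scanner" that flags a known byte signature,
# and a loop that keeps mutating a sample until the scanner no longer flags it.
# All names (toy_scanner, mutate, SIGNATURE) are hypothetical, not from the demo.
import random

SIGNATURE = b"EVIL_MARKER"          # pretend static signature the "AV" looks for

def toy_scanner(sample: bytes) -> bool:
    """Return True if the sample is 'detected' (contains the signature)."""
    return SIGNATURE in sample

def mutate(sample: bytes) -> bytes:
    """Randomly flip one byte -- a crude stand-in for payload transformation."""
    i = random.randrange(len(sample))
    flipped = bytes([sample[i] ^ 0xFF])
    return sample[:i] + flipped + sample[i + 1:]

sample = b"header" + SIGNATURE + b"payload"
attempts = 0
while toy_scanner(sample) and attempts < 10_000:
    sample = mutate(sample)          # keep changing itself...
    attempts += 1                    # ...until it slips past the check

print(f"evaded after {attempts} mutations: {not toy_scanner(sample)}")
```

Real evasion tooling would of course test against live detection engines and use far smarter transformations; the point is only the feedback loop of change, test, repeat.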

Sam Curry, chief security officer at Cybereason, told SC that, as we are in a cold war with arms escalation on both sides, "using AI technologies for attack or defence shouldn't be a surprise to anyone", but warned we should "be careful not to exaggerate."

It was a caution that Alex Hinchliffe, Threat Intelligence Analyst at Unit 42, Palo Alto Networks, was quick to echo when questioned by SC Media. "Truly autonomous artificial intelligence is very much in its infancy," Hinchliffe explained, "and isn't fully realised by either attackers or defenders yet."

And Javvad Malik, security advocate at AlienVault, agreed that "we are still many years away from seeing true AI implementations that could be used in this way." That doesn't mean we won't see more automation at play, and machine learning (ML) in the attacking mix.

Darren Anstee, CTO at Arbor Networks, reckons that social engineers will have an eye on ML, telling us it "could also be used to improve spear-phishing attempts, allowing attackers to more closely mimic the style of emails and documents, so that they appear even more like those from real colleagues."
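As a rough illustration of the style mimicry Anstee describes (not his method, and far simpler than anything a real attacker would rely on), even a bigram Markov chain built from a few invented sample sentences can echo the register of its source text:

```python
# Toy sketch of "style mimicry": a bigram Markov chain built from sample text,
# then used to generate new text in a similar register.
# The training sentences are invented for illustration; real mimicry would need
# far more data and far better models.
import random
from collections import defaultdict

corpus = ("hi team please find the report attached "
          "please review the attached report before the meeting "
          "let me know if you have questions about the report").split()

# Build a bigram transition table: word -> list of words that followed it
chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)

def generate(start: str, length: int = 10) -> str:
    """Walk the chain from a starting word, picking observed successors."""
    words = [start]
    for _ in range(length - 1):
        nxt = chain.get(words[-1])
        if not nxt:
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(generate("please"))   # e.g. "please find the report attached"
```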

Ilia Kolochenko, CEO of web security company High-Tech Bridge, went even further: "At High-Tech Bridge, we already see a growing trend of ML usage for data classification by cyber-gangs to facilitate, accelerate and better target their attacks against websites and web applications."

Etienne Greeff, CEO and co-founder of Secure Data, thinks weaponised AI for threat actors is a shoo-in: "Asking whether AI will be weaponised is a bit like asking whether the wheel will be used for warfare." But Dr Zulfikar Ramzan, RSA CTO, isn't so sure about the 12 month timeline, simply because "today's threat actors are driven by necessity". So as the attack surface continues to increase, and as we move towards a perimeterless world, Ramzan says "there are too many easy avenues for attack to warrant the need for advanced techniques."

Nor should we forget that AI might be a security technology that is a better organic fit for defence. "The feedback loop that drives learning and automatic improvement of algorithms isn't really there for attackers," Eric Ogren, senior analyst with the Information Security team at 451 Research, told SC Media, continuing: "Once an action is initiated, how do you know if it works? How do you know how it fails? How do you get better the next time?"

Which brings us to a secondary point. Will the use of AI in an offensive role slow the development or effectiveness of AI in a defensive one?

"Once it gets fully implemented, defensive AI will likely render offensive AI obsolete" argues Bogdan Botezatu, senior e-threat analyst at Bitdefender speaking to SC. Upcoming generations of anti-malware solutions will offer tailored security to perfectly cater to one particular user, Botezatu reckons, and ML algorithms will be able to know the user's normal behaviour so intimately that they won't allow the execution of something that is not likely to be run or installed by the user. "These algorithms will make shotgun attacks targeted to a large pool of potential victims useless" he insists "and will increase the cost of attack by an order of magnitude."

Prakasha Mandagaru Ramachandra, AVP Technology & Solutions Architect at Aricent, adds that in order to minimise the damage from weaponised AI, the AI used in defensive roles needs to have closed-loop interaction with the virtualised security functions of a data centre. "This can also be done with the security control measures in workload/micro-services run-times in a data centre," Ramachandra told SC Media, concluding: "AI in offensive roles will keep AI in defensive roles on its toes, especially in multi-vendor and hybrid cloud environments in data centres."
