
Artificial intelligence becomes more automated for security applications

Cutting-edge applications of artificial intelligence are on display at the Artificial Intelligence Pavilion of Zhangjiang Future Park during a state-organized media tour on June 18, 2021, in Shanghai, China. (Andrea Verdelli/Getty Images)

Financial services institutions (FSIs) have increasingly embraced machine learning and artificial intelligence to boost their IT security. But so too have the bad actors banging at their digital gates.

Ninety-seven percent of cybersecurity professionals find the prospect of AI-powered attacks “troubling,” according to a 2021 MIT Technology Review study cited by Smita Nalluri, a cybersecurity executive at Darktrace, during a presentation Tuesday at the SC Finance eConference. And while nearly all of those organizations (96%) are preparing for the AI onslaught, Nalluri pointed out that this “perfect storm” of a cybersecurity arms race, a growing attack surface and overstretched financial industry resources presents the ideal environment for cybercriminals to gain a foothold.

“As the complexity of businesses increases, so does the risk,” Nalluri said, adding that while FSIs have been using machine learning for a long time, so have cyberattackers. Even five years ago, Nalluri pointed out, an automated bot was tracking online users’ likes, dislikes and preferences to target spear-phishing attacks. And with the pandemic-driven surge in remote work, the potential for new attacks has only grown. (Case in point: Zoom added 300 million new users in recent months, and many of those people are still using the videoconferencing platform to connect.)

While the traditional AI approach is “good at catching known threats,” it is “still difficult and potentially more important to catch that which is unknown in order to mitigate these new attacks,” Nalluri said.

She believes “self-learning AI can bridge that gap,” helping FSIs identify potential risks even when they are not completely on top of their security posture. “Think of a magician: the first time you see his trick, it is incredible, but the more times you watch it, the less magic it has,” Nalluri said. “And when you figure out how the trick is done, it really isn’t impressive at all.”

Similarly, she explained, cybercriminals’ attacks may seem impressive at first, but once defenders have the rules and signatures to catch such threats, they are not nearly as vexing.

Chris Sprague, security chief at TruWest Credit Union, which works with Darktrace, said the complexity of the security landscape has raised a number of concerns for his institution, as it has for all FSIs. He said his credit union saw near-immediate value in the actionable intelligence and potential anomalies Darktrace surfaced.

Self-learning AI is more useful than “supervised machine learning” because it does not rely on a data set “teaching it what bad is,” Nalluri said. Further, the promise of autonomous response technology is that it will take proportionate action based on how anomalous and disruptive an attack is.

“We are taking action with surgical precision to make sure we’re taking the least disruptive action possible to be able to neutralize the threat,” she said.  
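In concrete terms, that approach resembles unsupervised anomaly detection paired with a graduated response policy. The sketch below is illustrative only, using scikit-learn’s IsolationForest as a stand-in for a self-learning detector; the flow features, score thresholds and response actions are assumptions made for the example, not Darktrace’s actual technology.

```python
# Illustrative sketch: unsupervised anomaly detection over network-flow
# features, with a response proportionate to how anomalous a flow looks.
# The model is fit only on observed traffic -- no labeled data set
# "teaching it what bad is". All features and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow features: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 4_000, 10],
                            size=(1_000, 3))

# Learn a baseline of "normal" from unlabeled traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new flows: lower (more negative) scores are more anomalous.
new_flows = np.array([
    [5_200, 21_000, 28],      # resembles baseline traffic
    [900_000, 1_200, 3_600],  # huge upload over a long session
])
scores = model.decision_function(new_flows)

# Graduated response: take the least disruptive action that fits the score.
for flow, score in zip(new_flows, scores):
    if score < -0.1:
        action = "block connection"
    elif score < 0.0:
        action = "throttle and alert"
    else:
        action = "allow"
    print(f"flow={flow.tolist()} score={score:.3f} -> {action}")
```

The design point mirrors Nalluri’s claim: because the detector learns what normal looks like rather than memorizing known-bad signatures, novel attacks can still surface as outliers, and the tiered thresholds keep the response proportionate rather than all-or-nothing.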
