AI may still seem like a far-flung concept, but in cybersecurity it’s already a reality.
The dark side of AI

Could we one day see the benevolent AIs of the world matching wits with malicious machines?
Here's what experts had to say... 

Remember those late-night cram sessions you suffered through in college whenever a big test was approaching? As stressful as those times were, they pale in comparison to the mental workout that IBM's famous artificial intelligence engine Watson has been put through.

With the assistance of eight universities across the U.S., Watson last May enrolled in “cybersecurity school,” digesting thousands upon thousands of documents each month, collecting data that will eventually allow it to pass the ultimate test – helping security experts better comprehend emerging cyberthreats and stop them in their tracks.

Last November, Watson even began an “internship,” a beta program involving approximately 40 companies that are leveraging its capabilities in the real world. As part of the program, Watson will determine whether an attack on a participant's network was caused by a known piece of malware; if so, Watson will offer actionable background on the threat.

“After that, Watson will be ready to graduate completely,” said Diana Kelley, global executive security adviser at IBM, in an interview with SC Media. (Update: Indeed, on Feb. 13, 2017, IBM announced the successful conclusion of its beta test and the general availability of its AI offering.)

When IBM originally trained Watson in medicine, the promise was clear: doctors could one day rely on the Jeopardy! champion to parse through millions of documents that medical professionals would never have time to review themselves. Watson could then use the data within to accurately diagnose patients based on their medical profiles and symptoms, and then recommend customized treatments.

The question is: what is the equivalent to this medical breakthrough in the cybersecurity world? Not just as it relates to Watson, but also to an array of other cutting-edge AI technologies breaking onto the scene.

The stakes are serious, as CISOs come to grips with the reality that human intelligence is drowning in threat data, log files and alerts. To stem the tide, manpower and machine power might just need to work together.

“You look at the skills shortage that we have right now… If you can have an analyst who was spending days trying to get educated about what a particular attack meant to the organization, and can now in hours or less get that at their fingertips, that's really very powerful,” said Kelley.

“Humans make mistakes. They can become alert blind, and often have pressures other than being ‘right' that inform their decisions,” said Ryan Permeh, founder and chief cyber scientist at Cylance, whose advanced threat protection offerings are built on an AI engine. “The scope of decisions that need to be made has long been past where we can find enough qualified humans to make them.”

“Having a machine that can consistently and correctly make decisions on behalf of an operator, and do so in real-time at scale, is imperative for the next generation of defenses,” Permeh continued, speaking with SC Media.

cognitive, with a little “c”

In some corners, AI may still seem like a far-flung concept born of science fiction. But in cybersecurity it's already a reality, even if it has a long way to go to reach its full potential.

Machine-learning tools are already replacing traditional threat detection software that no longer adequately defends against dynamic cyberattacks, whose patterns and indicators of compromise evolve too quickly for signature updates to keep pace.

Rather than focus on attack signatures, these AI solutions look for anomalous network behavior, flagging when a machine goes rogue or if user activity or traffic patterns appear unusual. “A really simple example is someone with high privilege who attempts to get onto a system at a time of day or night that they never normally log in and potentially from a geolocation or a machine that they don't log in from,” said Kelley.

Another example would be a “really rapid transfer of a lot of data,” especially if that data consists of the “corporate crown jewels.”
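
To make Kelley's examples concrete, here is a minimal sketch of that kind of behavioral baseline check in Python. The account baseline, event fields and thresholds are all hypothetical, invented for illustration rather than taken from IBM's or any other vendor's product:

```python
from datetime import datetime, timezone

# Hypothetical per-account baseline, the kind a behavioral tool might learn
# from weeks of observed activity. All fields and thresholds are invented.
BASELINE = {
    "usual_hours": range(8, 19),        # normally logs in 08:00-18:59 UTC
    "usual_countries": {"US"},
    "usual_hosts": {"wkstn-042"},
    "max_bytes_out": 500_000_000,       # ~500 MB outbound in one session
}

def score_event(event, baseline=BASELINE):
    """Return a list of the baseline expectations this event violates."""
    flags = []
    hour = datetime.fromtimestamp(event["ts"], tz=timezone.utc).hour
    if hour not in baseline["usual_hours"]:
        flags.append("login outside normal hours")
    if event["country"] not in baseline["usual_countries"]:
        flags.append("unfamiliar geolocation")
    if event["host"] not in baseline["usual_hosts"]:
        flags.append("unfamiliar source machine")
    if event.get("bytes_out", 0) > baseline["max_bytes_out"]:
        flags.append("unusually large outbound transfer")
    return flags

# A 3 a.m. login from a new country that moves 2 GB trips every rule at once.
event = {"ts": 1486954800, "country": "RO", "host": "unknown-77",
         "bytes_out": 2_000_000_000}
print(score_event(event))
```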

Such red flags allow admins to quickly catch high-priority malware infections and network compromises before they can cause irreparable damage.

IBM calls this kind of machine learning “cognitive with a little ‘c'” – which the company was already practicing prior to Watson. Despite its diminutive designation, “little c” can have some big benefits for one's network.

“A network, really in its simplest form, is a data set,” one that changes with every millisecond, said Justin Fier, director of cyber intelligence and analysis at U.K.-based cybersecurity company Darktrace, whose network threat detection solution was created by mathematicians and machine-learning specialists from the University of Cambridge. “With… machine learning, we can analyze that data in a more efficient way.”

“We're not looking for malicious behavior, we're looking for anomalous behavior,” Fier continued, in an interview with SC Media. “And that can sometimes turn into malicious behavior and intent, or it can turn into configuration errors or it could just be vulnerable protocols. But we're looking for the things that just stand out.”

An advantage of these kinds of AI solutions is that they often run on unsupervised learning models – meaning they do not need to be fed scores of data in advance to help their algorithms define what constitutes a true threat. Rather, they tend to self-learn through observation, making note of which machines are defying typical patterns – a process that Fier said is the AI determining its own “sense of self” on the network.
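
As a rough illustration of that unsupervised approach – a sketch of the general technique, not Darktrace's actual system – the snippet below fits scikit-learn's IsolationForest on unlabeled traffic observations, then asks whether a new host's behavior fits the pattern it learned. The features and numbers are invented:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy stand-in for observed per-host network features, e.g. (connections
# per hour, bytes sent per hour). No labels are provided: the model learns
# the shape of "normal" purely from what it sees on the wire.
rng = np.random.default_rng(0)
observed = rng.normal(loc=[50, 1e6], scale=[10, 2e5], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(observed)

# A host suddenly making 400 connections and pushing 50 MB in an hour
# falls far outside the learned envelope of typical behavior.
suspect = np.array([[400, 5e7]])
print(model.predict(suspect))   # -1 = anomalous, 1 = normal
```

The design point, echoing Fier, is that nothing in the training step defines what an attack looks like; the model only flags departures from what it has already observed.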

While Fier said that basic compliance failures are the most commonly detected issue, he recalled one particular client that used biometric fingerprint scanners for security access, only to discover through anomaly detection that one of these devices had been connected to the Internet and subsequently breached.

To cover up his activity, the perpetrator modified and deleted various log files, but this unusual behavior was discovered as well. The solution even found irregularities in the network server that suggested the culprit moved fingerprint data from the biometric device to a company database, perhaps to establish an alibi. “My belief is that somebody on the inside was probably getting help from somebody on the outside,” said Fier, noting that it was a significant find because “insider threats are one of the hardest things to catch.”

Another client, Catholic Charities of Santa Clara County, an affiliate of Catholic Charities USA that helps 54,000 local clients per year, used anomaly detection to thwart an attempted ransomware attack only weeks after commencing a test of the technology. The solution immediately flagged the event after a receptionist opened a malicious email with a fake invoice attachment. “I was able to respond right away, and disconnected the targeted device to prevent any further encryption or financial cost,” said Will Bailey, director of IT at the social services organization.

Little “c's” benefits extend beyond the network as well. Kelley cited the advent of application scanning tools that seek out problematic lines of code in websites and mobile software that could result in exploitation. And Fier noted a current Darktrace endeavor called Project Turing, whereby researchers are using AI to model how security analysts and investigators work in order to make their jobs more efficient.