AI may still seem like a far-flung concept, but in cybersecurity it’s already a reality.

From “cognitive” to “Cognitive”

Some technologists believe that for AI to truly fulfill its promise, solutions must graduate from little “c” to what IBM calls “Cognitive with a big ‘C.’”

Such solutions will be able to comprehend a mix of both structured data (e.g., data plugged into relational databases and spreadsheets) and unstructured data, including text-heavy reports written in natural language, in order to make informed recommendations, diagnoses and even predictions.

Of course, this involves supervised training – a painstaking process during which AIs like Watson must process thousands of documents and essentially be taught how to contextualize terms as a human would. For instance, said Kelley, the term IP could mean “Internet Protocol” in one natural-language document and “intellectual property” in another – and Watson needs to tell the difference.
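Watson's actual training pipeline is far more sophisticated, but a toy sketch illustrates the underlying idea: given a handful of labeled snippets, even a simple bag-of-words classifier learns to read the words around “IP” and guess which sense is meant. The snippets and labels below are invented purely for illustration.

```python
# A minimal word-sense disambiguation sketch (not Watson's method): a
# bag-of-words Naive Bayes classifier trained on a few hand-labeled
# snippets. A real supervised-training effort would use thousands of
# documents rather than these illustrative examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_snippets = [
    "the firewall dropped packets from that IP address on port 443",
    "the router assigns each host an IP via DHCP",
    "the lawsuit alleges theft of IP including patents and trade secrets",
    "exfiltrated IP such as source code and product designs",
]
labels = ["internet_protocol", "internet_protocol",
          "intellectual_property", "intellectual_property"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_snippets, labels)

# Context words such as "address" and "firewall" tip the decision.
print(model.predict(["the attacker's IP address was blocked at the firewall"]))
# -> ['internet_protocol']
```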

“Training correctly requires a very focused team dedicated to finding the ‘truth’ on a specific problem,” said Cylance's Permeh. “Each problem has different approaches, but a few key elements are necessary. Having enough realistic data to train is very important, as is having enough to test. Having a deep enough understanding of your data in a way to create effective representations is necessary as well.”

With that said, training your AI to solve a problem won't be very effective if you haven't properly defined the problem in the first place. “Overly generalized or fuzzy problems get weak answers,” Permeh cautioned. “Complex real-world problems rarely fit into simple models, and so AI systems that are overly simplistic fail in undefined ways.”

Derek Manky, global security strategist at Fortinet, similarly cited the need for ample, high-quality intelligence as a key challenge for AI programmers.

“Cyberthreat intelligence today is highly prone to false positives due to the volatile nature of the Internet of Things,” Manky told SC Media. “Threats can change within seconds; a machine can be clean one second, infected the next, and back to clean again – a full cycle in very low latency. Enhancing the quality of threat intelligence is critically important as we pass more control to artificial intelligence to do the work that humans otherwise would do.”

Despite the laborious prep work that building an AI platform entails, many believe the end result is worth the Herculean effort.

Indeed, a study published in December 2016 by Recorded Future offered a tantalizing glimpse of AI's future. The threat intelligence firm developed a supervised machine learning model that can predict future cybercriminal activity on certain IP addresses by combining historical data from threat lists and other technical intelligence with current-day information gleaned from open-source intelligence (OSINT) sources, including reports of neighboring IP addresses exhibiting malicious behavior.

In a 2016 trial of this cognitive learning technology – a support vector network, better known today as a support vector machine – more than 25 percent of the 500 previously unseen IP addresses the AI flagged as risky were ultimately reported as malicious by open-source intelligence within seven days.
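Recorded Future's actual feature set isn't public in this article, but a heavily simplified sketch of a support-vector-machine risk classifier conveys the flavor of the approach. The three features and every training value below are assumptions invented for illustration.

```python
# Hedged sketch of an SVM risk model in the spirit of the Recorded Future
# study; the features and data are hypothetical, not the firm's real inputs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per IP address:
# [days_since_last_threat_listing, malicious_neighbors_in_subnet, osint_mentions]
X_train = np.array([
    [2,   14, 6],   # recently listed, bad neighborhood -> later malicious
    [400,  0, 0],   # long clean history, quiet subnet  -> stayed clean
    [10,   9, 3],
    [365,  1, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = later reported malicious

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)

# Score a previously unseen address; a positive prediction becomes a
# "risky" flag ahead of any threat-list appearance.
print(model.predict([[5, 11, 4]]))  # -> [1]
```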

For instance, Recorded Future's predictive model flagged the IP address 88.249.184.71 with a high-risk score on Oct. 4. It took until Oct. 14 – a full 10 days later – before that address finally appeared on a threat list as the host of a command-and-control server linked to the DarkComet remote access trojan.

A second study that looked at historical IP address data covering the entire IPv4 space was able to predict 74 percent of future threat-listed IPs while maintaining a 99 percent precision rate.
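To unpack those two figures: precision is the share of flagged IPs that really did turn malicious, while recall is the share of eventually threat-listed IPs the model caught in advance. The counts below are hypothetical, chosen only to reproduce the reported 99 and 74 percent rates.

```python
# Hypothetical confusion counts that reproduce the study's reported rates.
true_positives  = 7400  # flagged IPs that later appeared on threat lists
false_positives = 75    # flagged IPs that never did
false_negatives = 2600  # threat-listed IPs the model missed

precision = true_positives / (true_positives + false_positives)  # ~0.99
recall    = true_positives / (true_positives + false_negatives)  # 0.74
print(f"precision={precision:.2f} recall={recall:.2f}")
```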

“The predictions we make are good enough that… you may want to use that information to automatically block those addresses in your firewall,” said Staffan Truve, co-founder and CTO of Recorded Future, in an interview with SC Media.
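Truve's suggestion is easy to picture in practice. The sketch below consumes a high-risk IP feed and drops traffic from each flagged address via iptables; the feed URL, response shape and risk threshold are all placeholders, and a real deployment would add allow-listing, rule expiry and human review before blocking anything automatically.

```python
# Hedged sketch of feed-driven blocking. FEED_URL, the JSON shape and the
# threshold are assumptions, not a real vendor API. Requires root to run.
import json
import subprocess
import urllib.request

FEED_URL = "https://example.com/high-risk-ips.json"  # placeholder feed
RISK_THRESHOLD = 90                                  # assumed 0-100 score

with urllib.request.urlopen(FEED_URL) as resp:
    entries = json.load(resp)  # expected shape: [{"ip": "...", "risk": 95}, ...]

for entry in entries:
    if entry["risk"] >= RISK_THRESHOLD:
        # Append a DROP rule for the risky address.
        subprocess.run(
            ["iptables", "-A", "INPUT", "-s", entry["ip"], "-j", "DROP"],
            check=True,
        )
```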

If such foreknowledge is possible, then one can't help but wonder what other exciting breakthroughs AI is capable of in the cybersecurity space.

To that end, Truve did some predicting of his own, claiming that as the quality and quantity of historical datasets increase, AI will one day be able to prognosticate which cyber targets are most likely to be attacked, and what vulnerabilities are likely to be exploited.

“Predictive is something that CISOs really, really want to get to as we [develop] more advanced analytics,” said Kelley. “Not just getting an alert… but to be able to predict, ‘Hey, this employee may be about to go rogue,’ two weeks before they go rogue.”

Truve can also foresee AI helping create self-healing systems that don't just recognize that an anomaly has occurred, but also know how to repair themselves. “The systems of the future need to be able to diagnose themselves and understand if they have been manipulated,” so they can choose the best course of action, he said.

Of course, in a simple sense, self-healing technology is already here: When a machine on a network is acting abnormally, some threat detection solutions are programmed to automatically perform mitigation through limited, preapproved actions. Rather than immobilize an entire company server, it might shut down the one troublesome endpoint, stopping a potential malware infection without impacting network productivity.
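In code, that “limited, preapproved actions” philosophy might look like the sketch below: quarantine a single high-confidence endpoint, and hand anything less certain to a human. Both functions are hypothetical stand-ins for whatever EDR or network-access-control API a real product exposes.

```python
# Illustrative auto-mitigation with a deliberately small blast radius.
def isolate_endpoint(host: str) -> None:
    # Hypothetical stand-in for an EDR/NAC quarantine call.
    print(f"[mitigation] quarantining {host} at the network layer")

def handle_anomaly(host: str, confidence: float, threshold: float = 0.9) -> None:
    """Act automatically only on high-confidence detections; otherwise
    defer to an analyst instead of touching the wider network."""
    if confidence >= threshold:
        isolate_endpoint(host)  # one endpoint, never the whole server
    else:
        print(f"[triage] low confidence ({confidence:.2f}); alerting analyst")

handle_anomaly("endpoint-42", confidence=0.97)
handle_anomaly("endpoint-17", confidence=0.55)
```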

But as “Big Cognitive” evolves and becomes more reliable, will users be willing to loosen the reins and let networks fully defend themselves?

The 2016 DEF CON conference in Las Vegas offered the world a sneak preview of this scenario when it hosted the final round of the DARPA Cyber Grand Challenge, where winning team ForAllSecure fielded Mayhem, a fully automated cybersecurity defense system capable of reverse-engineering an unknown binary.

“In the future, AI in cybersecurity will constantly adapt to the growing attack surface,” said Manky. “Today, we are connecting the dots, sharing data, and applying that data to systems,” but eventually, “a mature AI system could be capable of making decisions on its own. Complex decisions.”

Still, Manky cautioned that 100 percent automation is not an attainable aspiration. “Humans and machines must co-exist,” he noted. Indeed, many experts prefer to let human security analysts have the final word.

Finally, it would be almost impossible to examine the future of machine learning without taking a moment to ponder perhaps the biggest cybersecurity holy grail of all: attribution. SC Media asked the experts in this feature if they could see a future in which AI provides investigators with the helping hand desperately needed to unearth hidden clues in code and confirm, with near certainty, the culpable hacking group's identity.

Perhaps Truve answered best: “Algorithms should be able to do attribution as good as humans are doing it,” he said. Then again, he laughed, that's not a very high bar.