
With AI, promises still outpace reality

AI's value on the endpoint is still a work in progress, but it's improving

AI is great for solving yesterday’s endpoint attacks, but the jury is still out on solving tomorrow’s. Esther Shein explains.

Today it is almost impossible to talk about cybersecurity without someone turning the discussion to artificial intelligence (AI). Sometimes that is appropriate, sometimes not. The trouble is, AI has become the go-to acronym for everything from threat intelligence to data protection to picking your next password. When so many security pros bandy AI about as the be-all and end-all of security, the waters get muddy and the truth becomes harder to see.

Ask Tufts Medical Center CISO Taylor Lehmann about his use of AI platforms to protect cloud-based systems and he will tell you he is both ahead of the curve and behind it compared to other hospitals.

“It’s sort of unavoidable right now — anyone looking to improve their security posture, which is everyone — is inundated with products and services selling AI solutions,” Lehmann notes. “You can’t buy anything today without AI embedded.” But, he adds, “Responsible security officials don’t buy products but form a strategy” first. For Lehmann, that means balancing the need to keep costs low against implementing security and threat protection offerings “that don’t require us to hire a bunch of people to run.”

Tufts Medical Center, part of a seven-hospital consortium in eastern Massachusetts, has a solid security infrastructure and Lehmann’s team has visibility into what is running on the network, he says. Right now, Tufts is “investing heavily in building an insights-out capability for security. Where we’re behind is in getting a better hold on third parties we share information with.”

The challenge, Lehmann says, has been pulling insights from within the data: where it is going, to whom, in what volume, and what role vendors play in the care delivery process as data moves off the network. With an increasing amount of data being moved to the cloud and third-party providers, can AI help secure endpoints? Although the medical system is only in the early stages of using AI in the cloud, so far, he says, the answer is yes.

“We see the value in investing in AI, and we think there’s more opportunities for us to increase our use of AI that will make our lives easier and reduce the costs of the medical system and improve the security of our medical system,” he says. When your endpoints extend beyond the network and into the cloud, however, the obligation for securing data and applications becomes a shared responsibility, Lehmann stresses.

“When you put data in the cloud you’re sharing responsibility with someone else to protect it,” he says. “Where it’s our role, we’re using network-based and endpoint-based AI to do that. It’s important that our vendors do the same.”

AI on the endpoints today

Many others are also banking on AI to secure endpoints. The cloud endpoint protection market was worth $910 million in 2017 and is projected to exceed $1.8 billion by 2023, a compound annual growth rate of 12.4 percent, according to Markets and Markets Research. “The growing need for effective protection against cyberattacks on endpoints is expected to drive the market,” the firm notes.

Antivirus and malware detection technologies remain a moving target, and the volume of new malware and attack techniques continues to grow. Couple that with the increasing volume of data being moved to endpoints like the cloud, “and it’s clear that scaling these products to deal with such speed and volume requires a heavy investment in AI-like capabilities,” notes the Gartner report Lift the Veil on AI’s Never-Ending Promises of a Better Tomorrow for Endpoint Protection.

Nearly every day there are eye-catching headlines about how AI will transform everything from data management and backups to customer service and marketing, not to mention every single vertical industry. Heck, it even promises to change the economy — and deliver a better cup of coffee.

But in the rush to use AI components for endpoint protection, it is important to look beyond the hype, security experts insist.

Almost all endpoint protection platforms today use some data analysis techniques (such as machine learning, neural networks, deep learning, Naive Bayes Classifiers or natural language processing), the Gartner report states. They are easy to use and “require little to no understanding of or interaction with their AI components … However, it is critical that SRM (security and risk management) leaders avoid dwelling on specific AI marketing terms and remember that results are what counts.”

The Forrester report Mobile Vision 2020 projects that many organizations will be using AI and cognitive computing to generate business and security insights from unified endpoint data by 2020.

Forty-six percent of respondents to a 2017 survey said they anticipate the amount of endpoint data they collect will increase between 1 percent and 49 percent over the next three years, while 50 percent are bracing themselves for growth of 50 percent or more, according to the Forrester study.

“Organizations can gain significant intelligence from endpoint data, particularly for threat detection and remediation purposes,” the report says.

Security experts and enterprises that have started utilizing AI systems to protect data and apps in the cloud say that the technology certainly has merit but is not yet the panacea for defending endpoints. 

“I think the hype is very, very dangerous and … I’m really worried, and don’t believe the hype will live up to everything it promises, but [AI is] very good for certain things,” observes Johan Gerber, executive vice president of the Security and Decision Products for Enterprise Security Solutions team at Mastercard. Gerber is based in St. Louis.

The credit card company acquired an AI software platform in 2017 to help it expand its ability to detect and prevent fraud, monitor the network and enhance the security of customer information, Gerber says.

Since then, “we’ve been able to increase our fraud detection by 50 percent and decrease our false positives by 40 percent, so the application of advanced AI has really helped us in this use case.”

Gerber says he is “very excited about the potential of AI, and we’re using it every day and, in my world, it’s living up to promise and doing a tremendous amount for us.”

Mastercard is building models using a combination of neural networks and decision trees, as well as some open-source AI libraries. But Gerber says a “hybrid approach” is best when it comes to securing endpoints.

“I don’t believe in silver bullets; you need to have a multilayered approach … and we have an interesting mix of true machine learning — supervised and unsupervised — to help us know when it’s an attack we’ve seen before and an attack we haven’t seen before,” he says. “You need to look at the specific problem you’re going to solve and figure out whether AI will get there. The notion it will solve everything is dangerous.”

For AI and machine learning to be effective at securing endpoints, you have to have the right data and the right model, he says. “Machine learning learns from previously known patterns so [there is a] risk of it not being able to find anything it hasn’t seen yet. You teach the model and then say, ‘Figure it out using algorithms.’ I will not trust AI around securing data in the cloud; I will rely on a layered approach.”
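To make the layered idea concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of hybrid Gerber describes: a supervised model trained on labeled past attacks alongside an unsupervised detector that flags traffic unlike anything in the training data. The features, thresholds and data are invented for illustration; this is not Mastercard's actual system.

# Hypothetical hybrid detection layer: supervised model for known attack
# patterns plus an unsupervised detector for never-seen anomalies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)

# Invented labeled history: numeric features per event (e.g., amount,
# velocity, merchant risk), 1 = known attack, 0 = legitimate.
X_train = rng.normal(size=(5000, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 2.5).astype(int)  # stand-in labels

known_model = RandomForestClassifier(n_estimators=100, random_state=0)
known_model.fit(X_train, y_train)   # supervised layer: attacks seen before

novelty_model = IsolationForest(random_state=0)
novelty_model.fit(X_train)          # unsupervised layer: the shape of "normal"

def score_event(x):
    # Layered decision: block known patterns, send novel ones for review.
    known_risk = known_model.predict_proba([x])[0, 1]  # P(matches known attack)
    is_novel = novelty_model.predict([x])[0] == -1     # outlier vs. history
    if known_risk > 0.9:
        return "block: matches a known attack pattern"
    if is_novel:
        return "review: unlike anything seen before"
    return "allow"

print(score_event([3.0, 2.0, 0.1]))  # resembles a known attack -> block

Even in this toy form, the supervised layer only knows the patterns it was taught, which is exactly the limitation Gerber flags.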

That sentiment is shared by Zachary Chase Lipton, an assistant professor of business technologies at Carnegie Mellon University, who says a lot of people discuss AI without knowing what they are actually talking about. “The term is being used like an intellectual wild card,” he says.

People get excited about using machine learning algorithms to recognize suspicious traffic patterns that are predictive of previous security incidents, Chase Lipton says. The model has potential, he adds. But the catch with using pattern recognition is that “you make a giant assumption.”

When people make what Chase Lipton calls an “inductive assumption,” using different types of data to say, “This is unkosher traffic on your network,” there is a chance they might not have all the information they need, or even the right information, he notes.

While machine learning might predict a pattern in one instance accurately, “that machine learning model could break” in another, he continues.

When you rely on machine learning to do pattern recognition to protect a system, “you’re dealing defensively with an adversary who’s actively trying to circumvent the system,” he says. “People writing malware have a strong incentive to change what they’re doing and screw with things to fool the machine learning system.”

In that case, you can no longer say a system is 99 percent accurate; it is 99 percent accurate on what happened in the past, with no guarantee it will be correct in the future, he says.
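Chase Lipton's gaming concern is easy to demonstrate with a toy sketch. In the hypothetical Python example below, a classifier scores almost perfectly on historical traffic, yet an attacker who nudges a malicious sample a short distance toward the benign region slips past it:

# Toy demonstration of evasion: perfect on the past, gamed in the future.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Historical traffic: benign clusters low, past malware clusters high
# on two invented features (e.g., request rate, payload entropy).
benign = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(500, 2))
malware = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(500, 2))
X = np.vstack([benign, malware])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)
print("accuracy on the past:", clf.score(X, y))  # ~1.0 on historical data

# The adversary probes the detector and shifts the attack's features
# toward "benign" just far enough to cross the learned boundary.
sample = np.array([[3.0, 3.0]])
step = -clf.coef_ / np.linalg.norm(clf.coef_)  # direction that lowers the score
while clf.predict(sample)[0] == 1:
    sample = sample + 0.2 * step

print("evasive sample:", sample.round(2), "now classified benign")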

Taking that into account, Chase Lipton thinks there will be “incremental usefulness” of AI systems to secure endpoints. “But what people have to watch out for is [that] a machine learning system can potentially be gamed.

“Obviously, it’s very exciting technology and the capabilities are pretty amazing; the fact that we can [do] high-quality translations between languages and recognize images and generate believable audio and video using generative models,” are great use cases of machine learning, he says. “But the problem is, people use general excitement about AI and machine learning to make untethered kinds of [statements] like ‘It’s going to solve security. You don’t have to worry when you use our product.’ That kind of stuff is hooey. But there’s danger of people buying into that because of general excitement about AI.”

AI is being used today to prevent spam and spear phishing attacks, and many people are hoping that use of these platforms will mature rapidly, says Paul Hill, a security consultant at SystemExperts Corp. of Sudbury, Mass. Echoing Chase Lipton, he says “this approach is just as likely to make the attackers step up their game. I worry that the result will be that attackers will develop tools that will make spam that is stylistically identical to the author that they are attempting to impersonate.”

In all cybersecurity AI tools, the learning algorithms need to be more transparent, Hill believes. To fully gain a customer’s trust, “it should be possible for independent third parties to examine the learning model and data. Furthermore, a lot more work needs to be done to understand how an adversary might affect the learning model.”

By manipulating the learning model and/or the data used to teach it, it may be possible to subvert AI tools, he says. “Before AI cybersecurity tools enjoy widespread adoption, these issues and how they will impact various customer deployments need to be better understood.”

AI in action

Tufts Medical Center is moving an increasing amount of data into the cloud. One of its electronic medical records systems is almost entirely cloud-based and IT is planning to move other clinical systems off premises, says Lehmann.

As the center expands its investigation of using AI to protect endpoints, officials are looking at whether their third-party vendors have appropriate protections in place in their data centers to leverage modern security technologies, he says. Their service level agreements will incorporate language indicating a “high expectation for their security program and mandating they implement certain controls like behavior and deterministic software solutions that protect data well.”

The medical center is also utilizing machine learning to monitor network traffic flowing off premises and protect its connection to the cloud, he says.

“For example,” he continues, “we often see certain spikes in traffic that could indicate an anomaly and … where the promise of AI is, is when we can turn AI on to correct a behavior. We’re getting to this point; not there yet.”

The goal is when there’s a “high fidelity hit on something we think looks bad, telling the AI [platform] to turn it off,” Lehmann says, explaining the medical center is looking at doing this to learn more about what could be threatening.

“Our next step will be to use that same AI to take action about a [known] threatening thing we’ve discovered,” he says. “That’s the nirvana; that’s where the value of AI exponentially increases. Now I don’t have to send a team to investigate that anomalous thing. The system knows what to do immediately if that occurs.”
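A stripped-down sketch of that pipeline, from learned baseline to high-fidelity alert, might look like the following Python example; the window size, threshold and traffic data are illustrative assumptions, and the automated "turn it off" response Lehmann describes would hang off each alert the function returns:

# Toy traffic-spike monitor: learn a trailing baseline, flag sharp deviations.
import numpy as np

def spike_alerts(bytes_per_minute, window=60, z_threshold=4.0):
    # Flag minutes whose outbound volume deviates sharply from the
    # rolling mean/std computed over the previous `window` minutes.
    alerts = []
    for t in range(window, len(bytes_per_minute)):
        history = bytes_per_minute[t - window:t]
        mean, std = history.mean(), history.std() + 1e-9  # avoid divide-by-zero
        z = (bytes_per_minute[t] - mean) / std
        if z > z_threshold:
            alerts.append((t, round(float(z), 1)))  # candidate anomaly to act on
    return alerts

rng = np.random.default_rng(2)
traffic = rng.normal(loc=100.0, scale=10.0, size=240)  # ordinary chatter
traffic[200] = 400.0                                   # injected exfil-like spike
print(spike_alerts(traffic))  # -> [(200, ~30.0)]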

The bleeding edge

The goal for Lehmann is to be able to walk into any surgical unit at the medical center and know a doctor has “relative assurance” that the equipment, services and procedures will be safe.

“That’s ultimately what we’re trying to do with any spend,” he says. As AI and machine learning technologies mature, he believes IT will be better able to secure endpoints in ways they were previously unable to do — or could only do if they “deployed a team of 50 people to figure it out.”

But when it comes to patient safety, Lehmann is leerier about using AI to secure data being exchanged between their internal systems and systems in the cloud. Although AI holds real value, “Can we say, ‘Is that wireless infusion pump operating normally and delivering drugs in the right frequency and what it has been programmed to deliver?’” Lehmann’s not sure. It becomes a lot trickier for a hospital if an infusion pump gets compromised and starts delivering too high a dosage of medicine, he observes.

“These are patients’ lives we’re dealing with and I’m not sure we’re at the point where we can trust AI for [patient care],” he opines.

For years, people have been recommending that organizations understand their baseline level of network activity in order to deploy a security information and event management (SIEM) system and create useful alerts, notes Hill. “However, many organizations don’t have the resources to really understand what their correct baseline traffic should be. AI should help solve this problem.”

Machine learning has already made available technologies we did not have even five years ago, Chase Lipton notes. “But the kinds of promises being made and [the] way [the technology is] being thrown out vaguely, like, ‘We can solve security with AI,’ is a little bit unhinged.”

There are a lot of small victories “probably happening every day,” he says. It is easy to train a machine learning system based on data from last year and have it work, “but the problem is, how do you keep it working accurately and develop best practices for auditing it? Those are huge challenges.”

That, for Chase Lipton, would make AI systems more palatable. “I’m sure progress will be slow and steady, but I don’t think AI is an overnight silver bullet that will solve security.”

As endpoint protection evolves, it will need to use data from across multiple endpoints to help AI recognize and react to threats, the Gartner report states. To cull all this data, endpoint detection and response (EDR) offerings are starting to emerge. These systems record all the technical and operational data of an organization’s endpoints as well as event, application state and network information.

This gives security response management teams a large pool of data that they can use to search for known indicators of compromise (IoC) or indicators of attack, Gartner says. Already, machine learning is a data analytics technique being used successfully “in areas where lifting signals from noise and removing false positives are problems,” the report says. “A well-trained [machine learning] algorithm can help identify IoCs in large, complex datasets that humans might miss.”
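At its simplest, such an IoC sweep is a lookup across recorded endpoint events, as in the hypothetical sketch below (the event schema and indicator values are invented). Machine learning adds value on top of this kind of exact matching by surfacing the near-misses and anomalies a static list cannot catch:

# Toy EDR-style IoC sweep over recorded endpoint events.
# All indicator values and event records below are invented examples.
KNOWN_BAD_HASHES = {"9f86d081884c7d65...", "60303ae22b998861..."}
KNOWN_BAD_DOMAINS = {"updates-cdn.example-malware.net"}

events = [
    {"host": "ep-014", "type": "process", "sha256": "9f86d081884c7d65..."},
    {"host": "ep-022", "type": "dns", "domain": "updates-cdn.example-malware.net"},
    {"host": "ep-031", "type": "process", "sha256": "unremarkablehash..."},
]

def sweep_for_iocs(events):
    # Return (host, matched indicator) pairs for the response team to triage.
    hits = []
    for e in events:
        if e.get("sha256") in KNOWN_BAD_HASHES:
            hits.append((e["host"], "hash:" + e["sha256"]))
        if e.get("domain") in KNOWN_BAD_DOMAINS:
            hits.append((e["host"], "domain:" + e["domain"]))
    return hits

print(sweep_for_iocs(events))  # -> hits on ep-014 and ep-022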

Along these same lines, the Gartner report says user and entity behavior analytics (UEBA) techniques can identify application behaviors that are anomalous relative to standard baselines.
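A toy illustration of the UEBA idea: build a per-user profile of habitual behavior from historical logins, then flag activity that falls outside it. The fields and the four-hour tolerance are assumptions for illustration only:

# Toy UEBA-style baseline check over invented login records.
from collections import defaultdict

# Historical logins: (user, hour_of_day, source_host).
history = [
    ("alice", 9, "laptop-a"), ("alice", 10, "laptop-a"), ("alice", 14, "laptop-a"),
    ("bob", 8, "desktop-b"), ("bob", 17, "desktop-b"),
]

profiles = defaultdict(lambda: {"hours": set(), "hosts": set()})
for user, hour, host in history:
    profiles[user]["hours"].add(hour)
    profiles[user]["hosts"].add(host)

def is_anomalous(user, hour, host):
    # Flag logins from a never-seen host or far outside habitual hours.
    p = profiles.get(user)
    if p is None:
        return True  # unknown user or entity
    new_host = host not in p["hosts"]
    odd_hour = min(abs(hour - h) for h in p["hours"]) > 4
    return new_host or odd_hour

print(is_anomalous("alice", 3, "server-x"))  # True: new host at 3 a.m.
print(is_anomalous("bob", 9, "desktop-b"))   # False: within baseline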

Still, the technology is not there yet. “Unfortunately, AI is only beginning to make progress in [endpoint detection and response]. However, it seems to be following the same pattern we have seen other technologies (such as SIEM management and network analytics) follow,” the report states.

“The technology comes on the market quickly but generates amounts of data that quickly overwhelm human users and contain false positives that limit its attractiveness. AI and advanced analytics are applied, and the tools become easier to use and yield more valuable insights,” the Gartner report says.

The bleeding edge will likely be the day when security administrators can quickly query their environments and take coordinated action across their endpoint environment in a unified manner, maintains Forrester, saying, “Furthermore, new analysis capabilities will present opportunities for endpoint security and management teams to pull deeper and more meaningful business insights from their increasing amounts of endpoint data while lowering operational friction and TCO (total cost of ownership).”
