
2019 Cybersecurity Predictions: Artificial Intelligence

WatchGuard Threat Lab research team

AI-driven chatbots go rogue. In 2019, cybercriminals and black-hat hackers will create malicious chatbots on legitimate sites to socially engineer unwitting victims into clicking malicious links, downloading malware-laden files, or sharing private information.

Candace Worley, Chief Technical Strategist, McAfee

Myriad decisions must be made when a company extends its use of AI. There are implications for privacy regulation, but also legal, ethical, and cultural implications that warrant the creation of a specialized role in 2019 with executive oversight of AI usage. In some cases, AI has demonstrated unfavorable behavior such as racial profiling, unfairly denying individuals loans, and incorrectly identifying basic information about users. CAOs and CDOs will need to supervise AI training to ensure AI decisions avoid harm. Further, AI must be trained to deal with real human dilemmas and to prioritize justice, accountability, responsibility, transparency, and well-being while also detecting hacking, exploitation, and misuse of data.

Jason Rebholz, Senior Director, Gigamon

Offloading decision-making to AI software. Current security solutions largely rely on signature-based detections (“I have seen this before and I know it is bad”) and analytics-based detections (“this pattern of activity leads me to believe this activity is suspicious”). An analyst then reviews the activity and performs basic triage to determine whether it is truly malicious or simply a false positive. With the emergence of AI, that basic decision-making will be offloaded to software. While this isn’t a replacement for the analyst, it will free analysts to perform more advanced decision-making and analysis, which is not easily replaced with AI.
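To make the offloading concrete, here is a minimal sketch in Python with scikit-learn of routing basic triage decisions to a model. The features, training data, and thresholds are invented placeholders for illustration, not Gigamon's method:

```python
# A minimal triage-offloading sketch. Features, data, and thresholds
# are invented placeholders for illustration -- not Gigamon's method.
from sklearn.ensemble import RandomForestClassifier

# Each alert reduced to features an analyst might eyeball during triage:
# [signature_hits, rare_process (0/1), bytes_out_mb, off_hours (0/1)]
X_train = [
    [3, 1, 120.0, 1],  # previously confirmed malicious
    [1, 0,   0.2, 0],  # previously confirmed false positive
    [0, 1,  45.0, 1],  # previously confirmed malicious
    [2, 0,   0.1, 0],  # previously confirmed false positive
]
y_train = [1, 0, 1, 0]  # 1 = malicious, 0 = false positive

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_alert = [[2, 1, 80.0, 1]]
p_malicious = clf.predict_proba(new_alert)[0][1]

# Only the confident calls are automated; the grey zone stays human.
if p_malicious > 0.9:
    print("auto-escalate to incident response")
elif p_malicious < 0.1:
    print("auto-close as false positive")
else:
    print("queue for analyst review")
```

The thresholds encode the division of labor Rebholz describes: the software auto-closes or auto-escalates only the confident calls, and everything ambiguous still lands in an analyst's queue.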

Morey Haber, CTO, and Brian Chappell, Sr. Director, Enterprise & Solutions Architecture, BeyondTrust

AI on the attack. Skynet is becoming self-aware! 2019 will see an increasing number of attacks coordinated with the use of AI and machine learning. AI will analyze the available options for exploitation and develop strategies that lead to an increase in successful attacks. AI will also be able to take information gathered from successful hacks and incorporate it into new attacks, potentially learning how to identify defense strategies from the pattern of available exploits. This evolution may lead to attacks that are significantly harder to defend against.

Malwarebytes Labs Team

Artificial intelligence will be used in the creation of malicious executables. While the idea of malicious artificial intelligence running on a victim’s system is pure science fiction, at least for the next 10 years, malware that is modified by, created by, and communicating with an AI is a very dangerous reality. An AI that communicates with compromised computers and monitors what and how certain malware is detected can quickly deploy countermeasures to create a new generation of malware. AI controllers will enable malware built to modify its own code to avoid detection on the system, regardless of the security tool deployed. Imagine a malware infection that acts almost like “The Borg” from Star Trek, adjusting and acclimating its attack and defense methods on the fly based on what it is up against.

Mark Zurich, senior director of technology, Synopsys

There is definite excitement and hope around what ML/AI could do for software security and cybersecurity in particular. A significant aspect of cybersecurity is data correlation and analytics. The ability to find individual threats, uncover threat campaigns, and perform threat-actor attribution based on multiple disparate sources of data (i.e., finding needles in haystacks) is a large part of the game. ML/AI can increase the speed, scale, and accuracy of this process through data modeling and pattern recognition. However, many of the articles I’ve been reading on this topic express skepticism and concern that companies will be lulled into a false sense that their detection efficacy is acceptable through the application of ML/AI when that may not actually be the case. In reality, more time and investment will be required to hone the data models and patterns before ML/AI becomes a highly effective technology in software security and cybersecurity. Expect large companies to continue investing in this technology and startups touting ML/AI capabilities to keep cropping up in 2019. Even so, it may be a few more years until the real promise of ML/AI is fully realized.
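As a rough illustration of the correlation-and-pattern-recognition step Zurich describes, the sketch below flags outlier hosts with an unsupervised model. The per-host features and synthetic data are assumptions for demonstration only:

```python
# Sketch of unsupervised pattern recognition over host telemetry --
# the "needles in haystacks" step. Features and data are synthetic
# assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical per-host features:
# [logins_per_hour, distinct_dest_ips, avg_payload_kb]
normal = rng.normal(loc=[5, 10, 2], scale=[1, 3, 0.5], size=(500, 3))
# A couple of hosts behaving like a coordinated campaign.
odd = np.array([[40.0, 200.0, 30.0], [35.0, 180.0, 25.0]])
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomalous, 1 = normal
print(np.where(flags == -1)[0])  # candidate needles for an analyst
```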

Ari Weil, Global VP of Product and Industry Marketing, Akamai

Disillusionment with the over-promising of AI and ML will grow in the face of longer time to value. Vendors have been in a marketing arms race to leverage the terms artificial intelligence (AI) and machine learning (ML). In 2019, businesses will begin to realize that the technology’s current capabilities can solve the simple, routine problems that are noisy but less valuable to the business, while leaving the custom logic and complex corner cases to people. Whether the catalyst is forensic tools that miss advanced threats until significant damage has been done, or monitoring and analytics software that fails to pinpoint the root cause of an issue in a complex deployment environment, the industry will reawaken to the value of developing specialists versus purchasing intelligence.

Gilad Peleg, CEO, SecBI

AI will increasingly power cyberattacks. In fact, it is reasonable to assume that armies of AI hackers will penetrate faster and more deeply, with more automation, allowing attackers to execute cyberattacks with greater success. Cyberdefense must look to AI for the faster analytics needed to find malicious activities. With machine learning and AI-driven response, security teams can automate triage and prioritization while reducing false positives by up to 91 percent. Enterprises will seek innovative solutions that enable them to stay ahead of the next unknown threat. They can’t simply look at what they have and upgrade, nor can they rely on homegrown solutions. They require out-of-the-box, automated solutions based on AI.
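Automated triage and prioritization can take many forms. One deliberately simplified illustration is clustering related alerts so that one incident surfaces as one work item; the sketch below uses a generic technique and invented features, not SecBI's proprietary method:

```python
# Generic illustration of alert clustering for automated triage --
# not SecBI's proprietary method. Features are invented placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical alert features: [dest_port, bytes_out_mb, hour_of_day]
alerts = np.array([
    [443.0,  0.1, 10], [443.0,  0.2, 10], [443.0,  0.1, 11],  # routine web
    [8443.0, 55.0, 3], [8443.0, 60.0, 3], [8443.0, 58.0, 4],  # exfil-like
])
# Standardize so no single feature dominates the distance metric.
scaled = (alerts - alerts.mean(axis=0)) / alerts.std(axis=0)

labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(scaled)
print(labels)  # alerts sharing a label are triaged as one incident
```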

Jason Rebholz, Senior Director, Gigamon

Automation and AI play a larger role. With the recent push for machine learning, AI, and automation, the security industry will see a significant push toward, and more importantly a reliance on, this technology. Organizations may attempt or consider augmenting or replacing security analysts with these technologies. Given today’s security talent shortage, that can seem appealing, but it may only serve to widen the knowledge gap as tools become more specialized, marketing campaigns become more ambiguous, and assumptions about what products protect against become inflated.

Malcolm Harkins, Chief Security and Trust Officer, Cylance

AI-based technology will distinguish sensitive from non-sensitive data. Currently, parsing through data to determine what is sensitive versus non-sensitive is a manual process. Users have to classify data themselves, and users are lazy. In 2019, AI-based technology will gain the ability to learn what’s sensitive and automatically classify it. This development will necessitate increased consideration of how to manage such data and, furthermore, how to control it.
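A minimal sketch of what such automatic classification might look like, assuming a supervised text classifier; the tiny corpus below is invented, and this is a generic illustration rather than Cylance's method:

```python
# Minimal sketch of learning sensitive vs. non-sensitive labels.
# The tiny corpus is invented; a real system would train on an
# organization's own labeled documents. Not Cylance's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "employee salary and social security number records",
    "patient diagnosis and treatment history",
    "cafeteria menu for next week",
    "office holiday party schedule",
]
labels = [1, 1, 0, 0]  # 1 = sensitive, 0 = non-sensitive

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["salary records and social security number"]))  # likely [1]
print(clf.predict(["next week cafeteria menu"]))                   # likely [0]
```

In practice the training corpus would come from an organization's own labeled documents, and the predictions would feed the management and control decisions Harkins mentions.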

Rajarshi Gupta, Head of AI, Avast

Artificial intelligence will play a significant role in ending the practice known as clone phishing, in which an attacker creates a nearly identical replica of a legitimate message to trick people into thinking it’s real. The email is sent from an address resembling the legitimate sender, and the body of the message looks the same as a previous message; the only difference is that the attachment or link has been swapped out for a malicious one. I predict AI will become effective against clone phishing by detecting the short-lived websites built for these attacks. AI can move faster than traditional algorithms when identifying fake sites in two ways: 1) by accurately identifying domains that are new and suspicious, and 2) by using visual detection to match the layout of phishing pages to popular sites. And because it can learn over time, AI will keep pace with the ways attackers evolve.
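The two signals Gupta lists can be sketched simply. The example below, which assumes the third-party Pillow and imagehash packages and hypothetical screenshot files, flags lookalike domains by string similarity and visual clones by perceptual-hash distance; a production system would use far richer features:

```python
# Sketch of the two signals: lookalike-domain scoring and visual
# page matching. Pillow and imagehash are third-party packages;
# the screenshot file names are hypothetical.
from difflib import SequenceMatcher
from PIL import Image
import imagehash

KNOWN_BRANDS = ["paypal.com", "microsoft.com", "avast.com"]

def lookalike_score(domain: str) -> float:
    """Similarity (0..1) of a domain to the closest known brand."""
    return max(SequenceMatcher(None, domain, b).ratio() for b in KNOWN_BRANDS)

suspect = "paypa1-login.com"
if lookalike_score(suspect) > 0.7:
    print(f"{suspect}: new, suspicious lookalike domain")

# Visual detection: near-identical layouts yield nearby perceptual hashes.
real = imagehash.phash(Image.open("paypal_login.png"))    # hypothetical file
clone = imagehash.phash(Image.open("suspect_page.png"))   # hypothetical file
if real - clone < 10:  # small Hamming distance => likely visual clone
    print("page layout matches a known login page")
```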

