
Sapphire Software’s Nicholas Takacs asks: Is self-aware malware possible yet?

“Two can play at this game…”

Cybersecurity is a non-stop arms race between white hats and malicious hackers, and the three “A’s” -- automation, analytics and artificial intelligence -- are among the more powerful defensive tools that CISOs can implement to defend their organizations. But cybercriminals can also potentially employ them to magnify their attacks and inflict more damage.

This never-ending duel will be examined in a series of sessions next week at InfoSec World 2020 – some under the digital conference’s Hackers & Threats track and others under the Security Strategies track.

Nicholas Takacs, DevOps manager at Sapphire Software, will delve into the current and future possibilities of malware leveraging AI in order to proliferate. Takacs’ session, “Malware and Machine Learning: A Dangerous Combination?” will ask whether a self-aware malware program is yet possible – one that continually develops itself by evolving its own code and payloads in an ongoing attempt to survive defense mechanisms.

Such a program might dynamically change attack vectors, adopt new evasion techniques on the fly or even structurally change as needed – “polymorphism on steroids,” Takacs calls it. Perhaps it could even learn to eliminate a kill switch typically designed to stop the spread of malware.

With that in mind, Takacs’ presentation will envision the potential impact of autonomous viruses in the wild, while pondering the ethical issues that could impede security teams from executing a proper response.

According to Takacs, malware’s ongoing evolution depends on multiple factors, including increases in computing power, base language development, developers’ skill at building new algorithms, availability of reasonable test sets and access to funds.

“These drivers have rapidly increased the complexity of malicious code, necessitating the same increase in resource-constrained defensive organizations,” according to Takacs’ presentation, which SC Media has reviewed in advance. For that reason, organizations may want to turn to anti-malware vendors that “leverage ML techniques to drive ‘next generation’ heuristics and defenses,” or they may want to further leverage traffic analytics and big data to weed out and eliminate threats.
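To make the traffic-analytics idea concrete, here is a minimal Python sketch (not drawn from Takacs’ presentation) that baselines a few flow-level features and flags flows that deviate sharply from the norm. The feature names, sample values and 3-sigma threshold are assumptions for illustration; a production system would rely on far richer data and models.

```python
# Hypothetical sketch: baselining flow-level traffic features and flagging
# flows that deviate sharply from the norm. Feature names, sample values
# and the 3-sigma threshold are illustrative assumptions, not any vendor's
# actual heuristics.
import numpy as np

# Each row: [bytes_sent, bytes_received, duration_sec, unique_dest_ports],
# as might be derived from flow logs (NetFlow, Zeek, etc.).
baseline_flows = np.array([
    [1200.0,  8000.0, 2.1, 1],
    [ 900.0,  5500.0, 1.7, 1],
    [1500.0, 10200.0, 3.0, 2],
    [1100.0,  7800.0, 2.4, 1],
    [1350.0,  9100.0, 2.8, 1],
    [1050.0,  7200.0, 2.0, 1],
])

mean = baseline_flows.mean(axis=0)
std = baseline_flows.std(axis=0)

def is_anomalous(flow: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag a flow if any feature sits more than `threshold` standard
    deviations away from the baseline mean."""
    z_scores = np.abs((flow - mean) / std)
    return bool((z_scores > threshold).any())

new_flows = np.array([
    [ 1300.0, 9000.0,   2.5,  1],  # resembles baseline traffic
    [50000.0,  200.0, 600.0, 40],  # huge upload, long-lived, many ports
])

for flow in new_flows:
    verdict = "anomalous" if is_anomalous(flow) else "normal"
    print(flow.tolist(), "->", verdict)
```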

Neil Wyler, principal threat hunter at RSA, agrees that ML- and AI-based cyber threats are a growing problem -- one that could give attackers the tools to create powerfully convincing social engineering and impersonation schemes.

For example, attackers could use AI to ingest large dumps of breached corporate email accounts and then analyze the content in order to craft more authentic-looking phishing emails that copy the writing styles and idiosyncrasies of the individuals compromised in the breach.

Wyler’s presentation, “Foes, Fixes, & Foundations: Trending Threats and Proper Responses for 2020,” will also cite the potential use of deepfake technology to simulate the voices of top executives in order to trick employees into executing fraudulent financial transactions.

AI can also assist botnets with executing large-scale DDoS attacks, according to Wyler, whose session will also cover other cybercriminal trends including ransomware, business email compromise schemes and more.

Wyler will also present potential solutions to combat such threats -- automation among them. "The number of attacks is not shrinking, so give your analysts time to focus on the real threats," states Wyler's presentation, which SC Media has previewed. "Data is growing at a rate faster than budgets and staffing. Automation can help direct your limited resources toward the most actionable data."

Wyler's presentation will advocate that organizations use automation to "collect disparate data sources under the umbrella of single incidents. If analysts are gathering data, they’re not investigating." It also will recommend SOAR [Security Orchestration, Automation, and Response] solutions to "enable you to respond to scenarios that would otherwise overwhelm your SOC team."
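As a rough sketch of the consolidation Wyler describes, the snippet below rolls alerts from different tools into a single incident when they reference the same host. The data model and grouping rule are assumptions chosen for illustration, not drawn from any particular SOAR product.

```python
# Hypothetical sketch: grouping alerts from disparate sources into single
# incidents by shared host, so analysts review one incident rather than
# many raw alerts. Field names and the grouping rule are illustrative.
from collections import defaultdict

alerts = [
    {"source": "EDR",      "host": "ws-042", "time": "2020-06-22T10:01", "detail": "suspicious PowerShell"},
    {"source": "Proxy",    "host": "ws-042", "time": "2020-06-22T10:03", "detail": "beacon to rare domain"},
    {"source": "Firewall", "host": "db-007", "time": "2020-06-22T11:15", "detail": "blocked outbound SMB"},
    {"source": "SIEM",     "host": "ws-042", "time": "2020-06-22T10:05", "detail": "new local admin added"},
]

# Collect every alert that mentions the same host under one incident.
incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["host"]].append(alert)

for host, related in incidents.items():
    sources = sorted({a["source"] for a in related})
    print(f"Incident on {host}: {len(related)} alerts from {', '.join(sources)}")
```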

Another session, “Rise of the Machines – The Importance of Security Automation,” will similarly explore the ways that attackers can utilize automation and AI to conceal their activity and motives, and how cybersecurity professionals can implement the same technologies to mitigate such tactics.

Led by Laurence Pitt, technical security lead at Juniper Networks, the lecture will cover tools for improving visibility and automating attack mitigations, as well as cognitive automation technologies that allow systems to self-protect.

Pitt will also examine common challenges associated with automated threat response, including how to communicate with many different vendors’ equipment, how to respond to correlated threats from multiple data sources, and how to prevent false positives from wasting one’s response effort.

Dan Fein, director of email security products at Darktrace, will explain how organizations can leverage self-learning AI to cut down on the overwhelming number of email-borne threats that can inundate inboxes and swamp security teams.

As attackers leverage domain spoofing and social engineering techniques to target businesses with phishing and business email compromise scams, users need to be able to distinguish malicious emails from legitimate communications. And that’s more true than ever, with the flood of fraudulent Covid-19 emails filling up employees’ inboxes. Darktrace calls this phenomenon “fearware” because attackers are using email lures and subject lines that prey on victims’ uncertainties and concerns over the global pandemic.

This past April, 60 percent of all advanced phishing attacks that were blocked by Darktrace’s own commercial email security solution were either related to Covid-19 or remote working, according to Fein, who will lead the session “How to Stop Fearware: Using Cyber AI to Defend the Inbox.”

Fein will explain how analytical email security solutions can help automate the detection of fraudulent email by learning senders’ IP addresses, communication histories and past behaviors, and then looking for deviations from the norm. Other key clues that might trigger a scam detection include suspicious links, an unusual send time, or the use of lookalike domains that appear to spoof those of genuine employees.
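As a simplified illustration of that anomaly-scoring approach, the sketch below scores an inbound message against a per-sender baseline. The fields, weights and lookalike check are assumptions made for the example; they are not Darktrace’s implementation.

```python
# Hypothetical sketch: scoring an inbound email against a per-sender
# baseline (known sending IPs, usual send hours, expected domain) and
# checking for lookalike domains. Fields and weights are illustrative.
from difflib import SequenceMatcher

baseline = {
    "known_ips": {"203.0.113.10", "203.0.113.11"},
    "usual_hours": range(8, 19),          # sender normally emails 08:00-18:00
    "expected_domain": "example.com",
}

def looks_alike(domain: str, expected: str, threshold: float = 0.85) -> bool:
    """Flag domains that are nearly, but not exactly, the expected one."""
    return domain != expected and SequenceMatcher(None, domain, expected).ratio() >= threshold

def score_email(sender_ip: str, sender_domain: str, hour_sent: int, has_link: bool) -> int:
    score = 0
    if sender_ip not in baseline["known_ips"]:
        score += 2                        # never seen this sending IP before
    if hour_sent not in baseline["usual_hours"]:
        score += 1                        # sent at an unusual time
    if looks_alike(sender_domain, baseline["expected_domain"]):
        score += 3                        # lookalike domain, e.g. examp1e.com
    if has_link:
        score += 1                        # links raise scrutiny, not a verdict
    return score

# A message from an unknown IP and a lookalike domain, sent at 3 a.m. with a link.
print(score_email("198.51.100.7", "examp1e.com", hour_sent=3, has_link=True))  # -> 7
```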

InfoSec World 2020 takes place from June 22-24, 2020.

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts and video/multimedia projects, often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, a staff writer at New York Sportscene and a freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
