
AI vs. AI: Can security pros outsmart AI-powered adversaries?

U.S. Secretary of Defense Lloyd Austin delivers remarks at the National Security Commission on Artificial Intelligence Global Emerging Technology Summit this past July in Washington, D.C. Today’s columnist, Ofer Gayer of Exabeam, discusses ways for defenders to prevail against the hackers in the race to manage the power of AI. (Photo by Kevin Diets...

It’s a duel between gunslingers: a cyber faceoff between bad actors and security analysts. Whoever adapts to new data faster wins; it’s a game of playing smarter, not harder.

While many news stories paint a picture of increasingly sophisticated threats, many cybercriminals are actually lazy, because security teams have made it easy for them. The most damaging attacks aren’t the result of some unheard-of attack vector; they’re almost always some form of phishing, malware, or other social engineering.

The situation has changed somewhat because adversaries can now use data to easily determine soft spots within an organization, getting more bang for their buck. Using artificial intelligence (AI) lets them put this data to work, sweeping thousands of data points to narrow down the odds without breaking a sweat.

According to IDG, nearly 80% of senior IT workers and IT security leaders believe their organizations lack sufficient defenses against cyberattacks.

So, does this mean that enterprises are doomed against cyber adversaries? Not exactly.

How adversaries leverage AI

Cybercriminals know that today’s users will click on just about anything. That’s why complicated attacks aren’t the most popular method for stealing data.

AI offers the ability to get feedback and scale quickly, so when adversaries find a way in, they can exploit that attack surface more quickly without being detected. When they hit a dead end or get discovered, they can quickly pivot and adapt their MO until they find a new sweet spot.

Consider an AI-powered campaign that sweeps email signatures and company websites to determine the highest-value targets within an organization, or to learn end-user habits. Using this information, an adversary can augment malware to move through an organization without being detected.

AI also helps defenders

Fortunately, AI enables the SOC analyst to stay ahead of adversaries just as much as it helps the enemy.

We know that no matter how smart an analyst is, or how many hours they spend monitoring their SIEM, they can’t catch everything. Solutions may attempt to make things easier and more efficient by adding signals, but without knowing which signal represents an actual threat versus a legitimately forgotten password, fatigue sets in, and with it, error.

We can’t talk about security automation without a nod to the MITRE ATT&CK framework. MITRE catalogs all known tactics, techniques, and procedures (TTPs) and makes them accessible to the public. While the information is useful, many analysts become overwhelmed by the sheer volume of data.

Cybersecurity professionals can combine MITRE’s research on commonly used TTPs with technology that does the heavy lifting of automating threat detection. By adding an intelligent element to MITRE’s framework, analysts are given context for a possible threat. For example, a denied login could come from a hacker trying to gain access, but it could also be an employee who simply forgot their password. AI-enabled technology can look at both the incident and the context to separate malicious intent from innocent mistakes.
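The idea of weighing an incident against its context can be sketched in a few lines. This is a minimal, hypothetical example (the event fields and weights are assumptions, not any vendor's actual model): a failed login alone is ambiguous, but combined with context it can be scored and triaged automatically.

```python
# A minimal sketch of context-aware scoring for a failed-login event.
# All field names and weights are illustrative assumptions.

def score_failed_login(event: dict) -> int:
    """Return a simple additive risk score for a failed-login event."""
    score = 0
    if event.get("new_device"):           # first time this device is seen
        score += 2
    if event.get("outside_work_hours"):   # e.g., 02:00 local time
        score += 1
    if event.get("followed_by_foreign_success"):  # classic takeover pattern
        score += 4
    if event.get("password_reset_requested"):     # innocent explanation
        score -= 2
    return score

# A forgotten password looks benign; a credential-stuffing pattern does not.
benign = {"password_reset_requested": True}
suspicious = {"new_device": True, "outside_work_hours": True,
              "followed_by_foreign_success": True}
print(score_failed_login(benign))      # low score: likely innocent
print(score_failed_login(suspicious))  # high score: escalate to an analyst
```

In practice, production systems learn such weights from data rather than hard-coding them, but the principle is the same: the incident plus its context, not the incident alone, drives the alert.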

AI in practice

When we talk about AI-enabled technology, we’re talking about solutions that give analysts a big-picture view of the attack surface. Security pros can do this by analyzing behavioral signals in an organization’s environment.

In analyzing behavior, we can monitor the potential attack surface to determine baseline behavior and more easily identify when there’s abnormal activity. When an analyst triages an alert, they’ll have the full picture.

This also ensures that analysts receive complete and useful information. Once an analyst establishes a baseline, they can more accurately identify threats, which means the system sends alerts not for every suspicious signal, but only when an event deviates enough from the norm to merit further attention.
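The baselining described above can be sketched with basic statistics. This is a simplified illustration, assuming a hypothetical per-user count of daily logins; real behavioral analytics use far richer models, but the principle of flagging only large deviations from a user's own history is the same.

```python
# A minimal baselining sketch: flag today's activity only when it sits
# far outside the user's own historical behavior (data is illustrative).
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it lies more than `threshold` standard
    deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > threshold

logins = [4, 5, 6, 5, 4, 6, 5]   # a typical week of daily logins
print(is_anomalous(logins, 5))    # within baseline -> False
print(is_anomalous(logins, 60))   # far outside baseline -> True
```

Because the threshold is relative to each user's own baseline, the same raw number (say, 60 logins) might be normal for a service account yet highly anomalous for a typical employee.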

An analyst can then leverage the MITRE ATT&CK framework to figure out how an abnormal behavior fits into the attacker’s kill chain. This would include the TTPs for that particular event, along with comparisons to behavioral profiles of various scopes, such as a user’s peer group or the entire organization, to determine the likelihood of an attack. An AI model that learns from analyst input can then offer even the most novice analyst insights, built on years of collected data, into the probable next steps for investigation and remediation.
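Mapping an anomalous event to its place in the kill chain can be as simple as a lookup against ATT&CK technique IDs. The technique IDs below are real ATT&CK entries, but the event-type names and the mapping itself are illustrative assumptions, not an official API.

```python
# A minimal sketch of attaching MITRE ATT&CK technique IDs to anomalous
# event types. The event names and mapping are illustrative only.
EVENT_TO_ATTACK = {
    "brute_force_login": ("T1110", "Brute Force"),
    "new_admin_account": ("T1136", "Create Account"),
    "mass_file_access":  ("T1005", "Data from Local System"),
}

def map_to_attack(event_type: str) -> str:
    """Return the ATT&CK technique for an event type, if known."""
    technique = EVENT_TO_ATTACK.get(event_type)
    if technique is None:
        return "unmapped; requires manual triage"
    tid, name = technique
    return f"{tid} ({name})"

print(map_to_attack("brute_force_login"))  # T1110 (Brute Force)
```

Even a lookup this simple gives a triaging analyst a shared vocabulary: instead of "weird login spike," the alert reads "T1110 (Brute Force)," which points directly at documented detection and mitigation guidance.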

There’s a race happening in cyberspace, and data sits at the center of that feud. Security teams are depending more on automated visibility research to stay ahead of cyber-adversaries. By using a modern approach that leverages AI-based behavioral analytics to identify activity as anomalous and risky, and automatically maps it to the techniques identified in the MITRE ATT&CK framework, defenders can detect, trace, and respond to the steps an attacker has taken before they cause significant damage.

Ofer Gayer, group product manager, Exabeam
