
How to combat AI-produced phishing attacks


As far as security teams are concerned, artificial intelligence (AI) is both friend and foe. As much as AI is used to help defend enterprise systems, bad actors are using it to enhance their attacks. This is increasingly true when it comes to phishing.

The changing face of phishing

Historically, phishing emails were easy for most people to spot. They were littered with grammatical mistakes, poor vocabulary and spelling, and page layouts that would make a first-grade art teacher wince. AI, however, has enabled attackers to significantly improve this output and craft phishing emails with genuinely convincing text. These emails look far more professional and will increasingly trick even the most careful readers into clicking on what they shouldn't.

Not only does AI help attackers build more convincing messages; it also produces content that reads more like genuine human communication, which makes these attacks even harder for email filters to spot.

BEC attacks

Because of the effectiveness of GenAI and large language models (LLMs), attackers can better impersonate influential (or at least the right) people within organizations, such as the CEO or someone from the IT or finance departments. This makes AI especially useful for scams that typically start with an email, such as Business Email Compromise (BEC) attacks, in which the attacker impersonates the CEO, another executive, or even a business partner to trick employees into making a wire transfer. These attacks have historically arrived as phishing emails; increasingly, expect AI-driven social media and text messaging, deepfake videos, and deepfake voicemails. Attackers are even using virtual meeting platforms.

According to SC Magazine, BEC attacks have increased significantly in recent years, surging 81% in 2022 and 175% over the preceding two years. The median open rate for text-based BEC emails during the second half of 2022 reached 28%, with 15% of employees responding to these attacks.

BEC scams have resulted in substantial losses. From 2016 to 2021, BEC scams led to $43 billion in losses worldwide, a 65% increase. In 2021 alone, attackers made $2.4 billion globally from BEC attacks reported to the FBI, roughly 49 times the reported yield from ransomware ($49.2 million) and about a third of the $6.9 billion in total reported cybercrime losses.

Fighting AI fire with more AI fire

Increasingly, enterprises will need to fight AI with AI. Advanced machine learning algorithms, anomaly detection, and real-time monitoring can help identify and respond to malicious communications.

AI-powered email protection systems should analyze email content and subject lines for tone and precise wording so that suspicious conversations can be flagged. AI-powered anti-phishing tools can also scan inbound messages for key indicators that a phishing attack is underway, spotting brand spoofing and impersonation attempts in real time using SPF, DKIM, and DMARC authentication checks along with email header anomaly analysis.
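To make this concrete, here is a minimal Python sketch of the kind of header checks such a tool might run, using only the standard library. The heuristics and the function name are illustrative assumptions, not any particular vendor's implementation; a production system would layer many more signals on top.

    # A simplified sketch of header-based phishing checks, assuming raw
    # RFC 822 message text as input. The checks are illustrative only.
    import re
    from email import message_from_string
    from email.utils import parseaddr

    def authentication_flags(raw_message: str) -> list[str]:
        """Return a list of suspicious indicators found in the headers."""
        msg = message_from_string(raw_message)
        flags = []

        # 1. Look for SPF/DKIM/DMARC failures recorded by the receiving
        #    mail server in the Authentication-Results header (RFC 8601).
        auth_results = msg.get("Authentication-Results", "")
        for mechanism in ("spf", "dkim", "dmarc"):
            match = re.search(rf"{mechanism}=(\w+)", auth_results)
            if match and match.group(1).lower() != "pass":
                flags.append(f"{mechanism} result: {match.group(1)}")

        # 2. Flag a mismatch between the visible From domain and the
        #    Return-Path domain, a common impersonation indicator.
        from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
        return_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2]
        if from_domain and return_domain and from_domain != return_domain:
            flags.append(f"From/Return-Path mismatch: {from_domain} vs {return_domain}")

        return flags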

Another key to combating AI-powered phishing attacks is to provide ways for employees to report suspected attacks from right within their email client or web browser. No anti-phishing technology, however effective its machine learning algorithms, will recognize every attack. By relying in part on employee reports, the organization gains optimal insight into which attacks are underway and who is being targeted.

Classifying URLs in real time is also essential, and properly trained AI systems do an excellent job of it. So many malicious URLs are created today that human analysts could never triage them fast enough; with the help of machine learning algorithms, brand-new malicious URLs can be identified as the threats they are.
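As a sketch of how this works, the following Python example trains a toy classifier on character n-grams of URLs using scikit-learn. The handful of training URLs and the pipeline settings are illustrative assumptions only; a real deployment would train on millions of labeled URLs and retrain continuously as new threats appear.

    # A minimal sketch of ML-based URL classification, assuming
    # scikit-learn is installed and labeled URLs are available
    # (1 = malicious, 0 = benign). Training data here is toy data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Character n-grams capture lexical quirks of malicious URLs (odd
    # subdomains, long random tokens, lookalike brand names) without
    # hand-crafted features.
    classifier = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(max_iter=1000),
    )

    train_urls = [
        "https://login.example-bank.com.verify-account.ru/session",
        "http://paypa1-secure.xyz/update/credentials",
        "https://www.wikipedia.org/wiki/Phishing",
        "https://github.com/scikit-learn/scikit-learn",
    ]
    labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

    classifier.fit(train_urls, labels)

    # Score a never-before-seen URL in real time.
    print(classifier.predict_proba(["http://secure-login.example.tk/verify"])[0][1])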

Finally, virtual sandboxing provides a crucial line of defense. In addition to attempting to block phishing emails and associated URLs at the internet gateway and on the endpoint, virtual sandboxing quarantines suspicious attachments in a virtual system far away from the endpoint. Obviously malicious links and attachments can be removed automatically, while unknowns are opened virtually; if there is a malicious payload, it never reaches the endpoint or any other system where it can do damage.
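The triage logic might look something like the following sketch, where the verdict values and routing decisions are hypothetical placeholders for whatever reputation service and sandbox a given gateway actually uses.

    # A minimal sketch of sandbox triage routing. The Verdict values and
    # the decision strings are hypothetical, not a real product's API.
    from enum import Enum

    class Verdict(Enum):
        KNOWN_BAD = "known_bad"     # matched threat intelligence
        UNKNOWN = "unknown"         # never seen before
        KNOWN_GOOD = "known_good"   # reputable, previously analyzed

    def triage_attachment(verdict: Verdict) -> str:
        """Decide what happens to an attachment before it can reach the endpoint."""
        if verdict is Verdict.KNOWN_BAD:
            # Obvious malicious attachments are stripped automatically.
            return "strip attachment; deliver sanitized message"
        if verdict is Verdict.UNKNOWN:
            # Unknowns detonate in an isolated virtual machine, far from
            # the endpoint; any payload never touches production systems.
            return "open in virtual sandbox; deliver only if it proves clean"
        return "deliver normally"

    print(triage_attachment(Verdict.UNKNOWN))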

The human factor remains

In the age of GenAI and AI-enhanced phishing threats, the human factor remains a critical last line of defense. Should maliciously crafted phishing emails slip past these layers of protection (and some small percentage undoubtedly will), a well-trained staff will be better prepared not to click on malware-laced attachments or malicious URLs. Enterprises will want to continue educating their staff about phishing attacks and testing their ability to spot them through automated phishing simulations and comprehensive reporting.

While AI is indeed both friend and foe to security teams, the good news is that it can be used as much to defend users as to attack them. The key is putting AI to good use to combat the bad.

George V. Hulme

An award-winning writer and journalist, George V. Hulme has covered business, technology, and IT security topics for more than 20 years. He currently freelances for a wide range of publications and is a security blogger at InformationWeek.com.
