
AI-as-a-service tools craft spear-phishing emails with minimal human input

Researchers from Singapore’s Government Technology Agency found that the AI pipeline was very effective at getting test subjects to not only click on a link, but also fill out a form field. (CEphoto, Uwe Aranas)

Researchers from Singapore demonstrated that they could leverage AI-as-a-service applications and APIs to craft convincing spear-phishing emails with little human effort or intervention, offering a glimpse into tactics that malicious scammers could plausibly adopt in the future.

The researchers, from Singapore's Government Technology Agency (GTA), designed what they describe as a phishing pipeline that replaces traditionally manual steps with automated AI services, allowing malicious actors to develop new campaigns with far less human effort. They then sent both manually crafted and AI-crafted phishing emails to volunteer human test subjects to see which were more effective.

Eugene Lin, associate cybersecurity specialist at GTA, said at last week's Black Hat conference that the AI pipeline "significantly outperformed the [manual] workflow for two out of three engagements" with human test subjects who volunteered for the study. (The third engagement was a very narrow victory for the manual campaign.) "When we added personalization, the AI pipeline performed even better, reaching up to 60% clicks in the first engagement," Lin added.

Moreover, the researchers found that the AI pipeline was very effective at getting test subjects to not only click on a link, but also fill out a form field — with conversion rates of up to 80%.

A diagram of the researchers' pipeline showed how real-life adversaries could potentially misuse various AI tools. To perform reconnaissance on the volunteer phishing targets, the researchers leveraged Humantic AI, a service that provides personality and behavioral insights for job candidates based on publicly available information such as LinkedIn profiles. This allowed them to generate phishing context: plain-text instructions on how to approach each target.

"We passed the [Humantic] API output into plain text instructions, describing a target and how to approach them," said Lin, noting that Humantic AI "is only one of the many sales and recruitment personalization APIs, as a service that [are] out there. Many of these companies have a free demo that allows anyone to register the API right away. As such, in a realistic scenario, any of these could have been accepted or adapted to use our pipeline."

After that, the researchers fed the plain-text instructions into another service, OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), a language model that uses deep learning to create human-sounding text. This resulted in a generated email that was “coherent and fairly convincing,” said Lin. “It even extrapolated from the fact that a target was in Singapore to cite a Singapore-specific law: the Personal Data Protection Act.”
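The generation step maps onto OpenAI's Python client as it existed in the GPT-3 era (the Completion endpoint). The prompt wording below is an assumption; the researchers did not publish their prompts.

```python
# Minimal sketch of the generation step using GPT-3's Completion endpoint,
# via the OpenAI Python client interface available at the time.
import openai

openai.api_key = "..."  # account API key

def generate_email(instructions: str) -> str:
    prompt = instructions + "\n\nEmail:\n"
    response = openai.Completion.create(
        engine="davinci",   # GPT-3's largest base model at the time
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,    # some creativity, but mostly on-topic
    )
    return response.choices[0].text.strip()
```

In a pipeline, the output of the recon sketch above would feed straight in as the instructions.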

It wasn’t perfect, however. Some human edits were still needed. “For example, it generated a realistic, but fake link, as well as a date that had already passed. It also somehow redacted its own email, which could actually be realistically interpreted as an unintended mistake by a human writer.” Still, correcting these issues in a spear-phishing email requires far less work on the cybercriminal’s part than conducting all of the research and generating the content manually.
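That review step could be approximated with simple checks that flag hallucinated links and stale dates for a human editor. The patterns below are deliberately simplistic and purely illustrative.

```python
# Sketch of the manual-review step the researchers still needed: flag
# possibly fabricated links and already-past dates in a generated draft
# so a human can correct them before sending.
import re
from datetime import datetime

URL_RE = re.compile(r"https?://\S+")
DATE_RE = re.compile(r"\b(\d{1,2} (January|February|March|April|May|June|July|"
                     r"August|September|October|November|December) \d{4})\b")

def review_flags(draft: str) -> list[str]:
    flags = [f"Verify link (may be fabricated): {u}" for u in URL_RE.findall(draft)]
    for raw, _month in DATE_RE.findall(draft):
        try:
            when = datetime.strptime(raw, "%d %B %Y")
            if when < datetime.now():
                flags.append(f"Date already passed: {raw}")
        except ValueError:
            pass
    return flags
```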

Bottom line: “the AI pipeline led to qualitative improvements by saving manpower and time, speeding up our rate of operations,” said Lin. And for context and content generation, “integrating AI helps to streamline and standardize operations. No longer is the input and output dependent on individual operator’s skill sets and predispositions.”

This set-up also allowed the researchers to integrate their infrastructure with other existing tools such as the Gophish open-source phishing framework, Lin continued. “This highlights how AI-as-a-service offers a step up in accessibility from open-source language models,” he noted.
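Handing a generated email to Gophish could look roughly like the sketch below, which posts it as a reusable template over Gophish's REST API. It assumes a local admin server on the default port and an API key from the settings page; treat the details as assumptions and confirm them against the Gophish documentation.

```python
# Sketch: push a generated email into Gophish as a template via its REST API.
import requests

GOPHISH_HOST = "https://localhost:3333"
API_KEY = "..."  # from the Gophish admin UI

def push_template(name: str, subject: str, body_text: str) -> dict:
    resp = requests.post(
        f"{GOPHISH_HOST}/api/templates/",
        params={"api_key": API_KEY},
        json={"name": name, "subject": subject, "text": body_text, "html": ""},
        verify=False,  # Gophish ships with a self-signed cert by default
    )
    resp.raise_for_status()
    return resp.json()
```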

The presentation also covered potential defenses against such threats, especially as they continue to evolve.

Timothy Lee, associate cybersecurity specialist at the GTA, said that the companies providing AI-as-a-service offerings must act as a first line of defense, developing terms of usage and screening guidelines that hopefully deter misuse and abuse. Additionally, he suggested that solution suppliers ensure that usage of their products can be audited and traced.
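As a rough illustration of what such auditability might mean in practice, a provider could wrap its generation endpoint so that every request is logged against the calling account. This is a minimal sketch, not any vendor's actual design.

```python
# Illustrative audit wrapper: log who requested which generation, without
# storing raw secrets or prompt contents in the trail.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited_generate(api_key: str, prompt: str, generate_fn) -> str:
    # Hash the key and prompt so the trail identifies the account and
    # request without leaking credentials or content.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "caller": hashlib.sha256(api_key.encode()).hexdigest()[:16],
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return generate_fn(prompt)
```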

Meanwhile, businesses may need to further emphasize anti-phishing training as part of their security awareness programs, Lee continued.

GTA researchers Glenice Tan and Tan Kee Hock also contributed to the project but did not present at the conference.
