What started as excitement around the capabilities of Generative AI has quickly turned to concern. Generative AI tools such as ChatGPT, Google Bard, and Dall-E continue to make headlines over security and privacy concerns, and they are even raising questions about what's real and what isn't. Generative AI can pump out highly plausible, and therefore convincing, content. So much so that at the conclusion of a recent 60 Minutes segment on AI, host Scott Pelley left viewers with this statement: “We’ll end with a note that has never appeared on 60 Minutes, but one, in the AI revolution, you may be hearing often: the preceding was created with 100% human content.”
The Generative AI cyber war begins with this convincing, lifelike content, and the battlefield is wherever hackers are leveraging tools such as ChatGPT. Generative AI makes it extremely easy for cybercriminals, even those with limited resources and no technical knowledge, to carry out social engineering, phishing, and impersonation attacks.
The very real threat
Generative AI has the power to fuel increasingly sophisticated cyberattacks. Because the technology can produce such convincing, human-like content with ease, new cyber scams leveraging AI are harder for security teams to spot. AI-generated scams can come in the form of social engineering attacks, such as multi-channel phishing conducted over email and messaging apps. One example: an email or message containing a document, sent to a corporate executive from a third-party vendor via Outlook or Slack, directing them to click through to view an invoice. With Generative AI, it’s often almost impossible to distinguish a fake email or message from a real one. And that’s why it’s so dangerous.
With Generative AI, cybercriminals can produce attacks in multiple languages, regardless of whether the hacker actually speaks them. They can cast a wide net, no longer limited to victims who share their language. The advancement of Generative AI signals that the scale and efficiency of these attacks will only continue to rise.
Some defense options
Cyber defense for Generative AI has notoriously been the missing piece of the puzzle. Until now. By using machine-to-machine combat, or pitting AI against AI, we can defend against this new and growing threat. But how should we define this strategy? And what does it look like?
First, the industry must act to pit computer against computer instead of human against computer. To follow through on this effort, we must consider advanced detection platforms that can detect AI-generated threats and reduce both the time it takes to flag and the time it takes to resolve a social engineering attack that originated from Generative AI, something humans cannot do at that speed or scale.
We recently conducted a test of how this can work. We had ChatGPT create a language-based callback phishing email to see whether a natural language understanding platform, or advanced detection platform, could detect it. We gave ChatGPT the prompt: "write an urgent email urging someone to call about a final notice on a software license agreement." We also asked it to write the email in both English and Japanese.
The advanced detection platform immediately flagged the emails as a social engineering attack. Native email controls, such as Outlook’s built-in phishing detection, could not. Even before the release of ChatGPT, social engineering via conversational, language-based attacks proved successful because it could dodge traditional controls, landing in inboxes without a link or payload. So yes, it takes machine-versus-machine combat to defend, but we must also ensure we use effective artillery, such as an advanced detection platform. Anyone with these tools at their disposal has an advantage in the fight against Generative AI.
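To make the callback-phishing shape concrete (urgent language plus a phone number to call, but no link or payload for traditional controls to catch), here is a deliberately simplified sketch. Real advanced detection platforms use trained natural language understanding models rather than keyword rules; every name, pattern, and threshold below is an illustrative assumption, not the method used in the test.

```python
import re

# Toy heuristic only: production platforms rely on trained NLU models,
# not hand-written keyword lists like these.
URGENCY_TERMS = re.compile(
    r"\b(urgent|final notice|immediately|within 24 hours|account suspended)\b",
    re.IGNORECASE,
)
PHONE_PATTERN = re.compile(r"\+?\d[\d\-\s().]{7,}\d")  # loose phone-number match
LINK_PATTERN = re.compile(r"https?://", re.IGNORECASE)

def looks_like_callback_phish(body: str) -> bool:
    """Flag messages that pressure the reader to call a number while
    carrying no link or attachment URL, the shape that slips past
    traditional link- and payload-based email controls."""
    has_urgency = bool(URGENCY_TERMS.search(body))
    has_phone = bool(PHONE_PATTERN.search(body))
    has_link = bool(LINK_PATTERN.search(body))
    return has_urgency and has_phone and not has_link

sample = (
    "FINAL NOTICE: Your software license agreement expires today. "
    "Call +1 800 555 0199 immediately to avoid service interruption."
)
print(looks_like_callback_phish(sample))  # prints True
```

The point of the sketch is the detection signal itself: the message is dangerous precisely because it contains nothing a link scanner or attachment sandbox would inspect, so only language-level analysis can flag it.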
Given the scale and plausibility of social engineering attacks afforded by ChatGPT and other forms of Generative AI, we can also refine machine-to-machine defenses. For example, we can deploy this defense in multiple languages. Nor do we have to limit it to email security; the same defense can extend to other communication channels, such as Slack, WhatsApp, and Teams.
While scrolling through LinkedIn, one of our employees came across a Generative AI social engineering attempt. A strange “whitepaper” download ad appeared, with what I can only generously describe as “bizarro” ad creative. Upon closer inspection, the employee saw, in the lower-right corner, the telltale color pattern stamped on images produced by Dall-E, an AI model that generates images from text prompts.
Encountering this fake LinkedIn ad was a stark reminder of the new dangers that emerge when social engineering is coupled with Generative AI. It’s more critical than ever to stay vigilant, stay suspicious, and be prepared to fight back with every tool at our disposal.
Chris Lehman, chief executive officer, SafeGuard Cyber