Generative AI, Phishing, Email security

AI drives both holiday phishing scams and email defenses


Holiday phishing scams in 2023 will come with a new twist: generative artificial intelligence.

The booming technology, inbox defenders warn, gives phishing attackers a new weapon to rob companies and consumers as they busily track holiday packages and spend a record $270 billion on online shopping.

The good news is that cybersecurity professionals are also flexing their AI muscles, building on innovative machine-learning technology to protect against phishing fraud.   

A lot is on the line.

Online holiday fraud is a Super Bowl-like event for cybercriminals. In the United States alone, scammers raked in more than $73 million during the 2022 holiday season, according to the FBI. That number is likely going up.

Researchers warn AI likely played a part in the 1,265% uptick in phishing email scams (PDF) in the past year.

ChatGPT and its ‘evil twins’ put new twist on scam emails

Cybercriminals’ use of ChatGPT is nearly as old as the chatbot itself, and malicious spinoffs like FraudGPT are making phishing and spear-phishing faster and easier. IBM X-Force researchers found that ChatGPT was nearly as effective at writing convincing phishing emails as human social engineering experts while taking just a fraction of the time.

“What is interesting about today’s tools such as ChatGPT is that they can create better spear-phishing content with a much more relevant reference to things that are very personal to the intended targets,” noted David Raissipour, chief technology and product officer at Mimecast.

This is especially dangerous during the holidays, as previous research by Barracuda Networks found spear-phishing attacks spiked by more than 150% the week before Christmas.

AI tools can smooth out the spelling and grammar mistakes that are usually a dead giveaway for scam emails and texts, Raissipour said. AI can also create more convincing mimics of legitimate websites, making them “indistinguishable from the real thing,” Pikes Peak State College AI Policy Chair Dennis Natali told KOAA.

Natali listed “12 Frauds of Christmas” for 2023 such as fake delivery tracking links, charity scams and gift card scams, most of which are traditional tactics that scammers have held onto for decades. The difference this year is the generative AI twist that could boost both volume and success rates.

Legitimate GenAI applications like ChatGPT have some safeguards against malicious use, although threat actors will always find a way to get around them with carefully worded prompts. SlashNext’s State of Phishing report (PDF) notes how hackers use forums to trade tips on how to “jailbreak” the app to craft scam emails.

As for WormGPT, FraudGPT and similar copycat applications, SlashNext found a majority of these tools simply provide an interface to connect to jailbroken versions of ChatGPT. WormGPT, however, uses its own custom language model built on GPT-J and is trained on data sources that make it especially useful for cyberattacks.

SlashNext researchers who tested WormGPT stated: “The results were striking, as WormGPT generated an email that was not only highly persuasive but also strategically cunning, highlighting its potential for sophisticated phishing and BEC attacks.”

Google, Norton among vendors playing defense in holiday AI arms race

Thanks to generative AI, typical holiday scams such as brand spoofing, spear-phishing and driving people to malware downloads are now more cunning and convincing. But defenders are also raising their AI game.

On Nov. 29, Google released its open-source text vectorizer called RETVec (Resilient & Efficient Text Vectorizer). Google explained the technology trains spam filter AI models to better classify text manipulations such as:

  1. Homoglyphs: Text characters that may appear identical but have different meanings, such as the capital letter O and the number zero (0).
  2. Invisible characters: The blank spaces that attackers may use to pad and space out suspicious text in order to bypass spam filters.
  3. Keyword stuffing: The use of hidden text included in the body of an email to make it appear more legitimate to spam classifiers.
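For illustration, here is a minimal Python sketch of the first two manipulations and a naive countermeasure. The keyword, test string and helper function are invented for this example; this is not RETVec, which learns to handle such manipulations rather than relying on hand-written rules:

```python
import unicodedata

# Zero-width and other invisible characters attackers use to pad keywords.
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def normalize(text: str) -> str:
    """Fold compatibility characters (NFKC) and strip invisible padding."""
    text = unicodedata.normalize("NFKC", text)
    return "".join(ch for ch in text if ch not in INVISIBLE)

# "free" padded with zero-width spaces, plus a Cyrillic 'а' homoglyph in "card".
spam = "Fr\u200bee g\u200bift c\u0430rd!"

print("free" in spam.lower())             # False: naive keyword filter evaded
print("free" in normalize(spam).lower())  # True: invisible padding stripped
print("card" in normalize(spam).lower())  # False: the homoglyph still evades
```

Note that NFKC normalization does not fold cross-script homoglyphs, so the Cyrillic 'а' survives the cleanup. That gap is one reason a model like RETVec is trained on typo- and homoglyph-augmented data instead of depending on rule-based normalization alone.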

Google claims RETVec improved spam detection by 38%, reduced false positives by 19.49% and reduced false negatives by 17.71% when tested within Gmail over the past year.

RETVec includes a novel UTF-8 character encoder and a small embedding model that is pretrained using pair-wise metric learning. The embedding model is trained on a typo-augmented data set encompassing 1.9 billion tokens spanning 157 languages, which is key for spotting text manipulations like homoglyph substitutions.
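Pair-wise metric learning trains the embedding so that a word and its typo- or homoglyph-perturbed variants land close together in embedding space while unrelated strings are pushed apart. A toy sketch of one common pair-wise objective, a contrastive loss, in plain Python (the embeddings and names here are invented for illustration; RETVec's actual training setup differs):

```python
import math

def contrastive_loss(emb_a, emb_b, same_word, margin=1.0):
    """Pair-wise metric learning objective:
    - matching pairs (a word and a perturbed variant) are pulled together;
    - non-matching pairs are pushed at least `margin` apart."""
    d = math.dist(emb_a, emb_b)
    if same_word:
        return d ** 2                 # penalty grows with distance
    return max(0.0, margin - d) ** 2  # zero once the pair is far enough apart

word    = [0.10, 0.90]  # toy embedding of "free"
variant = [0.12, 0.88]  # toy embedding of a zero-width-padded "free"
other   = [0.90, 0.10]  # toy embedding of an unrelated word

print(contrastive_loss(word, variant, same_word=True))  # small: already close
print(contrastive_loss(word, other, same_word=False))   # 0.0: beyond the margin
```

A model pretrained this way maps "free", "fr​ee" and "frее" (with Cyrillic е) to nearby points, so the downstream spam classifier sees manipulated and clean text as nearly the same input.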

Additionally, the RETVec model’s small size (about 200,000 parameters instead of millions) boosts model efficiency, reducing Tensor Processing Unit (TPU) usage by 83%, Google researchers said.

Gmail users may see fewer holiday phishing emails now that RETVec has been deployed, and its open-source release gives other defenders a low-cost way to step up AI-based email protections. It can also be used to protect against the ever-popular SMS phishing (smishing), Google researcher Elie Bursztein told SC Media.

Norton is another player putting AI-powered scam detection in the hands of everyday users of email, social media and text applications. The company’s free, early-access Norton Genie app, first announced in July 2023, is part text analyzer and part AI chatbot. Available in-browser and as an iOS and Android app, Norton Genie scans uploaded messages for signs of phishing and can also generate answers to users’ questions about suspicious content.

Genie is trained on “millions of scam messages” and is “always learning” from the content that users upload, Norton said. To determine if an email, message, or website is phishing-related, users can copy and paste text, upload a screenshot, or send a link to Genie, which will automatically extract and scan text and check links for safety.

Security industry veterans are not the only ones working to arm consumers and businesses with AI-driven phishing defenses. New York-based startup Jericho Security raised $3 million in a pre-seed funding round this August; the company is developing AI tools specifically tailored to help organizations defend against phishing attacks powered by generative AI.

The company said it “hopes to pioneer a new sector in cybersecurity focused on defending customers from AI-driven attack vectors.”
