
Artificial intelligence offers swindlers a new tool for romance scams


Scammers have a powerful new weapon to wield this Valentine's Day when drafting love letters to whisper sweet nothings at scale: artificial intelligence such as ChatGPT.

Romance scams use fake online identities to gain a victim's affection and trust for financial gain. While it may sound implausible that people could be fooled by machine-generated love messages, security company McAfee's Modern Love Research Report found that 7 in 10 people could not tell whether a love letter was written by AI.

To conduct the research, the team presented a ChatGPT-written love letter to more than 5,000 people worldwide: 33% thought a person wrote it, while another 36% said they could not tell one way or the other.

My Dearest,

The moment I laid eyes on you, I knew that my heart would forever be yours. Your beauty, both inside and out, is unmatched and your kind and loving spirit only adds to my admiration for you.

You are my heart, my soul, my everything. I cannot imagine a life without you, and I will do everything in my power to make you happy. I love you now and forever...

With artificial intelligence now so easy to use, particularly tools such as ChatGPT that anyone with a web browser can access, scammers can turn the technology to malicious ends, especially around Valentine's Day, when they target people who have let their guard down while looking for love, said Steve Grobman, chief technology officer at McAfee.

Beyond ChatGPT, deepfake images, AI-generated conversations and emotion analysis will soon be deployed at scale, making current romance scams look like a mild warm-up to the coming AI-driven onslaught, added Bud Broomhead, chief executive of Viakoo.

AI-assisted social engineering tactics

The Federal Trade Commission (FTC) and Federal Bureau of Investigation (FBI) issued romance scam warnings last week as Valentine's Day approached.

Romance scams account for some of the highest financial losses of any online crime. According to the FTC's latest data, consumer-reported losses hit a record high of $1.3 billion in 2022, an increase of nearly 138% from 2021.

This massive spike in losses comes as scammers continue to refine their techniques. The FBI, for example, has warned of a rise over the past few months in a tactic called 'pig butchering,' in which scammers build long-term relationships with victims on social media and dating applications, win their trust, and then convince them to invest in fraudulent cryptocurrency platforms.

Over the past three years, there has also been an eightfold surge in sextortion, the practice of convincing victims to share explicit photos and then threatening to send them to the victims' social media contacts, according to the FTC.

Fight scams with modern identity verification

Michael Jabbara, vice president of fraud services at Visa, said awareness and detection are the most critical tools for mitigating romance scams, and that all parties, from consumers to businesses to financial services firms, need to participate.

"In addition to consumers taking action to protect their personal information online, financial services organizations and other companies should invest in cutting-edge technologies such as AI and ML-powered solutions to detect and thwart fraudulent trends, like romance scams. For example, Visa’s Advanced Authorization uses various AI and ML techniques to determine the likelihood that a given transaction is fraudulent within 300 milliseconds," said Jabbara.

Bala Kumar, chief product officer at Jumio, added that dating sites and apps should adopt modern identity verification technologies, such as document-centric identity proofing, which use biometrics and AI to better protect their users.

"Many dating sites still leverage traditional identity verification methods, like knowledge-based authentication (KBA), where users are asked to answer specific security questions. Such methods are no longer considered secure since so much of our personal data can be accessed through hacking and legally through data aggregators," said Kumar.

Some applications have upgraded their identity verification methods — Hinge has leveraged video selfie verification to confirm user identity, while Meta has tested similar tools for Facebook Dating to verify users' ages.
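A simplified sketch of the document-centric identity proofing flow described above (authenticate the government ID, then match a live selfie against the ID photo) might look like the following. The helper functions are stand-ins that return fixed values; the names, threshold and scores are hypothetical, not any vendor's real API.

```python
# Hypothetical sketch of document-centric identity proofing with a selfie check.
# The helpers below are stand-ins returning fixed values; a real deployment
# would call a document-authentication and face-matching service.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_authentic: bool
    face_match_score: float  # similarity between ID photo and live selfie, 0.0-1.0
    approved: bool

def authenticate_document(id_image: bytes) -> bool:
    # Stand-in: real systems inspect security features, fonts, MRZ checksums, etc.
    return True

def match_selfie_to_document(id_image: bytes, selfie: bytes) -> float:
    # Stand-in: real systems compare face embeddings and run liveness detection.
    return 0.92

def verify_user(id_image: bytes, selfie: bytes, threshold: float = 0.85) -> VerificationResult:
    authentic = authenticate_document(id_image)
    score = match_selfie_to_document(id_image, selfie)
    return VerificationResult(authentic, score, authentic and score >= threshold)

print(verify_user(b"<id photo bytes>", b"<selfie bytes>"))
```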

Menghan Xiao

Menghan Xiao is a cybersecurity reporter at SC Media, covering software supply chain security, workforce/business, and threat intelligence. Before SC Media, Xiao studied journalism at Northwestern University, where she received a merit-based scholarship from Medill and the Jack Modzelewski Scholarship Fund.
