As one of the fastest-growing consumer applications ever, it’s no surprise ChatGPT has caught the attention of cybercriminals, as well as a curious public.
Scammers have been quick to find ways to cash in on the hype around OpenAI’s large-language-model-powered artificial intelligence tool.
In a research report published on Wednesday, IT security vendor Sophos said one mobile app developer had raked in $1 million in a month, charging users $7 a week for the same ChatGPT service OpenAI provides for free.
During its research, Sophos uncovered several ChatGPT-related “fleeceware” apps in the Google Play store and Apple’s App Store. These “FleeceGPT” apps are so named because their free versions have near-zero functionality, constantly display ads, and coerce unsuspecting users into signing up for subscriptions that can cost hundreds of dollars a year.
“Scammers have and always will use the latest trends or technology to line their pockets. ChatGPT is no exception,” said Sophos principal threat researcher Sean Gallagher in a statement.
The developers of fleeceware deliberately bombard users with ads until they sign up for a subscription, Gallagher said.
“They’re banking on the fact that users won’t pay attention to the cost or simply forget that they have this subscription. They’re specifically designed so that they may not get much use after the free trial ends, so users delete the app without realizing they’re still on the hook for a monthly or weekly payment.”
The $1 million-in-a-month app highlighted in Sophos’ report is “Genie AI Chatbot,” an app with “some fleeceware-like behaviors” available in the Apple App Store.
During installation, there are prompts to allow the app to track activities across other apps and websites, and to rate the app before it’s even fully launched. Genie also asks for permission to send notifications. These prompts are followed by one encouraging enrollment in a free trial or immediate enrollment in a longer subscription: $7 a week or $70 a year.
“Unlike some of the other [FleeceGPT apps], Genie actually works at something approaching full advertised functionality without the trial or subscription — but only accepts four queries per day. It then prompts users with the trial offer again,” the Sophos report said.
The researchers advised anyone who discovers they have installed a fleeceware app that simply deleting it will not end an existing subscription. Users need to cancel the subscription through their app store account, or they will continue to be charged.
Sophos’ findings align with those of other security research companies that have been tracking a wave of fraudulent activity around OpenAI’s large language model since it was made available to the broader public late last year.
Earlier this month, Meta warned of the emergence of “aggressive and persistent” new strains of malware targeting business users of popular platforms including Facebook, Gmail and Outlook. The new malware included some posing as ChatGPT browser extensions, as well as productivity tools.
In a report last month, Palo Alto Networks’ Unit 42 outlined an increase in malicious activity related to websites impersonating ChatGPT and OpenAI. The fake sites were created with the intention of tricking users into sharing personal information, or paying for what they thought was the ChatGPT service.
From November last year through early April, Unit 42 researchers observed a 910% increase in ChatGPT-related web domains. Over the same time frame, DNS security logs showed 17,818% growth in related squatting domains.
Palo Alto Networks’ URL filtering system caught around 118 ChatGPT-related malicious URLs per day.
“Typically, scammers create a fake website that closely mimics the appearance of the ChatGPT official website, then trick users into downloading malware or sharing sensitive information,” the Palo Alto report said.
Before the release of the ChatGPT API, there were several open-source projects that allowed users to connect to ChatGPT via various automation tools.
“Given the fact that ChatGPT is not accessible in certain countries or regions, websites created with these automation tools or the API could attract a considerable number of users from these areas,” the report noted. “This also provides threat actors the opportunity to monetize ChatGPT by proxying their service.”
To mitigate the risk of being scammed, Unit 42 said ChatGPT users should exercise caution with suspicious emails or links related to ChatGPT, and ensure they only access the service through the official OpenAI website.