
Employees are entering sensitive business data into ChatGPT


Employees may be putting confidential business information at risk by entering sensitive data into ChatGPT, the wildly popular artificial intelligence chatbot. 

In the meantime, bad actors are looking to take advantage of its popularity by creating a fake Chrome extension to hijack Facebook accounts and install backdoors. The security firm Guardio reported on the malicious extension recently and said it has since been removed from the Chrome Web Store.

According to a recent report by the security firm Cyberhaven, the content people input into ChatGPT is used by the chatbot’s maker, OpenAI, as training data to improve the technology. Among the 1.6 million workers using Cyberhaven’s products, only 5.6% of employees have used the technology in the workplace, yet Cyberhaven Labs data shows that 4.9% of those workers have tried at least once to paste company data into ChatGPT since it launched three months ago.

According to the company, firms such as JP Morgan and Verizon have blocked access to ChatGPT over such concerns, and an attorney with Amazon warned employees in January not to input confidential information into the chatbot.

On March 1, Cyberhaven said it detected a record 3,381 attempts to paste corporate data into ChatGPT per 100,000 employees, incidents it defines as “data egress events.”

The cybersecurity firm also said that fewer than 1% of employees (0.9%) are responsible for 80% of data egress events.

Fake ChatGPT extension harvested browser info

To make matters worse, some prospective users may be giving their data straight to scammers. In February, Guardio reported a new variant of a malicious fake ChatGPT browser extension that abused the chatbot’s brand, hijacking high-profile Facebook business accounts to run paid media campaigns at those businesses’ expense and spread the extension further.

The stealer extension, called “Quick access to Chat GPT,” was promoted in Facebook-sponsored posts and did connect to the chatbot via ChatGPT’s API. However, the malicious extension also harvested all the information it could from the browser, including cookies of authorized active sessions for any service, and used tailored tactics to take over Facebook accounts.

“Now, once the victim opens the extension window and writes a question to ChatGPT, the query is sent to OpenAI’s servers to keep you busy — while in the background it immediately triggers the harvest,” Guardio wrote on its blog March 8.

As Nati Tal, head of Guardio Labs, noted in the post, the extension also abused Facebook’s APIs in a way that should have triggered the social-media giant’s policy enforcers. 

Before it was removed from the official Chrome Web Store, the extension was being installed more than 2,000 times a day after appearing on the scene March 3.

Stephen Weigand

Stephen Weigand is managing editor and production manager for SC Media. He has worked for news media in Washington, D.C., covering military and defense issues, as well as federal IT. He is based in the Seattle area.
