Threat actors have been leveraging OpenAI's popular ChatGPT chatbot for malicious cyber activity, the most serious of which include phishing, social engineering, and malware development, CRN reports.
ChatGPT's ability to mimic human writing and generate working code has given cybercriminals with limited programming skills, as well as those who are not fluent in English, new opportunities to craft phishing and social engineering campaigns, a report from Recorded Future revealed.
Researchers also found that ChatGPT, drawing on malware code available in open-source repositories, could be used to create code variations that bypass antivirus detection and to devise workarounds for exploiting various vulnerabilities. Attackers have likewise leveraged ChatGPT to generate information stealers, cryptocurrency stealers, and remote access trojans.
"We have identified several payloads written by ChatGPT, shared openly on these sources, which function as a number of different malware types," the researchers said. Netrix Global CEO Russell Reeder noted that while ChatGPT has its benefits, its potential for exploitation should prompt increased regulation.