Europol reported that threat actors have already been leveraging ChatGPT to facilitate cybercrime and other fraudulent activity, and it expects malicious use of the AI chatbot to increase further, according to The Register. Beyond enabling phishing and disinformation operations, ChatGPT could also be exploited to generate malicious code, lowering the barrier to entry for malware development, Europol noted in its report.

"For a potential criminal with little technical knowledge, this is an invaluable resource. At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi," Europol said.

Europol also warned that further development of AI capabilities could prompt the emergence of illicit large language models on the dark web. "Finally, there are uncertainties regarding how LLM services may process user data in the future: will conversations be stored and potentially expose sensitive personal information to unauthorized third parties? And if users are generating harmful content, should this be reported to law enforcement authorities?" said Europol.