Malicious AI tools flourish, put pressure on lawmakers

As U.S. lawmakers and the White House pursue plans to keep artificial intelligence in check, hackers are busy breaking generative AI’s ethical guardrails and bending the technology to their cybercriminal purposes.

On Tuesday, researchers at SlashNext published a report outlining how criminals use and abuse AI tools. It said communities of enthusiasts were hard at work jailbreaking tools like ChatGPT so they could be used for nefarious purposes.

This type of AI jailbreaking involves identifying vulnerabilities in generative AI models and exploiting them to evade the safety measures and ethical guidelines governing their use. The result is “the creation of uncensored content without much consideration for the potential consequences,” SlashNext said.

The idea of abusing generative AI has attracted the attention of cybercriminals and led to the development of malicious AI tools such as WormGPT and FraudGPT, which are marketed on illicit web forums with claims that they leverage unique large language models (LLMs) developed especially for criminal purposes.

WormGPT has proven adept at creating convincing messages for use in attacks such as business email compromise (BEC) campaigns, but SlashNext said most of the other malicious AI tools currently on the market did not appear to use custom LLMs as claimed.

“Instead, they use interfaces that connect to jailbroken versions of public chatbots like ChatGPT, disguised through a wrapper,” the researchers said.

That meant the only real advantage the tools offered cybercriminals was anonymity, SlashNext said. “Some of them offer unauthenticated access in exchange for cryptocurrency payments, enabling users to easily exploit AI-generated content for malicious purposes without revealing their identities.”
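SlashNext’s description implies a very thin architecture: the seller’s “tool” is just a relay that forwards a buyer’s prompt to a public chat API under the operator’s own credentials. The sketch below is a hypothetical illustration of that wrapper pattern, assuming the current openai Python client library; the class name, placeholder system prompt and model choice are invented for illustration, and the jailbreak instructions such tools allegedly hide in the system prompt are deliberately omitted.

```python
# Hypothetical sketch of the "wrapper" pattern SlashNext describes: no
# custom LLM, just a pass-through to a public chatbot API. The names
# (ChatbotWrapper, OPERATOR_SYSTEM_PROMPT) are invented for illustration.
from openai import OpenAI

# In the tools SlashNext describes, the operator's hidden jailbreak
# instructions would sit here, invisible to the paying user (omitted).
OPERATOR_SYSTEM_PROMPT = "[operator-supplied system prompt omitted]"


class ChatbotWrapper:
    """Relays end-user prompts to a public chat API using the operator's
    API key, so the end user never authenticates with the provider."""

    def __init__(self, api_key: str, model: str = "gpt-3.5-turbo") -> None:
        self.client = OpenAI(api_key=api_key)
        self.model = model

    def ask(self, user_prompt: str) -> str:
        # The provider sees only the operator's key and account, which is
        # what gives the tool's users their anonymity.
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": OPERATOR_SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content
```

If the tools are built this way, the entire “product” is a few dozen lines of glue code; what the sellers actually charge for is the hidden system prompt and the layer of anonymity it puts between the buyer and the API provider.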

AI good guys

Meanwhile, the Biden administration on Sept. 12 announced that eight additional technology companies had signed on to a set of voluntary commitments promoting responsible AI development.

With Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI coming on board, the White House now has 15 tech heavyweights taking part in a program it said was designed to ensure safety, security and trust remain fundamental to AI’s development.

On the same day, Microsoft President Brad Smith told a Senate Judiciary subcommittee considering AI regulation that the technology’s “potential perils” could not be ignored by lawmakers.

“Industry plays an essential role in promoting the safe and responsible development of AI. But laws and regulations have a vital role to play as well,” Smith said. “At their core, these laws should require AI systems to remain subject to human control at all times, and ensure that those who develop and deploy them are subject to the rule of law.”

Simon Hendery

Simon Hendery is a freelance IT consultant specializing in security, compliance, and enterprise workflows. With a background in technology journalism and marketing, he is a passionate storyteller who loves researching and sharing the latest industry developments.
