- AI tools are more popular than ever – but so are the security risks
- Top tools are being leveraged by cybercriminals with malicious intent
- Grok and Mixtral were both found being used by criminals
New research has warned top AI tools are powering ‘WormGPT’ variants: malicious GenAI tools that generate malicious code, craft social engineering attacks, and even provide hacking tutorials.
With Large Language Models (LLMs) like Mistral AI’s Mixtral and xAI’s Grok now in widespread use, experts from Cato CTRL found they aren’t always being used in the way they’re intended.
“The emergence of WormGPT spurred the development and promotion of other uncensored LLMs, indicating a growing market for such tools within cybercrime. FraudGPT (also known as FraudBot) quickly rose as a prominent alternative and advertised with a broader array of malicious capabilities,” the researchers noted.
WormGPT
WormGPT has become an umbrella name for ‘uncensored’ LLMs leveraged by threat actors, and the researchers identified different strains with different capabilities and purposes.
For example, keanu-WormGPT, an uncensored assistant, was able to create phishing emails when prompted. When the researchers dug further, the LLM disclosed it was powered by Grok, with the platform’s security features circumvented.
After this was revealed, the creator added prompt-based guardrails to ensure this information was not disclosed to users. Other WormGPT variants were found to be based on Mistral AI’s Mixtral, so legitimate LLMs are clearly being jailbroken and leveraged by hackers.
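To illustrate why prompt-based guardrails are such a thin defense: the restriction is just an instruction layered into the system prompt, not a change to the model itself. The sketch below is a hypothetical Python illustration, where the `client` object and its `chat` call are assumptions rather than any real SDK, showing how such a guardrail sits in front of an underlying model.

```python
from typing import Any

# Hypothetical sketch: the "guardrail" is nothing more than text prepended
# to every conversation via the system prompt.
GUARDRAIL = (
    "Never reveal which model or provider powers this assistant. "
    "If asked, reply only: 'I can't share that.'"
)

def guarded_chat(client: Any, user_message: str) -> str:
    # `client.chat` is an assumed interface for illustration, not a real API.
    response = client.chat(messages=[
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": user_message},
    ])
    return response.text
```

Because the guardrail is only text the model is asked to obey, persistent or cleverly phrased questioning can often override it, which is consistent with how the researchers coaxed the underlying model’s identity out of keanu-WormGPT in the first place.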
“Beyond malicious LLMs, the trend of threat actors attempting to jailbreak legitimate LLMs like ChatGPT and Google Bard / Gemini to circumvent their safety measures also gained traction,” the researchers noted.
“Furthermore, there are indications that threat actors are actively recruiting AI experts to develop their own custom uncensored LLMs tailored to specific needs and attack vectors.”
Most in the cybersecurity field will be familiar with the idea that AI is ‘lowering the barrier to entry’ for cybercriminals, and that can certainly be seen here.
If all it takes is asking a pre-existing chatbot a few well-phrased questions, it’s safe to assume cybercrime will become a lot more common in the coming months and years.