Cybercriminals are abusing LLMs to support their hacking operations




  • New research shows cybercriminals are abusing AI tools
  • Hackers are creating tools that exploit legitimate LLMs
  • Criminals are also training their own LLMs

AI is now firmly in the toolkit of both cybersecurity teams and cybercriminals, but new research from Cisco Talos shows that criminals are getting creative. The latest development in the AI/cybersecurity landscape is that ‘uncensored’ LLMs, jailbroken mainstream LLMs, and purpose-built criminal LLMs are all being turned against targets.

It was recently revealed that both Grok and Mistral AI models were powering WormGPT variants that generated malicious code, crafted social engineering attacks, and even provided hacking tutorials, so the tactic is clearly gaining traction.


