
Tech Journalist

WormGPT: What you need to know about ChatGPT's evil brother


Generative artificial intelligence (AI) has surged in popularity recently, but malicious actors have also found ways to exploit the technology to accelerate cybercrime.

A new cybercrime tool called WormGPT, based on generative AI, has emerged on underground forums. Advertised as a blackhat alternative to legitimate GPT models, WormGPT enables criminals to launch advanced phishing and business email compromise (BEC) attacks.



Security researcher Daniel Kelley highlights the dangers of this tool, which automates the creation of persuasive fake emails personalized to the recipient, increasing the chances of a successful attack. The software utilizes the open-source GPT-J language model from EleutherAI. Its author has described it as the "biggest enemy of the well-known ChatGPT" and a tool for "illegal stuff."
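
Part of what makes a tool like WormGPT feasible is how little effort it takes to obtain an unrestricted base model. The sketch below is a minimal example, assuming the Hugging Face transformers library and the public EleutherAI/gpt-j-6b checkpoint (the prompt is a harmless placeholder); it shows that the raw model ships with no safety layer of its own:

# Minimal sketch: loading the open-source GPT-J model that WormGPT
# reportedly builds on. Assumes the `transformers` library is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # public checkpoint on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The raw model will complete any prompt it is given; any refusal
# behaviour has to be added by the application on top of it.
inputs = tokenizer("Write a short note to a colleague:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point is not this specific model but that open base weights carry no refusal behaviour of their own; the safeguards users see in ChatGPT live in layers that such checkpoints simply do not have.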


Unlike legitimate large language models (LLMs) such as OpenAI's ChatGPT and Google Bard, which have safeguards preventing their use for generating malicious code and phishing emails, WormGPT has no such guardrails, making it a potent weapon in the hands of cybercriminals.


Even those safeguards can be bypassed, however. In a previous disclosure, Israeli cybersecurity firm Check Point revealed how cybercriminals circumvent ChatGPT's restrictions by using its API, and how they sell brute-force software for hacking into accounts.


Moreover, bad actors are promoting ChatGPT "jailbreaks": specially crafted prompts designed to coax the model into harmful outputs, such as disclosing sensitive information or producing inappropriate content. Generative AI's ability to compose emails with impeccable grammar makes them seem legitimate and lowers the recipient's suspicion.


Separately, Mithril Security researchers have demonstrated a technique called PoisonGPT, in which they surgically modified the existing open-source model GPT-J-6B to spread disinformation and uploaded it to Hugging Face, a public model repository, where unsuspecting developers could integrate it into their applications: an instance of LLM supply chain poisoning. For the attack to succeed, the modified model must be uploaded under a name that impersonates a known company.
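
A practical mitigation against this kind of supply chain poisoning, sketched below under the assumption that models are pulled from Hugging Face with the transformers library, is to pin the exact repository name and commit revision rather than trusting whatever a lookalike name resolves to (the pinned hash here is a hypothetical placeholder):

# Defensive sketch: only load a model from an exact, audited repository
# id and commit revision, so a lookalike repository name cannot be
# swapped in and a silently re-uploaded model cannot replace the
# weights that were originally reviewed.
from transformers import AutoModelForCausalLM, AutoTokenizer

TRUSTED_REPO = "EleutherAI/gpt-j-6b"  # exact publisher/model id
PINNED_REVISION = "0f1e2d3c"          # hypothetical commit hash; record the real one after auditing

def load_verified_model(repo_id: str, revision: str):
    """Refuse any repository id that does not match the audited one."""
    if repo_id != TRUSTED_REPO:
        raise ValueError(f"Untrusted model repository: {repo_id!r}")
    tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
    model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision)
    return tokenizer, model

tokenizer, model = load_verified_model(TRUSTED_REPO, PINNED_REVISION)

Pinning a revision means a later, edited upload to the same repository cannot silently replace the audited weights, and an impersonating repository fails the name check outright.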


These developments underscore the risks generative AI poses in the wrong hands, and the need for increased vigilance to prevent its misuse for cybercrime and disinformation campaigns. As the technology advances, addressing its ethical and security implications will be crucial to protecting users and systems from harm.




