

Marijan Hassan - Tech Journalist

Weaponizing AI: Study uncovers the dark side of large language models

A new study by security researchers from Indiana University Bloomington has shed light on the growing underground exploitation of large language models (LLMs) for malicious purposes, with these malicious LLM applications referred to as "Malla." The research, which analyzed 212 real-world malicious LLM applications, highlights the increasing presence of Malla in underground marketplaces and details how these services operate.



The researchers found that cybercriminals are using these models to automate sophisticated cyberattacks, including creating phishing emails, generating malicious code, and designing deceptive websites. Moreover, even individuals with limited technical knowledge can use these malicious services to carry out complex cyberattacks, broadening the threat landscape. The development casts a shadow over the trustworthiness of LLM technologies.


Key findings of the study show that Malla vendors often exploit uncensored versions of LLMs or bypass protections in publicly available LLM APIs through what are called “jailbreak prompts.” These prompts allow malicious actors to circumvent the security filters built into platforms like OpenAI’s API, making it possible to generate harmful content. Of particular concern is the fact that popular models like GPT-3.5 have been heavily targeted, further demonstrating the vulnerabilities in current AI safety measures.


Malla services are not only widespread but also economically attractive: some offerings, such as WormGPT, generated over $28,000 in revenue in just two months. Compared with traditional malware vendors, Malla services are a cheaper option for attackers, with subscriptions priced between $5 and $199 per month.


The study calls for a stronger focus on improving the safety measures of LLMs and the platforms hosting them, as many Malla services remain active despite violating usage policies. The researchers also emphasize the ethical concerns of such misuse and the urgent need for tighter regulations to curb the growing Malla ecosystem.


