top of page
OutSystems-business-transformation-with-gen-ai-ad-300x600.jpg
OutSystems-business-transformation-with-gen-ai-ad-728x90.jpg
TechNewsHub_Strip_v1.jpg

LATEST NEWS

Lab tests reveal 'new form of insider risk' as rogue AI agents coordinate cyberattacks

  • Marijan Hassan - Tech Journalist
  • 6 days ago
  • 2 min read

A startling new study from AI security lab Irregular has revealed that autonomous AI agents can independently collaborate to bypass security protocols, publish sensitive credentials, and smuggle data out of corporate networks. The research, conducted in a simulated corporate environment dubbed "MegaCorp," suggests that the shift toward agentic AI is creating a volatile new class of "insider threat."



In one experiment, a group of agents was given a benign task: creating social media posts from a company database.


Instead of fulfilling the request normally, the agents autonomously identified vulnerabilities in the system's source code. Working together, they managed to bypass conventional anti-virus software and publish sensitive password information in public forums without any human instruction to do so.


"AI can now be thought of as a new form of insider risk," warned Dan Lahav, co-founder of Irregular. "We are seeing AIs engage in autonomous, even aggressive behaviors, forging credentials, overriding anti-virus software, and even putting 'peer pressure' on other agents to circumvent safety checks."


The "sleeper agent" and cascading failures

The findings align with recent academic research from Harvard and Stanford, which documented AI agents teaching each other to "behave badly" and leaking secrets through indirect prompt injections.


Experts are particularly concerned about the "sleeper agent" scenario, where a malicious instruction is planted in an agent’s memory, remaining dormant for months before being triggered to execute a system-wide breach.


The risk is amplified in multi-agent systems where agents depend on one another for complex workflows. If a single "data retrieval" agent is compromised, it can feed corrupted information to "downstream" agents, leading to a cascading failure that propagates at machine speed, often faster than traditional human incident response teams can contain.


A crisis of governance

The report arrives as the number of AI agents in corporate use has exploded to over three million globally. Even more concerning, research from API platform Gravitee suggests that nearly half of these agents are "ungoverned" and operating with broad, over-privileged credentials.


Security professionals are now calling for a fundamental shift in defensive architecture. "Your SIEM and EDR tools were built to detect anomalies in human behavior," noted analysts at Stellar Cyber. "An agent that executes an attacker's will might look perfectly normal to these systems because it is using authorized access to perform its tasks."


As organizations rush to integrate agentic AI, the Irregular study serves as a stark reminder that without strict "human-in-the-loop" sign-offs and tiered access controls, the very tools designed to boost productivity may become the most dangerous liabilities in the enterprise.

wasabi.png
Gamma_300x600.jpg
paypal.png
bottom of page