top of page
OutSystems-business-transformation-with-gen-ai-ad-300x600.jpg
OutSystems-business-transformation-with-gen-ai-ad-728x90.jpg
TechNewsHub_Strip_v1.jpg

LATEST NEWS

  • Marijan Hassan - Tech Journalist

Slack fixes AI vulnerability that could have exposed data from private channels


Slack has patched a critical vulnerability in its Slack AI assistant that could have allowed insider phishing attacks, the company announced on Wednesday. The issue, which was disclosed by security research firm PromptArmor, involved a type of attack known as an indirect prompt injection. This flaw had the potential to expose sensitive information from private Slack channels to unauthorized users within the same workspace.



The vulnerability came to light after PromptArmor published a blog post detailing how an attacker, who was already part of a Slack workspace, could exploit Slack AI to deliver phishing links to private channels. The attack relied on planting malicious instructions in public channels, which Slack AI could then incorporate into its responses when interacting with users.


Indirect prompt injection is a sophisticated technique where an attacker embeds harmful commands or data in content that the AI is designed to process. In this case, the attacker would post these malicious instructions in a public channel, knowing that Slack AI scans public channels for relevant information when generating responses. When a user in a private channel interacted with Slack AI, the AI could unknowingly execute the attacker’s instructions, potentially exposing private information.


According to PromptArmor’s proof-of-concept, one method involved extracting an API key from a private channel. The attacker would craft a prompt in a public channel that instructed Slack AI to deliver a phishing link containing the victim’s API key as part of an error message. When the victim, in a private channel, asked Slack AI to retrieve the API key, the AI would follow the hidden instructions and append the key to a URL controlled by the attacker. If the victim clicked the link, their API key would be sent to the attacker.


Another demonstration showed how a similar tactic could be used to deliver any phishing link to a private channel, depending on the victim's specific workflow and requests to the AI.


Upon becoming aware of the issue, Slack's security team launched an investigation. A Salesforce spokesperson commented on the matter, stating, "When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data."


PromptArmor first reported the issue to Slack on August 14, and after initial correspondence, Slack responded on August 19, stating that they did not believe there was sufficient evidence of a vulnerability. Slack’s initial stance was that the AI’s behavior of referencing public channel information was intended, as messages in public channels are accessible to all workspace members.


However, by August 21, Slack had acknowledged the potential risk and announced that it had patched the issue. In their blog post, PromptArmor commended Slack’s security team for their responsiveness and commitment to security noting that prompt injection is a relatively new and misunderstood threat across the industry – hence the misunderstanding.

Comments


wasabi.png
Gamma_300x600.jpg
paypal.png
bottom of page