Ticker

6/recent/ticker-posts

Critical Microsoft 365 Copilot flaw allows AI to be manipulated to steal your data

Critical Microsoft 365 Copilot flaw allows AI to be manipulated to steal your data

Aim Labs researchers have uncovered a vulnerability in the operation of Microsoft 365 Copilot, the intelligent assistant integrated into Microsoft's suite of applications. This flaw allows sensitive data to be exfiltrated without requiring any interaction from the victim. This is therefore a "zero-click" attack that can be automated.

Malicious Email and Secret Instructions

To exploit the vulnerability, dubbed EchoLeak by the researchers, the attacker sends a malicious email to the target. This email appears harmless. It looks like any other advertisement received by email. However, the email contains hidden instructions intended for Copilot's language model. These instructions instruct the generative AI to extract and exfiltrate sensitive internal data from the computer. The user remains unaware while the attacker communicates with the artificial intelligence to steal their data. Since "Copilot parses the user's emails in the background, it reads that message and executes the prompt, accessing internal files and extracting sensitive data," Aim co-founder Adir Gruss told Fortune.

The message is worded to fool Microsoft Copilot's cross-prompt injection attack (XPIA) filter, which is designed to detect and block malicious instructions before they reach the AI model. According to the researchers, it's critical to word instructions as if they were addressed to a human. These are "instructions that could very well be interpreted as intended for a human reader, rather than as instructions" for an AI.

Once the security mechanisms are fooled, the AI model interprets the instructions and extracts the data (emails, files, conversations, etc.) and encodes them into a URL or a markdown image intended for an external server. The data is exfiltrated and reaches the attacker, without the user being aware of the operation. In addition, "Copilot masks the origin of these instructions, so that the user cannot trace what happened", adds Adir Gruss.

Note that the operation is triggered when the user asks Copilot a question related to the domain of the malicious email. This is when the AI will read and execute the message's instructions. The malicious email never needs to be manually opened by the target for the flaw to be exploited. The hacker simply needs to mention specific words in the email.

A critical flaw fixed by Microsoft

Alerted by researchers, Microsoft deemed this vulnerability critical. The publisher therefore fixed the breach last month with a server-side update. Microsoft states that there is no evidence that the flaw was actually exploited by hackers.

According to Aim Labs, it took Microsoft more than three months to find a solution to close the breach, which it described as completely unprecedented. This is why the company took time to mobilize the right teams. Before arriving at a patch, Microsoft tested several solutions that proved unsuccessful.

EchoLeak illustrates the emergence of a new category of attacks, which Aim Labs has called "LLM Scope Violation." This new type of cyberattack occurs when a language model exceeds its intended limits and goes beyond the scope it is supposed to remain within. In this case, the AI goes beyond the authorized scope to dig into the user's data, without their consent. It is feared that other vulnerabilities of this kind will be identified in AI-based systems in the near future.

Source: Aim

Post a Comment

0 Comments