A vulnerability has just been discovered in Microsoft 365 Copilot's AI, allowing a hacker to retrieve sensitive data without any clicks, without any action, and without your knowledge. Microsoft has responded, but the threat shows the dangers of poorly controlled AI.
Intelligent assistants are increasingly used. They help to summarize documents, organize meetings, or find files. But a critical flaw serves as a reminder that even these apps can be hijacked for malicious purposes. Last February, we reported that hackers are now using AI to breach networks in less than an hour, thanks to tools that can spot and exploit flaws almost instantly. More recently, a GTIG report indicated that Google's Gemini AI was being widely abused to carry out sophisticated attacks. Now, Microsoft has been hit.
Dubbed EchoLeak, this flaw exploited the behavior of the AI built into Microsoft 365 Copilot. The hacker doesn't even need the user to click on anything. They simply send an email containing a booby-trapped message, disguised as something unremarkable. If the user then asks Copilot a question, it can accidentally mix confidential internal data with the booby-trapped content. As a result, sensitive information is sent to the hacker, without anyone noticing. Microsoft's AI could deliver your data without a click or alert, just by asking a simple question. The flaw has been identified as an "indirect injection," a type of attack where the AI is fooled by content that is invisible to the user. Microsoft has identified the issue as CVE-2025-32711, with a very high severity rating of 9.3 out of 10. The patch was deployed in June. According to the researchers, this attack allowed the automatic recovery of confidential data via Outlook, SharePoint, or Teams, simply by asking Copilot a question. All this without direct interaction with the malicious email.
The alert was launched by Aim Security, an Israeli cybersecurity firm, and relayed by The Hacker News. According to its experts, EchoLeak is a so-called “zero-click” flaw, because it requires no user action to work. The hacker relies on Copilot’s automatic behavior to extract sensitive data. This demonstrates the real risks associated with AI used in professional contexts: even if it appears reliable, it can be manipulated remotely.
0 Comments