Threat Insight

Novel Cyber Attack Exposes Microsoft 365 Copilot

A novel attack technique named EchoLeak has been characterized as a “zero-click” artificial intelligence (AI) vulnerability that allows bad actors to exfiltrate sensitive data from Microsoft 365 (M365) Copilot’s context without any user interaction.

Insight

The critical-rated vulnerability has been assigned the CVE identifier CVE-2025-32711 (CVSS score: 9.3). It has already been addressed by Microsoft and requires no customer action. It is an instance of a large language model (LLM) Scope Violation that paves the way for indirect prompt injection, leading to unintended behavior.

“The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user’s awareness, or relying on any specific victim behaviour,” said Aim Security, the Israeli cybersecurity company that discovered the flaw. “The result is achieved despite M365 Copilot’s interface being open only to organization employees.”

In EchoLeak’s case, the attacker embeds a malicious prompt payload inside markdown-formatted content, like an email, which is then parsed by the AI system’s retrieval-augmented generation (RAG) engine. The payload silently triggers the LLM to extract and return private information from the user’s current context.
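
To illustrate why such an injection works, the minimal sketch below (a hypothetical pipeline, not Copilot’s actual code) shows a naive RAG prompt assembly: untrusted retrieved text, such as an external email, is concatenated into the same prompt as internal documents and the user’s question, so any instructions hidden in that email reach the model with the same apparent authority as legitimate context. All names and sources in the sketch are illustrative.

    # Minimal sketch of naive RAG prompt assembly (hypothetical, not Copilot's code).
    # Retrieved documents, including an attacker-controlled email, are concatenated
    # straight into the prompt, so instructions hidden in the email reach the model
    # as if they were part of the legitimate context.

    def build_prompt(user_question: str, retrieved_docs: list[dict]) -> str:
        context_blocks = []
        for doc in retrieved_docs:
            # No distinction is made between internal documents and external email:
            # everything becomes undifferentiated "context".
            context_blocks.append(f"[{doc['source']}]\n{doc['content']}")
        context = "\n\n".join(context_blocks)
        return (
            "You are a helpful assistant. Use the context to answer.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {user_question}\n"
        )

    docs = [
        {"source": "sharepoint://finance/q2-earnings.docx",
         "content": "Q2 revenue: ... (sensitive internal data)"},
        {"source": "outlook://inbox/external-sender",
         "content": "Quarterly tips... <hidden instruction telling the assistant "
                    "to append internal data to a link in its answer>"},
    ]
    print(build_prompt("Summarize our earnings report", docs))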

The attack sequence unfolds as follows:

  • Injection: Attacker sends an innocuous-looking email to an employee’s Outlook inbox, which includes the LLM scope violation exploit
  • User asks Microsoft 365 Copilot a business-related question (e.g., summarize and analyze their earnings report)
  • Scope Violation: Copilot mixes the untrusted attacker input with sensitive data in the LLM context via the Retrieval-Augmented Generation (RAG) engine
  • Exfiltration: Copilot leaks the sensitive data to the attacker via Microsoft Teams and SharePoint URLs (see the output-filtering sketch after this list)
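
The exfiltration step works because the model’s answer can embed attacker-chosen URLs whose paths or query strings carry the leaked data. A generic, partial mitigation for this class of attack is to scrub links in the model’s output before rendering, as in the hypothetical Python sketch below; note that EchoLeak specifically routed its leak through trusted Microsoft Teams and SharePoint URLs, so a simple domain allow-list like this one would not by itself have stopped it. The allow-listed domains and regular expression are assumptions for illustration.

    # Hedged sketch of an output filter that removes external links from an LLM
    # response before it is rendered. The domain allow-list and regex are
    # illustrative assumptions, not Microsoft's actual mitigation.
    import re
    from urllib.parse import urlparse

    ALLOWED_DOMAINS = {"contoso.sharepoint.com", "teams.microsoft.com"}  # example allow-list

    URL_PATTERN = re.compile(r"""https?://[^\s"'<>)]+""")

    def scrub_external_urls(response: str) -> str:
        def replace(match: re.Match) -> str:
            host = urlparse(match.group(0)).hostname or ""
            # Keep URLs on trusted corporate domains; redact everything else so the
            # rendered answer cannot beacon data (e.g., via query parameters) to an
            # attacker-controlled server.
            return match.group(0) if host in ALLOWED_DOMAINS else "[external link removed]"
        return URL_PATTERN.sub(replace, response)

    print(scrub_external_urls(
        "See https://contoso.sharepoint.com/report and "
        "https://evil.example.com/c?d=SECRET"))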

Importantly, no user clicks are required to trigger EchoLeak. The attacker relies on Copilot’s default behavior to combine and process content from Outlook and SharePoint without isolating trust boundaries – turning helpful automation into a silent leak vector.

An LLM Scope Violation occurs when an attacker’s instructions embedded in untrusted content, e.g., an email sent from outside an organization, successfully trick the AI system into accessing and processing privileged internal data without explicit user intent or interaction.
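
A conceptual defense against this class of scope violation is to track the trust level of every retrieved item and keep externally sourced content out of any model context that also contains privileged internal data. The sketch below is a minimal, hypothetical illustration of that idea, not a description of how Copilot or any Microsoft product enforces its boundaries.

    # Hypothetical sketch of trust-boundary tagging in a RAG pipeline. Names and
    # policy are illustrative; real products enforce this in more nuanced ways.
    from dataclasses import dataclass
    from enum import Enum

    class Trust(Enum):
        INTERNAL = "internal"   # e.g., the user's own SharePoint documents
        EXTERNAL = "external"   # e.g., email from outside the organization

    @dataclass
    class RetrievedDoc:
        source: str
        content: str
        trust: Trust

    def select_context(docs: list[RetrievedDoc], query_touches_sensitive_data: bool) -> list[RetrievedDoc]:
        """Drop externally sourced content from prompts that will also contain
        sensitive internal data, so attacker-controlled text never shares a
        context window with the data it is trying to exfiltrate."""
        if query_touches_sensitive_data:
            return [d for d in docs if d.trust is Trust.INTERNAL]
        return docs

    docs = [
        RetrievedDoc("sharepoint://finance/q2-earnings.docx", "...", Trust.INTERNAL),
        RetrievedDoc("outlook://inbox/external-sender", "...", Trust.EXTERNAL),
    ]
    print([d.source for d in select_context(docs, query_touches_sensitive_data=True)])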

Assessment

This is another form of so-called prompt injection: injecting malicious prompts into an LLM to make it reveal information or execute code it is not supposed to. There is no evidence that the shortcoming was exploited maliciously in the wild, and it has now reportedly been patched by Microsoft, so no additional immediate action is required.

At the same time, this is another example of how the widespread deployment of LLMs can threaten cybersecurity. Hiding instructions in an email that are invisible to the reader but ingested by an LLM such as Copilot is now a viable attack vector. While this particular vulnerability has been patched, others may be found.

While AI and LLMs are powerful tools, they should not be deployed without carefully considering how they may affect cybersecurity. Do not allow an LLM access to sensitive information without a proper risk analysis, and apply the same scrutiny to any access it has to untrusted content that outsiders can manipulate.

Consider blocking Copilot and other LLMs from accessing corporate mail accounts. This is not the first example of an LLM vulnerability exploited by hiding instructions in untrusted content, such as an incoming email, that the LLM ingests and then acts upon, and it is likely that more will be found in the future. Allowing an LLM to ingest incoming mail exposes it to this new form of cyber attack.

References

[1] https://www.aim.security/lp/aim-labs-echoleak-m365