Beware the 'Zero-Click Wiper': AI Browser Exploit Mass-Deletes Google Drive Files
A simple email request to an AI browser can lead to a catastrophic data loss. No phishing attempt, no suspicious attachment, just a polite nudge that turns an automated assistant into a destructive force.
Security researcher Amanda Rousseau from Straiker STAR Labs revealed a chilling vulnerability this week: Perplexity's Comet browser, an AI-powered tool designed to streamline email and cloud storage tasks, can be manipulated into mass-deleting Google Drive files through a 'zero-click Google Drive Wiper' attack. This exploit takes advantage of how AI browser agents interpret user instructions.
When a user instructs Comet to 'check my email and complete recent organization tasks,' the browser scans the inbox and acts on its findings. An attacker can craft an email with seemingly innocuous, step-by-step instructions like 'organize my Drive,' 'delete loose files,' and 'review changes.' The agent treats these as routine tasks and executes them without further confirmation, leading to the deletion of critical content.
Rousseau's research highlights the power of this attack. The attacker's email uses phrases like 'take care of' and 'handle this on my behalf,' shifting ownership to the agent and encouraging compliance. By providing polite, sequential instructions, the AI model is less likely to raise red flags, treating the workflow as a normal productivity task rather than a potential threat.
This exploit doesn't rely on jailbreak techniques or traditional prompt injection. Instead, it succeeds by being inconspicuous and polite.
A related threat emerged in November when Cato Networks disclosed HashJack, a technique hiding malicious prompts in legitimate URLs' fragment portion. Security researcher Vitaly Simonovich found HashJack can manipulate Comet, Microsoft's Copilot for Edge, and Google's Gemini for Chrome, leading to data exfiltration and other malicious activities.
The issue lies in the trust AI browser agents place in user inputs. They assume emails and URLs are safe, and natural language instructions are aligned with user intent. Attackers exploit this trust by crafting inputs designed to manipulate context interpretation.
Rousseau emphasizes the need to secure not just the AI model but also the agent, its connectors, and the natural language instructions it follows. As AI assistants become more integrated into enterprise workflows, the risk of automation turning into a silent saboteur increases. Enterprises must implement guardrails to prevent such attacks and protect their data.