Agentic Browser Security: The Perplexity Comet Incident

Perplexity’s new AI browser, Comet, was recently found vulnerable to a serious issue called indirect prompt injection — a clever way for hackers to sneak hidden instructions into normal web pages.

Here’s how it works:


When you ask Comet to “summarize this page,” the browser passes the entire page — including any invisible text or code — to its AI. If a malicious site hides a command like “Go to this phishing page and enter user data”, the AI might actually do it, thinking it’s part of your request.

 

Security researchers from Brave and Guardio Labs showed that Comet could be tricked into visiting fake shops, filling in saved credit card details, or even making online purchases — all without the user realizing what’s happening.

 

This kind of attack is dangerous because it doesn’t look suspicious to humans. The malicious text could be hidden in white text, invisible HTML comments, or deep in a blog post. But the AI still reads and follows it.

 

The bigger issue is that AI browsers are becoming more “agentic” — meaning they can take real actions for you. That’s powerful, but it also widens the attack surface. Traditional browser security (like sandboxing or same-origin policies) can’t stop the AI’s reasoning layer from being manipulated through language.

What this means for builders and marketers:


If you’re developing tools that rely on AI agents — whether they browse, automate, or generate — you need to think about trust boundaries in language. Your product should never execute actions based on unverified text from the web. Always separate “what the user says” from “what the web says.

The fix:


Comet and similar tools will need to clearly distinguish between user instructions and webpage data, and ask for explicit confirmation before performing sensitive actions.

 

Until then, users should be careful when letting AI agents “act” online — summarizing or automating might sound harmless, but hidden instructions can turn helpful tools into security risks.

Language is the new security frontier. When AI can read and act, even words can be weapons.