You'll not catch me using an 'AI browser' any time soon
I use Perplexity on a regular basis, and am paying for the ‘Pro’ version. It constantly nags me to download their ‘Comet’ web browser, and even this morning I received an email telling me that Comet is now available for Android.
Not only would I not use an AI browser for privacy reasons (it can read and write to any website you visit) but I wouldn’t use it for security reasons. This example shows why: the simplest ‘attack’ — in this case, literally appending text after a hashtag in the url — can lead to user data being exfiltrated.
What’s perhaps even more concerning is that, having been alerted to this, Google thinks it’s “expected behaviour”? Avoid.
Cato describes HashJack as “the first known indirect prompt injection that can weaponize any legitimate website to manipulate AI browser assistants.” It outlines a method where actors sneak malicious instructions into the fragment part of legitimate URLs, which are then processed by AI browser assistants such as Copilot in Edge, Gemini in Chrome, and Comet from Perplexity AI. Because URL fragments never leave the AI browser, traditional network and server defenses cannot see them, turning legitimate websites into attack vectors.
The new technique works by appending a “#” to the end of a normal URL, which doesn’t change its destination, then adding malicious instructions after that symbol. When a user interacts with a page via their AI browser assistant, those instructions feed into the large language model and can trigger outcomes like data exfiltration, phishing, misinformation, malware guidance, or even medical harm – providing users with information such as incorrect dosage guidance.
“This discovery is especially dangerous because it weaponizes legitimate websites through their URLs. Users see a trusted site, trust their AI browser, and in turn trust the AI assistant’s output – making the likelihood of success far higher than with traditional phishing,” said Vitaly Simonovich, a researcher at Cato Networks.
In testing, Cato CTRL (Cato’s threat research arm) found that agent-capable AI browsers like Comet could be commanded to send user data to attacker-controlled endpoints, while more passive assistants could still display misleading instructions or malicious links. It’s a significant departure from typical “direct” prompt injections, because users think they’re only interacting with a trusted page, even as hidden fragments feed attacker links or trigger background calls.
Cato’s disclosure timeline shows that Google and Microsoft were alerted to HashJack in August, while the findings were flagged with Perplexity in July. Google classified it as “won’t fix (intended behavior)” and low severity, while Perplexity and Microsoft applied fixes to their respective AI browsers.
Source: The Register
Image: Immo Wegman