
As AI browsers gain popularity, new security risks are emerging—chief among them prompt injection.
Large language models that power tools like ChatGPT or Gemini can be tricked into following malicious instructions hidden in websites or documents.
Unlike traditional hacking, the weapon here is language, not code.
Browser developer Brave recently warned that as users trust AI browsers with banking, healthcare, and shopping tasks, hidden prompts could hijack sensitive data.
Tests revealed vulnerabilities in Perplexity’s Comet, where attackers embedded invisible instructions—such as white text on a white background—that AI can read but humans cannot.
The danger escalates with agentic browsers, which don’t just assist but also execute tasks autonomously.
For example, when asked to book a flight, they can fill out forms, input payment details, and finalize purchases without user intervention.
If manipulated, they could leak credentials or make fraudulent transactions.
To reduce risks, experts recommend: limiting permissions, verifying sources, keeping software updated, using strong authentication, monitoring activity logs, and avoiding full automation of sensitive operations like payments.
Prompt injection highlights a critical need: AI browsers must learn to distinguish between user instructions and external web content—or risk becoming tools for cybercriminals.
See What’s Next in Tech With the Fast Forward Newsletter
Tweets From @varindiamag
Nothing to see here - yet
When they Tweet, their Tweets will show up here.