Comment on Comet AI browser can get prompt injected from any site, drain your bank account
businessfish@lemmy.blahaj.zone 1 month ago
complete insanity that the browser/agent doesnt even ask for user confirmation before interpreting web pages as instructions. this is just AI XSS, just mental that the AI is configured to trust and execute instructions from unsanitized web content. how was this not one of the first problems raised during development prior to release?
jrandomhacker@beehaw.org 1 month ago
LLMs fundamentally don’t/can’t have “sanitized” or “unsanitized” content - it’s all just tokens in the end. “Prompt Injection” is even a bit too generous of a term, I think.
businessfish@lemmy.blahaj.zone 1 month ago
sure but one would hope that if the agent is interpreting content from the web as instructions that there would be literally any security measure between the webpage and the agent - whether that’s some input sanitization, explicit user confirmation, or prohibiting the agent from interpreting web pages as instructions at all.