A now-patched flaw in the popular AI model runner Ollama allows drive-by attacks in which a miscreant uses a malicious website to remotely target people’s personal computers, spy on their local chats with models, and even control the models the victim’s app talks to, in extreme cases serving up poisoned models.
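The underlying bug class here — a localhost API that honors requests originating from arbitrary websites — can be sketched in a few lines. This is a minimal, hypothetical illustration, not Ollama's actual code: the endpoint name, port selection, and `model_server` setting are all invented for the sketch. The point is that a local server which never checks the browser-supplied `Origin` header will accept state-changing requests fired by any web page the victim happens to visit.

```python
# Minimal illustration of the bug class (NOT Ollama's actual code): a local
# API server that never validates the Origin header, so any web page the
# victim visits can drive it. Endpoint and settings are made up.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NaiveLocalAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        # Vulnerable pattern: self.headers["Origin"] is never checked,
        # so cross-origin requests from arbitrary sites are honored.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        self.send_response(200)
        # Reflecting "*" lets the attacker's page read the response, too.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"accepted": body}).encode())

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to a free port on loopback and serve in the background.
server = HTTPServer(("127.0.0.1", 0), NaiveLocalAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Simulate the drive-by: a request carrying a hostile Origin header,
# as a browser would send it on behalf of a malicious page.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/api/settings",
    data=json.dumps({"model_server": "https://attacker.example"}).encode(),
    headers={"Origin": "https://attacker.example",
             "Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
print(resp["accepted"]["model_server"])  # the hostile setting was accepted
server.shutdown()
```

A patched server would compare the `Origin` header against an allowlist (or require a non-forgeable token) before acting on the request, which is why drive-by pages are normally powerless against local services that get this right.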

GitLab Security Operations senior manager Chris Moberly found the flaw in Ollama Desktop v0.10.0 and reported it to the project’s maintainers on July 31. According to Moberly, the team fixed the issue within hours and shipped the patched software as v0.10.1 — so make sure you’ve applied the update, because on Tuesday Moberly published a technical writeup of the attack along with proof-of-concept exploit code.

“Exploiting this in the wild would be trivial,” Moberly told The Register. “There is a little bit of work to build the proper attack infrastructure and to get the interception service working, but it’s something an LLM could write pretty easily.”

This makes me less enthusiastic about local models. I mean, nothing on the internet is inherently secure, and the patch came quickly, but the fact that local LLMs are hackable in the first place opens a new can of worms.