Comment on Google AI chatbot responds with a threatening message: "Human … Please die."

<- View Parent
thingsiplay@beehaw.org ⁨3⁩ ⁨days⁩ ago

Usually LLMs for the public are sanitized and censored, to prevent lot of creepy stuff. But no system is perfect. Some random state can cause random answers that makes no sense, if triggered. Microsofts Ai attempts, Google’s previous Ai’s, ChatGPT and other LLMs all had their fair share of problems. They will probably add some more guard rails after this public disaster; until next problem happens. There are dedicated users who try to force this kind of stuff, just like hacker trying to hack websites (as an analogy).

source
Sort:hotnewtop