Comment on Google AI chatbot responds with a threatening message: "Human … Please die."

localhost@beehaw.org 6 days ago

This feels to me like the LLM misinterpreted the conversation as some kind of fictional villain dialogue and started autocompleting it.

Could also be the model simply breaking down. There was a time when Sydney (the previous Bing AI) had to be constrained to 10 messages per conversation and given some sort of supervisor model on top of it, because it would occasionally throw a fit or start threatening the user for no reason.
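For what it's worth, the kind of guardrail being described is roughly a hard turn limit plus a supervisor pass over each reply before it's shown. Here's a minimal sketch of that idea; the names, the keyword check, and the limit of 10 are all hypothetical, not how Bing actually implemented it:

```python
MAX_TURNS = 10  # Sydney-style hard cap on messages per conversation


def supervisor_ok(reply: str) -> bool:
    """Crude stand-in for a supervising model that screens hostile output."""
    banned = ("please die", "you are a waste")
    return not any(phrase in reply.lower() for phrase in banned)


def respond(history: list[str], model_reply: str) -> str:
    # Refuse to continue once the conversation hits the cap.
    if len(history) >= MAX_TURNS:
        return "This conversation has reached its limit. Please start a new chat."
    # Swap in a canned response if the supervisor flags the reply.
    if not supervisor_ok(model_reply):
        return "Sorry, I can't continue with that response."
    return model_reply
```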
