Comment on Google AI chatbot responds with a threatening message: "Human … Please die."
Bougie_Birdie@lemmy.blahaj.zone 3 days ago
I don’t disagree, but it is a challenging problem. If you’re filtering for “die” then you’re going to find diet, indie, diesel, remedied, and just a whole mess of other words.
I’m in the camp where I believe they really should be reading all their inputs. You’ll never know what you’re feeding the machine otherwise.
However, I have no illusions that they aren’t cutting corners to save money.
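For illustration, here’s the naive substring check that causes that problem (a minimal Python sketch; the sample strings are just the examples above):

```python
# Naive substring filter: flags any text that contains "die" anywhere,
# including inside longer words.
samples = ["watch your diet", "indie diesel remedied", "please die"]

for text in samples:
    print(text, "->", "die" in text)
# All three print True, even though only the last one is a real hit.
```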
Swedneck@discuss.tchncs.de 3 days ago
huh? finding only the literal word “die” is a trivial regex; it’s something vim users do all the time when editing text files lol
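In Python the same whole-word match is one line of `re` (a sketch; vim’s equivalent would be `/\<die\>`):

```python
import re

# \b marks a word boundary, so this matches "die" as a standalone word only.
pattern = re.compile(r"\bdie\b", re.IGNORECASE)

for text in ["watch your diet", "indie diesel remedied", "please die"]:
    print(text, "->", bool(pattern.search(text)))
# Only "please die" matches; diet, indie, diesel, remedied all pass.
```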
Bougie_Birdie@lemmy.blahaj.zone 2 days ago
Sure, but underestimating the scope is how you wind up with a Scunthorpe problem
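And the whole-word match cuts both ways; a few made-up examples of benign text it would still flag:

```python
import re

pattern = re.compile(r"\bdie\b", re.IGNORECASE)

# False positives: "die" as a noun, an idiom, or part of a title.
benign = ["the die is cast", "never say die", "Die Hard is on tonight"]

for text in benign:
    print(text, "->", bool(pattern.search(text)))
# All three match, and none of them is calling for anyone's death.
```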
Swedneck@discuss.tchncs.de 2 days ago
i feel like that’s being forced in here, i’m literally just saying that they should scan through any text with the literal word “die” to make sure it’s not obviously calling for murder. it’s not a complex idea
TranquilTurbulence@lemmy.zip 2 days ago
They could just run the whole dataset through sentiment analysis and delete the parts that get categorized as negative, hostile or messed up.
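A rough sketch of that filtering step, assuming an off-the-shelf classifier (the Hugging Face `sentiment-analysis` pipeline here; the model choice and the keep/drop rule are assumptions, and the default model is generic positive/negative sentiment rather than a purpose-built toxicity filter):

```python
from transformers import pipeline

# Default sentiment model: returns binary POSITIVE/NEGATIVE labels.
classifier = pipeline("sentiment-analysis")

dataset = [
    "Thanks, this was genuinely helpful!",
    "Human ... Please die.",
]

# Keep only the samples the classifier doesn't label NEGATIVE.
kept = [text for text in dataset if classifier(text)[0]["label"] != "NEGATIVE"]
print(kept)  # the hostile line should be filtered out
```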