Comment on Google AI chatbot responds with a threatening message: "Human … Please die."

thingsiplay@beehaw.org 4 weeks ago
Usually LLMs for the public are sanitized and censored to prevent a lot of creepy stuff. But no system is perfect. Some random state, if triggered, can cause answers that make no sense. Microsoft's AI attempts, Google's previous AIs, ChatGPT, and other LLMs have all had their fair share of problems. They will probably add some more guard rails after this public disaster; until the next problem happens. There are dedicated users who try to force this kind of stuff, just like hackers trying to hack websites (as an analogy).

TranquilTurbulence@lemmy.zip 4 weeks ago
Stuff like this should help with that. If the AI can evaluate the response before spitting it out, that could improve the quality a lot.
Bougie_Birdie@lemmy.blahaj.zone 4 weeks ago
With the sheer volume of training data required, I have a hard time believing that the data sanitation is high quality.
If I had to guess, it’s largely filtered through scripts, and not thoroughly vetted by humans. So data sanitation might look for the removal of slurs and profanity, but wouldn’t have a way to find misinformation or a request that the reader stops existing.
Swedneck@discuss.tchncs.de 4 weeks ago
anything containing “die” ought to warrant a human skimming it over at least
Bougie_Birdie@lemmy.blahaj.zone 4 weeks ago
I don’t disagree, but it is a challenging problem. If you’re filtering for “die” then you’re going to find diet, indie, diesel, remedied, and just a whole mess of other words.
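(For illustration, a rough Python sketch of that over-matching problem; the sample strings are invented:)

```python
# Naive substring filtering: flags every string that merely contains
# the letters "die", so all of these are false positives.
samples = ["diet plan", "indie games", "diesel engine", "remedied at last"]

for text in samples:
    if "die" in text:  # plain substring check, no word boundaries
        print(f"flagged: {text!r}")
```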
I’m in the camp where I believe they really should be reading all their inputs. You’ll never know what you’re feeding the machine otherwise.
However, I have no illusions here: they're cutting corners to save money.
Swedneck@discuss.tchncs.de 4 weeks ago
huh? finding only the literal word “die” is a trivial regex, it’s something vim users do all the time when editing text files lol
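(A minimal Python sketch of that word-boundary match, with made-up test strings:)

```python
import re

# \b anchors the match to word boundaries, so the literal word "die"
# is caught while "diet", "indie", "diesel", etc. pass through.
pattern = re.compile(r"\bdie\b", re.IGNORECASE)

for text in ["Please die.", "diet plan", "indie games", "diesel engine"]:
    print(text, "->", bool(pattern.search(text)))
```

(The vim equivalent of those word boundaries is `/\<die\>`.)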