It’s not self-aware or capable of morality, so if you tailor a question just right, it won’t include the moral context or corrections around the points. Pretty sure we saw a similar thing when people asked it specifically tailored questions on how to commit certain crimes “as a thought experiment” or how to create certain weapons/banned substances “for a fictional story”
Comment on Grok got a Nazi patch
njm1314@lemmy.world 11 hours ago
Why can’t you be? Why is it okay that it gives you Holocaust-denying talking points? Isn’t that a problem in and of itself? At the very least, shouldn’t it contain notations about why it’s wrong?
Oni_eyes@sh.itjust.works 11 hours ago
rumimevlevi@lemmings.world 11 hours ago
AI chatbots all have safeguards implemented in them
hemko@lemmy.dbzer0.com 7 hours ago
And there’s a very large number of people constantly trying to break those safeguards to generate the response they want
njm1314@lemmy.world 11 hours ago
Of course not. But it is subject to programming parameters. Parameters that were expanded so that posts like this are specifically possible. Perhaps even encouraged.
Oni_eyes@sh.itjust.works 11 hours ago
Expanded by even bigger “tools” you might say.
Also a reason I hate these LLMs.
PonyOfWar@pawb.social 11 hours ago
I mean, it might. In both screenshots it’s clearly visible that parts of the text are cut off. Why should we trust Twitter neo-Nazis?
njm1314@lemmy.world 11 hours ago
You’re suggesting notes are at the end of the cut-off sections but not at the end of the ones we can see? Cuz there should be notes on the ones we can see. Unless you’re suggesting points one, two, four, and five are correct…
PonyOfWar@pawb.social 11 hours ago
So let’s assume the AI actually does have safety checks and will not display Holocaust denial arguments without pointing out why they’re wrong. Maybe initially it will put notes directly after the arguments. But no problem! Just tell it to list the denialist lies first and the clarifications after. Take some screenshots of just the first paragraphs and boom - you have screenshots showing the AI denying the Holocaust.
My point is that it’s easy to manipulate AI output in a variety of ways to make it show whatever you want. That’s not even taking into consideration the possibility of just editing the HTML, which can be done in seconds. Once again, why should we trust a Nazi?
auraithx@piefed.social 9 hours ago
All frontier models have safety checks that mean they won’t display these arguments regardless of the prompt.