Comment on Elon Musk’s Grok Is Calling for a New Holocaust
sleepundertheleaves@infosec.pub 13 hours ago
Oh God not this shit again.
You can get an LLM to say anything if you give it the right prompts.
LLMs are trained on the internet. The internet is full of Holocaust denying white supremacists. So you can get an LLM to spout Holocaust denying white supremacy that mimics the content it was trained on. Shock, horror, oh noes.
That doesn’t mean LLMs are evil fascists. LLMs don’t understand the concepts of evil or fascism. It means they’re fancy autocomplete algorithms that have no ability to check the text they generate against reality.
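To be concrete about "fancy autocomplete": here's a toy sketch (nothing to do with Grok's actual architecture, just an illustration) of a program that continues a prompt purely from word statistics in its training text, with no notion of whether the output is true:

```python
# Toy illustration of "autocomplete with no reality check" -- NOT how Grok
# is built, just a minimal statistical text continuer.
import random
from collections import defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug ."  # stand-in corpus

# Count which word tends to follow which (a bigram table).
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def continue_prompt(prompt, length=8):
    """Extend the prompt by sampling plausible next words. No step checks facts."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(continue_prompt("the dog"))
# Whatever was common in the training text gets echoed back -- true or not.
```

Real LLMs do this with neural networks over huge contexts instead of a bigram table, but the core loop is the same: continue the text the way the training data would.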
What articles like this prove is that the average person doesn’t have any goddamn idea what an LLM actually does, because if they did, there wouldn’t be a market for articles like this.
And that fact is more terrifying than any neo-Nazi propaganda spouted by Grok.
ftbd@feddit.org 12 hours ago
But it means that such machines should not be live on Twitter, unless whoever runs Twitter and this bot wants fash content on there.
sleepundertheleaves@infosec.pub 10 hours ago
I get where you’re coming from, but let me put it this way.
You can Google “why the Holocaust is a hoax” and get hundreds of websites spouting precisely the same garbage Grok did in the OP.
So how is an AI prompt poking for Holocaust denial different from a Google search looking for Holocaust denial?
The problem isn’t Twitter, or Google, or ChatGPT, or whatever other website or LLM you use. When you go looking for hateful shit, you find hateful shit. The problem is that you’re looking for hateful shit. And there’s no technological solution for that.
Gaywallet@beehaw.org 2 hours ago
Because one is something you have to actively search for. The other is shoved in your face by a figure that many people regard as having some authority.
Why are you defending anything about this situation? This is not a thread to discuss how LLMs work in detail; it’s a thread about accountability, consequences, hate, and society.
avidamoeba@lemmy.ca 3 hours ago
The problem is that Grok has been put in a position of authority on information. It’s expected to produce accurate information, not to spit back whatever you ask for regardless of factuality. So the expectation its owners have created for it is not the same as the one for Google.