Because LLMs are fucking stupid.
[deleted]
Submitted 2 months ago by KillingAndKindess@lemmy.blahaj.zone to [deleted]
Comments
tobogganablaze@lemmus.org 2 months ago
j4k3@lemmy.world 2 months ago
I have nothing to say. I can’t read past the insults to start. I think I’m in the wrong place here ATM. This has been a depressing experience I do not wish to continue.
rayquetzalcoatl@lemmy.world 2 months ago
I don’t think anybody insulted you mate - you wrote something that was quite hard to understand, and we were curious about what you meant. OP has clearly put quite a lot of effort into rephrasing and trying to understand your post - that’s as far from an insult as I can imagine.
KillingAndKindess@lemmy.blahaj.zone 2 months ago
Thanks for recognizing the effort.
But in the brief look I gave the replies, they did appear to be rather unkind, and certainly missing any reply of use. I wouldn’t be surprised if it only got worse in the time it took me to write my comment (I'm a slow typist), and I can’t say I blame OP for being upset.
KillingAndKindess@lemmy.blahaj.zone 2 months ago
That’s unfortunate. I’m sorry that happened to you, and I totally understand you no longer wanting to continue the discussion.
Would you like me to take down my post? I’m interested myself, but lack the context/cause to adequately ask the question, and will totally be ok taking it down if you’d like.
hendrik@palaver.p3x.de 2 months ago
If you post strangely written questions on social media, you probably also type strangely written text into the AI. In turn, the AI will be confused and generate some random text. For example about furries or some other random topic.
KillingAndKindess@lemmy.blahaj.zone 2 months ago
The user gave no reason to assume any of that, nor did my description of the post, and they may find the suggestion upsetting. Not going to go all PC 5-0 on you, but I did want to distance myself from that assumption.
hendrik@palaver.p3x.de 2 months ago
Sorry, I'm also not a native speaker. I don't know what PC 5-0 means. But if we want to know what happened, we need to know the circumstances. It makes a big difference which exact LLM model was used. We need to know the exact prompt and text that went in. Then we can start discussing why something happened. I'd say there's a good chance the LLM was made to output stories like that, as is the case with LLM models built for ERP. That's why I said that.