Comment on Fixed it.
Dyskolos@lemmy.zip 20 hours ago
That’s what I said. I expect it to interpret silly questions as having a sensible meaning. Of course it would sound like you wanna compare a comparable set of data. I would, as said, also assume you meant 20 pounds of both and just forgot to say so. Maybe adding “take this question literally” helps?
No fan of AI here, but hate where hate is due 😁
chaogomu@lemmy.world 19 hours ago
They did ask it to consider and repeat the exact wording of the question, yes.
Dyskolos@lemmy.zip 18 hours ago
Hm, I just had to try it myself with ChatGPT, and:
[Image: screenshot of a ChatGPT response]
So, as suspected, it just tries to ignore one’s nonsensical questions unless asked to take them verbatim. This is actually not even dumb. If it had said “your question makes no sense”, we would call it stupid for not seeing the obvious error in the question - as it’s a run-of-the-mill “riddle”.
chaogomu@lemmy.world 16 hours ago
I’ll have to find the post, but you did it in two steps, and changed both the unit of mass and the object.
The post, which is extremely hard to find with the latest slop release from Nvidia, asked the chatbot to consider the exact wording, without babying it into the correct answer. All because close variations of the phrase “X pounds of bricks and X pounds of feathers weigh the exact same” have been used in various textbooks and such for at least the last hundred years or so.
That means that the chatbot has seen that exact combo of words, in roughly that order, quite a bit more than your use of “100 kilograms of rice”. At least in English.
You can baby it through when the training data is sparse, but not when there are hundreds of uses of the same phrase over and over again in the training.
Dyskolos@lemmy.zip 15 hours ago
[Image: screenshot of a ChatGPT response]
Well…I stand corrected.
<insert picard-facepalm-gif>