Comment on Fixed it.
chaogomu@lemmy.world 1 day agoThey did ask it to consider and repeat the exact wording of the question, yes.
Dyskolos@lemmy.zip 1 day ago
Hm, I just had to try it myself with chatgpt, and:
Image
So, as suspected, it just ignores the nonsense in one’s question unless asked to take it verbatim. This is actually not even dumb. If it had said “your question makes no sense”, we’d call it stupid for not seeing the obvious error in the question, since it’s a run-of-the-mill “riddle”.
chaogomu@lemmy.world 23 hours ago
I’ll have to find the post, but you did it in two steps, and changed both the unit of mass and the object.
The post, which is extremely hard to find with the latest slop release from Nvidia, asked the chatbot to consider the exact wording, without babying it into the correct answer. All because close variations of the phrase “X pounds of bricks and X pounds of feathers weigh the exact same” have been used in various textbooks and such for at least the last hundred years or so.
That means that the chatbot has seen that exact combo of words, in roughly that order, quite a bit more than your use of “100 kilograms of rice”. At least in English.
You can baby it through when the training data is sparse, but not when there are hundreds of uses of the same phrase, over and over again, in the training data.
Dyskolos@lemmy.zip 21 hours ago
Image
Well…I stand corrected.
<insert picard-facepalm-gif>
chaogomu@lemmy.world 21 hours ago
Yup. It’s actually an interesting demonstration of the power of training data on a chatbot; after all, they’re just feeding it back to you.