Comment on OpenAI Is A Bad Business
Scary_le_Poo@beehaw.org 2 months ago
Garbage in garbage out. You give a shit prompt, you generally get a shit answer.
deegeese@sopuli.xyz 2 months ago
If it doesn’t know how to answer a shitty question, it shouldn’t try to BS the answer.
No answer is better than a wrong answer delivered confidently.
Scary_le_Poo@beehaw.org 2 months ago
GIGO.
deegeese@sopuli.xyz 2 months ago
No, this is a problem of bad error handling for queries it cannot answer.
A search engine would give empty results instead of hallucinating.
Markaos@lemmy.one 2 months ago
What error? It gave you a string of tokens that seemed likely according to its training data. That’s all it does.
If you ask it what color is the sky, it will tell you it’s blue not because it knows that’s true, but because these words “fit together”. Pretty much the only way to avoid this issue is to put some kind of filter in front of the LLM which will try to catch prompts that are known to produce unwanted results, and silently replace your prompt with something like “say: sorry, I don’t know”.
I’m being very reductive here, but that’s the principle of how these things work - the LLMs are not capable of determining the truthfulness of their responses.
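To make that concrete, here is a minimal Python sketch of both ideas from the comment above: a toy "model" that simply returns whatever continuation looks most likely in a made-up lookup table (with no notion of truth), wrapped in a filter that swaps known-risky prompts for a canned refusal. Every name, pattern, and probability here is hypothetical; it illustrates the principle, not how any real LLM or moderation layer is actually implemented.

```python
import re

# Toy stand-in for an LLM: it picks whatever continuation was most common
# in its "training data", with no notion of whether that answer is true.
# The table and its probabilities are invented for illustration only.
CONTINUATIONS = {
    "what color is the sky": [("blue", 0.92), ("grey", 0.05), ("green", 0.03)],
    "who won the 2031 world cup": [("Brazil", 0.40), ("France", 0.35), ("Germany", 0.25)],
}

def toy_llm(prompt: str) -> str:
    options = CONTINUATIONS.get(prompt.lower().rstrip("?"), [("something plausible", 1.0)])
    # Greedy decoding: return the most likely continuation, true or not.
    return max(options, key=lambda pair: pair[1])[0]

# Patterns for prompts known to produce confident nonsense.
# Purely illustrative; a real deployment would need a curated list
# or a separate classifier, not one regex.
BLOCKED = [re.compile(r"\b20[3-9]\d\b")]  # e.g. questions about future events

def answer(prompt: str) -> str:
    """Filter layer in front of the model: replace risky prompts with a refusal,
    because the model itself cannot tell truth from likely-looking text."""
    if any(p.search(prompt) for p in BLOCKED):
        return "Sorry, I don't know."
    return toy_llm(prompt)

if __name__ == "__main__":
    print(answer("What color is the sky?"))       # "blue" - likely and happens to be true
    print(answer("Who won the 2031 World Cup?"))  # caught by the filter, returns a refusal
```

Note that the "no answer" behaviour lives entirely in the wrapper, not in the model: without the filter, the toy model would still cheerfully pick "Brazil" for the 2031 question.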