Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM?

kromem@lemmy.world ⁨17⁩ ⁨hours⁩ ago

Yeah. The confabulation/hallucination thing is a real issue.

OpenAI published some good research a few months ago that laid much of the blame on reinforcement learning that only rewards giving the right answer and gives no credit for correctly saying “I don’t know.” So models are basically trained like students taking a test where it’s always better to guess than to leave an answer blank.

But this leads to the model being full of shit when it doesn’t know an answer, or making one up rather than saying there isn’t one when what’s being asked is impossible.
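That test-taking incentive can be sketched as a toy expected-reward calculation (the function and numbers here are illustrative, not from the research):

```python
def expected_reward(p_correct: float, abstain: bool) -> float:
    """Expected score under a binary grader: 1 for a correct answer,
    0 for a wrong answer or a blank -- no credit for "I don't know"."""
    return 0.0 if abstain else p_correct

# Even a long-shot guess beats abstaining, so guessing strictly
# dominates whenever there is any chance of being right.
print(expected_reward(0.10, abstain=False))  # 0.1
print(expected_reward(0.10, abstain=True))   # 0.0
```

Under this grading scheme the optimal policy is to always answer, which is exactly the behavior that shows up as confident confabulation.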

source