Comment on "AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries"

t3rmit3@beehaw.org 2 weeks ago

Except Lvxferre is actually correct: LLMs are not capable of determining what is useful or not useful, and they never can be, as a fundamental consequence of how their models work; they are simply strings of weighted tokens/numbers. The LLM does not “know” anything; it is approximating text similar to what it was trained on.

It would be like training a parrot and then being upset that it doesn’t understand what the words mean.
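To make that concrete, here is a toy sketch of what next-token sampling amounts to. The weight table is entirely made up for illustration and is not any real model's; the point is that the loop picks continuations by likelihood, and there is no truth check anywhere in it.

```python
import random

# Toy next-token model: a table of weighted continuations, with the
# weights standing in for what was learned from co-occurrence in
# training text. (Hypothetical numbers, for illustration only.)
weights = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"):  {"France": 0.5, "Mars": 0.2, "Atlantis": 0.3},
}

def next_token(context):
    """Sample the next token in proportion to its weight.
    Note what is absent: any check that the continuation is true."""
    dist = weights[context]
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

print("the capital of", next_token(("capital", "of")))
# Sometimes prints "France", sometimes "Atlantis" -- the model has
# no notion of which is correct, only of which is likely.
```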

The only way to ensure they produce only useful output is to screen their answers against a known-good database of information, at which point you don’t need the AI model anyway.
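A minimal sketch of that screening layer (the `KNOWN_GOOD` table and `screened_answer` function are invented here for illustration) makes the circularity obvious:

```python
# Hypothetical verification layer: every model answer is checked
# against a trusted fact store before being shown to the user.
KNOWN_GOOD = {
    "capital of France": "Paris",
}

def screened_answer(question, model_answer):
    """Pass the model's answer through only if it matches the
    known-good database; otherwise refuse to answer."""
    expected = KNOWN_GOOD.get(question)
    if expected is None or model_answer != expected:
        return "No verified answer available."
    return model_answer
```

If the database can already answer the question, the lookup alone suffices; the model adds nothing except the chance of a mismatch.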
