Comment on AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries

lvxferre@mander.xyz 2 weeks ago

When it comes to the code itself you’re right: there’s no difference between “bug” and “not a bug”. The difference lies in how humans classify the behaviour.

And yet there’s a clear mismatch between what the developers of those large “language” models know they’re able to do and what LLMs are being promoted for, and that gap is what gets called “hallucination”. They are not intelligent systems; the information they output is not reliably accurate, and it’s often useless rubbish. But instead of acknowledging that, the developers label it “hallucination”.

Perhaps an example would help here. Suppose I made a text editor; it works nicely as a text editor and not much else. Then I make it automatically find and replace the string “=2+2” with “4”, and use that to showcase my text editor as if it were a calculator. “Look, it can do maths!”

Then the user types in “=3+3”, expecting the “calculator” to output “6”, and it doesn’t. Can we really claim that the user found a “bug”? Not really. It’s just that I’m a phony who sold him a text editor as if it were a calculator.
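To make the analogy concrete, here’s a minimal Python sketch; the `fake_calculator` name and its single hardcoded rule are my own illustration, not taken from any real product:

```python
def fake_calculator(text: str) -> str:
    """A "text editor" feature: one hardcoded find-and-replace rule."""
    return text.replace("=2+2", "4")

print(fake_calculator("=2+2"))  # prints "4" -- looks like it can do maths
print(fake_calculator("=3+3"))  # prints "=3+3" -- unchanged, there was never a calculator
```

Any input outside the one memorised pattern exposes the trick: the behaviour was never computation, only substitution.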

And yet that’s exactly what happens with LLMs.
