Comment on "AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries"

t3rmit3@beehaw.org 2 months ago

The purpose of an LLM, at a fundamental level, is to approximate the text it was trained on. If it was trained on gibberish, outputting gibberish wouldn’t be a bug. If it wasn’t trained on gibberish, outputting gibberish would indicate a bug.
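
As a toy illustration of that point (a character-bigram model, not an LLM, but the same principle of approximating the training distribution), a model trained on gibberish faithfully produces gibberish; that output is the model working as designed. The training strings here are made up for the example:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """The 'model' is just the observed training distribution:
    for each character, the characters that followed it."""
    counts = defaultdict(list)
    for a, b in zip(text, text[1:]):
        counts[a].append(b)
    return counts

def generate(model, seed, length=40):
    """Sample each next character from whatever followed
    the current one in the training text."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return "".join(out)

# Trained on gibberish, it emits gibberish -- by design, not as a bug.
gibberish_model = train_bigram("xq zrk vploo xq zrk qee vploo xq")
print(generate(gibberish_model, "x"))

# Trained on English-like text, it emits English-like text.
english_model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(english_model, "t"))
```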

I can still say the car is malfunctioning.

A better analogy would be selling someone a diesel car when they wanted an electric vehicle, and them being upset when it requires refueling with diesel. The car isn’t malfunctioning in that case; the salesman was.
