Comment on AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries

lvxferre@mander.xyz 2 months ago

This article shows rather well three reasons why I don’t like the term “hallucination” when it comes to LLM output.

  1. It’s a catch-all term that describes neither the nature nor the gravity of the problematic output. Failure to address the prompt? False output, fake info? Immoral and/or harmful output? Pasting verbatim training data? Output that is supposed to be moderated against? It’s all “hallucination”.
  2. It implies that, under the hood, the LLM is “malfunctioning”. It is not - it’s doing exactly what it’s supposed to do: chaining tokens through weighted probabilities (see the sketch after this list). Contrary to the tech bros’ wishful belief, LLMs do not pick words based on the truth value or morality of the output. That’s why hallucinations won’t go away, at least not with the current architecture of text generators.
  3. It lumps those incorrect outputs together with what humans would generate in situations of poor reasoning. This “it works like a human” metaphor obscures what happens instead of clarifying it.
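
To make point 2 concrete, here’s a minimal sketch of the sampling step. The vocabulary and logits are made up for illustration (no real model involved): the next token is drawn purely from a probability distribution, and nothing in the loop checks whether the resulting sentence is true.

```python
# Minimal sketch: how a text generator picks its next token.
# The vocabulary and logits below are hypothetical, not from a real model.
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy continuation of the prompt "The capital of Australia is"
vocab = ["Canberra", "Sydney", "Melbourne", "Paris"]
logits = [2.1, 2.0, 1.2, 0.3]  # assumption: the wrong answer scores nearly as high as the right one

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]

print(dict(zip(vocab, [round(p, 2) for p in probs])))
print("sampled next token:", next_token)  # sometimes the wrong city - by design of the sampling step
```

There’s no “truth check” anywhere in that loop; when the sampled token happens to be wrong, the model is still working exactly as designed.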

On the main topic of the article: are LLMs useful? Sure! I use them myself. However, only a fool would try to shove LLMs everywhere, with no regard to how intrinsically [yes] unsafe they are. And yet that’s what big tech is doing, regardless of whether it’s Chinese or United-Statian or Russian or German or whatever.
