Comment on AGI achieved 🤖

jsomae@lemmy.ml 2 days ago

The LLM isn’t aware of its own limitations in this regard. The specific problem of getting an LLM to know which characters a token comprises has not been a focus of training. It’s a totally different kind of error from other hallucinations, almost entirely orthogonal to them. But other hallucinations are much more important to solve, whereas counting the letters in a word or adding numbers together is not very important, since, as you point out, there are already programs that can do that.
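For concreteness, here’s a minimal sketch of the tokenization issue (using OpenAI’s tiktoken library, which the comment above doesn’t name; the example word and outputs are illustrative):

```python
# Show what an LLM actually receives: opaque token IDs, not characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")

print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # e.g. sub-word pieces like ['str', 'aw', 'berry']

# Nothing in training explicitly maps each ID back to its constituent
# characters, so "how many r's are in strawberry?" can't simply be read
# off the model's input.
```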
