Comment on AGI achieved 🤖

jsomae@lemmy.ml 5 days ago

Hallucinations aren’t relevant to my point here. I’m not arguing that AIs are a good source of information, and I agree that hallucinations are dangerous (or at least that misusing LLMs is dangerous). I also admit that for language learning, artifacts caused by tokenization could be very detrimental to the user.

The point I am making is that LLMs struggling with this kind of tokenization artifact is poor evidence for drawing conclusions about their behaviour on other tasks.
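For anyone unfamiliar with what I mean by a tokenization artifact: here’s a minimal sketch (using the tiktoken library as an example; any BPE tokenizer shows the same thing) of why letter-level tasks like counting the r’s in “strawberry” probe the tokenizer rather than the model’s reasoning. The model receives multi-character chunks, not individual letters.

```python
import tiktoken  # assumed installed: pip install tiktoken

# BPE vocabulary used by several OpenAI models
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)

# Show the subword pieces the model actually receives
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
print(pieces)  # a few multi-character chunks, not 10 separate letters
```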
