Comment on AGI achieved
untorquer@lemmy.world â¨6⊠â¨days⊠agoThese sorts of artifacts wouldnât be a huge issue except that AI is being pushed to the general public as an alternative means of learning basic information. The meme example is obvious to someone with a strong understanding of English but learners and children might get an artifact and stamp it in their memory, working for years off bad information. Not a problem for a few false things every now and then, thatâs unavoidable in learning. Thousands accumulated over long term use, however, and your understanding of the world will be coarser, like the Swiss cheese with voids so large it canât hold itself up.
jsomae@lemmy.ml â¨6⊠â¨days⊠ago
You're talking about hallucinations. That's different from tokenization artifacts. I'm specifically talking about its inability to know how many of a certain type of letter are in a word that it can spell correctly. This is not a hallucination per se; at the very least, it's caused by a completely different mechanism than whatever causes other factual errors. This specific problem is due to tokenization, and that's why I say it has little bearing on other shortcomings of LLMs.
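To illustrate the mechanism jsomae is describing, here is a minimal sketch, using a made-up toy vocabulary rather than any real model's tokenizer, of why a model that only ever sees subword tokens can spell a word correctly yet miscount its letters:

```python
# Toy illustration: an LLM receives opaque token IDs, not characters.
# The vocabulary below is invented for demonstration; real tokenizers
# (BPE, SentencePiece, etc.) are learned from data.
TOY_VOCAB = {"straw": 101, "berry": 102, "st": 103, "raw": 104}

def toy_tokenize(word: str) -> list[int]:
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in TOY_VOCAB:
                tokens.append(TOY_VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

ids = toy_tokenize("strawberry")
print(ids)                      # [101, 102] -> the model "sees" two opaque IDs
print("strawberry".count("r"))  # 3 -> counting letters requires the characters,
                                #      which the model never directly receives
```

The point of the sketch is that character-level facts such as "how many r's" are not present in the model's input at all; they have to be memorized or inferred, which is a different failure mode from hallucinating a fact.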
untorquer@lemmy.world â¨6⊠â¨days⊠ago
No, I'm talking about human learning and the danger of treating an imperfect tool as a reliable source of information, which is exactly how these companies want people to use it.
Whether the erroneous information comes from tokenization or from hallucination is irrelevant when these tools are already the main source so many people rely on for learning, for example, a new language.
jsomae@lemmy.ml â¨5⊠â¨days⊠ago
Hallucinations aren't relevant to my point here. I'm not arguing that AIs are a good source of information, and I agree that hallucinations are dangerous (either that, or misusing LLMs is dangerous). I also admit that for language learning, artifacts caused by tokenization could be very detrimental to the user.
The point I am making is that LLMs struggling with this kind of tokenization artifact is poor evidence for assuming anything about their behaviour on other tasks.
untorquer@lemmy.world â¨5⊠â¨days⊠ago
That's a fair point when these LLMs are restricted to areas where they function well. They have use cases that make sense when isolated from the ethics around training and compute. But the people who made them are applying them wildly outside these use cases.
These tools are pushed as a solution to every problem for the sake of profit, with intentional ignorance of these issues. If a few errors hurt someone, that's just a casualty on the way to making it profitable. That can't be disentangled from them unless you limit your argument to open-source models on local compute.