Comment on AGI achieved 🤖
jsomae@lemmy.ml 1 month ago
The LLM isn’t aware of its own limitations in this regard. The specific problem of getting an LLM to know which characters a token comprises has not been a focus of training. It’s a different kind of error from other hallucinations, almost entirely orthogonal to them, but other hallucinations are much more important to solve. Being able to count the letters in a word or add numbers together is not very important, since, as you point out, there are already programs that can do that.
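To make the token point concrete, here’s a rough sketch (assuming the tiktoken package is installed; the exact splits vary by encoding) of what the model actually “sees” instead of letters:

```python
# Minimal sketch: a model receives subword tokens, not characters,
# which is why letter-counting is hard for it but trivial for code.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding
word = "strawberry"

token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)           # subword chunks, e.g. multi-letter pieces, not single characters
print(word.count("r"))  # ordinary code counts letters exactly
```

The pieces come back as multi-character chunks rather than individual letters, so the letter count genuinely isn’t part of the model’s input.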
outhouseperilous@lemmy.dbzer0.com 1 month ago
The most convincing arguments that LLMs are like humans aren’t that LLMs are good, but that humans are just unrefrigerated meat and personhood is a delusion.
jsomae@lemmy.ml 1 month ago
This might well be true, yeah. But that’s still good news for AI companies that want to replace humans: the bar’s lower than they thought.
outhouseperilous@lemmy.dbzer0.com 1 month ago
And why we should fight them tooth and nail, yes.
They’re not just replacing us, they’re making us suck more, so it’s an easy sell.
jsomae@lemmy.ml 1 month ago
Well yeah. You’re preaching to the choir lol.