Comment on Enshittification of ChatGPT
Tyoda@lemm.ee 19 hours ago
And what more would that be?
cabbage@piefed.social 19 hours ago
It, uhm, predicts tokens?
Opinionhaver@feddit.uk 19 hours ago
It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.
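To make the claim concrete: "guessing the next word" means sampling from a probability distribution the model has learned, so implausible continuations are heavily down-weighted rather than equally likely. A minimal toy sketch, with invented scores (a real LLM would produce them from a forward pass over its trained parameters):

```python
import math
import random

def sample_next(logits: dict[str, float]) -> str:
    """Sample a next token from unnormalized scores via softmax."""
    # softmax: turn raw scores into a probability distribution
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # weighted draw: high-probability tokens dominate the "guess"
    return random.choices(list(probs), weights=probs.values())[0]

# Invented scores for a context like "The cat sat on the":
scores = {"mat": 4.0, "sofa": 2.5, "moon": -1.0, "refrigerator": -3.0}
print(sample_next(scores))  # almost always "mat" or "sofa", rarely nonsense
```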
Tyoda@lemm.ee 18 hours ago
The previous up to X words (tokens) go in, the next word (token) comes out. Where is this “world-model” that it “maintains”?
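That loop, as a minimal sketch - the model class and its `predict_next` method here are hypothetical stand-ins, not any real library's API:

```python
class BigramModel:
    """Hypothetical stand-in: predicts the next token from the last one."""
    def __init__(self, table: dict[int, int]):
        self.table = table

    def predict_next(self, window: list[int]) -> int:
        # A real LLM attends over the whole window; this toy only looks
        # at the final token, but the interface is the same.
        return self.table.get(window[-1], 0)

def generate(model, prompt_tokens, max_new=10, context_window=2048):
    """Autoregressive loop: the last N tokens go in, one token comes out."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        window = tokens[-context_window:]        # only the last N tokens are visible
        next_token = model.predict_next(window)  # one prediction step -> one token
        tokens.append(next_token)                # the output is fed back as input
    return tokens

toy = BigramModel({1: 2, 2: 3, 3: 1})
print(generate(toy, [1], max_new=6))  # [1, 2, 3, 1, 2, 3, 1]
```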
Opinionhaver@feddit.uk 18 hours ago
Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box, just the same way LLMs are.
It’s in the form of distributed patterns across billions of parameters. It’s not like the world model was handed to it. It’s an emergent consequence of massive-scale pattern learning. It learned it from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave - because otherwise it would guess wrong.
Umbrias@beehaw.org 17 hours ago
Not understanding the brain (note: said world-model idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not make it equal to “AI”. Brains and LLMs do not function in the same way; that equivalence is a lie peddled by hype dealers.