Comment on Enshittification of ChatGPT
Opinionhaver@feddit.uk 1 week ago
It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.
Tyoda@lemm.ee 1 week ago
The previous (up to X) words (tokens) go in, and the next word (token) comes out. Where is this “world-model” that it “maintains”?
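For readers unfamiliar with the loop being described, here is a minimal sketch of autoregressive next-token generation. The model name ("gpt2"), greedy argmax decoding, and the Hugging Face transformers API are illustrative assumptions only; they are not specific to ChatGPT, but the outer loop has the same shape.

```python
# Minimal sketch of the loop described above: the previous tokens go in,
# one next token comes out, and the process repeats.
# Assumes the Hugging Face `transformers` library and the small "gpt2"
# checkpoint purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The cat sat on the"
for _ in range(10):
    input_ids = tokenizer(text, return_tensors="pt").input_ids  # previous tokens go in
    with torch.no_grad():
        logits = model(input_ids).logits            # scores for every possible next token
    next_id = int(logits[0, -1].argmax())           # greedy choice of the single next token
    text += tokenizer.decode([next_id])             # append it and loop again
print(text)
```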
Opinionhaver@feddit.uk 1 week ago
Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box in just the same way that LLMs are.
It’s in the form of distributed patterns across billions of parameters. It’s not as if the world model was handed to it; it’s an emergent consequence of massive-scale pattern learning, picked up from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave, because otherwise it would guess wrong.
Umbrias@beehaw.org 1 week ago
Not understanding the brain (note: the “world model” idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not make it equal to “AI”. Brains and LLMs do not function in the same way; this is a lie peddled by hype dealers.
Opinionhaver@feddit.uk 1 week ago
Nobody here has claimed that brains and LLMs work the same way.