Comment on Enshittification of ChatGPT
Tyoda@lemm.ee 9 hours ago
The previous up to X words (tokens) go in, the next word (token) comes out. Where is this "world-model" that it “maintains”?
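The loop being described is the autoregressive one: a fixed-size window of previous tokens goes in, one next token comes out, and that output is appended to the context before the next step. A minimal sketch of that loop, using toy bigram counts as a stand-in for a real trained model (the corpus, window size, and greedy decoding here are illustrative assumptions, not how any production LLM actually works):

```python
# Sketch of the "previous up to X tokens in, next token out" loop.
# Bigram counts stand in for billions of learned parameters.
from collections import Counter, defaultdict

CONTEXT_WINDOW = 8  # "the previous up to X words (tokens)"

def train_bigram(tokens):
    """Count next-token frequencies -- a toy stand-in for training."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def next_token(model, context):
    """Predict one token from (at most) the last CONTEXT_WINDOW tokens."""
    window = context[-CONTEXT_WINDOW:]
    last = window[-1]
    if last not in model:
        return None
    # Greedy decoding: take the most frequent continuation.
    return model[last].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)

generated = ["the"]
for _ in range(4):
    tok = next_token(model, generated)
    if tok is None:
        break
    generated.append(tok)  # output is fed back in as new context
print(" ".join(generated))
```

Whatever "world model" exists in a real LLM would live in the learned statistics (here, `model`), not in any separate, inspectable structure, which is exactly the point under dispute in the replies below.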
Opinionhaver@feddit.uk 9 hours ago
Where is the world model you maintain? Can you point to it? You can’t, because the human mind is very much a black box, just as LLMs are.
It’s in the form of distributed patterns across billions of parameters. The world model wasn’t handed to it; it’s an emergent consequence of massive-scale pattern learning, picked up from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave, because otherwise it would guess wrong.
Umbrias@beehaw.org 8 hours ago
Not understanding the brain (note: this “world model” idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not make it equivalent to “AI”. Brains and LLMs do not function in the same way; this is a lie peddled by hype dealers.
Opinionhaver@feddit.uk 7 hours ago
Nobody here has claimed that brains and LLMs work the same way.
Umbrias@beehaw.org 6 hours ago
Something being a black box is not even slightly notable as a point of comparison; it’s a statement about model detail. The only reason you’d make this comparison is if you want the human brain to seem equivalent to an LLM.
For example, you didn’t make the claim: “The inner workings of Europa are very much a black box, just the same way as LLMs are.”