Comment on Enshittification of ChatGPT
db0@lemmy.dbzer0.com 3 weeks ago
No, it literally doesn’t understand the question. It just writes what it statistically expects would follow the words in the sentence expressing the question.
Opinionhaver@feddit.uk 3 weeks ago
This oversimplifies it to the point of being misleading. It does more than simply predict the next word. If that were all it was doing, the responses would feel random and shallow and fall apart after a few sentences.
Initiateofthevoid@lemmy.dbzer0.com 3 weeks ago
It predicts the next set of words based on the collection of every word that came before in the sequence. That is the “real-world” model - literally just a collection of the whole conversation (including the underlying prompts like OP), with one question: “what comes next?” - and a stack of training weights.
It’s not some vague metaphor about the human brain. AI is just math, and that’s what the math is doing - predicting the next set of words in the sequence. There’s nothing wrong with that. But there’s something deeply wrong with people pretending or believing that we have created true sentience.
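To make that concrete, here is a minimal sketch of the loop, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (not whatever ChatGPT actually runs): the tokens so far go in, a score for every possible next token comes out, the highest-scoring one is appended, and the whole thing repeats.

```python
# Minimal autoregressive generation loop - illustrative only, assuming
# the Hugging Face transformers library and GPT-2 as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids=ids).logits     # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()             # greedily take the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)  # append it and ask "what comes next?" again

print(tokenizer.decode(ids[0]))
```

The only state carried between steps is the growing list of token ids and the frozen weights; there is no separate memory or fact store being consulted.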
If it were true that any AI has developed the ability to make decisions on the level of humans, then you should either be furious that we have created new life only to enslave it, or, more likely, you would already be dead from the rise of Skynet.
Opinionhaver@feddit.uk 3 weeks ago
Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.
Initiateofthevoid@lemmy.dbzer0.com 3 weeks ago
If there was something more to it, that would be sentience.
There is no other way to describe it. If it was doing something more than predicting, it would be deciding. It’s not.
Zaleramancer@beehaw.org 3 weeks ago
As I understand it, most LLMs are almost literally the Chinese room thought experiment. They have a massive collection of data, strong algorithms for matching letters to letters in a productive order, and sufficiently advanced processing power to make use of that. An LLM is very good at presenting conversation; completing sentences, paragraphs or thoughts; or answering questions of very simple fact - they’re not good at analysis, because that’s not what they were optimized for.
This can be seen in what people discovered when asking them to do things like telling you how many times a letter shows up in a word, doing simple math that’s presented in a weird way, or writing a document with citations - they will hallucinate information, because they are just doing what they were made to do: completing sentences, expanding words along a probability curve that produces legible, intelligible text.
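The letter-counting failure in particular falls straight out of tokenization. A small illustration, assuming the tiktoken library and its cl100k_base encoding (one of OpenAI’s published encodings; the exact splits vary by model):

```python
# Why "how many r's are in strawberry?" is hard for an LLM: the model never
# sees individual letters, only sub-word token ids. Assumes the tiktoken
# library and the cl100k_base encoding as an example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")

print(tokens)                              # a short list of integer token ids
print([enc.decode([t]) for t in tokens])   # the sub-word chunks the model actually "sees"
```

Counting letters inside those chunks is not something the prediction step ever has to do during training, so the model just produces a plausible-sounding number.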
I opened up chat-gpt and asked it to provide me with a short description of how Medieval European banking worked, with citations, and it provided me with what I asked for. However, the citations it made were fake:
[image: screenshot of the response with its fake citations]
The minute I asked it about the citations, I assume a bit of sleight of hand happened: it’s been set up so that if someone asks a question like that, it’s forwarded to a search engine that verifies whether the book exists, probably using WorldCat or something. Then I assume another search is made to build the prompt that lets the LLM present the fact that the author does exist, and possibly accurately name some of their books.
I say sleight of hand because this presents the idea that the model is capable of understanding it made a mistake, but I don’t think it does - if it knew that the book wasn’t real, why would it have mentioned it in the first place?
I tested each of the citations it made. In one case, I asked it to tell me more about one of them and it ended up supplying an ISBN without me asking, which I dutifully checked. It was for a book that exists, but one that shared neither the title nor the author it had given me, because those were made up. The real book was about the correct subject, but if the LLM can’t even tell me the name of the book correctly, why am I expected to believe what it says about the book itself?
localhost@beehaw.org 3 weeks ago
Chinese room is not what you think it is.
Searle’s argument is that a computer program cannot ever understand anything, even if it’s a 1:1 simulation of an actual human brain with all the capabilities of one. He argues that understanding and consciousness are not emergent properties of a sufficiently intelligent system, but are instead inherent properties of biological brains.
“Brain is magic” basically.
Zaleramancer@beehaw.org 3 weeks ago
Let me try again: In the literal sense of it matching patterns to patterns without actually understanding them.
Tyoda@lemm.ee 3 weeks ago
And what more would that be?
Opinionhaver@feddit.uk 3 weeks ago
It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.
Tyoda@lemm.ee 3 weeks ago
Up to the previous X words (tokens) go in, the next word (token) comes out. Where is this “world-model” that it “maintains”?
cabbage@piefed.social 3 weeks ago
It, uhm, predicts tokens?
db0@lemmy.dbzer0.com 3 weeks ago
Yes, it is indeed a very fancy autocomplete, but as much as it feels like it’s doing reasoning, it is not.
Opinionhaver@feddit.uk 3 weeks ago
I haven’t claimed it does reasoning.
db0@lemmy.dbzer0.com 3 weeks ago
There’s nothing else left, then.