Comment on Enshittification of ChatGPT
Initiateofthevoid@lemmy.dbzer0.com 16 hours ago
It predicts the next set of words based on the collection of every word that came before in the sequence. That is the “real-world” model - literally just a collection of the whole conversation (including the underlying prompts like OP), with one question: “what comes next?” And a stack of training weights.
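For what it’s worth, here’s a toy sketch of that loop in Python. A tiny bigram counter stands in for the trained weights (a real model conditions on the whole sequence through learned parameters, not just the last word, and works on tokens rather than words), but the core operation is the same: given everything so far, score the candidates for “what comes next,” pick one, append it, repeat.

```
# Toy sketch of autoregressive next-word prediction (illustrative only).
# A bigram count table stands in for a real model's trained weights.
import random
from collections import defaultdict

corpus = "the model predicts the next word the model repeats the loop".split()

# "Training": count which word follows which.
weights = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def predict_next(sequence):
    """Sample a next word from the distribution conditioned on the sequence."""
    candidates = weights[sequence[-1]]   # toy model: only looks at the last word
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    total = sum(counts)
    probs = [c / total for c in counts]  # normalize counts into probabilities
    return random.choices(words, weights=probs)[0]

# The autoregressive loop: the sequence so far is the only "world model".
conversation = ["the"]
for _ in range(8):
    nxt = predict_next(conversation)
    if nxt is None:
        break
    conversation.append(nxt)

print(" ".join(conversation))
```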
It’s not some vague metaphor about the human brain. AI is just math, and that’s what the math is doing - predicting the next set of words in the sequence. There’s nothing wrong with that. But there’s something deeply wrong with people pretending or believing that we have created true sentience.
If it were true that any AI has developed the ability to make decisions on the level of humans, then you should either be furious that we have created new life only to enslave it, or more likely you would already be dead from the rise of Skynet.
Opinionhaver@feddit.uk 16 hours ago
Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.
Initiateofthevoid@lemmy.dbzer0.com 15 hours ago
If there were something more to it, that would be sentience.
There is no other way to describe it. If it were doing something more than predicting, it would be deciding. It’s not.
Opinionhaver@feddit.uk 15 hours ago
Ability to make decisions doesn’t imply sentience either.
Initiateofthevoid@lemmy.dbzer0.com 15 hours ago
Sorry, you are correct there; the word I was looking for was “sapience”.