Comment on Enshittification of ChatGPT
localhost@beehaw.org 6 hours ago
> As I understand it, most LLMs are almost literally the Chinese room thought experiment.
The Chinese room is not what you think it is.
Searle’s argument is that a computer program cannot ever understand anything, even if it’s a 1:1 simulation of an actual human brain with all the capabilities of one. He argues that understanding and consciousness are not emergent properties of a sufficiently intelligent system, but are instead inherent properties of biological brains.
“Brain is magic” basically.
Zaleramancer@beehaw.org 4 hours ago
Let me try again: I mean it in the literal sense of matching patterns to patterns without actually understanding them.
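To make that concrete, here is a minimal sketch of pattern matching without understanding: a toy bigram (Markov-chain) text generator that emits fluent-looking text purely from word co-occurrence statistics, with no representation of meaning. This is not how ChatGPT actually works internally; the corpus and variable names here are made up purely for illustration.

```python
import random
from collections import defaultdict

# Toy corpus; a real demo would use a large text dump.
corpus = (
    "the model predicts the next word from the previous word "
    "the model has no idea what a word means "
    "the output sounds fluent because the statistics are fluent"
).split()

# Build a bigram table: for each word, which words tend to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(12):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # fluent-sounding word salad, zero comprehension
```

Every run produces grammatical-sounding output, but nothing in the program ever models what any word refers to.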
localhost@beehaw.org 4 hours ago
If I were to have a discussion with a person responding to me like ChatGPT does, I would not dare suggest that they don’t understand the conversation, much less that they are incapable of understanding anything whatsoever.
What is making you believe that LLMs don’t understand the patterns? What’s your idea of “understanding” here?
Zaleramancer@beehaw.org 4 hours ago
What’s yours? I’m stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in cases like my earlier examples with citations: the output is purely the result of the model assembling sets of words that sound like academic citations. It doesn’t know what a citation actually is.
Can you prove otherwise? In my sense of “understanding”, it’s actually knowing the content and context of something, being able to actually subject it to analysis and explain it accurately and completely. An LLM cannot do this. It’s not designed to: there are neural-network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in data analysis, but not ChatGPT. It doesn’t understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn’t exist.
localhost@beehaw.org 3 hours ago
> being able to actually subject it to analysis and explain it accurately and completely
This is something that sufficiently large LLMs like ChatGPT can do pretty much as well as non-expert people on a given topic. Sometimes better.
This definition is also very knowledge-dependent. You can find a lot of people who would not meet this criterion, especially if the subject they’d have to explain is arbitrary and not up to them.
You can ask it to write a poem or a song on some random esoteric topic. You can ask it to play DnD with you. You can instruct it to write something more concisely, or more verbosely. You can tell it to write in a specific tone. You can ask follow-up questions and receive answers. This is not something I would expect of a system fundamentally incapable of any understanding whatsoever.
But let me reverse this question. Can you prove that humans are capable of understanding? What test can you posit that every English-speaking human would pass and every LLM would fail, one that would prove that LLMs are not capable of understanding while humans are?