Exactly. LLMs are just a Chinese room
Cognitive_Dissident@lemm.ee 6 months ago
Here’s the thing: what they keep calling ‘AI’ isn’t really ‘artificial intelligence’ at all. It’s just language processing at a very large scale. This type of software has no actual cognitive capability; it can’t ‘think’ and has no capacity to ‘think’, but it’s written so that it gives the appearance of ‘thinking’. It’s a trick; it’s fake.
wischi@programming.dev 6 months ago
Your brain is also “just a Chinese room”. It’s just physics, chemistry and biology. There is no magic inside your brain. If a “Chinese room” is fast enough and can fool everyone into “believing” that it’s fluent in Chinese, then the room speaks Chinese.
kogasa@programming.dev 6 months ago
This fails to engage with the thought experiment. The question isn’t whether “the room is fluent in Chinese.” It’s whether the machine learning model is actually comparable to the person in the room, who executes program instructions to turn input into output without ever understanding anything about the input or output.
wischi@programming.dev 6 months ago
The same is true for your brain. Show me the neurons that are fluent in Chinese. Of course the LLM is just executing code. And if we ever have AGI, it will also just be “executing code”, but so does your brain: it isn’t code, but the laws of physics dictate what your brain does. The laws of physics don’t understand Chinese, the atoms and molecules don’t understand Chinese. “Understanding Chinese” is an emergent property.
Think about it this way: assume every person you know (except you) is just some form of Chinese room. First of all, you couldn’t prove otherwise, and second, it wouldn’t matter at all.
MindTraveller@lemmy.ca 6 months ago
The problem here is that intelligence is a beetle in a box, in Wittgenstein’s sense: everyone has their own, nobody can look inside anyone else’s, so the word never points at a shared inner thing we could test for.
BlueMagma@sh.itjust.works 6 months ago
How can you know the system has no cognitive capability? We haven’t solved that problem for our own minds; we have no definition of what consciousness is. For all we know, we might be a multimodal LLM ourselves.
Cognitive_Dissident@lemm.ee 6 months ago
If we can’t even begin to understand how a biological brain like ours produces the phenomena of ‘thought’ and ‘consciousness’, then how the fuck can you build machines and write software that do those things? Rhetorical question: we can’t, full stop. All we’ve got is fakery, the illusion of ‘thinking’, ersatz, not the real thing.
For fuck’s sake, I go round and round with people on this shit every fucking time, because everyone believes the hype and nobody is ever told the facts. They watch TV shows and movies and think someone made that real. They take for granted what their brains can do naturally and effortlessly (…well, not so effortlessly in too many people’s case), and, knowing nothing about software or hardware, they think it’s trivial to build machines that can do what their own brain can do. It. Is. Not.
MindTraveller@lemmy.ca 6 months ago
Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 and Cortana.
Cognitive_Dissident@lemm.ee 6 months ago
No, moron, I’m NOT. Go talk to neuroscientists; that’s what I did. They’ll tell you: an amoeba has more cognitive capability than the best of this crapware.
YOU get your “AI” information from media hype, which gets it from AI company marketing departments, which are told: “Sell this crap we created so we can get paid”.
You’re dumb. You’re so dumb that you can’t understand when someone who is actually smart tells you something, so you think they’re dumb. Get yourself a dog, name it ‘Clue’, so you’ll always have one.
Amir@lemmy.ml 6 months ago
That’s specifically LLMs. Image recognition like OP’s has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and learned weights, but that’s also a completely different type of “AI”.
That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with “AI” as a whole.
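A minimal sketch of that distinction, assuming Python with the Hugging Face transformers library and its default per-task checkpoints (the file name and prompt are illustrative, not from the thread): both objects below are marketed as “AI”, but one maps pixels to labels and the other maps tokens to more tokens.

```python
# Sketch, not a definitive implementation: assumes `pip install transformers torch`
# and lets the library pick its default checkpoint for each task.
from transformers import pipeline

# Image recognition: a vision model mapping pixels to class labels.
image_model = pipeline("image-classification")
print(image_model("cow.jpg"))  # hypothetical local file; returns label/score dicts

# Language processing: an LLM mapping a token sequence to further tokens.
text_model = pipeline("text-generation")
print(text_model("Long Cow is coming because", max_new_tokens=10))
```

Neither pipeline shares anything with the other beyond “neural network trained on a big dataset”, which is the point: lumping them together as one “AI” obscures more than it explains.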
Cognitive_Dissident@lemm.ee 6 months ago
It’s all garbage. I consider all of it a fad, and I just can’t wait until the world wakes up, realizes what utter crap it is, and it all just goes away.
Amir@lemmy.ml 6 months ago
Neural networks aren’t going anywhere, because they can be genuinely useful, just not for every problem.
Cognitive_Dissident@lemm.ee 6 months ago
It’s crap. Too many people believe the hype: they see TV shows and movies with total fantasy AI in them, they think this crapware is like that, they think there’s someone alive in that box, they’ll come to trust it too much, and they’ll get wrecked because of it. THAT is the real danger of this garbage.