Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?
magic_lobster_party@kbin.run 7 months ago
There's the Chinese Room argument, which is a bit related.
Rolando@lemmy.world 7 months ago
This is what I was going to point to. When I was in grad school, it was often referred to as the Symbol Grounding Problem. Basically, it's an interdisciplinary research problem involving pragmatics, embodied cognition, and a bunch of other fields. The LLM people are now crashing into this research problem, and it's interesting to see how they react.
Asifall@lemmy.world 7 months ago
I always thought the Chinese Room argument was kinda silly. It’s predicated on the idea that humans have some unique capacity to understand the world that can’t be replicated by a syntactic system, but there is no attempt made to actually define this capacity.
The whole argument depends on our intuition that we think and know things in a way inanimate objects don't. In other words, it's circular: it draws the conclusion that computers can't think from the premise that computers can't think.
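As a toy illustration of the "purely syntactic system" both comments are talking about (not anything from the original thread, just a hypothetical sketch): the program below answers questions by symbol-to-symbol lookup alone, with nothing in it that represents what any of the strings mean.

```python
# Toy "Chinese Room": a purely syntactic responder.
# The rulebook pairs input symbols with output symbols; the program has
# no model of what any symbol refers to, so the mapping would work
# identically if every string were replaced with random tokens.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",        # hypothetical entries for illustration
    "今天天气如何?": "今天天气很好。",
}

def room_reply(symbols: str) -> str:
    """Return whatever output symbols the rulebook pairs with the input."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")

if __name__ == "__main__":
    # Produces a fluent-looking answer without any grounding in the world.
    print(room_reply("你好吗?"))
```

The point of the sketch is only that fluent output can come from rule-following over uninterpreted symbols; whether that settles anything about understanding is exactly what the Chinese Room debate is about.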