Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?
LainTrain@lemmy.dbzer0.com 7 months ago
That analogy is hard to come up with because the question of whether a model even comprehends meaning requires first answering the unanswerable question of what meaning actually is, and whether humans are also just spicy pattern predictors / autocompletes. Predicting patterns is arguably the whole point of evolving intelligence: being able to connect cause and effect and anticipate the future helps with not starving. The line is far blurrier than most are willing to admit, and ultimately hinges on our experience of sapience rather than on any strict definition of knowledge and meaning.
Instead, it’s far better to say that ML models are not sentient: they are like a very big brain that’s switched off, but we can access it by stimulating it with a prompt.
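To make the “switched-off brain” picture concrete, here is a minimal sketch using the Hugging Face transformers library (the model name is just a small example). The point is that the weights are frozen: nothing in the model changes or persists between calls, and with greedy decoding the same prompt produces the same output every time.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no learning, no internal state carried over

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# Each call is an isolated "stimulation": the model has no memory of
# earlier prompts unless we paste them back into the input ourselves.
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

Run it twice and you get the identical continuation, which is exactly what the analogy predicts: the “brain” never wakes up between prompts.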
Drummyralf@lemmy.world 7 months ago
Interesting thoughts! Now that I think about it, we as humans have a huge advantage in having not only language but also sight, smell, hearing and taste. An LLM basically only has “language.” We might not realize how much meaning we create through those other senses.
CodeInvasion@sh.itjust.works 7 months ago
To add to this insight: many recent publications show dramatic improvements from adding another modality, like vision, to language models.
While this is conjecture, only loosely supported by existing research, I personally believe that multimodality is the secret to understanding human intelligence.
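For anyone curious what “adding vision” typically looks like, here is a hedged sketch of one common pattern (the LLaVA-style approach): encode the image, project its features into the LLM’s token-embedding space, and prepend them to the text embeddings so the LLM reads the image as extra tokens. All module names and sizes below are illustrative stand-ins, not any particular paper’s architecture.

```python
import torch
import torch.nn as nn

d_vision, d_text = 1024, 4096  # assumed encoder / LLM embedding sizes

vision_encoder = nn.Linear(3 * 224 * 224, d_vision)  # stand-in for a real ViT
projector = nn.Linear(d_vision, d_text)              # the "glue" layer

image = torch.rand(1, 3 * 224 * 224)     # one flattened RGB image
text_embeds = torch.rand(1, 12, d_text)  # 12 text tokens, already embedded

image_tokens = projector(vision_encoder(image)).unsqueeze(1)  # (1, 1, d_text)

# The LLM now "sees" the image as if it were extra tokens in the prompt.
multimodal_input = torch.cat([image_tokens, text_embeds], dim=1)
print(multimodal_input.shape)  # torch.Size([1, 13, 4096])
```

The design choice worth noticing is how small the bridge is: the language model itself is unchanged, and a single projection layer is enough to let a new sense feed into it.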