That sounds extremely interesting, I gotta look into that when I have more time.
Barack_Embalmer@lemmy.world 10 months ago
Many modern theories in cognitive science posit that the brain’s objective is to be a kind of “prediction machine”: to predict the incoming stream of sensory information from the top down, as well as processing it from the bottom up. This is sometimes summed up in the aphorism “perception is controlled hallucination”.
intensely_human@lemm.ee 10 months ago
So human thought is … text prediction?
Barack_Embalmer@lemmy.world 10 months ago
In a sense… yes! Although of course it’s thought to operate across many modalities and time-scales, not just text. A crucial piece of the picture is the Bayesian aspect, which involves estimating one’s uncertainty over predictions. Further info: en.wikipedia.org/wiki/Predictive_coding
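The precision-weighted update at the heart of that Bayesian story can be sketched in a few lines. This is a toy Gaussian example, not code from any predictive-coding paper; the function name and variables are illustrative:

```python
# Toy predictive-coding update: combine a top-down prediction (prior)
# with a bottom-up observation (likelihood), weighting each by its
# precision (inverse variance). Purely illustrative.

def precision_weighted_update(prior_mean, prior_var, obs, obs_var):
    """Return the posterior mean and variance for two Gaussians."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    # The posterior mean is a precision-weighted blend of the
    # prediction and the sensory evidence.
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# A confident prediction barely moves toward a noisy observation...
print(precision_weighted_update(0.0, 0.1, 1.0, 1.0))
# ...while an uncertain prediction is dominated by the evidence.
print(precision_weighted_update(0.0, 1.0, 1.0, 0.1))
```

The “estimating one’s uncertainty” part is exactly those precision terms: the more uncertain the prediction, the more the incoming data gets to revise it.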
It’s also important to note the recent trends towards so-called “Embodied” and “4E cognition”, which emphasize the importance of being situated in a body, in an environment, with control over actions, as essential to explaining the nature of mental phenomena.
But yeah, it’s very exciting how in recent years we’ve begun to tap into the power of these kinds of self-supervised learning objectives for practical applications like Word2Vec and Large Language/Multimodal Models.
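The self-supervised objective those models share is just “predict the next thing from raw data, no labels needed”. A bigram count model is about the simplest possible instance of that idea; this sketch is a toy, and real systems obviously learn far richer predictors:

```python
# Toy self-supervised "language model": learn next-word predictions
# from raw text alone, with no human-provided labels.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen after `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Scale the same objective up by many orders of magnitude in data and model capacity and you get the modern LLM training recipe.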
intensely_human@lemm.ee 10 months ago
We can have robots with bodies that talk and form relationships with people now. Not deep intimate relationships, but simple things like maintaining conversations with people. You wouldn’t need much more software on top of the LLM to make a really functional person.
Barack_Embalmer@lemmy.world 10 months ago
I have to disagree about that last sentence. Augmenting LLMs to have any remotely person-like attributes is far from trivial.
The current thought in the field about this centers around so-called “Objective Driven AI”:
openreview.net/pdf?id=BZ5a1r-kVsf
arxiv.org/abs/2308.10135
in which strategies are proposed to decouple the AI’s internal “world model” from its language capabilities, to facilitate hierarchical planning and mitigate hallucination.
The latter half of this talk by Yann LeCun addresses this topic too: www.youtube.com/watch?v=pd0JmT6rYcI
It’s very much an emerging and open-ended field with more questions than answers.
erusuoyera@sh.itjust.works 10 months ago
Pterty mcuh, as lnog as the frist and lsat ltteres are in the crrecot palecs.
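The effect that comment is demonstrating is easy to reproduce: shuffle only the interior letters of each word, keeping the first and last fixed. A minimal sketch (the function name is made up, and punctuation handling is ignored):

```python
# Scramble the interior letters of each word while pinning the first
# and last letters in place - the trick the comment above relies on.
import random

def typoglycemia(text, seed=None):
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if len(word) > 3:  # words of 3 letters or fewer can't scramble
            middle = list(word[1:-1])
            rng.shuffle(middle)
            word = word[0] + "".join(middle) + word[-1]
        out.append(word)
    return " ".join(out)

print(typoglycemia("Pretty much as long as the first and last letters are correct"))
```

Each scrambled word remains an anagram of the original with its endpoints intact, which is why the sentence stays (mostly) readable.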