Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?

HorseRabbit@lemmy.sdf.org 4 months ago

Not an ELI5, sorry. I’m an AI PhD, and I want to push back against the premises a lil bit.

Why do you assume they don’t know? Like, what do you mean by “know”? Are you talking about conscious subjective experience, consistency of output, or an internal world model?

There’s lots of evidence indicating they are not conscious, although they can exhibit theory of mind. E.g.: arxiv.org/pdf/2308.08708.pdf

For consistency of output and internal world models, however, there is mounting evidence suggesting convergence on a shared representation of reality. E.g., this paper published 2 days ago: arxiv.org/abs/2405.07987
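
For anyone curious what “shared representation” means operationally: work in this area compares the geometry of activations across models using similarity measures like linear CKA. Here’s a minimal sketch (my own illustration, not code from the paper), where the toy “models” are just random matrices:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation
    matrices of shape (n_samples, n_features). Returns a value in
    [0, 1]; higher means more similar representational geometry."""
    # Center each feature dimension
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Toy example: two "models" whose features are rotations of the same
# underlying representation score ~1.0, unrelated ones score near 0.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 32))                    # shared latent structure
rot, _ = np.linalg.qr(rng.normal(size=(32, 32)))  # random orthogonal map
print(linear_cka(Z, Z @ rot))                     # ~1.0: same geometry
print(linear_cka(Z, rng.normal(size=(100, 32))))  # near 0: unrelated
```

The rotation trick is the point: CKA ignores superficial differences in how features are laid out and only cares whether two models carve up inputs the same way.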

The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct, although it is often repeated online for some reason.
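
To be concrete about what “probabilistically repeating training data” would literally mean, here’s a toy sketch of mine (not anyone’s actual system): a bigram table that can only re-emit transitions it has memorized, and fails outright on unseen contexts. The evidence above suggests LLMs are doing something richer than this.

```python
from collections import Counter, defaultdict
import random

# A literal "stochastic parrot": memorize bigram counts from the
# training text and sample the next word proportionally to how often
# it followed the previous word.
training_text = "the cat sat on the mat the dog sat on the rug".split()

counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def parrot_next(word):
    """Sample a next word; fails on any context never seen verbatim."""
    if word not in counts:
        raise KeyError(f"never saw {word!r} in training -> no output")
    options = counts[word]
    return random.choices(list(options), weights=list(options.values()))[0]

print(parrot_next("the"))   # 'cat', 'mat', 'dog', or 'rug'
print(parrot_next("sofa"))  # KeyError: a pure parrot cannot generalize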

One piece of evidence that comes to mind is this paper showing that models can understand rare English grammatical structures even when those structures are deliberately withheld from the training data: arxiv.org/abs/2403.19827
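
If you want to poke at this kind of claim yourself, the standard tool is minimal pairs: score a sentence using a rare construction against a minimally different ill-formed twin and see which one the model prefers. A rough sketch, assuming the HuggingFace transformers library and GPT-2 (the paper’s actual models and controls differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence):
    """Total log-probability the model assigns to the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean
        # negative log-likelihood per predicted token.
        out = model(**inputs, labels=inputs["input_ids"])
    n_tokens = inputs["input_ids"].shape[1]
    return -out.loss.item() * (n_tokens - 1)

# Minimal pair for a rare construction (article-adjective-numeral-noun,
# as in "a gorgeous five days") vs. a scrambled, ill-formed twin.
good = "We spent a gorgeous five days in the mountains."
bad = "We spent a five gorgeous days in the mountains."
# Expect True if the model has picked up the construction.
print(sentence_logprob(good) > sentence_logprob(bad))
```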
