Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?

kromem@lemmy.world ⁨5⁩ ⁨months⁩ ago

So there are two different things in what you’re asking.

(1) They don’t know what they’re talking about (i.e., semantically).

This is probably not the case. There’s good evidence from the past year of research papers and replicated projects that transformer models do pick up world models from their training data, representing and integrating things at a more conceptual level rather than just surface tokens.

For example, a GPT trained only on chess moves builds an internal structure of the whole board and tracks “my pieces” and “opponent pieces.”
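The way researchers check this is usually a "linear probe": if the board state is really encoded in the model's hidden activations, a simple linear classifier trained on those activations can read the board back out. Here's a hedged sketch of just the probing step — the activations are faked (in the real papers they come from an actual trained transformer), purely to show what a probe is:

```python
import numpy as np

# Sketch of the linear-probe technique from the chess/Othello-GPT
# world-model work. Real experiments probe actual transformer hidden
# states; here we fabricate activations where one square's occupancy
# is linearly embedded, just to demonstrate the probing step itself.

rng = np.random.default_rng(0)

n_samples, d_model = 2000, 64
feature_dir = rng.normal(size=d_model)       # hypothetical "board feature" direction
labels = rng.integers(0, 2, size=n_samples)  # 1 = "my piece on this square"

# Fake hidden states: noise, plus the feature direction when label is 1.
hidden = rng.normal(size=(n_samples, d_model)) + np.outer(labels, feature_dir)

# Train a logistic-regression probe with plain gradient descent.
w, b = np.zeros(d_model), 0.0
for _ in range(300):
    z = np.clip(hidden @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= hidden.T @ (p - labels) / n_samples
    b -= np.mean(p - labels)

preds = (hidden @ w + b) > 0
accuracy = np.mean(preds == labels)
print(f"probe accuracy: {accuracy:.2f}")
```

If the probe's accuracy is far above chance, the feature really is linearly represented inside the activations — which is what those papers found for board squares and piece ownership.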

(2) Why do they say dumb shit that’s clearly wrong and not know it?

They aren’t knowledge memorizers. They are very advanced pattern extenders.

Where the answer to a question is part of a pattern they can successfully extend, they get it right. Where it isn’t, they confabulate an answer, much like stroke patients who don’t know that they don’t know something and make it up as they go along. And as with stroke patients, you can detect when this is happening with a similar approach: ask the same question 10 times and see whether the answer stays consistent or changes every time.
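The "ask 10x" check above is easy to script. This is a hedged sketch of just the scoring part — `known` and `confabulated` are canned example samples standing in for whatever your actual sampling API returns:

```python
from collections import Counter

def consistency(answers):
    """Fraction of samples that agree with the most common answer."""
    counts = Counter(a.strip().lower() for a in answers)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(answers)

# Fake samples of the same question asked repeatedly.
# A question the model "knows": the answer barely varies.
known = ["Paris", "paris", "Paris", "Paris", "Paris"]
# A confabulated answer: it changes across samples.
confabulated = ["1943", "1956", "1921", "1943", "1968"]

print(consistency(known))         # high -> likely real knowledge
print(consistency(confabulated))  # low  -> likely confabulation
```

A high score doesn't prove the answer is correct, only that the model is stable on it; a low score is the tell that it's extending a pattern it doesn't actually have.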

They aren’t memorizing the information like a database. They are building ways to extend input into output in ways that match as much information as they can be fed.
