Comment on "What is a good eli5 analogy for GenAI not 'knowing' what they say?"

kromem@lemmy.world ⁨4⁩ ⁨months⁩ ago

So the paper that found that particular bit in Othello was this one: arxiv.org/abs/2310.07582

Which was building off this earlier paper: arxiv.org/abs/2210.13382

And then this was the work replicating it in Chess: lesswrong.com/…/a-chess-gpt-linear-emergent-world…

It’s not by chance - there are literal interventions where flipping a weight or vector results in the opposite behavior (like the model acting as if a piece is in a different place, or playing well or badly regardless of the previous moves).
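The basic idea behind those interventions can be sketched in a few lines. This is a toy illustration, not the papers' actual code: assume you've already trained a linear probe direction that reads a board property (say, "is this square occupied?") out of a hidden state; reflecting the state along that direction flips what the probe reads while leaving everything orthogonal to it untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                       # hypothetical hidden dimension
probe = rng.normal(size=d)   # assumed pre-trained probe direction
probe /= np.linalg.norm(probe)

# Build a hidden state with a known component along the probe:
v = rng.normal(size=d)
orth = v - (v @ probe) * probe      # part orthogonal to the probe
h = orth + 1.5 * probe              # probe reads this as "occupied"

def readout(state):
    return "occupied" if state @ probe > 0 else "empty"

# Intervention: reflect the state across the probe's decision
# boundary, changing only the component along the probe direction.
h_flipped = h - 2.0 * (h @ probe) * probe

print(readout(h))          # occupied
print(readout(h_flipped))  # empty
```

In the actual papers the probe is trained on the model's residual stream and the edited activation is fed back into the network, which then moves pieces as if the board really had changed.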

But it seems unlikely that there’s any actual ‘feeling’ or conscious sentience behind it, beyond the model knowing what the abstracted pattern means in relation to its inputs and outputs. It’s probably simulating some form of ego and self, but not actively experiencing it, if that makes sense.
