Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down

anachronist@midwest.social 4 days ago

Because everything we know about how the brain works says that it’s not a statistical word predictor.

LLMs have no encoding of meaning or veracity.

There are some great philosophical exercises about this, like the Chinese room thought experiment.

There’s also the fact that, empirically, human brains are bad at statistical inference, yet they don’t need to consume the entire internet and all written communication ever produced to hold a conversation. Nor do they need to process a billion images of a bird to identify a bird.

Now of course, because this exact argument has been had a billion times over the last few years, your obvious comeback is “maybe it’s a different kind of intelligence.” Well fuck, maybe birds shit ice cream. If you want to worship a chatbot made by a psychopath, be my guest.
