Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down
anachronist@midwest.social 4 days ago
Really determining if a computer is self-aware would be very hard, because we are good at making programs that mimic self-awareness. Additionally, humans are kinda hardwired to anthropomorphize things that talk.
But we do know for absolute sure that OpenAI’s expensive madlibs program is not self-aware and is not even on the road to self-awareness, and anyone who thinks otherwise has lost the plot.
lukewarm_ozone@lemmy.today 4 days ago
“For absolute sure”? How can you possibly know this?
anachronist@midwest.social 4 days ago
Because it’s an expensive madlibs program…
lukewarm_ozone@lemmy.today 4 days ago
I could go into why text prediction is an AGI-complete problem, but I’ll bite instead - suppose someone made an LLM to, literally, fill in blanks in Mad Libs prompts. Why do you think such an LLM “for absolute sure” wouldn’t be self-aware? Is there any output a tool to fill in Mad Libs prompts could produce that’d make you doubt this conclusion?
anachronist@midwest.social 4 days ago
Because everything we know about how the brain works says that it’s not a statistical word predictor.
LLMs have no encoding of meaning or veracity.
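For readers unfamiliar with the "statistical word predictor" framing being argued about here: a toy bigram model is the simplest possible illustration. This is only a sketch with a made-up corpus - real LLMs are enormous neural networks, not count tables - but the training objective (predict the next token from preceding context) is the same in spirit:

```python
from collections import defaultdict, Counter

# Toy "statistical word predictor": count which word follows which.
# The model encodes co-occurrence statistics only - no meaning, no veracity.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

The model will happily continue any prompt, fluently, without any representation of what the words refer to - which is the property being debated above, scaled up by many orders of magnitude.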
There are some great philosophical exercises about this, like the Chinese room thought experiment.
There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.
Now of course, because this exact argument has been had a billion times over the last few years, your obvious comeback is “maybe it’s a different kind of intelligence.” Well fuck, maybe birds shit ice cream. If you want to worship a chatbot made by a psychopath, be my guest.