Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down
socsa@piefed.social 4 days ago
The reality is that a certain portion of people will never believe that an AI can be self-aware, no matter how advanced it gets. There are a lot of interesting philosophical questions here, and the hard skeptics are punting just as much as the true believers in this case.
It's honestly kind of sad to see how much reactionary anti-tech sentiment there is in this tech enthusiast community.
anachronist@midwest.social 4 days ago
Really determining if a computer is self-aware would be very hard because we are good at making programs that mimic self-awareness. Additionally, humans are kinda hardwired to anthropomorphize things that talk.
But we do know for absolute sure that OpenAI’s expensive madlibs program is not self-aware and is not even on the road to self-awareness, and anyone who thinks otherwise has lost the plot.
lukewarm_ozone@lemmy.today 4 days ago
“For absolute sure”? How can you possibly know this?
anachronist@midwest.social 4 days ago
Because it’s an expensive madlibs program…
lukewarm_ozone@lemmy.today 4 days ago
I could go into why text prediction is an AGI-complete problem, but I'll bite instead - suppose someone made an LLM to, literally, fill in blanks in Mad Libs prompts. Why do you think such an LLM "for absolute sure" wouldn't be self-aware? Is there any output a madlibs-filling tool could produce that would make you doubt that conclusion?