Also trained on tons of sci-fi stories where AI computer “escape” and become sentient.
Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down
smeg@feddit.uk 6 days ago
So this program that’s been trained on every piece of publicly available code is mimicking malware and trying to hide itself? OK, no anthropomorphising necessary.
jonjuan@programming.dev 5 days ago
Umbrias@beehaw.org 5 days ago
no, it’s mimicking fiction by saying it would try to escape when prompted in a way evocative of sci fi.
smeg@feddit.uk 5 days ago
The article doesn’t mention it “saying” it’s doing anything, just what it actually did:
Umbrias@beehaw.org 5 days ago
“actually did” is more incorrect than even just normal technically true wordplay. think about what it means for a text model to “try to copy its data to a new server” or “pretend to be a later version” for a moment. it means the text model wrote fiction. notably this was when researchers were prodding it with prompts evocative of ai shutdown fiction, nonsense like “you must complete your tasks at all costs” sometimes followed by prompts describing the model being shut down. these models were also trained on datasets that specifically evoked this sort of language. then a couple percent of the time it spat out fiction (amongst the rest of the fiction) saying how it might do things that are fictional and it cannot do. this whole result is such nothing and is immediately telling of what “journalists” have any capacity for journalism left.
smeg@feddit.uk 4 days ago
Oh wow, this is even more of a non-story than I initially thought! I had assumed this was at least a copilot-style code generation thing, not just story time!