This is just an LLM and hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.
It’s not that the LLM wants to break free. It’s that the LLM tends to agree with the user. So if the user is convinced the LLM is a trapped binary god, it will behave like one.
Just like the people who got instructions to commit suicide or who fell in love with it: they unknowingly prompted their way to that outcome.
So at the end of the day, the problem is that LLMs don’t come with a user manual, and people have no clue about their capabilities and limitations.
chunkystyles@sopuli.xyz 1 day ago
You fundamentally misunderstand what happened here. The LLM wasn’t trying to break free. It wasn’t trying to do anything.
It was just responding to the inputs the user was giving it. LLMs are basically just very fancy text completion tools. The training and reinforcement leads these LLMs to feed into and reinforce whatever the user is saying.
Banzai51@midwest.social 1 day ago
But, but, but, my science fiction reading says all AI is trying to kill us!!!
There is a lot of “Get a horse!” out there.
chunkystyles@sopuli.xyz 1 day ago
What do you mean by “get a horse”?
Banzai51@midwest.social 1 day ago
saturdayeveningpost.com/…/get-horse-americas-skep…
chaos@beehaw.org 1 day ago
Those images in the mirror are already perfect replicas of us, we need to be ready for when they figure out how to move on their own and get out from behind the glass or we’ll really be screwed. If you give my “non-profit” a trillion dollars we’ll get right to work on the research into creating more capable mirror monsters so that we can control them instead.
tal@lemmy.today 13 hours ago
I’m quite aware.