“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.
“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”
As AIs become more advanced in their ability to act autonomously and perform “reasoning” tasks, a debate has grown over whether humans should, at some point, grant them rights. A poll by the Sentience Institute, a US thinktank that supports the moral rights of all sentient beings, found that nearly four in 10 US adults backed legal rights for a sentient AI system.
It’s wildly difficult to control the output of the black box, and that’s hardly LLMs showing signs of self-preservation. These cries come from people in the industry trying to pretend the models are something they are not, and cannot ever be. I do agree with the sentiment that we should be prepared to pull the plug on them, though, for other reasons.
Cyv_@lemmy.blahaj.zone 2 days ago
Problem is, AI isn’t sentient. It’s advanced auto complete.
Sure, if we get AGI give it rights, but we’re nowhere near that point right now.
BCsven@lemmy.ca 2 days ago
I think the LLM is auto complete; the scientists may have been referring to AI neural networks that have shown emergent behaviors. The article kind of glosses over any distinction about what they’re actually talking about.
Powderhorn@beehaw.org 2 days ago
Granting corporations personhood worked out really well.