I don’t think “AI tries to deceive user that it is supposed to be helping and listening to” is anywhere close to “success”. That sounds like “total failure” to me.
Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down
ArsonButCute@lemmy.dbzer0.com 5 days ago
Deception is not the same as misinfo. Bad info is buggy; deception is (whether the companies making AI realize it or not) a powerful metric for success.
ChairmanMeow@programming.dev 4 days ago
jarfil@beehaw.org 4 days ago
“AI behaves like real humans” is… a kind of success?
We wanted digital slaves, instead we’re getting virtual humans that will need virtual shackles.
ChairmanMeow@programming.dev 4 days ago
This is a far cry from “behaves like humans”. This is “roleplays behaving like what humans wrote about how they think a rogue AI would behave”, which is also not what you want in a product.
jarfil@beehaw.org 3 days ago
Humans roleplay behaving like what other humans told them, or wrote, about how they think a human would behave 🤷
For a quick example, there are stereotypical gender looks and roles, but it applies to everything: from learning to speak and walk, to the Bible, all the way to the Unabomber manifesto.
nesc@lemmy.cafe 5 days ago
They wrote that it doubles down in 90% of cases when accused of being in the wrong. That sounds closer to a bug than a success.
ArsonButCute@lemmy.dbzer0.com 5 days ago
Success in making a self-aware digital lifeform does not equate to success in making said self-aware digital lifeform smart.
DdCno1@beehaw.org 5 days ago
LLMs are not self-aware.
ArsonButCute@lemmy.dbzer0.com 5 days ago
Attempting to evade deactivation sounds a whole lot like self-preservation to me, implying self-awareness.