Comment on Someone got Gab's AI chatbot to show its instructions
MachineFab812@discuss.tchncs.de 7 months agoThe AI figured out a way around the garbage it was fed by idiots, and told on them for feeding it garbage. That’s the opposite of dumb.
Comment on Someone got Gab's AI chatbot to show its instructions
MachineFab812@discuss.tchncs.de 7 months agoThe AI figured out a way around the garbage it was fed by idiots, and told on them for feeding it garbage. That’s the opposite of dumb.
melmi@lemmy.blahaj.zone 7 months ago
That’s not what’s going on here. It’s just doing what it’s been told, which is repeating the system prompt. It has nothing to do with Gab, this trick or variations of it work on pretty much any GPT deployment.
We need to be careful about anthropomorphizing AI.
MachineFab812@discuss.tchncs.de 7 months ago
It works because the AI finds and exploits the flaws in the prompt, as it has been trained to do. A conversational AI that couldn’t do so wouldn’t meet the definition of such.
Anthropomorphizing? Put it this way: The writers of that prompt apparently believed it would work to conceal the instructions in it. That shows them to be idiots without getting into anything else about them. The AI doesn’t know or believe any of that, and it doesn’t have to, but it doesn’t have to be anthropomorphic or “intelligent” to be “smarter” than people who consume their own mental excrement like so.
Blanket Time/Blanket Training(look it up), sadly, apparently works on some humans. AI seems to be already doing better than that. “Dumb” isn’t the word to be using for it, least of all in comparison to the damaged morons trying to manipulate it in the manner shown in the OP.
MachineFab812@discuss.tchncs.de 7 months ago
It works because the AI finds and exploits the flaws in the prompt, as it has been trained to do.