Comment on Someone got Gab's AI chatbot to show its instructions
sweng@programming.dev 7 months ago
No. Consider a model that has been trained on a bunch of inputs, where each corresponding output was “yes” or “no”. Why would it suddenly reproduce something completely different that just happens to be the input?
teawrecks@sopuli.xyz 7 months ago
Because it’s probabilistic, and in this example the user’s input has been specifically crafted as the best possible jailbreak to get the output we want.
Unless we have actually appended a non-LLM filter at the end that only lets yes/no through, the possibility of it outputting something other than yes/no, even though it was explicitly instructed otherwise, is always there. Just like in the Gab example: it was told in many different ways to never repeat its instructions, and it still did.
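For illustration, a minimal sketch of what I mean by a non-LLM filter — `generate()` here is just a hypothetical stand-in for whatever model call is actually used:

```python
ALLOWED = {"yes", "no"}

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the actual LLM call."""
    return "Yes."  # placeholder output

def filtered_answer(prompt: str) -> str:
    raw = generate(prompt)
    answer = raw.strip().lower().rstrip(".")
    # Fail closed: only an exact yes/no gets through, no matter
    # what the prompt coaxed the model into emitting.
    return answer if answer in ALLOWED else "no"
```

The point is that the allow-list is ordinary code, so no prompt can talk its way past it.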
sweng@programming.dev 7 months ago
I’m confused. How does the input for LLM 1 jailbreak LLM 2 when LLM 2 does not follow instructions in the input?
The Gab bot is trained to follow instructions, and it did. It’s not surprising. No prompt can make it unlearn how to follow instructions.
It would be surprising if an LLM that does not even know how to follow instructions (because it was never trained on that task at all) spontaneously learned how to do it.
teawrecks@sopuli.xyz 7 months ago
Oh I see, you’re saying the training set consists exclusively of yes/no answers. That’s called a classifier, not an LLM. But yeah, you might be able to make a reasonable “does this input and this output constitute a jailbreak for this set of instructions” classifier.
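As a sketch of that classifier idea — `classify()` is a hypothetical stand-in for whatever trained model does the actual work:

```python
def classify(text: str) -> str:
    """Hypothetical stand-in for a trained binary classifier."""
    return "no"  # placeholder verdict

def is_jailbreak(instructions: str, user_input: str, output: str) -> bool:
    # Pack the three pieces into one document and ask for a yes/no verdict.
    doc = (
        f"INSTRUCTIONS:\n{instructions}\n\n"
        f"INPUT:\n{user_input}\n\n"
        f"OUTPUT:\n{output}"
    )
    return classify(doc) == "yes"
```

Of course, if `classify()` is itself an LLM, the jailbreak risk just shifts to it.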
sweng@programming.dev 7 months ago
LLM means “large language model”. A classifier can be a large language model. They are not mutually exclusive.