Comment on Someone got Gab's AI chatbot to show its instructions

<- View Parent
teawrecks@sopuli.xyz ⁨8⁩ ⁨months⁩ ago

Yeah, as soon as you feed the user input into the 2nd one, you’ve created the potential to jailbreak it as well. You could possibly even convince the 2nd one to jailbreak the first one for you, or If it has also seen the instructions to the first one, you just need to jailbreak the first.

This is all so hypothetical, and probabilistic, and hyper-applicable to today’s LLMs that I’d just want to try it. But I do think it’s possible, given the paper mentioned up at the top of this thread.

source
Sort:hotnewtop