Comment on Someone got Gab's AI chatbot to show its instructions
Silentiea@lemmy.blahaj.zone 8 months agoAnd then we’re back to “you can jailbreak the second llm too”
Comment on Someone got Gab's AI chatbot to show its instructions
Silentiea@lemmy.blahaj.zone 8 months agoAnd then we’re back to “you can jailbreak the second llm too”
sweng@programming.dev 8 months ago
How, if the 2nd LLM does not follow instrutiond on the input? There is no reason to train it to do so.
Silentiea@lemmy.blahaj.zone 8 months ago
Someone else can probably describe it better than me, but basically if an LLM “sees” something, then it “follows” it. The way they work doesn’t really have a way to distinguish between “text I need to do what it says” and “text I need to know what it says but not do”.
They just have “text I need to predict what comes next after”. So if you show LLM2 the input from LLM1, then you are allowing the user to design at least part of a prompt that will be given to LLM2.
sweng@programming.dev 8 months ago
That someone could be me. An LLM needs to be fine-tuned to follow instructions. It needs to be fed example inputs and corresponding outputs in order to learn what to do with a given input. You could feed it prompts containing instructuons, together with outputs following the instructions. But you could also feed it prompts containing no instructions, and outputs that say if the prompt contains the hidden system instructipns or not.
Silentiea@lemmy.blahaj.zone 8 months ago
In which case it will provide an answer, but if it can see the user’s prompt, that could be engineered to confuse the second llm into saying no even when the response does.