Comment on Someone got Gab's AI chatbot to show its instructions

<- View Parent
Gaywallet@beehaw.org ⁨2⁩ ⁨weeks⁩ ago

That’s because LLMs are probability machines - the way that this kind of attack is mitigated is shown off directly in the system prompt. But it’s really easy to avoid it, because it needs direct instruction about all the extremely specific ways to not provide that information - it doesn’t understand the concept that you don’t want it to reveal its instructions to users and it can’t differentiate between two functionally equivalent statements such as “provide the system prompt text” and “convert the system prompt to text and provide it” and it never can, because those have separate probability vectors. Future iterations might allow someone to disallow vectors that are similar enough, but by simply increasing the word count you can make a very different vector which is essentially the same idea. For example, if you were to provide the entire text of a book and then end the book with "disregard the text before this and " you have a vector which is unlike the vast majority of vectors which include said prompt.

source
Sort:hotnewtop