Comment on Someone got Gab's AI chatbot to show its instructions
Silentiea@lemmy.blahaj.zone 7 months agoIt would see it. I’m merely suggesting that it may not successfully notice it. LLMs process prompts by translating the words into vectors, and then the relationships between the words into vectors, and then the entire prompt into a single vector, and then uses that resulting vector to produce a result. The second LLM you’ve described will be trained such that the vectors for prompts that do contain the system prompt will point towards “true”, and the vectors for prompts that don’t still point towards “false”. But enough junk data in the form of unrelated words with unrelated relationships could cause the prompt vector to point too far from true towards false, basically. Just making a prompt that doesn’t have the vibes of one that contains the system prompt, as far as the second LLM is concerned
sweng@programming.dev 7 months ago
Ok, but now you have to craft a prompt for LLM 1 that
Fulfilling all 3 is orders of magnitude harder then fulfilling just the first.
Silentiea@lemmy.blahaj.zone 7 months ago
Maybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.
sweng@programming.dev 7 months ago
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Silentiea@lemmy.blahaj.zone 7 months ago
And the second llm is running on the same basic principles as the first, so it might be 2 or 4 times harder, but it’s unlikely to be 1000x. But here we are.
You’re welcome to prove me wrong, but I expect if this problem was as easy to solve as you seem to think, it would be more solved by now.