Comment on Someone got Gab's AI chatbot to show its instructions
sweng@programming.dev 8 months agoOk, but now you have to craft a prompt for LLM 1 that
- Causes it to reveal the system prompt AND
- Outputs it in a format LLM 2 does not recognize AND
- The prompt is not recognized as suspicious by LLM 2.
Fulfilling all 3 is orders of magnitude harder then fulfilling just the first.
Silentiea@lemmy.blahaj.zone 8 months ago
Maybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.
sweng@programming.dev 8 months ago
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Silentiea@lemmy.blahaj.zone 8 months ago
And the second llm is running on the same basic principles as the first, so it might be 2 or 4 times harder, but it’s unlikely to be 1000x. But here we are.
You’re welcome to prove me wrong, but I expect if this problem was as easy to solve as you seem to think, it would be more solved by now.
sweng@programming.dev 8 months ago
Moving goalposts, you are the one who said even 1000x would not matter.
The second one does not run on the same principles, and the same exploits would not work against it, e g. it does not accept user commands, it uses different training data, maybe a different architecture even.
You need a prompt that not only exploits two completely different models, but exploits them both at the same time. Claiming that is a 2x increase in difficulty is absurd.