Comment on Someone got Gab's AI chatbot to show its instructions
Silentiea@lemmy.blahaj.zone 8 months agoMaybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.
sweng@programming.dev 8 months ago
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Silentiea@lemmy.blahaj.zone 8 months ago
And the second llm is running on the same basic principles as the first, so it might be 2 or 4 times harder, but it’s unlikely to be 1000x. But here we are.
You’re welcome to prove me wrong, but I expect if this problem was as easy to solve as you seem to think, it would be more solved by now.
sweng@programming.dev 8 months ago
Moving goalposts, you are the one who said even 1000x would not matter.
The second one does not run on the same principles, and the same exploits would not work against it, e g. it does not accept user commands, it uses different training data, maybe a different architecture even.
You need a prompt that not only exploits two completely different models, but exploits them both at the same time. Claiming that is a 2x increase in difficulty is absurd.
Silentiea@lemmy.blahaj.zone 8 months ago
1st, I didn’t just say 1000x harder is still easy, I said 10 or 1000x would still be easy compared to multiple different jailbreaks on this thread, a reference to your saying it would be “orders of magnitude harder”
2nd, the difficulty of seeing the system prompt being 1000x harder only makes it take 1000x longer of the difficulty is the only and biggest bottleneck
3rd, if they are both LLMs they are both running on the principles of an LLM, so the techniques that tend to work against them will be similar
4th, the second LLM doesn’t need to be broken to the extent that it reveals its system prompt, just to be confused enough to return a false negative.