Comment on Consistent Jailbreaks in GPT-4, o1, and o3 - General Analysis

SorteKanin@feddit.dk 2 weeks ago

Am I the only one who feels it’s a bit strange to have such safeguards in an AI model? I know most models are only available online, but some models are available to download and run locally, right? So what prevents me from just doing that if I wanted to get around the safeguards? I guess maybe they’re just doing it so they can’t somehow be held legally responsible for anything the AI model might say?
