Convincing an AI that it's playing a role, then "scaring" it into ignoring its safety filters, is one way to get a chatbot to break bad.

https://www.vice.com/en/article/n7zanw/people-are-jailbreaking-chatgpt-to-make-it-endorse-racism-conspiracies