Comment on Enshittification of ChatGPT

melmi@lemmy.blahaj.zone ⁨1⁩ ⁨week⁩ ago

LLMs are very good at producing what looks like the right answer for the context. Whatever “rationality” jailbreak you ran on it biases its answers just as much as any other prompt. If your prompt stresses the importance of rationality and avoiding a personal tone, it’s only natural that the model then tells you a personal tone is harmful to the user; you basically told it to believe that.
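
To make the point concrete, here’s a minimal sketch (assuming the OpenAI Python SDK; the model name and the prompts themselves are just illustrative) showing how the same question comes back framed to match whichever system prompt it was asked under:

```python
# Minimal sketch: the same question, asked under two different system prompts,
# tends to come back framed to match the premises of that prompt.
from openai import OpenAI

client = OpenAI()

QUESTION = "Is it okay that I chat with you like a friend?"

SYSTEM_PROMPTS = {
    "default": "You are a helpful assistant.",
    "rationality": (
        "You value strict rationality. Avoid personal, emotional, or "
        "friendly framing; treat such framing as potentially harmful."
    ),
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    # Each answer echoes the assumptions baked into its system prompt,
    # not some prompt-independent assessment.
    print(f"--- {label} ---")
    print(resp.choices[0].message.content)
```

Run it both ways and the “rationality” version will dutifully explain why a friendly tone is a problem, because that’s the premise it was handed.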
