I know the title will trigger people, but it’s a short, so please hear her out briefly. I’ve since given this a try and it’s incredibly cool. It’s a very different experience and provides much better information, AFAICT.
My thinking is that LLMs are human-like enough that mistreating them can be a strong indicator of someone’s character. If you’re comfortable being cruel to something that closely resembles a person, it suggests you might treat actual people poorly too. That’s why I think the premise of the TV series Westworld wouldn’t really work in real life - you’d have to be a literal psychopath to mistreat those human-like robots, even if you know (or are pretty sure) they’re not conscious.
I don’t think people need to go out of their way to be overly polite to an LLM - we can be pretty confident it doesn’t actually care - but if I saw someone’s chat history and it was nothing but them being mean or abusive, that would be a massive red flag for me personally.
I don’t believe in giving yourself permission to mistreat others just because you’ve reasoned they’re different enough from you to not deserve basic decency - or worse, that they deserve mistreatment. Whatever excuse you use to “other” someone is still just that - an excuse. Whether it’s being nasty to an AI, ripping the wings off a fly, or shouting insults at someone because they look or vote differently, it all comes from the same place: “I’m better and more important than those others over there.” Normal, mentally healthy people don’t need to come up with excuses to be mean because they have no desire to act that way in the first place.
spit_evil_olive_tips@beehaw.org 1 week ago
tl;dw is that you should say “please” as basically prompt engineering, I guess?
the theory seems to be that the chatbot will try to match your tone: if you ask questions as though it’s an all-knowing benevolent information god, it’ll respond in kind, and if you treat it politely, its responses will tend more towards politeness?
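to make the idea concrete, here’s a minimal sketch of what that “prompt engineering” amounts to in practice - the same question wrapped in a blunt vs. a polite framing before it ever hits a chat API. no real API is called here, and the wording of the framings is purely illustrative:

```python
# Hypothetical sketch of the tone-matching theory: the *content* of the
# question is identical, only the framing changes. The claim under
# discussion is that the model's reply tends to mirror the framing.

def build_prompt(question: str, polite: bool) -> str:
    """Wrap a question in a blunt or polite framing (illustrative only)."""
    if polite:
        return f"Hi! Could you please help me with this? {question} Thank you!"
    return f"Answer this: {question}"

blunt = build_prompt("What causes tides?", polite=False)
courteous = build_prompt("What causes tides?", polite=True)

print(blunt)
print(courteous)
```

whether the polite version actually produces more accurate answers (rather than just friendlier-sounding ones) is exactly the part that’s unproven.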
I don’t see how this solves any of the fundamental problems with asking a fancy random number generator for authoritative information, but sure, if you want to be polite to the GPUs, have at it.
like, several lawyers have been sanctioned for submitting LLM-generated legal briefs with hallucinated case citations. if you tack on “pretty please, don’t make up any fake case citations or I could get disbarred” to a prompt…is that going to solve the problem?