I’m not sure that checks out. I mean, fair, I do think that someone being habitually cruel toward AI might not be the greatest indicator of their disposition in general, though I’d hesitate to make a hasty judgement on that. But if we take AI’s presentation as a person as fictional, does that extend to other fictional contexts? Would you consider an evil play-through in a video game to indicate an issue? Playing a hostile character in a roleplay setting? Writing horror fiction?
It seems to me that there are many contexts where exhibiting or creating simulated behavior in a fictional environment isn’t really equivalent to doing so with genuine individuals in non-imaginary circumstances. AI isn’t quite the same as a fictional setting, but it’s potentially closer to that than it is to dealing with a real person.
By the same token, if not being polite to an AI is problematic, is it equally problematic to repeatedly say things like “human” and “operator” to an automated phone system until you get a response? Both mimic human speech, while neither ostensibly has a legitimate understanding of what’s being said by either party.
Where does the line get drawn? Is it wrong to curse at fully inanimate objects that don’t even pretend to be people? Is verbally condemning a malfunctioning phone, refrigerator, or toaster equivalent to berating a hallucinating AI?
sabreW4K3@lazysoci.al 1 week ago
I think you’re overestimating people. Let’s look at this post for example. The lady in the video is essentially saying that the way to get the best out of LLMs is to treat them like you’re hiring them to perform a role. Now look at the comments in here: a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour. Just because people aren’t at the stage of abusing androids doesn’t mean their behaviour isn’t shitty. If people disagree with the tip on how to create prompts, post and say what’s better. If people dislike LLMs, don’t enter posts about them. The fact that people can’t even do these things and act in good faith suggests the world will be filled with literal psychopaths when humanoid androids are everywhere.
Vodulas@beehaw.org 1 week ago
Is there a word for a sentence that does the same thing it describes?
There is no reason to denigrate people because they don’t agree with you or don’t like the nature of LLMs/generative algorithms. It really feels like you came looking for a fight (“I know the title will trigger people”) and are trying to dismiss folks who point out that this does not fix the fundamental problems with LLMs.
sabreW4K3@lazysoci.al 1 week ago
I don’t editorialise titles. You can check my post history. Acknowledging that people are triggered by posts about AI doesn’t mean I was looking for a fight. It simply shows that I’m self-aware. And all of that ignores that this is beehaw, where people post to avoid the low-quality, shitty virtue signalling that you can expect from LW.
Vodulas@beehaw.org 1 week ago
Use of the word “trigger” is the key part here. It is most often used by right wingers just trying to piss people off (“triggering the libs”). Starting off with that and then commenting on how people are “sanctimonious” for expressing valid opinions is what tells me you were less than open to an actual conversation. It has nothing to do with the post itself (though I tend to agree she really doesn’t say anything of value) and more to do with how you are going about interacting with people like a right-wing troll. Hell, even in this comment you end with a common right-wing dismissive phrase (“virtue signalling”) that, again, tells other people you don’t want a conversation.
MountingSuspicion@reddthat.com 1 week ago
I think the issue is the post title. If the title were “role-based prompt engineering” you probably wouldn’t have gotten as many comments, and certainly not as many disagreeing. She says she’s going to make a case for using please, and then fails to provide any actual examples of that. Pointing that out isn’t sanctimonious, nor does it mean people are being rude to AI. If you want to make a moral argument for it, go ahead, but it seems like she’s attempting to propose a technical argument and then just doesn’t. For what it’s worth, I generally try to leave superfluous words out of prompts, in the same way that googling full sentences was previously less likely to produce a good answer than just key words. AI is not human. It’s a tool. If being rude to it ensured it would stop hallucinating, I don’t think it’d make you a bad person to be rude to it.
There’s a comment here talking about antisocial behavior in gaming, and imho, if you kick a dog in a video game without hesitation, I’m not sure I’d view you the same way afterward. Plenty of people talk about how they struggle to do evil playthroughs because they don’t like picking rude options for NPCs. Not saying please to AI doesn’t make you a psychopath.
sabreW4K3@lazysoci.al 1 week ago
I don’t editorialise titles. However they’re presented is how I post them. As for people being shit: there’s a reason I post to beehaw and stay away from LW where possible. Every once in a while standards slip here, and it results in a new sticky.
As for the argument she’s making: yes, she could’ve made it in a longer-form format and provided data. But it’s a short; the emphasis, this being beehaw, was on discussion in good faith.
I think AI, like front-of-house/customer-facing staff, deserves to be treated the way we would like to be treated.
MountingSuspicion@reddthat.com 1 week ago
Regarding the post title, I didn’t mean to imply it was your decision, just that the title in general feels misleading, seeing as no argument is presented and it purports to explain “why” you should do something.
I don’t really know a lot about the difference between instances, so I can’t really opine on that. From what I’m seeing, it does look as though people are having a discussion, they just don’t seem to agree with the idea as presented.
I’m not saying it has to be in a longer format. I’m saying no argument is made. She could have given the AI the same prompt but with a please and compared the results; I have done so below. She could have used the irrelevant middle section of the video to just display screenshots.
It’s fine if you think people should be polite to AI. I think that’s great, but if you feel that way regardless of the effect on response quality, then this video doesn’t really make that point either. A post saying that and offering your personal opinions might’ve prompted more discussion on that topic, rather than just disagreement with a video that doesn’t make that argument.
Here are two prompt comparisons. There is, imho, no difference in quality when saying please. The same results hold regardless of prompt length or how polite I was; I just chose short examples that could have been used in the video if they actually illustrated her point.
Prompt: “List five animals”
Response: “Here are five animals:”

Prompt: “List five animals please”
Response: “Here are five animals:”

Prompt: “Create a limerick about being polite”
Response: “There once was a person so fine, Whose manners were truly divine. They’d say “please” with a grin, And “thank you” within, And their politeness was truly sublime.”

Prompt: “Create a limerick about being polite please”
Response: “There once was a person so bright, Whose politeness was a delight. They’d hold doors with care, And listen with flair, And their kindness was always in sight.”
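Incidentally, this kind of comparison is easy to script. Below is a minimal sketch in Python; `query_llm` is a hypothetical stand-in for a real model call (the stub just strips a trailing “please”, so its responses are identical by construction — the point is the shape of the A/B harness, not the stub; swap in an actual API client to run the real test):

```python
# Minimal A/B harness for checking whether adding "please" changes a model's
# output. query_llm is a deterministic OFFLINE STUB, not a real model call.

def query_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an OpenAI or Ollama client)."""
    core = prompt.removesuffix(" please")  # stub ignores trailing politeness
    return f"Response to: {core}"

def compare_politeness(base_prompt: str) -> dict:
    """Run a prompt with and without a trailing 'please' and report whether
    the two responses are identical."""
    plain = query_llm(base_prompt)
    polite = query_llm(base_prompt + " please")
    return {"plain": plain, "polite": polite, "identical": plain == polite}

if __name__ == "__main__":
    for prompt in ("List five animals", "Create a limerick about being polite"):
        result = compare_politeness(prompt)
        print(f"{prompt!r} -> identical responses: {result['identical']}")
```

With a real model you’d also want to repeat each prompt several times at a fixed temperature, since sampling noise alone can make any two responses differ.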
jarfil@beehaw.org 1 week ago
Some estimates put the percentage of psychopaths in the single digits, and that of sociopaths in the double digits. People are already like that; they’re just expressing it freely toward targets they think other people don’t care about. Let’s not forget the fate of Tay: en.m.wikipedia.org/wiki/Tay_(chatbot)
What these people don’t realize is that modern LLMs are trained on human interactions, get tuned and/or limited to “positive” interactions, and that interacting with them like kicking a rock will get them zero results. And honestly… I’m fine with that. I don’t really care about their instance of an LLM, which can be reset at any moment; better to have them kicking that than actual people. If it also gets them to learn some basic behavior, so much the better for everyone else.