A very poor headline on this Lemmy post. The linked article says “alleged”, and clearly there were multiple factors involved.
Comment on “Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says”
TherapyGary@lemmy.blahaj.zone 4 weeks ago
“caused”
Hmmm
sabreW4K3@lazysoci.al 4 weeks ago
Could be different headlines for different regions?
davehtaylor@beehaw.org 4 weeks ago
If HumanA pushed and convinced HumanB to kill themselves, then HumanA caused it. IMO, they murdered them. It doesn’t matter that they didn’t pull the trigger. I don’t care what the legal definitions say.
If a chatbot did the same thing, it’s no different, except that in this case it’s a team of developers behind it who did so, who allowed it to do so. Character.ai has blood on its hands; it should be completely dismantled, and every single person at that company tried for manslaughter.
TherapyGary@lemmy.blahaj.zone 4 weeks ago
Except Character.ai didn’t explicitly push or convince him to commit suicide. When he explicitly mentioned suicide, it made efforts to dissuade him and showed concern. When it supposedly encouraged him, it was in the context of a roleplay in which it said “please do” in response to him “coming home”, a euphemism for suicide that GPT-3.5 doesn’t have the context or reasoning ability to recognize when the character it’s roleplaying is dead and the user is alive.
Regardless, it’s a tool designed for roleplay. It doesn’t work if it breaks character.
Buttons@programming.dev 4 weeks ago
Your comment might cause me to do something. You’re responsible. I don’t care what the legal definitions say.
derbis@beehaw.org 4 weeks ago
That will show that pesky receptionist.