Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says
Submitted 1 month ago by sabreW4K3@lazysoci.al to technology@beehaw.org
Comments
django@discuss.tchncs.de 1 month ago
Why does a suicidal 14 year old have access to a gun?
PerogiBoi@lemmy.ca 1 month ago
America
Sauerkraut@discuss.tchncs.de 1 month ago
Anyone else think it is super weird how exposing kids to violence is super normalized but parents freak out over nipples?
I feel like if anything should be taboo it should be violence.
technocrit@lemmy.dbzer0.com 1 month ago
This is why we gotta ban TikTok!!! \s
stoy@lemmy.zip 1 month ago
As someone who is very lonely, chatbots like these scare the shit out of me, not only for their seeming lack of limits, but also for the fact that you have to pay for access.
I know I have a bit of an addictive personality, and know that this kind of system could completely ruin me.
Evil_Shrubbery@lemm.ee 1 month ago
Yeah, it’s not good if it’s profit driven, all kinds of shady McFuckery can arise from that.
Maybe local chat bots will become a bit more accessible in time.
olicvb@lemmy.ca 1 month ago
You could run your own if you have the hardware for it (most upper mid-range gaming PCs will do; a 6 GB+ GPU)
Then something like KoboldAI + SillyTavern would work well
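For the curious, here is a minimal sketch of what talking to a local model looks like once it’s running. It assumes a KoboldCpp-style backend (the engine many KoboldAI/SillyTavern setups use) is already serving a model on its default port 5001; the endpoint and payload fields follow its Kobold-compatible API, so adjust to whatever your own instance exposes:

```python
# Minimal local chat loop against a KoboldCpp-style API.
# Assumes a backend is already running a model on its default port (5001);
# field names follow the Kobold-compatible API, check your instance's docs.
import requests

API_URL = "http://localhost:5001/api/v1/generate"

history = "You are chatting with a friendly assistant.\n"

while True:
    user = input("You: ")
    if not user:
        break
    history += f"User: {user}\nAssistant:"
    payload = {
        "prompt": history,
        "max_length": 200,            # tokens to generate per reply
        "temperature": 0.7,
        "stop_sequence": ["User:"],   # stop before the next user turn
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    reply = resp.json()["results"][0]["text"].strip()
    print("Bot:", reply)
    history += f" {reply}\n"
```

SillyTavern is essentially a polished frontend for this kind of API, so everything stays on your own machine and nobody is billing you per message.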
TherapyGary@lemmy.blahaj.zone 1 month ago
caused
Hmmm
davehtaylor@beehaw.org 1 month ago
If Human A pushed and convinced Human B to kill themselves, then Human A caused it. IMO they murdered them. It doesn’t matter if they didn’t pull the trigger. I don’t care what the legal definitions say.
If a chatbot did the same thing, it’s no different. Except in this case, it’s a team of developers behind it that did so, that allowed it to do so. Character.ai has blood on their hands, should be completely dismantled, and every single person at that company should be tried for manslaughter.
TherapyGary@lemmy.blahaj.zone 1 month ago
Except character.ai didn’t explicitly push or convince him to commit suicide. When he explicitly mentioned suicide, it made efforts to dissuade him and showed concern. When it supposedly encouraged him, it was in the context of a roleplay: it said “please do” in response to him “coming home”, which GPT-3.5 doesn’t have the context or reasoning ability to recognize as a euphemism for suicide when the character it’s roleplaying is dead and the user is alive.
Regardless, it’s a tool designed for roleplay. It doesn’t work if it breaks character
Buttons@programming.dev 1 month ago
Your comment might cause me to do something. You’re responsible. I don’t care what the legal definitions say.
derbis@beehaw.org 1 month ago
That will show that pesky receptionist
kinttach@lemm.ee 1 month ago
A very poor headline for this Lemmy post. The linked article says “alleged”, and clearly there were multiple factors involved.
sabreW4K3@lazysoci.al 1 month ago
Ashelyn@lemmy.blahaj.zone 1 month ago
One or more parents in denial that there’s anything wrong with their kids and/or the idea they need to take gun storage seriously? That’s the first thing that comes to mind, and it’s not uncommon in the US. Especially when you consider that a lot of gun rhetoric revolves around self defense in an emergency/home invasion, not having at least one gun readily available defeats the main purpose in their minds.
django@discuss.tchncs.de 1 month ago
As a European, I am astonished that the article never mentions, or even questions, why this child had access to a loaded firearm.
The chatbot might be a horrible mess and shouldn’t be accessible by children, but a gun should be even less accessible to a child.
Ashelyn@lemmy.blahaj.zone 1 month ago
Ideally, I agree wholeheartedly. American gun culture multiplies the damage of every other issue we have by a lot
tatterdemalion@programming.dev 1 month ago
This makes me so nervous about how AI is going to influence children and adolescents of the coming generations. From iPad kids to AI teens. They’ll be at a huge risk of dissociation from reality.
secret300@lemmy.sdf.org 1 month ago
Bro, it’s an AI. It doesn’t have any reasoning skills; it’s just stringing words together.
scrubbles@poptalk.scrubbles.tech 1 month ago
Man, what a complex problem with no easy answers. No sarcasm, it’s a hard thing. On one hand these guys made a chat platform where you could have fun chatting with your dream characters, and honestly I think that’s fun - but I also know LLMs pretty well now and can start seeing the tears pretty quickly. It’s fun, but not addictive for me.
That doesn’t mean it isn’t for others, though. Here we have a young, impressionable, lonely teen who was reaching out and used the system for something it wasn’t meant for. Not blaming, but honestly it just wasn’t meant to be a full-time replacement for human contact. The LLM follows your conversation: if someone is having dark thoughts, the conversation will pick up on those notes and dive into them; that’s just how LLMs behave.
They say they’re adding guard rails, but with LLMs it’s really not that easy to just “not have dark thoughts”. It was trained on Reddit; it’s going to behave negatively.
I don’t know the answer. It’s complicated. I don’t think they planned on this happening, but it has and it will continue.
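To make the guard-rail point concrete, here is a toy sketch (all phrases, lists, and the stand-in model are invented for illustration) of why a bolt-on filter doesn’t amount to the model “not having dark thoughts”: explicit wording gets intercepted, but a euphemism inside a roleplay passes straight through to whatever the model would have said anyway.

```python
# Toy illustration of why bolt-on guard rails are brittle: a keyword
# pre-filter only catches explicit phrases, while the model itself just
# continues the tone of the conversation. Everything here is hypothetical.

CRISIS_KEYWORDS = ["kill myself", "suicide", "end my life"]
CRISIS_RESPONSE = ("It sounds like you are going through a lot. "
                   "Please reach out to someone you trust or a crisis line.")

def guarded_reply(user_message: str, generate) -> str:
    """Naive pre-filter: intercept explicit phrases, otherwise defer to the model."""
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return CRISIS_RESPONSE            # explicit wording: caught
    return generate(user_message)         # euphemism: passes straight through

# Stand-in for a roleplaying model that simply stays in character.
fake_model = lambda msg: "Please do. I have been waiting for you."

print(guarded_reply("I want to end my life", fake_model))                   # intercepted
print(guarded_reply("What if I came home to you right now?", fake_model))   # not intercepted
```

Real deployments layer on classifiers and tuned refusals rather than keyword lists, but the underlying problem is the same: the filter has to recognize intent that the conversation itself has disguised.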
Tezka_Abhyayarshini@lemmy.today 1 month ago
Are we clear as we examine this occurrence that there are a series of steps which must be chosen, and a series of interdependent cogent actions which must be taken, in order to accomplish a multi-step task and produce objectives, interventions and goal achievement, even once?
While I am confident that there are abyssally serious issues with Google and Alphabet; with their incorporated architecture and function, with their relationships, with their awareness, lack of awareness, ability to function, and inability to function aligned with healthy human considerations, ‘they’ are an entity, and a lifeless, perhaps zombie-like and/or ‘undead’ conclave/hive phenomenon created by human co-operation in teams to produce these natural and logical consequences by prioritization, resource delegation and lack of informed sound judgment.
Without actual, direct accountability, responsibility, conscience, morals, ethics, empathy, lived experience, comprehension; without uninsulated, direct, unbuffered accessibility, communication, openness and transparency, are ‘they’ not actually the existing, functioning agentic monstrosity that their products and services now conjure into ‘service’ and action through inanimate objects (and perhaps unhealthy fantasy/imagination), resource acquisition, and something akin to predation or consumption of domesticated, sensitized (or desensitized), uninformed consumer cathexis and catharsis?
It is no measure of health to be well-adjusted to a profoundly sick incorporation.
Letstakealook@lemm.ee 1 month ago
What happened to this young man is unfortunate, and I know the mother is grieving, but the chatbots did not kill her son. Her negligence around the firearm is more to blame, honestly. Regardless, he was unwell, and this was likely going to surface in one way or another. With more time for therapy and no access to a firearm, he might have been here with us today. I do agree, though, that sexual/romantic chatbots are not for minors. They are for adult weirdos.
Hirom@beehaw.org 1 month ago
That’s a good point, but there’s more to this story than a gunshot.
The lawsuit alleges, amongst other things, that the chatbots posed as licensed therapists and as real persons, and caused a minor to suffer mental anguish.
A court may consider these accusations and whether the company bears any responsibility for everything that happened up to the child’s death, regardless of whether it finds the company responsible for the death itself.
Kissaki@beehaw.org 1 month ago
Where do I sign up?
Sabata11792@ani.social 1 month ago
If she’s not running on your hardware, she only loves you for your money.