Drop the wild speculation; there is zero reason to play devil's advocate in this situation. If you cared to do any reading, there are myriad examples of this company's LLMs pushing harmful behavior.
Comment on Character.AI bans users under 18 after being sued over child’s suicide
thingsiplay@beehaw.org 1 day ago
Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”.
I wonder if the death is really connected to the AI. The person could already have been in a problematic situation with family and friends, and they just needed someone or something to blame rather than admit the real problems. It's kind of what often happened back in the day when video games got blamed for killing people.
Now, I don't know if that is what is happening here, but it is good to keep an open mind, because we could end up in a society where everyone downplays real problems in the physical world and blames AI to sidestep the question. Nonetheless, I think the outcome of companies restricting access to adults only is a good thing. But the entire industry needs to do this, not just select services AFTER deaths. Because if this is the root cause of a death, then why should others be allowed to operate? This needs regulation! I am not for regulating every detail, but IF this is causing kids to die, then it NEEDS regulation worldwide.
GammaGames@beehaw.org 1 day ago
But you are saying that the companies selling these products should get off for free because they "would've done it anyway".
thingsiplay@beehaw.org 1 day ago
I am not saying that. Did you not read the last part of my reply?
GammaGames@beehaw.org 1 day ago
I did, it was full of speculation based on something you admitted you had no idea about.
thingsiplay@beehaw.org 1 day ago
It is not just speculation, it is a warning not to simply believe alleged accusations. We have seen this many times with politicians too, pointing to AI to hide their real problems. So I ask you: do you have proof that all of the accusations are true, that the kid died because of the AI and had no suicidal problems before?
But yes, it's easy to say "you have no clue" instead of coming up with facts. It's easy to point the finger and believe what you want to believe. Plus, I said that if it is true at all, then I am for regulation. You instead ignore all of my points and say "you have no clue". I wonder if you have any clue what you are talking about.
Gaywallet@beehaw.org 1 day ago
This is not a fair analogy for what is going on here. Blaming video games harkens back to times when music and other countercultural media were blamed for behavior. We have a lot of literature showing that passive consumption of media does not really affect someone in the ways it was being blamed for.
AI, on the other hand, interacts back with you and can amplify psychosis. These are early days, and most of what we have is theoretical in nature, based on case studies or clinical hypotheses [1, 2, 3]. However, there is a clear difference in the medium itself: the chatbot can interact with the user in a dynamic way, and is programmed in a manner that reinforces certain thoughts and feelings. The chatbot is also human-seeming enough that a person may anthropomorphize it and treat it like an individual for the purposes of therapy or an attempt at emotional closeness.
This is a valid point to bring up; however, I think it is shortsighted when we consider a broader context such as public health. We could say the same about addictive behaviors and personalities, for example, and absolve casinos of any blame for designing a system that takes advantage of these individuals and sends them down a spiraling path of gambling addiction. Or we can recognize that this is a contributing and amplifying factor, by paying close attention to what is happening to individuals in a broad sense, as well as smartly applying theory and hypothesis.
I think it's completely fair to say that this kid likely had many contributing factors to his depression and his final decision. But there is a clear hypothetical framework, with some circumstantial evidence and strong theoretical support, suggesting that AI is exacerbating the problem and should be considered a contributing factor. This suggests that regulation may be helpful, or at the very least increased public awareness that this particular technology has the potential to harm certain individuals.
thingsiplay@beehaw.org 22 hours ago
It does not matter what the media is or whether it's interactive. My point was about people not talking about the real issues and instead pointing the finger at the bogeyman, video games back then and AI tools now. Besides, video games are also interactive, but that was never the point.