Comment on call of the void
Scubus@sh.itjust.works 13 hours ago… so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for but their tool being useful is actually the bottom of that list
Samskara@sh.itjust.works 10 hours ago
People in distress will talk to an LLM instead of calling a suicide hotline.
Scubus@sh.itjust.works 8 hours ago
Ok, people will turn to google when they’re depressed. A couple months ago I googled the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?
Samskara@sh.itjust.works 8 hours ago
You are ignoring that people are already developing personal emotional relationships with chatbots. That’s not the case with search bars.
The first line above the search results at google for queries like that is a suicide hotline phone number.
A chatbot should provide at least that as well.
I’m not saying it shouldn’t provide any information.
Scubus@sh.itjust.works 7 hours ago
Ok, then we are in agreement. That is a good idea.
I think that at low levels the tech should not be hindered because a subset of users use the tool improperly. There is a line, however, but I’m not sure where it is. If that problem were to become as widespread as, say, gun violence, then I would agree that the utility of the tool may need to be affected to curb the negative influence.