Comment on Why do people hate AI so much?
cheese_greater@lemmy.world 1 day ago
There has to be a liability standard though, otherwise it completely does away with any possibility of even nominal accountability. If harm is caused because of a human, there is liability (whether directly or through whoever is responsible for that person's actions). The same should be true for whoever employs an LLM for some purpose that results in harm. The LLM can't be jailed or "shut down," really; it's incumbent upon the handler to assume liability for the activities they are involved with.
schnurrito@discuss.tchncs.de 1 day ago
I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.
But I remember reading some news stories about cases where people (often minors) chatted with chatbots and managed to get those chatbots into states where they encouraged the users to harm themselves (in some cases even to commit suicide?). As tragic as that is, I don't see how it's morally right to hold the AI companies responsible for it unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I'm talking about.
cheese_greater@lemmy.world 1 day ago
It's a difficult issue, no doubt about it.