Comment on ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself
MNByChoice@midwest.social 3 weeks ago
Thanks to Ars for including the lullaby. It is incredibly bleak.
Just to draw it out a little more.
A company intentionally made a product that is more than capable of killing its users. The company monitors the communications, and decides not to intervene. (Beats me how closely communications are monitored, but the company can and does close accounts, as described in other articles.)
This communication went on for months or years. The company had more than enough time to act.
Sam Altman is a bad person for choosing not to.
icelimit@lemmy.ml 2 weeks ago
This opens up a slippery slope of requiring OpenAI to analyze user–LLM inputs and outputs, along with questions of privacy.
If anything, LLMs simply weren’t ready for the open market.
MNByChoice@midwest.social 2 weeks ago
Opens? OpenAI spent years doing exactly that. Though, apparently they stopped almost three years ago.
maginative.com/…/openai-clarifies-its-data-privac…
icelimit@lemmy.ml 2 weeks ago
If I’m reading this right, they claim they are not reading user inputs or the outputs sent to the user, in which case they can’t be held liable for the results.
MNByChoice@midwest.social 2 weeks ago
Social media companies, like Facebook, use human moderators (largely in poorer countries, which has its own issues).
It should be simple for the AI to flag problematic conversations for human review. In the linked case, OpenAI would have had months to notice and act.