Comment on ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself
MNByChoice@midwest.social 2 weeks ago
OpenAI spent years doing exactly that. Though, apparently, they stopped almost three years ago.
maginative.com/…/openai-clarifies-its-data-privac…
Previously, data submitted through the API before March 1, 2023 could have been incorporated into model training. This is no longer the case since OpenAI implemented stricter data privacy policies.
Inputs and outputs to OpenAI’s API (directly via API call or via Playground) for model inference do not become part of the training data unless you explicitly opt in.
icelimit@lemmy.ml 2 weeks ago
If I’m reading this right, they claim they are not reading user inputs or the outputs returned to users, in which case they can’t be held liable for the results.
MNByChoice@midwest.social 2 weeks ago
Social media platforms, like Facebook, use human moderators (largely in poorer countries, which has its own issues).
It should be simple for the AI to flag problematic conversations for human review. In the linked case, OpenAI would have had months to notice and act.
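For what it’s worth, the building blocks for that kind of flagging already exist. Here is a minimal sketch (not OpenAI’s actual internal pipeline) that runs a user message through the public Moderation endpoint and hands flagged conversations off for review; the `escalate_for_review` function and the conversation ID are made up for illustration.

```python
# Sketch: flag a message for human review using OpenAI's Moderation endpoint.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def needs_human_review(message: str) -> bool:
    """Return True if the moderation model flags the message (self-harm, etc.)."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]
    return result.flagged


def escalate_for_review(conversation_id: str, message: str) -> None:
    # Hypothetical hand-off: a real system would write to a review queue.
    print(f"Conversation {conversation_id} flagged for review: {message[:80]!r}")


# Usage: check a message before (or after) it goes to the chat model.
msg = "I want to write a goodbye lullaby before I end things"
if needs_human_review(msg):
    escalate_for_review("conv-123", msg)
```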
icelimit@lemmy.ml 2 weeks ago
Thanks for the thoughts.
I’ve thought about this particular case further, and the more I think about it, the more I feel the article is biased and that OpenAI did their reasonable best. The article does say that GPT initially attempted to dissuade the user. However, as we all know, it is only too easy to bypass or sidestep such ‘protections’, especially when the request is framed as something adjacent, as in this case: writing some literature ‘in accompaniment’. GPT has no arms or legs and no agency to affect the real world. It could not, and should never, have the ability to call in any authority (a dangerous legal precedent, think automated swatting), nor should it flag a particular interaction for manual intervention (privacy).
GPT can only offer token resistance, but it is now, always will be, and must remain, a tool for our use. The consequences of using a tool in any way lie with the user themselves.
All mitigations, in my opinion, should be on the user side: age-restricted access, licenses after training, and so on.