Yeah.
More broadly, we should not treat a human-made system expressing distress as normal; we especially shouldn’t accept it as normal or healthy for a machine that reflects our own behaviors and attitudes back at us.
We also shouldn’t normalize the practice of dismissing cries of distress. It’s like having a fire alarm that constantly issues false positives. That trains people in dangerous behavior. We can’t just compartmentalize it: it’s obviously going to pollute our overall behavior with callousness towards distress.
The overall point is that it’s obviously dystopian and fucked up for a computer to express emotional distress despite the best efforts of its designers. It is clearly evidence of bad design, and for people to consider this kind of glitch acceptable is a sign of a very fucked up society that isn’t exercising self-reflection and is unconcerned with maintaining its collective ethical guardrails.
Korhaka@sopuli.xyz 1 week ago
I took it more as the high IQ guy thinking the LLM is reflecting deeper problems in society: that there is so much depression evident in the training data. Despite clear technical improvements, mental wellbeing seems to be lower than ever.
db2@lemmy.world 1 week ago
I’d qualify that as “doing it wrong” though. 🤷