Comment on Anthropic says some Claude models can now end ‘harmful or abusive’ conversations
themeatbridge@lemmy.world 5 days ago

That's probably part of it, and all of this is pretty silly.
But maybe an upside is that if people stop being shitty to chatbots, we can normalize live customer service agents ending interactions when they become abusive. Claude could even monitor live agent conversations, making and documenting the decision to terminate the call. Humans have a higher threshold for abuse and will often tolerate shitty behavior because they err on the side of customer service; an automated process would protect the agent.
Of course, all of this is wishful thinking on my part. It would be nice if new tech weren't used for evil, but evil is profitable.