Comment on It’s practically impossible to run a big AI company ethically: Anthropic was supposed to be the good guy. It can’t be — unless government changes the incentives in the industry.

MagicShel@programming.dev 4 months ago

> There are biometric-restricted guns that attempt to ensure only authorized users can fire them.

This doesn’t prevent an authorized user from committing murder. It would, however, prevent someone else from looting it off your corpse and returning fire at your attacker.

This is not a great analogy for AI, but either way the restriction itself is effectively amoral: it blocks use without regard to intent.

> The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.

This is closer. Still not a great analogy for AI, but we can agree that outside of military and police action, mass murder is more likely than any legitimate alternative use for a high-capacity magazine. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5-round mag.

I feel like you’re focused too narrowly on the gun itself and not the gun as an analogy for AI.

> you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible

This isn’t bad. We can already use one AI to examine the output of another and infer things about the nature of both the request and the response, and in my experience it’s quite effective. The trick is knowing what questions to ask in the first place. OpenAI, for example, has a moderation tool for identifying violence, hate, sexual content, child sexual content, and I think a couple of other categories. This is promising; however, it’s an external tool, and I don’t have to run that filter if I don’t want to. The API is currently free to use, and a project I’m working on does use it, because it lets us permit the use case we want (describing and adjudicating violent actions in a chat-based RPG) while still filtering out the more intimate roleplaying actions.
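As a rough illustration, the kind of check I’m describing looks something like the sketch below. It’s a minimal example against OpenAI’s public moderation endpoint; the allowed-category set and the function name are illustrative assumptions, not our project’s actual code:

```python
import os
import requests

# Categories we deliberately tolerate in a combat-oriented RPG, even when flagged.
ALLOWED_CATEGORIES = {"violence", "violence/graphic"}

def message_is_acceptable(text: str) -> bool:
    """Ask OpenAI's moderation endpoint whether a chat message should pass.

    Violent descriptions are allowed (it's a combat RPG); sexual content,
    hate, and anything else flagged causes the message to be rejected.
    """
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]

    # Every category the endpoint flagged for this message.
    flagged = {name for name, hit in result["categories"].items() if hit}

    # Accept only if nothing outside the allowed set was flagged.
    return flagged <= ALLOWED_CATEGORIES
```

In practice you might tune your own thresholds against the category_scores the endpoint also returns rather than rely on its boolean flags, but the shape of the check is the same.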

> An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.

The object needs that kind of awareness to differentiate between moral use it should allow and immoral use it should deny. Otherwise you need an external tool for that, or perhaps a law, but none of those interferes with the use of the tool itself.
