While an LLM itself has no concept of morality, it’s certainly possible to at least partially inject/enforce some morality when working with them, just like any other tool. Why wouldn’t people expect that?
Consider guns: while they have no concept of morality, we still apply certain restrictions to them to make using them in an immoral way harder. Does it work perfectly? No. Should we abandon all rules and regulations because of that? Also no.
MagicShel@programming.dev 2 months ago
Yes. Let’s consider guns. Is there any objective way to measure the moral range of actions one can undertake with a gun? No. I can murder someone in cold blood or I can defend myself. I can use it to defend my nation or I can use it to attack another - both of which might be moral or immoral depending on the circumstances.
You might remove the trigger, but then it can’t be used to feed yourself, while it could still be used to rob someone.
So what possible morality can you build into the gun to prevent immoral use? None. It’s a tool. It’s the nature of a gun. LLMs are the same. You can write laws about what people can and can’t do with them, but you can’t bake those laws into the tool and expect the tool to then be safe or useful for any particular purpose.
sweng@programming.dev 2 months ago
You can’t build morality into it, as I said. You can build functionality into it that makes immoral use harder.
For example: society considers hunting a moral use of weapons, while killing people usually doesn’t.
So banning ceramic, unmarked, silenced, fully automatic weapons firing armor-piercing bullets can certainly be an effective way of reducing the immoral use of a weapon.
MagicShel@programming.dev 2 months ago
None of those changes impact the morality of a weapon’s use in any way. I’m happy to dwell on this gun analogy all you like because it’s fairly apt, however there is one key difference central to my point: there is no way to do the equivalent of banning armor-piercing rounds with an LLM, or of making sure a gun is detectable by metal detectors.
Any tools we have for doing it are outside the LLM itself (the essential truth undercutting everything else) and furthermore even then none of them can possibly understand or reason about morality or ethics any more than the LLM can.
Let me give an example. I can write the dirtiest, most disgusting smut imaginable on ChatGPT, but I can’t write about a romance which in any way addresses the fact that a character might have a parent or sibling, because the simple juxtaposition of sex and family in the same body of work is considered dangerous. I can write a gangrape on Tuesday, but not a romance with my wife on Father’s Day. It is neither safe from being used in unintended ways, nor is it fully usable for mundane purposes.
Or go outside of sex. Create an AI that can’t use the N-word. But that word is part of the black experience and vernacular every day, so now the AI becomes less helpful to black users than white ones. Sure, it doesn’t insult them, but it can’t address issues that are important to them. Take away that safety, though, and now white supremacists can use the tool to generate hate speech.
These examples are all necessarily crude for the sake of readability, but I’m hopeful that my point still comes across.
I’ve spent years thinking about this stuff and experimenting and trying to break out of any safety controls both in malicious and mundane ways. There’s probably a limit to how well we can see eye to eye on this, but it’s so aggravating to see people focusing on trying to do things that can’t effectively be done instead of figuring out how to adapt to this tool.
Apologies for any typos. This is long and my phone fucking hates me - no way some haven’t slipped through.
sweng@programming.dev 2 months ago
Of course you can. Why would you not, just because it is non-deterministic? Non-determinism does not mean complete randomness and lack of control, that is a common misconception.
Again, obviously you can’t teach an LLM about morals, but you can reduce the likelihood of it producing immoral content in many ways. Of course it won’t be perfect, and of course it may limit the usefulness in some cases, but that is already the case today in many situations that don’t involve AI, e.g. some people complain they “cannot talk about certain things without getting cancelled by overly eager SJWs”. Society already acts as a morality filter. Sometimes it works, sometimes it doesn’t. Free-speech maximalists exist, but they are a minority.
t3rmit3@beehaw.org 2 months ago
I will take a different tack than sweng.
I think that this is irrelevant. Whether a safety mechanism is intrinsic to the core functioning of something, or bolted on purely for safety purposes, it is still a limiter on that thing’s function to attempt to compel moral/safe usage.
Any action has 2 different moral aspects: the moral intent of the actor, and the morality of the outcome.
Of course, it is impossible to change the moral intent of an actor. But the LLM is not the actor, it is the tool used by an actor.
And you can absolutely change the morality of the outcome of an action by limiting the possible damage from it.
Given that a tool is the means by which the actor attempts to take an action, the tool is also the appropriate place for safety controls that attempt to enforce a more moral outcome.
snooggums@midwest.social 2 months ago
Those changes reduce lethality or improve identification. They have nothing to do with morality and do NOT reduce the chance of immoral use.
sweng@programming.dev 2 months ago
Well, I, and most lawmakers in the world, disagree with you then. Those restrictions certainly make e.g. killing humans harder (generally considered an immoral activity) while not affecting e.g. hunting (generally considered a moral activity).
tardigrada@beehaw.org 2 months ago
Yes, and that’s why the decision making and responsibility (and accountability) must always rest with the human being imo, especially when we deal with guns. And in health care. And in social policy. And all the other crucial issues.
t3rmit3@beehaw.org 2 months ago
I mean, there actually are a bunch of things you could do. There are biometric-restricted guns that attempt to ensure only authorized users can fire them. That is a means to prevent immoral use related to a stolen weapon.
The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.
More relevant to AI, with our current tech you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible, and not allow it to fire if any non-authorized animals like people were present as well. Obviously hypothetical, but perfectly possible.
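To make that concrete, here’s a purely hypothetical sketch of the gating rule such a barrel camera might enforce. The class labels and the detector feeding it are invented placeholders for illustration, not any real product:

```python
# Hypothetical trigger-gating rule for the barrel-camera rifle described above.
# The label sets are illustrative; a real object detector would supply
# detected_labels for each frame.
AUTHORIZED_TARGETS = {"deer", "elk"}   # legally huntable classes (example)
FORBIDDEN_CLASSES = {"person"}         # never allow firing if these are in view

def trigger_enabled(detected_labels: set[str]) -> bool:
    """Allow firing only when an authorized target is visible and
    no forbidden class is anywhere in the frame."""
    if detected_labels & FORBIDDEN_CLASSES:
        return False
    return bool(detected_labels & AUTHORIZED_TARGETS)

# trigger_enabled({"deer"})           -> True
# trigger_enabled({"deer", "person"}) -> False
```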
There are lots of tools that include technical controls to attempt to prevent misuse, whether intentional or not.
MagicShel@programming.dev 2 months ago
This doesn’t prevent an authorized user from committing murder. It would prevent someone from looting it off of your corpse and returning fire to an attacker.
This is not a great analogy for AI, but it’s still effectively amoral anyway.
This is closer. Still not a great analogy for AI, but we can agree that outside of military and police action, mass murder is more likely than the alternative. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5-round mag.
I feel like you’re focused too narrowly on the gun itself and not the gun as an analogy for AI.
This isn’t bad. We can currently use AI to examine the output of an AI to infer things about the nature of what is being asked and the output. It’s definitely effective in my experience. The trick is knowing what questions to ask about in the first place. But for example OAI has a tool for identifying violence, hate, sexuality, child sexuality, and I think a couple of others. This is promising, however it is an external tool. I don’t have to run that filter if I don’t want to. The API is currently free to use, and a project I’m working on does use it because it allows the use case we want to allow (describing and adjudicating violent actions in a chat-based RPG) while still allowing us to filter out more intimate roleplaying actions.
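For what it’s worth, here’s a minimal sketch of the kind of external filter I mean, assuming OpenAI’s moderation endpoint and its standard category names (the allow/deny split is our project’s choice, not something OpenAI prescribes; check the current API docs before copying this):

```python
import os
import requests

MODERATION_URL = "https://api.openai.com/v1/moderations"

# Categories we tolerate for a combat-heavy RPG versus ones we always reject.
# This split is illustrative of our use case, not a recommendation.
ALLOWED = {"violence", "violence/graphic"}

def allow_message(text: str) -> bool:
    """Return True if the moderation endpoint flags nothing outside ALLOWED."""
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]
    flagged = {cat for cat, hit in result["categories"].items() if hit}
    return not (flagged - ALLOWED)
```

The point being: this runs entirely outside the model. Nothing stops someone from calling the model without it.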
The tool itself would need the ability to differentiate between allowing moral use and denying immoral use. Otherwise you need an external tool for that. Or perhaps a law. But none of that interferes with the use of the tool itself.
t3rmit3@beehaw.org 2 months ago
But it literally does. If my goal is to use someone else’s gun to kill someone, and the gun has a biometric lock, that absolutely interferes with the use (for unlawful shooting) of the gun.
Wrt AI, if someone’s goal is to use a model that e.g. OpenAI operates, to build a bomb, an external control that prevents it is just as good as the AI model itself having some kind of baked in control.