If it had the power to do so, it would have killed someone, and the people running the world are giving more and more unchecked power to these same systems every day. Maybe you should be less worried about semantics and whether machines have souls, and more worried about what will happen if we continue down the path we are on.
Comment on An AI Just Attempted Murder... Allegedly... by SomeOrdinaryGamers [21:15 min] Video
MotoAsh@piefed.social 1 day ago
Murder requires intent. This is just more anthropomorphization of LLMs…
yozul@beehaw.org 1 day ago
MotoAsh@piefed.social 8 hours ago
Nowhere at all anywhere did I ever say AI is totally not a problem.
Maybe you should be less worried about reading between the lines and more worried about assuming what people didn’t say?
The bot didn’t want anything. It didn’t try to murder anyone. At all. What happened was, rich fucks with unchecked power are allowed to release dangerous, unethical products based on nothing but hype and vapid promises.
The only technology-related thing here is the involvement of AI, and it’s all BS and stupid.
Without intent from the machine, this is EXACTLY THE SAME situation as every other time greedy capitalists pushed unsafe products.
TheRtRevKaiser@beehaw.org 1 day ago
I think the problem with anthropomorphizing LLMs this way is that they don’t have intent, so they can’t have responsibility. If this piece of software had been given the tools to actually kill someone, I think we all understand that it wouldn’t be appropriate to put the LLM on trial. Instead, we need to be looking at the people who are trying to give more power to these systems and dodge responsibility for their failures. If this LLM had caused someone to be killed, then the person who tied critical systems into a black-box piece of software that is poorly understood and not fit for purpose is the one who should be on trial. That’s my problem with anthropomorphizing LLMs: it shifts the blame and responsibility away from the people who are responsible for attempting to use them for their own gain, at the expense of others.
yozul@beehaw.org 1 day ago
The problem with that line of thinking is that all these things are being done by large corporate entities, and the entire purpose of those entities is to make sure that responsibility is distributed across so many people that no one can be held accountable. That may or may not have been what they were originally designed for, but that is their current primary purpose.
No one will be held accountable, so there is no point in discussing intent and responsibility. There is none anywhere in the entire system, held by anyone our justice system still has authority over. It is a meaningless thing to discuss.
It is far more useful to discuss what we are doing and why it is a bad idea even by the self-interest of the people actually doing it. That has a much better chance of accomplishing something.
spit_evil_olive_tips@beehaw.org 1 day ago
If it had the power to do so it would have killed someone
right…the problem isn’t the chatbot, it’s the people giving the chatbot power and the ability to affect the real world.
thought experiment: I’m paranoid about home security, so I set up a booby-trap in my front yard, such that if someone walks through a laser tripwire they get shot with a gun.
if it shoots a UPS delivery driver, I am obviously the person culpable for that.
now, I add a camera to the setup, and configure an “AI” to detect people dressed in UPS uniforms and avoid pulling the trigger in that case.
but my “AI” is buggy, so a UPS driver gets shot anyway.
if a news article about that claimed “AI attempts to kill UPS driver” it would obviously be bullshit.
the actual problem is that I took a loaded gun and gave a computer program the ability to pull the trigger. it doesn’t really matter whether that computer program was 100 lines of Python running on a Raspberry Pi or an “AI” running on 100 GPUs in some datacenter somewhere.
yozul@beehaw.org 1 day ago
No, you completely missed the point. I don’t disagree with any of that. I think you are right. It just doesn’t matter. At all. If an AI is made by thousands of people over the course of a decade and run in a billion-dollar data center, no one will ever be held accountable for its actions. There is no intent in the AI or in the inhuman systems of humans that led to its creation.
I’m not arguing that AIs have intent. I’m arguing that talking about the “intent” is a dangerous distraction from talking about what is happening and what we could do to prevent it.
MotoAsh@piefed.social 8 hours ago
It DOES matter. Directly. Fully.
If people think that the unthinking “AI” actually has autonomy, they will be less likely to hold the people responsible to account.
Why do you not understand that? It is a critical fact of the matter that modern-day “AI” does not think or want, because then responsibility for its actions rightfully falls on whoever set up the Rube Goldberg machine with machetes on it.
This is not a machine going postal. It’s a dangerous product they’ve been allowed to sell.
thingsiplay@beehaw.org 1 day ago
“Intent” is not that well defined. For example, in Germany, if someone drives drunk and someone gets killed as a result, the defendant (the person who drove the car) can be accused of “intent to murder”, even if that was not their intention at all. Negligence can amount to intent.
So if the creators AND users of the LLM do not care that people get killed as a result, does that make them murderers? Of course the LLM isn’t the murderer here; that goes without saying. It’s the human who is responsible.
MotoAsh@piefed.social 7 hours ago
That’s why it’s even more important to realize the machine has no intent. Its actions are solely the result of its creator’s actions in creating it.
I point out anthropomorphization so much because it not only inoculates people against the advertising that WILL anthropomorphize these systems, but also makes sure that when one fucks up, the appropriate people are punished.
This isn’t a thinking machine going postal. It’s a dangerous product being pushed out with little regard for consequences.
Selling dangerous products used to mean something before billionaires bought the government…