Comment on An AI Just Attempted Murder... Allegedly... by SomeOrdinaryGamers [21:15 min] Video
TheRtRevKaiser@beehaw.org 1 day ago
I think the problem with anthropomorphizing LLMs this way is that they don’t have intent, so they can’t have responsibility. If this piece of software had been given the tools to actually kill someone, I think we all understand that it wouldn’t be appropriate to put the LLM on trial. Instead, we need to be looking at the people who are trying to give more power to these systems and dodge responsibility for their failures. If this LLM had caused someone to be killed, then the person who tied critical systems into a black-box piece of software that is poorly understood and not fit for the purpose is the one who should be on trial. That’s my problem with anthropomorphizing LLMs: it shifts the blame and responsibility away from the people who are responsible for attempting to use them for their own gain, at the expense of others.
yozul@beehaw.org 1 day ago
The problem with that line of thinking is that all these things are being done by large corporate entities, and the entire purpose of those entities is to make sure that responsibility is distributed across so many people that no one can be held accountable. That may or may not have been what they were originally designed for, but that is their current primary purpose.
No one will be held accountable, so there is no point in discussing intent and responsibility. There is none anywhere in the entire system, at least not held by anyone our justice system still has authority over. It is a meaningless thing to discuss.
It is far more useful to discuss what we are doing and why it is a bad idea even from the self-interest of the people actually doing it. That has a much better chance of accomplishing something.