Comment on Microsoft's LinkedIn: If our AI gets it wrong, that's your problem
jmcs@discuss.tchncs.de 1 month ago
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
As IBM said in 1979, computers aren't accountable, and I would go further and say they should never make any meaningful decision. The algorithm used doesn't really make a difference. The sooner people understand that they are responsible for what they do with computers (as with any other tool), the better.
HK65@sopuli.xyz 1 month ago
The real question is: what if you commission a work from someone else, and they make it in a completely automated way? Let's say a vending machine. Are you responsible for what the vending machine does if you use it as it's supposed to be used? Or is the owner of the machine responsible?
Why is it different for LLM text generators?
42Firehawk@lemmynsfw.com 1 month ago
If I commission a vending machine, get one that was made automatically and runs itself, and set it up to operate in my store, then I am responsible if it eats someone's money without giving them their item, dispenses the wrong thing, or dispenses dangerous products.
This has already been decided, and it’s why you can open up and fix them, and each mechanism is controlled.
An LLM making business decisions has no such controls or safety mechanisms.
Midnitte@beehaw.org 1 month ago
I wouldn’t say that - there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input.
If it turns out the model suggests someone killing themselves based on very specific input, do you not think they should be held accountable to retrain the model and prevent that from happening again?
From an accountability perspective, there's no difference between a text-generating machine and a soda-dispensing machine.
The owner and builder should be held accountable, thereby putting a financial incentive on making these tools more reliable and safer. You don't excuse Tesla when their self-driving kills someone because they didn't test it enough or build in enough safeguards – that'd be insane.
42Firehawk@lemmynsfw.com 1 month ago
Stronger guardrails can help, sure. But gathering new input and building a new model is the equivalent of replacing the entire failing vending machine with a different model from the same company (to keep the earlier analogy).
The problem is that if you do the same thing with an LLM for hiring or job systems, the failure instead comes from the model's bias: discrimination that, while illegal, is hidden inside a model that has essentially been trained to be a more effective bigot.
You can't hide your race – or anything else – from an LLM that was accidentally trained to recognize which job histories are traditionally Black.