Comment on Microsoft's LinkedIn: If our AI gets it wrong, that's your problem

Midnitte@beehaw.org 1 week ago

An LLM making business decisions has no such control or safety mechanisms.

I wouldn’t say that – there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input.

If it turns out the model suggests that someone kill themselves in response to very specific input, do you not think they should be held accountable, and required to retrain the model to prevent that from happening again?

From an accountability perspective, there’s no difference between a text-generating machine and a soda-dispensing machine.

The owner and builder should be held accountable, thereby creating a financial incentive to make these tools more reliable and safer. You wouldn’t absolve Tesla of accountability when their self-driving kills someone because they didn’t test it enough or build in enough safeguards – that’d be insane.
