Comment on Microsoft's LinkedIn: If our AI gets it wrong, that's your problem

42Firehawk@lemmynsfw.com 1 week ago

Stronger guardrails can help, sure. But getting new input and building a new model is the equivalent of replacing the entire vending machine with a different model from the same company when one fails (to extend the earlier analogy).

The problem is that if you do the same thing with an LLM in a hiring or job-screening system, the failure and bias come instead from the model being bigoted, which, while illegal, is hidden inside a model that has basically been trained on how to be a more effective bigot.

You can’t hide your race from an LLM that was accidentally trained to recognize which job histories are traditionally Black, or any other proxy for it.
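The proxy effect described above can be sketched with fully synthetic data. This is a toy illustration, not any real hiring system: all names, rates, and the "employer" feature are made up for the example. It shows how a model that never sees the protected attribute still reproduces historical bias through a correlated feature in the job history.

```python
import random
from collections import defaultdict

random.seed(0)

def make_applicant(group):
    # Assumption for the sketch: past employer correlates with group
    # because of historical segregation, not merit.
    at_segregated = (group == "B" and random.random() < 0.8) or \
                    (group == "A" and random.random() < 0.1)
    employer = "segregated_employer" if at_segregated else "other_employer"
    qualified = random.random() < 0.5  # merit is independent of group
    return {"group": group, "employer": employer, "qualified": qualified}

train = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]

# Historical hiring was biased against group B, so the labels the
# model learns from already encode that bias.
for a in train:
    a["hired"] = a["qualified"] and not (a["group"] == "B" and random.random() < 0.7)

# The "model" sees only the employer field (race is hidden) and simply
# learns the historical hire rate per employer.
seen, hired = defaultdict(int), defaultdict(int)
for a in train:
    seen[a["employer"]] += 1
    hired[a["employer"]] += a["hired"]
rate = {e: hired[e] / seen[e] for e in seen}

# The proxy carries the bias: the mostly-group-B employer scores far lower,
# so the model penalizes anyone with that job history.
print(rate["other_employer"] > rate["segregated_employer"])  # True
```

Dropping the race column does nothing here; the bias survives in whatever features correlate with it.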
