Comment on Covert Racism in AI: How Language Models Are Reinforcing Outdated Stereotypes

leisesprecher@feddit.org 1 month ago

The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can't even see. How are you supposed to know that applicants from "bad" neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don't really explain how they arrive at a decision, you can't even audit them.
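(A minimal sketch of the kind of audit this points at, which only works if you have the raw decisions rather than the model's own explanation. The file name decisions.csv and the columns "group" and "rejected" are hypothetical placeholders, not anything from a real system.)

```python
# Sketch: compare rejection rates across neighborhood groups.
# Assumes a hypothetical decisions.csv with columns:
#   group    - applicant's neighborhood bucket
#   rejected - 1 if the application was rejected, else 0
import csv
from collections import defaultdict

counts = defaultdict(lambda: {"total": 0, "rejected": 0})

with open("decisions.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["group"]]["total"] += 1
        counts[row["group"]]["rejected"] += int(row["rejected"])

# Per-group rejection rate and ratio to the least-rejected group;
# a ratio well above 1.0 is the disparity that stays invisible
# when the system is simply presented as "objective".
rates = {g: c["rejected"] / c["total"] for g, c in counts.items() if c["total"]}
baseline = min(rates.values())

for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    ratio = rate / baseline if baseline else float("inf")
    print(f"{group}: rejection rate {rate:.1%}, ratio {ratio:.2f}x")
```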
