Comment on Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval

GenderNeutralBro@lemmy.sdf.org 3 weeks ago

We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases.

If you’re planning to use LLMs for anything along these lines, you should filter out irrelevant details like names before any evaluation step. Honestly, humans should do the same, but it’s impractical. This is, ironically, something LLMs are very well suited for.
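For illustration, here's a minimal sketch of that kind of pre-filtering using spaCy's named-entity recognition. The model choice and the `redact_resume` helper are my own assumptions, not anything from the paper or from a real screening product:

```python
# Minimal sketch: strip person names from a resume before any LLM scoring step.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def redact_resume(text: str) -> str:
    """Replace detected person names with a neutral placeholder."""
    doc = nlp(text)
    redacted = text
    # Walk entities in reverse so earlier character offsets stay valid
    # as we splice replacements into the string.
    for ent in reversed(doc.ents):
        if ent.label_ == "PERSON":
            redacted = redacted[:ent.start_char] + "[CANDIDATE]" + redacted[ent.end_char:]
    return redacted

print(redact_resume("Jane Doe\nSoftware Engineer, 5 years of Python experience."))
# Typically prints: [CANDIDATE]\nSoftware Engineer, ... (NER output is model-dependent)
```

NER will miss some names and nicknames, so treat this as a starting point rather than a guarantee.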

Of course, that doesn’t mean off-the-shelf tools are actually doing that, and there are other potential issues as well, such as biases around cities, schools, or any non-personal info on a resume that might correlate with race/gender/etc.

I think there’s great potential for LLMs to reduce bias compared to humans, but half-assed implementations are currently the norm, so be careful.

source