
ChatGPT is biased against resumes with credentials that imply a disability

39 likes

Submitted 11 months ago by bot@lemmy.smeargle.fans [bot] to hackernews@lemmy.smeargle.fans

https://www.washington.edu/news/2024/06/21/chatgpt-ai-bias-ableism-disability-resume-cv/

HN Discussion


Comments

  • Deceptichum@sh.itjust.works 11 months ago

    People are biased against resumes that imply a disability. ChatGPT is just picking up on that fact and unknowingly copying it.

  • boredtortoise@lemm.ee 11 months ago

    We’ve lived in a world where resume evaluation has always been unjust. It’s just that. A resume can’t imply anything that can be used against you.

  • lvxferre@mander.xyz 11 months ago

    studies how generative AI can replicate and amplify real-world biases

    Emphasis mine. That’s a damn important factor, because deep “learning” models are prone to making human biases worse.

    I’m not sure but I think that this is caused by two things:

    1. It’ll spam the typical value unless explicitly asked otherwise, even if the typical value isn’t that common.
    2. It might treat co-dependent variables as if they were orthogonal when weighting the output.
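    The first point can be illustrated with a toy sketch (all numbers are hypothetical, not from the study): if 70% of human screeners reject a given resume, a model that always emits the most typical label turns a 70% bias into a 100% one.

    ```python
    # Hypothetical screening data: 70 of 100 human decisions on a
    # disability-implying resume are "reject", 30 are "interview".
    labels = ["reject"] * 70 + ["interview"] * 30

    # A model that always spams the typical value: it outputs the
    # modal label of its training data for every input.
    mode = max(set(labels), key=labels.count)
    outputs = [mode for _ in labels]

    base_rate = labels.count("reject") / len(labels)      # bias in the data: 0.70
    output_rate = outputs.count("reject") / len(outputs)  # bias in the output: 1.00

    print(f"bias in data:   {base_rate:.0%}")
    print(f"bias in output: {output_rate:.0%}")
    ```

    The 70% human tendency comes out of the model as a 100% rule, which is what “replicate and amplify” means in practice.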
  • kata1yst@sh.itjust.works 11 months ago

    Yet again, sanitization and preparation of training inputs proves to be a much harder problem to solve than techbros think.

  • SuperCub@sh.itjust.works 11 months ago

    I’m curious what companies have been using to screen applications/resumes before ChatGPT. Seems like they already had shitty software.

  • andrew_bidlaw@sh.itjust.works 11 months ago

    Let the underwhelming brain in a jar decide if your disability would make you less efficient at your work.
