Comment on AI couldn't create an image of a woman like me - until now
SSUPII@sopuli.xyz 4 days ago
I'm thinking it instead won't be the case? Bigger models will be able to store the less common realities.
luxyr42@lemmy.dormedas.com 4 days ago
You may be able to prompt for the less common realities, but the default of the model is still going to see “doctor” as a white man.
Eq0@literature.cafe 4 days ago
They will, at best, replicate the data sets. They will learn racial discrimination and propagate it.
If you have a deterministic system, for example, to rate a CV, you can ensure that no obvious negative racial bias is included. If instead you have an LLM (or other AI), there is no supervision over which data elements are used and how. The only thing we can check is whether the predictions match the (potentially racist) data.
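To make that contrast concrete, here is a minimal sketch in Python, with made-up scoring rules and numbers: a deterministic scorer whose criteria can be inspected line by line, versus the only check available for an opaque model, which is comparing its outputs across groups after the fact.

```python
# Hypothetical sketch: auditable rule-based CV scoring vs. outcome-only
# auditing of a black-box model. Rules and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class CV:
    years_experience: int
    has_degree: bool
    relevant_skills: int  # count of skills matching the job posting

def deterministic_score(cv: CV) -> int:
    """Every criterion is explicit, so we can verify that no protected
    attribute (race, gender, etc.) enters the score."""
    score = 0
    score += min(cv.years_experience, 10)   # cap experience at 10 points
    score += 5 if cv.has_degree else 0
    score += 2 * cv.relevant_skills
    return score

def outcome_audit(scores_by_group: dict[str, list[int]], threshold: int) -> dict[str, float]:
    """For an opaque model we can only audit outcomes: what fraction of each
    group clears the hiring threshold. Disparities here may simply mirror
    bias already present in the training data."""
    return {
        group: sum(s >= threshold for s in scores) / len(scores)
        for group, scores in scores_by_group.items()
    }

if __name__ == "__main__":
    print(deterministic_score(CV(years_experience=4, has_degree=True, relevant_skills=3)))  # 15
    # Hypothetical scores produced by a black-box model for two applicant groups:
    print(outcome_audit({"group_a": [12, 15, 9, 18], "group_b": [8, 11, 7, 14]}, threshold=12))
```

The point of the sketch: the first function can be audited directly, while for the second all we see is whether its pass rates differ between groups, which tells us nothing about why.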