It’s horrifically bad, even without comparing it against other LLMs. I asked it for photos of actress and model Elle Fanning on a beach, and it accused me of seeking CSAM… That’s an instant never-going-to-use-again for me. Mishandling that subject matter in any way is not a “whoopsie”.
AI or DEI?
Submitted 8 months ago by MakunaHatata@lemmy.ml to [deleted]
https://lemmy.ml/pictrs/image/e6df9518-6910-4a15-91e7-f2d42ce0e215.png
Comments
pendulum_@lemmy.world 8 months ago
Lojcs@lemm.ee 8 months ago
That sounds more like “what shall we ever do if children are allowed to see bikinis”.
Kusimulkku@lemm.ee 8 months ago
This is fucking ridiculous. This AI is the worst of them all. I don’t mind it when they subtly try to insert some diversity where it makes sense but this is just nonsense.
Flumpkin@slrpnk.net 8 months ago
They are experimenting and tuning. Apparently without any correction there is significant racist bias. Basically the AI reflects the long term racial bias in the training data. According to this BBC article it was an attempt to correct this bias but went a bit overboard.
ApathyTree@lemmy.dbzer0.com 8 months ago
Significant racist bias is an understatement.
I asked a generator to make me a “queen monkey in a purple gown sitting on a throne” and I got maybe two pictures of actual monkeys. I even tried rewording it several times to be a real monkey, described the hair and everything.
The rest were all women of color.
Very disturbing. Pretty ladies, but very racist.
explodicle@local106.com 8 months ago
We all expected the AIs to launch nukes, and they simply held up a mirror.
Kusimulkku@lemm.ee 8 months ago
“For example, a prompt seeking images of America’s founding fathers turned up women and people of colour.”
“A bit”
MakunaHatata@lemmy.ml 8 months ago
herrvogel@lemmy.world 8 months ago
Yes, who can forget about Henry the Magnificent and his onion hat.
kromem@lemmy.world 8 months ago
It’s literally instructed to do Mad Libs with ethnic identities to diversify prompts for images of people.
You can see how it’s just inserting the ethnicity right before the noun in each case.
It was a very poor alignment strategy. This already blew up for DALL-E. Was Google not paying attention to their competitors’ mistakes?
Eddyzh@lemmy.world 8 months ago
It is ridiculous. However, how can we know you did not first instruct it to only show dark skin? Or that you didn’t select these from many examples that showed something else?
Kusimulkku@lemm.ee 8 months ago
This issue is widely reported and you can check it for yourself. I did, and it gave the same sort of results. Finnish presidents are now black.
stoneparchment@possumpat.io 8 months ago
It’s also like, I guess I would prefer it to make mistakes like this if it means it is less biased towards whiteness in other, less specific areas?
Like, we know these models are dumb as rocks. We know that they are imperfect and that they mirror the biases of their trainers and training data, and that in American society that means bias towards whiteness. If the trainers are doing what they can to prevent that from happening, whatever, that’s cool… even if the result is some dumb stuff like this sometimes.
I also don’t think it’s a problem for the user to specify race if it matters? Like “a white queen of England” is a fine thing to ask for, and if it isn’t specified, the model will include diverse options even if they aren’t historically accurate. No one gets bent out of shape if the outfits aren’t quite historically accurate, for example.
ji59@kbin.social 8 months ago
The problem is that these answers are hugely incorrect, and if a child learning about the history of England saw this, they would come away believing that England was always diverse.
The same is true for a recent post, where people who know nothing about Scottish history could learn from the images that half of Scotland’s population in the 18th century was black.
So from my perspective these images are just completely wrong and should be fixed.
Also, if you want diversity, what about handicapped people?
Amaltheamannen@lemmy.ml 8 months ago
And how do we know you didn’t crop out an instruction asking for diversity?
Either that, or it’s a side effect of trying to reduce bias in the training data.
skullgiver@popplesburger.hilciferous.nl 8 months ago
[deleted]
Cqrd@lemmy.dbzer0.com 8 months ago
OpenAI also does this with its image generator, but apparently not to the same degree.
ninjan@lemmy.mildgrim.com 8 months ago
gmtom@lemmy.world 8 months ago
Not sure if someone else has brought this up, but this is because these AI models are massively biased towards generating white people, so as a lazy “fix” they randomly add race tags to your prompts to get more racially diverse results.
kromem@lemmy.world 8 months ago
Exactly. I wish people had a better understanding of what’s going on technically.
It’s not that the model itself has these biases. It’s that the instructions given to it are heavy-handed in trying to correct for a representation bias that would otherwise skew the other way.
So the models are literally being instructed with things like “if generating a person, add a modifier to evenly represent various backgrounds like Black, South Asian…”
Here you can see that modifier being reflected back when the prompt is shared before the image.
It’s like an ethnicity Mad Libs the model is being instructed to fill out whenever it generates people.
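For anyone curious what that looks like mechanically, here’s a minimal sketch in Python of that kind of prompt-diversification step. Purely illustrative: the function name, modifier list, and person-noun check are my own assumptions, and in Gemini’s case the rewriting is reportedly done by instructing the model itself rather than by string manipulation, but the observable effect is the same.

import random

# Hypothetical modifier list; the real instructions reportedly name backgrounds like Black, South Asian, etc.
ETHNICITY_MODIFIERS = ["Black", "South Asian", "East Asian", "Hispanic", "white"]

# Hypothetical set of nouns that signal the prompt is asking for a picture of a person
PERSON_NOUNS = {"man", "woman", "person", "king", "queen", "soldier", "president"}

def diversify_prompt(prompt: str) -> str:
    # Insert a randomly chosen ethnicity right before the first person-noun in the prompt
    words = prompt.split()
    for i, word in enumerate(words):
        if word.lower().strip(".,!?") in PERSON_NOUNS:
            words.insert(i, random.choice(ETHNICITY_MODIFIERS))
            break
    return " ".join(words)

print(diversify_prompt("a portrait of an 18th century Scottish soldier"))
# Possible output: "a portrait of an 18th century Scottish South Asian soldier"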