Comment on Why is AI Pornifying Asian Women?
GiuseppeAndTheYeti@midwest.social 10 months ago
Because we have been pornifying Asian women on the internet for decades. Doesn’t that already answer the question posed in the title?
Admetus@sopuli.xyz 10 months ago
And every single Asian game and anime tends to go skimpy or virtually softcore with its female characters. Rarely do you see a female character in full armor.
Gaywallet@beehaw.org 10 months ago
You’re absolutely correct, yet ask someone who’s very pro-AI and they might dismiss such claims as “needing better prompts”. Also, many people may not be as tech-informed as you are, and shining a light on algorithmic bias can help them understand and navigate the world we now live in.
helenslunch@feddit.nl 10 months ago
If the author doesn’t know the answer, then it is helpful to provide it. If they know the answer, then why are they phrasing the title as a question?
MBM@lemmings.world 10 months ago
If you genuinely don’t know: because it’s an attention-grabbing title (which isn’t inherently bad)
Even_Adder@lemmy.dbzer0.com 10 months ago
It’s really hard getting dark skin sometimes. A lot of the time it’s not even just the base model: LoRAs and Textual Inversions make the skin lighter again, so you have to try even harder. It’s going to take conscious effort from people to tune models that are inclusive, and with the way media is biased right now, I feel like it’s going to take a lot of it.
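For anyone wondering where those knobs even live, here’s a rough sketch using the Hugging Face diffusers library; the model and repository IDs are placeholders, not recommendations:

```python
# Minimal sketch (assumes the Hugging Face diffusers library);
# model/repo IDs below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # base model, with its own training biases
    torch_dtype=torch.float16,
).to("cuda")

# Add-ons trained on top of the base model; each pulls outputs toward
# whatever its own training images looked like.
pipe.load_lora_weights("some-user/example-style-lora")      # placeholder
pipe.load_textual_inversion("some-user/example-embedding")  # placeholder, adds a trigger token for prompts

# Lowering the LoRA scale weakens its pull, which is one of the few knobs
# you have for counteracting an add-on that skews skin tone.
image = pipe(
    "portrait photo of a woman in full plate armor",
    cross_attention_kwargs={"scale": 0.6},
).images[0]
image.save("portrait.png")
```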
jarfil@beehaw.org 10 months ago
“Inclusive models” would need to be larger.
Right now people seem to prefer smaller quantized models, with whatever set of even smaller LoRAs on top to make them output what they want… keeping only the more generic elements in the base model.
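Roughly what that stack looks like in practice, if it helps (diffusers with the PEFT backend; the repo IDs are placeholders):

```python
# Sketch of the "small base + stacked LoRAs" setup; repo IDs are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

# Half-precision weights keep the generic base model's footprint down.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Several lightweight LoRAs layered on the generic base, each weighted separately.
pipe.load_lora_weights("some-user/character-lora", adapter_name="character")  # placeholder
pipe.load_lora_weights("some-user/style-lora", adapter_name="style")          # placeholder
pipe.set_adapters(["character", "style"], adapter_weights=[0.8, 0.5])

image = pipe("a knight in full plate armor, detailed illustration").images[0]
```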
Muehe@lemmy.ml 10 months ago
[citation needed]
To my understanding the problem is that the models reproduce biases in the training material, not model size. Alignment is currently a manual process after the initial unsupervised learning phase, often done by click-workers (Reinforcement Learning from Human Feedback, RLHF), and aimed at coaxing the model towards more “politically correct” outputs. But by that point the damage is already done, since the bias is encoded in the model weights and will resurface in the outputs either randomly or if you “jailbreak” enough.
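To make the “damage is already done” point concrete, here’s a toy calculation rather than any real RLHF pipeline: a “model” that is just a distribution over two kinds of output, pretrained on skewed data and then nudged by the kind of KL-penalised reward step alignment objectives use. The numbers are invented.

```python
# Toy illustration only, not a real RLHF implementation.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# "Pretraining": logits learned from a skewed dataset (90% sexualised depictions).
pretrained_logits = np.log(np.array([0.9, 0.1]))  # [sexualised, neutral]

# "Alignment": the reward prefers the neutral output, but a KL penalty keeps
# the tuned policy close to the pretrained one (as in standard RLHF objectives).
reward = np.array([-1.0, 1.0])
beta = 2.0  # strength of the KL penalty toward the pretrained distribution

# Closed-form optimum of the KL-regularised objective:
#   p*(y) is proportional to p_pretrained(y) * exp(reward(y) / beta)
aligned = softmax(pretrained_logits + reward / beta)

print("pretrained:", softmax(pretrained_logits))  # ~ [0.90, 0.10]
print("aligned:   ", aligned)                     # ~ [0.77, 0.23] — skew reduced, not removed
```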
In the context of the OP, if your training material has a high volume of sexualised depictions of Asian women, the model will reproduce that in its outputs, which is also the argument the article makes. So what you need for more inclusive models is essentially a de-biased training set designed with that specific purpose in mind.
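As a very rough sketch of what “designed with that specific purpose in mind” could mean in practice (the file layout and category field here are made up, not any standard), you’d rebalance the captioned dataset before training or fine-tuning:

```python
# Hypothetical rebalancing step for a captioned image dataset; the metadata
# format and "category" field are illustrative only.
import json
import random
from collections import defaultdict

def rebalance(metadata_path, seed=0):
    # metadata.jsonl: one {"file": ..., "caption": ..., "category": ...} per line
    with open(metadata_path) as f:
        records = [json.loads(line) for line in f]

    by_category = defaultdict(list)
    for rec in records:
        by_category[rec["category"]].append(rec)

    # Downsample every category to the size of the smallest one, so the
    # training set no longer reproduces the skew of the scraped data.
    target = min(len(recs) for recs in by_category.values())
    rng = random.Random(seed)
    balanced = []
    for recs in by_category.values():
        balanced.extend(rng.sample(recs, target))
    rng.shuffle(balanced)
    return balanced

# e.g. balanced = rebalance("metadata.jsonl")
```

Downsampling is the bluntest option; reweighting samples or collecting more underrepresented material are gentler versions of the same idea.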
I’m glad to be corrected here, especially if you have any sources to look at.
Even_Adder@lemmy.dbzer0.com 10 months ago
I wouldn’t mind. I’m here for it.