Why is AI Pornifying Asian Women?

88 likes

Submitted 1 year ago by Gaywallet@beehaw.org to technology@beehaw.org

https://joysauce.com/why-is-ai-pornifying-asian-women/

Comments

  • raccoona_nongrata@beehaw.org 1 year ago
    [deleted]

    • Appoxo@lemmy.dbzer0.com 1 year ago

      But, that said, when I messed around with AI image generators, pretty much any prompt that included a woman or female designation tended toward sexualized versions, even to the point of violating the generator’s own content policy.

      Tried it on the Copilot app, and one result had an Asian woman; it wasn’t sexual, but it was indeed very sexy in style.

      Prompt: Generate me a picture of a female wizard reading a massive book of spells

      Pictures: [image]

      • DdCno1@beehaw.org 1 year ago

        What is sexy in style here? They are wearing loose, long-sleeved robes up to the neck. Makeup and hair are just following current trends.

      • p03locke@lemmy.dbzer0.com 1 year ago

        That’s DALL-E. DALL-E is different from Stable Diffusion, which is different from Midjourney, which is different from the many NAI anime models out there.

        We need to stop treating latent diffusion models like they are all the same thing. Models are shaped by the data they are trained on. Sure, a lot of them started out from a Stable Diffusion base, but that’s not always the case, and enough training can take them off in specialized directions.

      • astraeus@programming.dev 1 year ago

        Cute wizard girl w

    • p03locke@lemmy.dbzer0.com 1 year ago

      Yeah, if you go back through hundreds of years of artwork, most of it is pictures of women. Some of them are nude. There are many, many artists, modern and classical, who only draw women. And there are a ton of male Japanese artists from centuries ago who did the same thing.

      I asked it to create a sort of witchy sorceress character, and in many of the generations she was fully topless with her boobs out, despite me not asking for that and even explicitly putting “fully clothed” into the prompt. There was one image that the system created and then removed, threatening me with a ban for it being too sexualized, despite there being no sexual language in my prompt; that was all the AI.

      That’s just one model, and obviously not Stable Diffusion. These models are just based on whatever they were trained on. If you don’t like it, download another model trained on something else and try it out. Or train one yourself.

      Also, I wish everybody would download an SD client and just use this software locally. All of these toy websites are shit, and local clients aren’t going to threaten to ban you over what you generated. It’s a good learning experience to figure out the software, and these tools are more useful for more things than just bitching about the tech on the web.

  • GiuseppeAndTheYeti@midwest.social 1 year ago

    Because we have been pornifying Asian women on the internet for decades. Does that really beg the question posed in the title?

    • Gaywallet@beehaw.org 1 year ago

      You’re absolutely correct, yet ask someone who’s very pro-AI and they might dismiss such claims as “needing better prompts”. Also, many people may not be as tech-informed as you are, and bringing algorithmic bias to light can help them understand and navigate the world we now live in.

      • helenslunch@feddit.nl 1 year ago

        Dismissing the article just because you already know the answer doesn’t really encourage people to participate in a discussion.

        If the author doesn’t know the answer, then it is helpful to provide it. If they do know the answer, then why are they phrasing the title as a question?

      • Even_Adder@lemmy.dbzer0.com 1 year ago

        It’s really hard getting dark skin sometimes. A lot of the time it’s not even just the model; LoRAs and Textual Inversions make the skin lighter again, so you have to try even harder. It’s going to take conscious effort from people to tune models that are inclusive, and with the way media is biased right now, I feel like it’s going to take a lot of effort.

    • Admetus@sopuli.xyz 1 year ago

      And every single Asian game and anime tends to go for skimpy or virtual softcore with its female characters. Rarely do you see a female character in full armor.

  • jarfil@beehaw.org 1 year ago

    Wrong question. The right question would be:

    Why is the AI used in Lensa’s Magic Avatars app pornifying Asian women?

    Ask Lensa to remove the “ugly” and similar negative prompts from their avatar-generating app, and let’s see what comes out.

    stable-diffusion-art.com/how-to-use-negative-prom…

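A note on the mechanics being suggested here: apps like Lensa combine the user-facing prompt with a hidden negative prompt that steers the sampler away from the listed terms. The sketch below is purely hypothetical (the term list and function name are ours, not Lensa’s actual configuration); it only illustrates what dropping “ugly” from such a list would change.

```python
# Hypothetical sketch of prompt assembly in an avatar app. The default
# negative terms are illustrative only, not Lensa's real configuration.
DEFAULT_NEGATIVE = ("ugly", "deformed", "blurry", "bad anatomy")

def build_prompts(subject, drop_terms=()):
    """Return (prompt, negative_prompt), optionally dropping negative terms."""
    negative = [t for t in DEFAULT_NEGATIVE if t not in drop_terms]
    return f"magic avatar portrait of {subject}", ", ".join(negative)

# With "ugly" removed, the sampler is no longer steered away from it:
print(build_prompts("a woman", drop_terms=("ugly",)))
# → ('magic avatar portrait of a woman', 'deformed, blurry, bad anatomy')
```

The point of the experiment jarfil proposes is that the second element of this tuple, not the user’s own prompt, is where much of an app’s “beautifying” pressure lives.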
    • smeg@feddit.uk 1 year ago

      Can we please collectively get into the habit of editing these borderline-clickbait titles, or at least adding subtitles explaining the real article? This isn’t Reddit, where you can’t edit anything or add explanatory text!

  • megopie@beehaw.org 1 year ago

    If I had to guess, they probably did a shit job labeling the training data, or used pre-labeled images. Now, where in the world could they possibly have found huge amounts of pictures of women on the internet with the specific label of “Asian”?

    Almost like most of what determines the quality of the output is not “prompt engineering” but actually the back-end work of labeling the training data properly, and you’re not actually saving much labor over more traditional methods, just making the labor more anonymous, easier to hide, and thus easier to exploit and devalue.

    Almost like this shit is a massive farce, just like the “metaverse” and crypto, that will fail to be market-viable and waste a shit-ton of money that could have been spent on actually useful things.

    • webghost0101@sopuli.xyz 1 year ago

      They did literally nothing and seem to use the default Stable Diffusion model, which is supposed to be a tech demo. It would have been easy to put “(((nude, nudity, naked, sexual, violence, gore)))” in as the negative prompt.

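For readers unfamiliar with the (((…))) syntax in the suggested negative prompt: in the AUTOMATIC1111 Stable Diffusion WebUI convention, each layer of parentheses multiplies the attention weight of the enclosed terms by 1.1 (square brackets divide by 1.1). A minimal sketch of that weighting rule, with a function name of our own choosing and brackets omitted for brevity:

```python
# A1111-style prompt emphasis: each wrapping "(" multiplies the attention
# weight of the enclosed text by 1.1, so "(((nude)))" weights "nude" at
# 1.1 ** 3 ≈ 1.331.
def emphasis_weight(term: str) -> tuple[str, float]:
    depth = 0
    while term.startswith("(") and term.endswith(")"):
        term = term[1:-1]
        depth += 1
    return term, round(1.1 ** depth, 4)

print(emphasis_weight("(((nude)))"))  # → ('nude', 1.331)
```

So the triple parentheses above would make the sampler avoid those concepts roughly 33% harder than an unweighted negative prompt.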
      • megopie@beehaw.org 1 year ago

        The problem is that negative prompts can help, but when the training data is so heavily poisoned in one direction, stuff still gets through.

  • lloram239@feddit.de 1 year ago

    Published on December 16, 2022

    Please ignore this article. It’s completely out of date.

    • Gaywallet@beehaw.org 1 year ago

      This is incredibly dismissive of the concerns raised and adds nothing to the discussion.

  • Buelldozer@lemmy.today 1 year ago

    Because the Internet is for porn. Always has been, always will be.

  • Muffi@programming.dev 1 year ago

    Scroll through the trained models on civit.ai and you’ll quickly get a feel for the dystopian level of “prettifying” everything in the AI-generation world.

    I also once searched for “brown” just to see if any models were trained to create non-white-skinned people, and was shocked when the results were filled with models trained on Millie Bobby Brown from Stranger Things. I don’t even want to know what those models are used for.

    • ExLisper@linux.community 1 year ago

      > dystopian level of “prettifying” everything in the AI-generation world.

      So like all the ad campaigns, TV shows, and movies in the real world?

    • EddoWagt@feddit.nl 1 year ago

      Of the first 10 models I saw, the first image was of a woman 9 times…

  • GilgameshCatBeard@lemmy.ca 1 year ago

    Because simps.

    Saved you a click.

  • belated_frog_pants@beehaw.org 1 year ago

    Because white dudes fetishizing Asian women wrote the LLMs and pointed them at the training data.

    • anachronist@midwest.social 1 year ago

      I work in tech, and Asian guys tend to outnumber white guys in it, especially if you combine East Asian and South Asian.

  • Omega_Haxors@lemmy.ml 1 year ago

    Stable Diffusion is little more than content laundering. It cannot create anything more than what you put in.

    • lloram239@feddit.de 1 year ago

      Yawn, are we still blindly repeating this utter nonsense from a year ago?

    • darkphotonstudio@beehaw.org 1 year ago

      You’re so confidently incorrect about something you clearly don’t know much about.

      • anachronist@midwest.social 1 year ago

        How is he wrong?

  • intensely_human@lemm.ee 1 year ago

    Are the images above supposed to depict “porn”? I’ve never seen porn like that.

    • 1984@lemmy.today 1 year ago

      In 2024, the brainwashing is almost complete.

  • millie@beehaw.org 1 year ago

    I’m not exposed to a huge amount of media coming out of Asia, outside of a handful of Korean shows that Netflix has picked up, and anime. But if anime is any indicator, I’m not really surprised that the training data for Asian women leans more toward overt sexualization. Even setting aside the whole misogynistic ‘fan service’ thing, I don’t feel like I see as much representation of women who defy traditional gender roles as in the last twenty or so years of Western media.

    It could certainly be that anime is actually a huge outlier here, but if the training data is primarily from the English-speaking web, it might be overrepresented anyway. When it comes to weird AI image behaviors, it pays to think about the probable training data.

    Like, Stable Diffusion seems to do a better job of rendering jewelry if you tell it to surround it with berries. Given the output, this seems to be due to Christmas-themed jewelry ads. They also tend to add a lot of bokeh for the same reason.

  • onlinepersona@programming.dev 1 year ago

    Garbage in, garbage out 🤷

    CC BY-NC-SA 4.0

    • IHeartBadCode@kbin.social 1 year ago

      Absolutely this. The reason AI defaults female characters into “female armor mode” is the same reason Excel autofills January, February, Maruary. Our spicy-autocorrect overlords cannot extrapolate data in a direction their training has no knowledge of.

    • scrubbles@poptalk.scrubbles.tech 1 year ago

      You train on a bunch of Reddit crap, you’re going to get neckbeard Reddit crap out. It’d look different if they only used art history books.

  • Nacktmull@lemm.ee 1 year ago

    Does AI not generally pornify all women and girls?

  • sculd@beehaw.org 1 year ago

    Some of the replies trying to dismiss the issue, and the general lack of concern from moderators about aggressive replies from AI apologists (in this thread but also in other AI-related threads), are disheartening.

  • RobotToaster@mander.xyz 1 year ago

    Because it’s trained on the internet.

    • Buelldozer@lemmy.today 1 year ago

      I prefer the original.

  • webghost0101@sopuli.xyz 1 year ago

    While I agree there is a big issue with the badly biased and sexist training data, this entire article is about the Lensa app, which uses (I assume) the default Stable Diffusion model.

    Intentionally creating sexualized pictures is banned in their guidelines. And yet no one thought of creating a good negative prompt that negates any kind of nudity or eroticism? That still doesn’t properly fix the training data, but at least people wouldn’t be unwillingly presented with porn of their own images.

    Also, anyone can create a dataset and train a Stable Diffusion model, so why is Lensa relying on the default model, which is more like a quick-and-dirty tech demo? They had all the tools to do this right but decided not to use even the easy, lazy ones.

  • Even_Adder@lemmy.dbzer0.com 1 year ago

    If we’re talking open-source models, it’s because a lot of the people training them are Asian, and have that bias.

  • intensely_human@lemm.ee 1 year ago

    Because people are telling it to, I’d wager.

  • shellsharks@infosec.pub 1 year ago

    Because AI is the literal worst.
