
AI or DEI?

127 likes

Submitted 1 year ago by MakunaHatata@lemmy.ml to [deleted]

https://lemmy.ml/pictrs/image/e6df9518-6910-4a15-91e7-f2d42ce0e215.png

source

Comments

  • gmtom@lemmy.world 1 year ago

    Not sure if someone else has brought this up, but this is because these AI models are massively biased towards generating white people, so as a lazy “fix” they randomly add race tags to your prompts to get more racially diverse results.

    source
    • kromem@lemmy.world 1 year ago

      Exactly. I wish people had a better understanding of what’s going on technically.

      It’s not that the models themselves have these biases. It’s that the instructions given to them are heavy-handed in trying to correct for an inversely skewed representation bias.

      So the models are literally instructed with things like “if generating a person, add a modifier to evenly represent various backgrounds like Black, South Asian…”

      Here you can see that modifier being reflected back when the prompt is shared before the image.

      It’s like an ethnicity Mad Libs that the model is being instructed to fill out whenever generating people.
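
      To make the “Mad Libs” point concrete, here’s a rough sketch of the kind of prompt rewrite being described (hypothetical Python, not Google’s actual code; the function name, modifier list, and noun list are made up for illustration):

      ```python
      import random

      # Hypothetical illustration only: a hard-coded list of modifiers gets
      # inserted right before person nouns, which is why the ethnicity shows
      # up in the prompt that is reflected back alongside the image.
      ETHNICITY_MODIFIERS = ["Black", "South Asian", "East Asian", "Hispanic", "White"]
      PERSON_NOUNS = {"person", "man", "woman", "king", "queen", "soldier", "pope"}

      def diversify_prompt(prompt: str) -> str:
          rewritten = []
          for word in prompt.split():
              # Drop a randomly chosen modifier directly in front of the noun.
              if word.lower().strip(".,!?") in PERSON_NOUNS:
                  rewritten.append(random.choice(ETHNICITY_MODIFIERS))
              rewritten.append(word)
          return " ".join(rewritten)

      print(diversify_prompt("a portrait of a medieval English king"))
      # e.g. "a portrait of a medieval English Black king"
      ```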

      source
  • pendulum_@lemmy.world 1 year ago

    It’s horrifically bad, even without comparing it against other LLMs. I asked it for photos of actress and model Elle Fanning on a beach, and it accused me of seeking CSAM… That’s an instant never-going-to-use-again for me; mishandling that subject matter in any way is not a “whoopsie”.

    source
    • Lojcs@lemm.ee 1 year ago

      That sounds more like “what shall we ever do if children are allowed to see bikinis”.

      source
  • Kusimulkku@lemm.ee 1 year ago

    This is fucking ridiculous. This AI is the worst of them all. I don’t mind it when they subtly try to insert some diversity where it makes sense but this is just nonsense.

    source
    • Flumpkin@slrpnk.net 1 year ago

      They are experimenting and tuning. Apparently, without any correction, there is significant racist bias. Basically the AI reflects the long-term racial bias in the training data. According to this BBC article, it was an attempt to correct this bias but went a bit overboard.

      source
      • ApathyTree@lemmy.dbzer0.com 1 year ago

        Significant racist bias is an understatement.

        I asked a generator to make me a “queen monkey in a purple gown sitting on a throne” and I got maybe two pictures of actual monkeys. I even tried rewording it several times to be a real monkey, described the hair and everything.

        The rest were all women of color.

        Very disturbing. Pretty ladies, but very racist.

        source
      • explodicle@local106.com 1 year ago

        We all expected the AIs to launch nukes, and they simply held up a mirror.

        source
      • Kusimulkku@lemm.ee 1 year ago

        “For example, a prompt seeking images of America’s founding fathers turned up women and people of colour.”

        “A bit”

        source
  • MakunaHatata@lemmy.ml 1 year ago

    Image

    Image

    source
    • herrvogel@lemmy.world 1 year ago

      Yes, who can forget about Henry the Magnificent and his onion hat.

      source
    • kromem@lemmy.world 1 year ago

      It’s literally instructed to do Mad Libs with ethnic identities to diversify prompts for images of people.

      You can see how it’s just inserting the ethnicity right before the noun in each case.

      It was a very poor alignment strategy. This already blew up for DALL-E. Was Google not paying attention to their competitors’ mistakes?

      source
  • Eddyzh@lemmy.world 1 year ago

    It is ridiculous. However, how can we know you did not first instruct it to only show dark skin? Or select these from many examples that showed something else?

    source
    • Kusimulkku@lemm.ee 1 year ago

      This issue is widely reported and you can check it out for yourself. I did, and it gave similar results. Finnish presidents are now black.

      source
    • stoneparchment@possumpat.io 1 year ago

      It’s also like, I guess I would prefer it to make mistakes like this if it means it is less biased towards whiteness in other, less specific areas?

      Like, we know these models are dumb as rocks. We know that they are imperfect and that they mirror the biases of their trainers and training data, and that in American society that means bias towards whiteness. If the trainers are doing what they can to prevent that from happening, whatever, that’s cool… even if the result is some dumb stuff like this sometimes.

      I also don’t think it’s a problem for the user to specify race if it matters? Like “a white queen of England” is a fine thing to ask for, and if it isn’t specified, the model will include diverse options even if they aren’t historically accurate. No one gets bent out of shape if the outfits aren’t quite historically accurate, for example.

      source
      • ji59@kbin.social 1 year ago

        The problem is that these answers are hugely incorrect, and if a child learning about the history of England saw this, they would come away with the false impression that England was always diverse.
        The same goes for a recent post, where people who know nothing about Scottish history could conclude from the images that half of Scotland’s population in the 18th century was black.
        So from my perspective these images are just completely wrong, and it should be fixed.
        Also, if you want diversity, what about handicapped people?

        source
  • Amaltheamannen@lemmy.ml 1 year ago

    And how do we know you didn’t crop out an instruction asking for diversity?

    Either that, or it’s a side effect of trying to reduce training data bias.

    source
    • skullgiver@popplesburger.hilciferous.nl 1 year ago
      [deleted]
      source
      • Cqrd@lemmy.dbzer0.com 1 year ago

        OpenAI also does this with its image generator, but apparently not to the same degree.

        source
  • konkonjoja@lemmy.world 1 year ago

    They stopped their AI from generating people …

    source
  • ninjan@lemmy.mildgrim.com 1 year ago

    Image

    Is there some preview version of Gemini Ultra that can generate images, or what gives?

    source