Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

    • Jojo@lemm.ee · 5 months ago

      The complaint listed in the text was that it “refused to generate white people in any context”, which was not the author’s experience, so they shared screenshots of their results, which did include white Americans.

    • 🔍🦘🛎@lemmy.world · 5 months ago

      It’s a demonstration that the model is coded to include diversity, and that it doesn’t generate four middle-aged WASP moms.

    • fidodo@lemmy.world · 5 months ago

      I think it’s an example of why they programmed in diversity, to ensure you get diverse responses, but they forgot about edge cases.

  • kaffiene@lemmy.world · 5 months ago

    Why would anyone expect “nuance” from a generative AI? It doesn’t have nuance, it’s not an AGI, it doesn’t have EQ or sociological knowledge. This is like that complaint about LLMs being “warlike” when they were quizzed about military scenarios. It’s like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy

    • UlrikHD@programming.dev · 5 months ago

      I’m pretty sure it’s generating racially diverse Nazis because companies tinker with the prompts under the hood to counterweight biases in the training data. A naive implementation of generative AI wouldn’t output black or Asian Nazis.

      it doesn’t have EQ or sociological knowledge.

      It sort of does (in a poor way), but they call it bias and try to dampen it.

      • Echo Dot@feddit.uk · 5 months ago

        At the moment AI is basically just a complicated kind of echo. It is fed data and it parrots it back to you with quite extensive modifications, but it’s still the original data deep down.

        At some point that won’t be true and it will be a proper intelligence. But we’re not there yet.

        • maynarkh@feddit.nl · 5 months ago

          Nah, the problem here is literally that they would edit your prompt and add “of diverse races” to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of training data and produce black prisoners and white scientists by itself.
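
          Something like this, as a toy sketch (made-up names, obviously not Google’s actual code), is the kind of rewrite I mean:

          ```python
          # Hypothetical pre-processing step, for illustration only.
          PEOPLE_TERMS = ("person", "people", "man", "woman", "soldier", "scientist")

          def rewrite_prompt(user_prompt: str) -> str:
              """Blindly append a diversity modifier whenever the prompt mentions people."""
              if any(term in user_prompt.lower() for term in PEOPLE_TERMS):
                  return user_prompt + ", of diverse races"
              return user_prompt

          print(rewrite_prompt("a German soldier in 1943"))
          # -> "a German soldier in 1943, of diverse races"
          ```

          The rewrite has no awareness of historical context, which is exactly how you end up with diverse Nazis.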

      • kaffiene@lemmy.world · 5 months ago

        I don’t disagree. The article complained about the lack of nuance in generated responses, and I was responding to the ability of LLMs and generative AI to exhibit that. I agree with your points about bias.

    • stockRot@lemmy.world · 5 months ago

      Why shouldn’t we expect more and better out of the technologies that we use? Seems like a very reactionary way of looking at the world.

      • kaffiene@lemmy.world · 5 months ago

        I DO expect better from new technologies. I don’t expect technologies to do things that they cannot. I’m not saying it’s unreasonable to expect better technology; I’m saying that expecting human qualities from an LLM is a category error.

  • rab@lemmy.ca · 4 months ago

    I can’t fathom why Google would force diversity into AI.

    People use AI as tools. If the tool doesn’t work correctly, people will not use it, full stop. It’s that simple.

    There are many different AI out there that don’t behave this way and people will be quick to move on to one of those instead.

    Surprisingly stupid, even for Google.

    • Player2@lemm.ee · 5 months ago

      There is a difference between having actually diverse data sources and secretly adding the word “diverse” to each image generation prompt.

      • Dayroom7485@lemmy.world · 5 months ago

        Never claimed they had diverse data sources - they probably don’t.

        My point is that when minorities are underrepresented, which is the default case in GenAI, the (white, male) public tends to accept that.

        I like that they tried to fix the issue of GenAI being racist and sexist. The solution is obviously flawed, but better this than a racist model.

        • StereoTrespasser@lemmy.world · 5 months ago

          I can’t believe someone has to spell this out for you, but here we go: an accurate picture of people from an era in which there was no diversity will, by definition, not be diverse.

  • Jeom@lemmy.world · 5 months ago

    Inclusivity is obviously good, but what Google’s doing just seems all too corporate and plastic.

    • guajojo@lemmy.world · 5 months ago

      It’s trying so hard not to be racist that it’s being even more racist than other AIs. It’s hilarious.

  • heavy@sh.itjust.works · 5 months ago

    Now that shit is funny. I hope more people take more time to laugh at companies scrambling to pour billions into projects they don’t understand.

    Laugh while it’s still funny, anyway.

  • NotJustForMe@lemmy.ml · 5 months ago

    It’s okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people. ٩(。•́‿•̀。)۶

    • rottingleaf@lemmy.zip · 5 months ago

      WDYM?

      Only their new SW trilogy comes to mind. But in SW, racism among humans was limited to very backwards (savage by SW standards) planets; racism between humans and other spacefaring races was more of an issue, so a villain of any human race is normal there.

      It’s rather the purely cinematographic part that clearly made skin color more notable, for whatever reason, and there would be some racists among viewers.

      They probably knew they couldn’t reach the quality level of the OT and PT, so they made such choices intentionally during production so that they could later complain about fans being racist.

      • NotJustForMe@lemmy.ml · 5 months ago

        Have you read the article? It was about misrepresenting historical figures, racism was just a small part.

        It was about favoring diversity, even if it’s historically inaccurate or even impossible. Something Disney is very good at.

  • yildolw@lemmy.world · 5 months ago

    Oh no, not racial impurity in my Nazi fanart generator! /s

    Maybe you shouldn’t use a plagiarism engine to generate Nazi fanart. Thanks

  • RGB3x3@lemmy.world · 5 months ago

    A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

    This is honestly fascinating. It’s putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

    There’s a lot of learning to be done here and it would be sad to miss that opportunity.

    • Eyck_of_denesle@lemmy.zip · 5 months ago

      How are you guys getting it to generate “persons”? It simply says it’s against its Google AI principles to generate images of people.

    • Buttons@programming.dev · 5 months ago

      It’s putting human biases on full display at a grand scale.

      The skin color of people in images doesn’t matter that much.

      The problem is that these AI systems have more subtle biases, ones that aren’t easily revealed with simple prompts and amusing images, and these AIs are being put to work making decisions who knows where.

      • intensely_human@lemm.ee · 5 months ago

        In India they’ve been used to determine whether people should be kept on or kicked off of programs like food assistance.

        • rottingleaf@lemmy.zip · 5 months ago

          Well, humans are similar to pigs in the sense that they’ll always find the stinkiest pile of junk in the area and taste it before any alternative.

          EDIT: That’s about the popularity of “AI” today, not about semantic expert systems like the ones people built on Lisp machines.

    • kromem@lemmy.world · 5 months ago

      It’s putting human biases on full display at a grand scale.

      Not human biases. Biases in the labeled data set. Those could sometimes correlate with human biases, but they could also not correlate.

      But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

      Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
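
      As a rough sketch of that split (stand-in functions only, not any real API):

      ```python
      # Toy pipeline: the LLM only handles text; a separate diffusion
      # model turns the final prompt into pixels.

      def llm_rewrite(user_request: str) -> str:
          # Stand-in for the LLM step: text in, (possibly modified) text out.
          return user_request + ", photorealistic"

      def diffusion_sample(prompt: str) -> str:
          # Stand-in for the diffusion model: prompt in, image out.
          return f"<image rendered from: {prompt}>"

      def generate_image(user_request: str) -> str:
          prompt = llm_rewrite(user_request)
          return diffusion_sample(prompt)

      print(generate_image("a smiling scientist"))
      ```

      So any bias you see in the pixels comes from the diffusion model and its training data, not from the LLM that forwarded the prompt.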

        • kromem@lemmy.world · 5 months ago

          If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

          Data can be biased in a number of ways that don’t always reflect broader social biases, and even when they appear to, the cause-versus-correlation relationship behind the parallel isn’t necessarily straightforward.

          • VoterFrog@lemmy.world · 5 months ago

            I mean “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

            I get what you’re saying in specific circumstances. Sure, a dataset that is built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we’ve built a culture around.

  • Underwaterbob@lemm.ee · 5 months ago

    This could make for some hilarious alternate-history satire or something. I could totally see Key and Peele heading a group of racially diverse Nazis ironically preaching racial purity and attempting to take over the world.

    • AstridWipenaugh@lemmy.world · 5 months ago

      Dave Chappelle did that (back in the day, before he went off the deep end) with a sketch about a blind black man who joined the Klan.

  • Rob@lemdro.id · 5 months ago

    I’m all for letting people of all backgrounds have equal work and representation opportunities, but this AI went too far.

    What I am against is taking official, past figures such as U.S. presidents and race-swapping them. These are real people who were white. Sorry if it offends someone, but that’s just how it was.

    At this point we’re putting DEI even above the people who used to govern the U.S. as official presidents? Why? Who does this help? If anything, you make people with legitimate purposes hate DEI more by doing this. Imagine if they did that to President Obama; people would be sticking it to Google ten times harder than they are now.

      • Rob@lemdro.id · 5 months ago

        I can’t speak much about my opinion of a person, as it might (a) be off-topic from the original post and (b) start controversy for either side of politics.

        Sure, some people can be controversial, but for something like Gemini to seemingly go out of its way to not generate a person’s appearance accurately? Not the most professional look.

        Although, if a user were to ask for a race swap of a historical president, I would be okay with that, since that’s something they explicitly requested.

    • roofuskit@lemmy.world · 5 months ago

      So what you’re saying is that a white actor should always be cast to play any character that was originally white whether they are the best actor or not?

      Keep in mind historical figures are largely white because of systemic racism, and in your scenario the film and television industry would have to purposefully double down on the discrimination that empowered those people in order to meet your requirements.

      I’m not defending Google’s ham-fisted approach. But at the same time it’s a great reinforcement of the reality that large language models cannot and should not be relied upon for accurate information. LLMs are just as ham-fisted with accurate information as Google’s approach to diversity in LLMs.

      • Rob@lemdro.id · 5 months ago

        Let me answer your first question by reversing it back at you: if Barack Obama was historically black, should a black person be able to play him? I believe so. This should be the same for all real-life historical figures. If you want more diversity, create new characters to fill the void. If the new characters are good, people will love them.

        The film industry may be different, since a lot of these shows and movies are made-up stories. If they change something there, it isn’t the biggest deal to me, because it wasn’t meant to be taken seriously; it was meant as entertainment.

        My argument was for real-life historical figures to be represented more accurately, because this isn’t just about diversity in jobs and entertainment anymore; you’re changing real-life history regarding governments, militaries, presidents, and so on. And Gemini didn’t do this just to U.S. figures.

        I do agree AI can make mistakes and isn’t perfect, and it shouldn’t always be relied on for real-life context, but from Google you sometimes just expect better.

        • roofuskit@lemmy.world · 5 months ago

          Someone who is half white would have to play him, right? So you’d have to exclude any truly dark-skinned black people from the role. You know, because the American public would never have put someone dark-skinned into the presidency.

          • Rob@lemdro.id · 4 months ago

            I disagree with that, because Barack was actually black, so he should be depicted as such regardless of how people feel, because that is how he appeared.

            • roofuskit@lemmy.world · 4 months ago

              But you see where this gets dicey, right?

              It’s also different when someone’s race is central to their story.

  • BurningnnTree@lemmy.one · 5 months ago

    No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don’t specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.

    • fidodo@lemmy.world · 5 months ago

      It’s silly to point at brand-new technology and not expect there to be flaws. But I think it’s totally fair game to point out the flaws and try to make them better; I don’t see why we should just accept technology in its current state and not try to improve it. I totally agree that nobody should be mad at this. We’re figuring it out, an issue was pointed out, and they’re trying to see if they can fix it. Nothing wrong with that part.

    • UnderpantsWeevil@lemmy.world · 5 months ago

      No matter what Google does, people are going to come up with gotcha scenarios to complain about.

      American using Gemini: “Please produce images of the KKK, historically accurate Santa’s Workshop Elves, and the board room of a 1950s auto company”

      Also Americans: “AH!! AH!!! Minorities and Women!!! AAAAAHHH!!!”

      I mean, idk, man. Why do you need AI to generate an image of George Washington when you have thousands of images of him already at your disposal?

      • FinishingDutch@lemmy.world · 5 months ago

        Because sometimes you want an image of George Washington, riding a dinosaur, while eating a cheeseburger, in Paris.

        Which you actually can’t do on Bing anyway, since its ‘content warning’ stops you from generating anything with George Washington…

        Ask it for a Founding Father though, it’ll even hand him a gat!

        https://lemmy.world/pictrs/image/dab26e07-34c8-422e-944f-83d7f719ea2e.jpeg

        • pirat@lemmy.world · 5 months ago

          The random lettuce between every layer is weirdly off-putting to me. It seems like it’s been growing on the burger for quite some time :D

          • FinishingDutch@lemmy.world · 5 months ago

            Doesn’t look too bad to me. I love a fair bit of crispy lettuce on a burger. Doing it like that at least spreads it out a bit, rather than having a big chunk of lettuce.

            Still, if that was my burger… I’d add another patty and extra cheese.

          • FinishingDutch@lemmy.world · 5 months ago

            Funnily enough, he’s not eating one in the other three images either. He’s holding an M16 in one, with the dinosaur partially as a hamburger (?). In the other two he’s merely holding the burger.

            I assume if I change the word order around a bit, I could get him to enjoy that burger :D

            • VoterFrog@lemmy.world · 5 months ago

              This is the thing. There’s an incredible number of inaccuracies in the picture, several of which flat out ignore the request in the prompt, and we laugh it off. But the AI makes his skin a little bit darker? Write the Washington Post! Historical accuracy! Outrage!

              • FinishingDutch@lemmy.world · 5 months ago

                Well, the tech is of course still young. And there’s a distinct difference between:

                A) User error: a prompt that isn’t as good as it could be, with the user not understanding, for example, the ‘order of operations’ that the AI model likes to work in.

                B) The tech flubbing things because it’s new and constantly in development

                C) The owners behind the tech injecting their own modifiers into the AI model in order to get a more diverse result.

                For example, in this case I understand the issue: the original prompt was ‘image of an American Founding Father riding a dinosaur, while eating a cheeseburger, in Paris.’ Doing it in one long sentence with several commas makes it harder, in my experience, for the AI to pin down the ‘main theme’. Basically, it first thinks ‘George on a dinosaur’, with the burger and Paris as afterthoughts. But if you change the prompt around a bit to ‘An American Founding Father is eating a cheeseburger. He is riding on a dinosaur. In the background of the image, we see Paris, France.’, you end up with the correct result:

                Basically the same input, but by simply swapping the wording around it got the correct result. Other ‘inaccuracies’ are of course to be expected, since I didn’t really specify anything for the AI to go off of. I didn’t give it a timeframe, for one, so it wouldn’t ‘know’ not to include the Eiffel Tower or a modern handgun. Or that the flag would be completely wrong.

                The problem is with C), where you simply have no say in the modifiers that they inject into any prompt you send. Especially when the companies state that they are doing it on purpose so the AI will offer a more diverse result in general. You can write the best, most descriptive prompt, and there will still be an unexpected outcome if they inject their modifiers in the right place in your prompt. That’s the issue.

                • VoterFrog@lemmy.world · 5 months ago

                  C is just a workaround for B and for the fact that the technology has no way to identify and overcome harmful biases in its data set and model. This kind of behind-the-scenes prompt engineering isn’t unique to diversifying image output, either. It’s a necessity for creating a product that is usable by the general consumer, at least until the technology evolves enough to incorporate those lessons directly into the model.

                  And so my point is, there’s a boatload of problems that stem from the fact that this is early technology and the solutions to those problems haven’t been fully developed yet. But while we are rightfully not upset that the system doesn’t understand that lettuce doesn’t go on the bottom of a burger, we’re for some reason wildly upset that it tries to give our fantasy quasi-historical figures darker skin.

    • OhmsLawn@lemmy.world · 5 months ago

      It’s really a failure of one-size-fits-all AI. There are plenty of non-diverse models out there, but Google has to find a single solution that always returns diverse college students, but never diverse Nazis.

      If I were to use A1111 to make brown Nazis, it would be my own fault. If I use Google, it’s rightfully theirs.

      • PopcornTin@lemmy.world · 5 months ago

        The issue seems to be that the underlying code tells the AI that if some data set has too many white people or men (Nazis, ancient Vikings, popes, Rockwell paintings, etc.), it should make them diverse in race and gender.

        What do we want from these AIs? Facts, even if they might be offensive? Or facts as we wish they would be for a nicer world?