Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • kromem@lemmy.world · 5 months ago

    If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

    Data can be biased in a number of ways that don’t always reflect broader social biases, and even when they appear to, the causal story behind the parallel isn’t necessarily straightforward.
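
    A minimal sketch of that point, with invented source names and label counts (nothing here comes from a real dataset): tally the attribute rate per source and compare it to the aggregate. If one source dominates the training mix, the model inherits that source’s rate, whatever the broader culture looks like.

    ```python
    # Toy example: per-source attribute rates vs. the aggregate rate.
    # All source names and labels are invented for illustration.
    from collections import Counter

    # Each record: (source, is_smiling) -- hypothetical labels.
    records = [
        ("stock_photos", True), ("stock_photos", True),
        ("stock_photos", True), ("stock_photos", False),
        ("news_photos", False), ("news_photos", True), ("news_photos", False),
        ("candid_web", False), ("candid_web", True), ("candid_web", False),
    ]

    totals = Counter(src for src, _ in records)
    smiles = Counter(src for src, smiling in records if smiling)

    # Per-source rates: the stock source skews heavily toward smiling.
    for src in totals:
        print(f"{src}: {smiles[src] / totals[src]:.0%} smiling")

    # The aggregate hides the skew: a model trained mostly on the stock
    # source inherits its smile rate, not a "human" baseline.
    print(f"overall: {sum(smiles.values()) / len(records):.0%} smiling")
    ```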

    • VoterFrog@lemmy.world · 5 months ago

      I mean “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

      I get what you’re saying in specific circumstances. Sure, a dataset built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources, wide enough to cover much of the data we’ve built a culture around.