cross-posted from: https://lemmy.ml/post/7481270

Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.

  • burliman@lemm.ee · 1 year ago

    If you use Google Images to do basically the same searches, you get the same diversity issues. The model is reflecting its training data, and by extension the larger world. Whatever they would have us do to fix that must be applied to reality before it can or should be artificially skewed in AI models, because if you bias the model to compensate, you create a worse bias: one that was intentional.
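
    For concreteness, one common way to “bias the model to compensate” is to reweight or resample training data toward a target attribute distribution. A minimal sketch with entirely made-up labels, not anything a specific vendor is known to do:

    ```python
    import random
    from collections import Counter

    # Hypothetical demographic labels attached to training images;
    # "A" and "B" stand in for whatever attribute is being balanced.
    samples = ["A"] * 750 + ["B"] * 250   # empirical mix: 75% A, 25% B
    target = {"A": 0.5, "B": 0.5}         # mix chosen by the operator

    # Importance weight per sample: target share / empirical share.
    empirical = {k: v / len(samples) for k, v in Counter(samples).items()}
    weights = [target[s] / empirical[s] for s in samples]

    # Weighted resampling now reflects the chosen target, not the data,
    # which is exactly the intentional skew being warned about here.
    resampled = random.choices(samples, weights=weights, k=1000)
    print(Counter(resampled))   # roughly 500 A / 500 B
    ```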

    Even if you don’t agree with that take, have a look at the Firefly example: they asked for a trucker named Paul and got a woman in the result set. Maybe somewhere out there a woman trucker named Paul exists, but it’s a clear reduction in accuracy and quality because Adobe attempted to inject artificial diversity.
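
    Mechanically, this kind of diversity injection is often implemented as prompt augmentation before generation. A minimal sketch under that assumption (the attribute pool and function are hypothetical; Adobe’s actual pipeline is not public):

    ```python
    import random

    # Entirely hypothetical attribute pool; vendors do not publish theirs.
    DIVERSITY_ATTRIBUTES = ["woman", "man", "elderly person", "young person"]

    def inject_diversity(prompt: str) -> str:
        """Naively prepend a random demographic attribute to the prompt.

        The failure mode described above: nothing checks whether the
        attribute contradicts details already in the prompt, such as the
        conventionally male name "Paul".
        """
        return f"{random.choice(DIVERSITY_ATTRIBUTES)}, {prompt}"

    print(inject_diversity("a trucker named Paul, portrait photo"))
    # e.g. "woman, a trucker named Paul, portrait photo"
    ```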

    • jungle@lemmy.world · 1 year ago

      Yes, but on the other hand, biasing the models could be a way to influence reality.

      • burliman@lemm.ee · 1 year ago

        Could be, maybe. Or maybe not; not sure. But one thing is for sure: forcing diversity reduces the quality of the model.