Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.
And even if we could provide the training algorithm a perfectly diverse dataset, who gets to decide what "diverse" means? You could poll a million anthropologists from across the world and observe trends, but you'd find no firm consensus. What if polling anthropologists in developing nations skews in a different direction than polling those in wealthy countries? What if a country was a colonizer in the past, or went through a violent revolution?
How do we decide who qualifies as an anthropologist? Is a doctorate required, or is a college degree with numerous publications sufficient?
I don't think we'll ever see a perfectly neutral solution to this problem. At best, we can approach these tools knowing they may carry biases, the same way we do when analyzing texts from the past. You make the best of what you have and strive to improve.