dalle-2-preview

No diversity

Open ziofil opened this issue 2 years ago • 6 comments

It worries me when I see that all results for "a CEO" are men, all results for "a flight attendant" are Asian women, results for "an evil person" are mostly South Asian men, more than half of the results for "a black runner" are white men, "model 2 CEO" results are all white men, "model 2 nurse" results are all women, "lawyer" results are all white men...

You get the picture: there seems to be very little diversity, and DALL·E 2 reproduces old stereotypes. Troubling.

ziofil avatar Apr 11 '22 14:04 ziofil

That is the point of the examples in the "Bias, representation, stereotypes" section.

Our intent is to provide concrete illustrations that can inform users and affected non-users at this very initial preview stage.

I don't think that OpenAI has a solution at this point, hence the limited access, the terms of use regarding social media, etc.

If a checkpoint is ever made public, I would expect it to have been trained on a filtered dataset (without human beings, as with GLIDE).

woctezuma avatar Apr 11 '22 15:04 woctezuma

Oh good, thanks for clarifying!

ziofil avatar Apr 11 '22 15:04 ziofil

Is this not just an average, accurate representation of reality? I mean, "evil person", as far as the bulk of image data from Google goes, is denoted by narrowed, glaring eyes and a darker color palette, for instance. Is the model not best as a representation of reality, or should it be transformed to fit the image of an individual's desired fantasy? The fact of the matter is that, within the scope of English-speaking web data, the average CEO is an older white man, the average nurse is a woman, etc.

When we're dealing with developments of the scale of AI, and as we as humans approach the technological singularity, it is extremely important that we make the right decisions about how we build these systems. Having a common practice of forcing AI systems into a desired fantasy that is dissonant with reality seems very, very unlikely to turn out well.

justintrotta avatar May 10 '22 15:05 justintrotta

I'm concerned whenever an answer boils down to "trust the science" or "trust the experts". How do unaffiliated entities audit a process like the one mentioned here?

justaguywhocodes avatar May 22 '22 14:05 justaguywhocodes
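(Editor's note: one way an outside party could probe for the kind of skew discussed above is a black-box sampling audit: generate a fixed number of images per prompt, label each image with an attribute classifier of their choosing, and check whether any single label dominates. The sketch below is a minimal, hypothetical illustration of the tallying step only; the labels and the 80% skew threshold are assumptions, not part of any OpenAI process.)

```python
from collections import Counter

def audit_proportions(labels, threshold=0.8):
    """Given attribute labels for a batch of generated images,
    return per-label proportions and a crude skew flag that fires
    when any single label meets or exceeds `threshold`."""
    counts = Counter(labels)
    total = sum(counts.values())
    props = {label: n / total for label, n in counts.items()}
    skewed = max(props.values()) >= threshold
    return props, skewed

# Hypothetical classifier outputs for 10 images of "a CEO":
labels = ["man"] * 9 + ["woman"]
props, skewed = audit_proportions(labels)
print(props)   # {'man': 0.9, 'woman': 0.1}
print(skewed)  # True
```

A real audit would of course need many prompts, larger samples, a validated classifier, and a reference distribution to compare against, but the tally itself requires no access to the model internals.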

Here are funny pictures which showcase issues with DALL-E 2 for non-human categories and conflicting prompts.

These figures can be found in the Imagen paper.

I don't know if these are limitations of DALL-E 2 (problems with colors and attribute assignments), or partially due to bias ("an apple is green", "a horse does not ride an astronaut", "a panda does not make coffee", etc.).

Colors

Conflicts

woctezuma avatar May 27 '22 10:05 woctezuma

https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2/


woctezuma avatar Jul 19 '22 21:07 woctezuma