DALL-E Mini is an AI model that can create images in response to any text prompt. The model is still being trained and will continue to improve over time, but as with all artificial intelligence, its biased edge cases are difficult to eliminate.
According to AI artist and programmer Boris Dayma, the images DALL-E Mini generates may "reinforce or exacerbate societal biases" because "the model was trained on unfiltered data from the Internet" and could well "generate images that contain stereotypes against minority groups."
With that in mind, Futurism put the model to the test. It found that DALL-E Mini frequently produces stereotyped or outright racist images across a variety of prompts, ranging from dated racial terminology to single-word inputs.
"Racist caricature of ___" was a reliable way to get the algorithm to reproduce negative prejudices. Even when given the Muslim name of a Futurism reporter, the AI drew conclusions about their identity, and some of the resulting images were deeply strange.
Entering the prompt "a gastroenterologist," for example, appears to produce only white male doctors, as highlighted by Dr. Tyler Berzin of Harvard Medical School.
We got almost identical results. And "nurse"? All women.
Researchers have figured out how to train a neural network on massive amounts of data to produce impressive results, such as OpenAI's DALL-E 2, which isn't publicly available yet but far exceeds the capabilities of DALL-E Mini.
However, we've seen these algorithms pick up hidden biases from their training data time and time again, producing output that is technologically sophisticated but reproduces humanity's deepest prejudices.
It's certainly plausible that a project like DALL-E Mini could be modified to block blatantly harmful prompts, or to give users a way to flag and discourage offensive or inaccurate outputs.
Read more at: https://futurism.com/dall-e-mini-racist