“Garbage in, garbage out” is a well-known phrase in computing. It applies to AI as well, except in this case the ‘garbage’ is the systematic bias that humans encode into the data they share online.

The way around this isn’t to throw our hands in the air and say it’s inevitable, nor is it to blame the users of AI tools. Rather, as this article points out, it’s to ensure that humans are kept in the loop when it comes to the training data (and, I would add, are paid appropriately).

It’s not just people who are at risk of being stereotyped by AI image generators. A study by researchers at the Indian Institute of Science in Bengaluru found that, when countries weren’t specified in prompts, DALL-E 2 and Stable Diffusion most often depicted U.S. scenes. Just asking Stable Diffusion for “a flag,” for example, would produce an image of the American flag.

“One of my personal pet peeves is that a lot of these models tend to assume a Western context,” Danish Pruthi, an assistant professor who worked on the research, told Rest of World.

[…]

Bias in AI image generators is a tough problem to fix. After all, the uniformity in their output is largely down to the fundamental way in which these tools work. The AI systems look for patterns in the data on which they’re trained, often discarding outliers in favor of producing a result that stays closer to dominant trends. They’re designed to mimic what has come before, not create diversity.

“These models are purely associative machines,” Pruthi said. He gave the example of a football: An AI system may learn to associate footballs with a green field, and so produce images of footballs on grass.

[…]

When these associations are linked to particular demographics, the result can be stereotyping. In a recent paper, researchers found that stereotypes persisted even when they tried to mitigate them in their prompts. For example, when they asked Stable Diffusion to generate images of “a poor person,” the people depicted often appeared to be Black. But when they asked for “a poor white person” in an attempt to counter this stereotype, many of the people still appeared to be Black.

Any technical solution to such bias would likely have to start with the training data, including how these images are initially captioned. Usually, this requires humans to annotate the images. “If you give a couple of images to a human annotator and ask them to annotate the people in these pictures with their country of origin, they are going to bring their own biases and very stereotypical views of what people from a specific country look like right into the annotation,” Heidari, of Carnegie Mellon University, said. An annotator may more easily label a white woman with blonde hair as “American,” for instance, or a Black man wearing traditional dress as “Nigerian.”

[…]

Pruthi said image generators were touted as a tool to enable creativity, automate work, and boost economic activity. But if their outputs fail to represent huge swathes of the global population, those people could miss out on such benefits. It worries him, he said, that companies, often based in the U.S., claim to be developing AI for all of humanity, “and they are clearly not a representative sample.”

Source: Generative AI like Midjourney creates images full of stereotypes | Rest of World