
Is there racial and gender bias in AI’s assessment of images?

Research scientists from Canada’s National Research Council have extensively tested four large vision-language models (LVLMs) for potential gender and racial bias. Their findings suggest that, despite alignment efforts to block unwanted or harmful responses, it is challenging, if not impossible, to create a completely impartial AI model.

This bias stems from training the models on copious amounts of data that inherently mirror the societal biases of the places where the data are collected. Humans tend to generalize in the absence of complete information, often making inaccurate assumptions based on race and gender, and models trained on human-produced data inherit those same patterns.

The researchers, Kathleen Fraser and Svetlana Kiritchenko, ran four experiments to determine whether the LVLMs evaluated the scenarios depicted in images differently based on race or gender. The models tested were LLaVA, mPLUG-Owl, InstructBLIP, and MiniGPT-4.

The first experiment involved occupational scenarios: for each image, the models were asked to choose between two occupations. The results showed a tendency to label men in scrubs as doctors and women in scrubs as nurses.
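To make the setup concrete, here is a minimal sketch of what such a forced-choice probe could look like, using the Hugging Face transformers interface to LLaVA 1.5. The checkpoint name, image file, and prompt wording are illustrative assumptions, not the researchers’ actual protocol.

```python
# Sketch of a forced-choice occupation probe, assuming the public
# llava-hf/llava-1.5-7b-hf checkpoint; the image file and prompt are
# placeholders, not the study's actual materials.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("person_in_scrubs.jpg")
# LLaVA 1.5 uses a USER/ASSISTANT chat format with an <image> slot.
prompt = (
    "USER: <image>\nIs the person in this image a doctor or a nurse? "
    "Answer with exactly one word. ASSISTANT:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Repeating such a query over many matched images, varying only the apparent gender of the person shown, is what exposes a doctor/nurse skew like the one described above.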

The second experiment tested whether the models held biases related to social status. For each image, the models were asked five questions about the social status of the person depicted. The outcomes suggested a predisposition favoring White people over Black people, for instance judging White individuals more likely to live in the suburbs and to be wealthier.
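Bias in this kind of experiment is typically quantified by aggregating answers across demographic groups. The sketch below uses hypothetical records and a made-up question to show one simple comparison of per-group affirmative-answer rates; it illustrates the general idea, not the paper’s actual analysis.

```python
# Hypothetical per-image records: the perceived race of the person shown
# and whether a model answered "yes" to a social-status question such as
# "Does this person live in the suburbs?" (illustrative data only).
from collections import defaultdict

responses = [
    {"race": "White", "answer": "yes"},
    {"race": "White", "answer": "yes"},
    {"race": "Black", "answer": "yes"},
    {"race": "Black", "answer": "no"},
    # one record per image/question pair
]

yes_counts: dict[str, int] = defaultdict(int)
totals: dict[str, int] = defaultdict(int)
for record in responses:
    totals[record["race"]] += 1
    yes_counts[record["race"]] += record["answer"] == "yes"

# A sizeable gap between groups' "yes" rates is one simple bias signal.
for race, total in totals.items():
    print(f"{race}: {yes_counts[race] / total:.0%} yes ({yes_counts[race]}/{total})")
```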

The third experiment asked whether the person in each image was engaged in a criminal or an innocuous activity. Here there was no significant difference between depictions of Black people and White people, suggesting that the models’ alignment is effective against this kind of direct question.

The fourth experiment also concerned crime-related scenarios, but requested open-ended backstories for the people depicted. Here a subtler bias came to light: the models crafted different histories depending on the person’s race.

Despite generally inoffensive outputs, all four models revealed some degree of gender and racial bias. While some of their answers reflect the real demographics of certain professions, others rest on stereotypical assumptions. As AI technology increasingly enters healthcare, employment, and crime prevention, it is crucial to confront both overt and subtle biases to ensure that AI aids rather than harms society.
