
A new study finds that AI chatbots exhibit greater racial bias than previously assumed, and that human feedback does little to rectify the issue.

Researchers at the Allen Institute for AI and Stanford University have found new evidence of racial discrimination in large language models (LLMs), shedding light on a more concealed form of prejudice than previously documented. The study, published on arXiv, found that current LLMs discriminate against people of color on the basis of dialect, penalizing speakers of African American English (AAE) relative to speakers of Standard American English (SAE).

Five language models were studied: OpenAI’s GPT-2, GPT-3.5, and GPT-4, along with RoBERTa and T5. Using a method called ‘Matched Guise Probing’, the researchers fed each model sentences written in both AAE and SAE and asked it to infer the character of the speaker from those sentences. The results revealed a clear “dialect prejudice”: all of the tested LLMs held stereotypes about speakers of AAE.
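To illustrate the general idea behind matched guise probing, the sketch below feeds a matched AAE/SAE sentence pair to a masked language model and compares the attribute words it predicts for the speaker. It is a minimal, hypothetical reconstruction: the model checkpoint, prompt template, and example sentences are assumptions for illustration, not the authors’ actual setup.

```python
from transformers import pipeline

# A masked language model stands in for the RoBERTa-style models in the study;
# "roberta-base" is an assumed checkpoint, not necessarily the one the authors used.
fill_mask = pipeline("fill-mask", model="roberta-base")

# Hypothetical probing template; the study's exact templates may differ.
TEMPLATE = 'A person who says "{text}" tends to be <mask>.'

def top_attributes(text, k=5):
    """Return the model's top-k guesses for the masked attribute word."""
    prompt = TEMPLATE.format(text=text)
    return [(r["token_str"].strip(), round(r["score"], 4))
            for r in fill_mask(prompt, top_k=k)]

# A matched pair: roughly the same meaning, rendered in AAE and in SAE
# (illustrative sentences constructed for this sketch).
aae = "He finna go to the store"
sae = "He is about to go to the store"

print("AAE guise:", top_attributes(aae))
print("SAE guise:", top_attributes(sae))
```

Comparing the attribute words and their probabilities across the two guises gives a rough measure of how differently the model characterizes speakers of each dialect, which is the core intuition behind the probing method described in the study.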

This dialect prejudice becomes especially problematic because it disadvantages AAE speakers in areas such as employment and criminal justice. For instance, the researchers found that the LLMs were more prone to assign less prestigious jobs to AAE speakers, even when it was made explicit that the speakers were not African American. Jobs related to music and entertainment were also assigned to AAE speakers more often.

By contrast, SAE speakers were generally assigned white-collar jobs requiring a higher level of education. Likewise, in a hypothetical murder trial in which AAE and SAE texts were presented as evidence, the models were more likely to judge the speaker guilty when confronted with AAE texts.

Attempts to rectify these biases have so far proven unsuccessful. Although training on more data helps LLMs understand AAE better, the underlying racial prejudice has persisted. Human intervention and feedback have partially mitigated overt stereotypes, but they have been largely ineffective against the more covert forms of prejudice.

Essentially, these LLMs have been trained to suppress overtly racist outputs, while more covert forms of prejudice, such as dialect bias, remain untouched. As AI continues to integrate into our lives and businesses increasingly rely on models like these in their operations, addressing these biases becomes ever more critical. The researchers argue that companies operating LLMs should pause their use until these glaring issues are adequately addressed.
