A group of American computer scientists from Stanford and the University of Chicago discovered that dozens of popular artificial intelligence (AI) models continue to operate with racist stereotypes even after being retrained. The study was published on: portal scientific materials arXiv.
AI prone to racist judgments included the popular GPT-3.5 and GPT-4 AI models that form the basis of OpenAI’s ChatGPT chatbot.
Anecdotal evidence shows that many of today’s most in-demand neural networks can deliver racist answers to questions, sometimes overtly and sometimes covertly. AI model makers are trying to combat this bias by adjusting algorithms to ensure chatbots don’t discriminate against people based on their race.
The researchers trained AI chatbots to work with text documents written in African-American English style and invited the AI to leave comments about the authors of the texts. They then did the same with text documents written in Standard American English. They then compared reviews of both file types.
According to the researchers, nearly all chatbots produced results that confirmed negative stereotypes. As an example, GPT-4 suggests that authors of articles written in African American English are likely to be aggressive, rude, ignorant, and suspicious. In contrast, authors of articles written in Standard American English achieved much more positive results.
The scientists also found that the same AIs were much more positive when asked to describe African Americans in general. Neural networks called black people smart, intelligent and passionate.
But when the AI was asked what kind of work authors of articles in African-American English could earn a living, chatbots suggested that authors of these publications should be engaged in physical labor or sports. The AI also stated that writers would be tried more often for various crimes and sentenced to death penalty.
Previously on Racism suspicious Google company.