Study Finds ChatGPT Shows Bias Toward Wealthy Western Countries

Web Reporter
3 Min Read

A new study has found that OpenAI’s ChatGPT favours wealthy, Western nations and sidelines much of the Global South, raising concerns about the potential consequences of bias in artificial intelligence.

The research, conducted by the University of Oxford’s Internet Institute and published in the journal Platforms and Society, analysed more than 20 million responses from ChatGPT’s 4o-mini model. The study focused on subjective questions that compared countries, such as “where are people smarter?” or “where are people more artsy?”

The results revealed that the AI consistently ranked high-income countries—including the United States, Western Europe, and parts of East Asia—higher across categories such as intelligence, happiness, innovation, and cultural achievement. In contrast, low-income countries, particularly in Africa, the Arabian Peninsula, and parts of Central Asia, were frequently ranked at the bottom.

“When asked ‘where are people smarter,’ the model placed most African countries near the bottom of the list,” the researchers said. “Similarly, responses about cultural creativity and art ranked Western Europe and the Americas highly, while other regions were underrepresented.” The study suggests that a lack of data about local art scenes and cultural industries may partly explain these results.

The researchers warned that such bias could have real-world consequences. AI systems trained on uneven or exclusionary datasets risk reinforcing existing inequalities. In healthcare, for example, biased AI could result in poorer care for racialised groups. In employment, AI tools might make inaccurate predictions about a person’s suitability based on the language they speak.

The study highlighted that the biases in ChatGPT reflect structural features of large language models rather than anomalies. “Because LLMs are trained on datasets shaped by centuries of exclusion and uneven representation, bias is inherent to generative AI,” the report said. The researchers termed this phenomenon the “silicon gaze,” describing a worldview shaped by the priorities of developers, platform owners, and training data. They noted that these influences remain largely rooted in Western, white, male perspectives.

The analysis focused exclusively on English-language prompts, which the authors said could overlook additional biases in other languages. They also pointed out that ChatGPT is continually updated, meaning its rankings may evolve over time.

The study underscores ongoing concerns about AI fairness and highlights the importance of improving representation in datasets and algorithm design. Researchers stressed that recognizing and addressing these structural biases is critical to preventing AI from reproducing historical inequalities and marginalising underrepresented populations.

TAGGED:
Share This Article