GPT-3 to have a discrimination bias

A new research paper written by academics in the US and Canada showed that OpenAI’s GPT-3 language-generating model has a serious discrimination-bias against Muslims, more than any other religious group.

Since the AI-model was trained with human-made text from the internet, the data had Western origins and biases. Indeed, it was founded out that the word ‘Muslim’ is associated with ‘terrorist’ in 23% of test cases, while ‘Jewish’ is associated with ‘money’ in 5% of test cases.

The company then built a system to filter out this kind of language, which shouldn’t alter GPT-3’s algorithm. However, this implies that the biases can’t be removed in the model itself and will remain part of it.

 

More
articles

Menu