Bias in AI


The approach ["word embedding"], which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.

For instance, in the mathematical “language space”, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
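The "closeness" in this language space is typically measured with cosine similarity between word vectors. The sketch below uses tiny made-up three-dimensional vectors (real embeddings have hundreds of dimensions; these numbers come from no trained model) purely to show how the comparison works:

```python
# Minimal sketch of distance in "language space": cosine similarity between
# word vectors. The vectors here are invented for illustration only.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

vectors = {
    "rose":     [0.9, 0.8, 0.1],
    "beetle":   [0.1, 0.2, 0.9],
    "pleasant": [0.8, 0.9, 0.2],
    "horrible": [0.2, 0.1, 0.8],
}

for word in ("rose", "beetle"):
    print(word,
          "pleasant:", round(cosine(vectors[word], vectors["pleasant"]), 2),
          "horrible:", round(cosine(vectors[word], vectors["horrible"]), 2))
```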

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
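The measure behind these findings, the Word Embedding Association Test (WEAT), adapts the implicit association test to vectors: a word's association score is its mean cosine similarity to one attribute set (e.g. pleasant words) minus its mean similarity to another (e.g. unpleasant words). A minimal sketch follows, with made-up vectors rather than the trained embeddings the study actually used:

```python
# Sketch of a WEAT-style association score. Vectors are invented for
# illustration; the study ran this on trained embeddings such as GloVe.
import math

def cosine(u, v):
    # Same helper as in the earlier sketch, repeated so this runs on its own.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(word, attr_a, attr_b, vec):
    """s(w, A, B) = mean cos(w, a) over A  -  mean cos(w, b) over B."""
    mean_a = sum(cosine(vec[word], vec[a]) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(vec[word], vec[b]) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

vec = {
    "Emily":    [0.8, 0.2],
    "gift":     [0.9, 0.1],
    "happy":    [0.7, 0.3],
    "agony":    [0.1, 0.9],
    "terrible": [0.2, 0.8],
}
# A positive score means the name leans toward the "pleasant" attribute set.
print(association("Emily", ["gift", "happy"], ["agony", "terrible"], vec))
```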

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

“If you didn’t believe that there was racism associated with people’s names, this shows it’s there,” said Bryson.

Questions
1. Describe in your own words how word embedding works.

2. Where did the algorithms used in web searches pick up their biases? What was the source? (That is, what data do web searches use in their word-association algorithms?)

3. What are some of the social impacts resulting from AI biases? Give two examples.

4. How can AI demonstrate human prejudices and what are some of the benefits of these demonstrations?

Full article - AI programs exhibit racial and gender biases, research reveals

Watch the following video, AI-powered facial recognition systems have gender and racial bias, and then answer the next set of questions.

5. The company that promoted its facial recognition program indicated that it had a 94% success rate in recognizing faces from a database. How can these results be both correct and misleading at the same time?

6. What is it about a company's culture that would lead to results like this (releasing software that was biased)?

7. What are some steps that a company could take to improve its performance in this area?