The latest paper shows that some of the more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. […] The AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
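The kind of association the researchers measured can be sketched in a few lines. The toy example below is purely illustrative (the vectors are made-up numbers, not real embeddings, and the function names are our own): it scores a word vector by its average cosine similarity to a set of “pleasant” vectors minus its average similarity to a set of “unpleasant” ones, in the spirit of a word-embedding association test.

```python
# Illustrative sketch, not the paper's actual code: measuring which attribute
# set a word vector sits closer to, using cosine similarity. All vectors are
# toy two-dimensional stand-ins for learned embeddings.
import math

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, pleasant, unpleasant):
    # Mean similarity to pleasant vectors minus mean similarity to
    # unpleasant ones; positive means the word leans "pleasant".
    s_p = sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
    s_u = sum(cosine(word_vec, u) for u in unpleasant) / len(unpleasant)
    return s_p - s_u

# Hypothetical vectors, as if learned from a large text corpus.
pleasant = [[0.9, 0.1], [0.8, 0.2]]    # e.g. "gift", "happy"
unpleasant = [[0.1, 0.9], [0.2, 0.8]]
name_a = [0.85, 0.15]  # a name whose vector drifted toward pleasant terms
name_b = [0.15, 0.85]  # a name whose vector drifted toward unpleasant terms

print(association(name_a, pleasant, unpleasant) > 0)  # leans "pleasant"
print(association(name_b, pleasant, unpleasant) < 0)  # leans "unpleasant"
```

Because the embeddings are learned from ordinary human-written text, any skew in how names co-occur with pleasant or unpleasant words in that text shows up directly in scores like these.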
These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate’s name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.
Dubbed the Geena Davis Inclusion Quotient (GD-IQ), the tool can not only identify a character’s gender but also measure, to a fraction of a second, how long each actor spoke and was on screen.