Amazon’s facial recognition software shows gender, racial bias

Amazon’s facial recognition software, known as Rekognition, is facing new criticism from researchers.

A recently released study by researchers at MIT Media Lab is providing more evidence of how bad facial recognition technology is at accurately identifying dark-skinned faces, especially when it comes to identifying dark-skinned women.

Error rate

MIT researchers said that in their tests, Amazon’s facial recognition software misclassified women as men 19% of the time. The results for darker-skinned women were worse still: the technology misclassified them as men 31% of the time.

MIT also found that darker-skinned men had an error rate of only 1%, while lighter-skinned men had none.

The new study, released last Thursday, raises further questions about race- and gender-based bias in the algorithms behind Amazon’s facial recognition software, which has frequently been used by police and other government agencies.

Researchers compared Amazon’s software to the facial recognition technology from IBM and Microsoft and found Amazon’s to be the least accurate.

Joy Buolamwini and Deborah Raji, the authors of the study, said that IBM and Microsoft have vowed to improve the accuracy of their facial recognition software.

Amazon, meanwhile, hasn’t made any changes following the report.

In a statement to The Verge, the company said the researchers were not using an up-to-date version of Rekognition.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case – including law enforcement – based on results obtained using facial analysis,” Matt Wood, general manager of deep learning and AI at Amazon Web Services, said in a statement.

Ethical guidelines needed

Buolamwini and Raji believe that companies should continuously audit their software to guard against algorithmic bias.

“Consequently, the potential for weaponisation and abuse of facial analysis technologies cannot be ignored nor the threats to privacy or breaches of civil liberties diminished even as accuracy disparities decrease,” they wrote.

“More extensive explorations of policy, corporate practice and ethical guidelines are thus needed to ensure vulnerable and marginalised populations are protected and not harmed as this technology evolves.”

Racial profiling

The report’s authors argue that Amazon should stop marketing its Rekognition service, warning that it could lead to racial profiling and other injustices.

Earlier this month, a group of Amazon shareholders also filed a letter demanding that the company stop selling its facial recognition software to police and other government agencies, citing potential civil and human rights risks.

“It’s a familiar pattern: a leading tech company marketing what is hailed as breakthrough technology without understanding or assessing the many real and potential harms of that product,” Michael Connor, executive director of Open MIC, said in a statement. “Sales of Rekognition to the government represent a considerable risk for the company and investors. That’s why it’s imperative those sales be halted immediately.”

In October of last year, over 450 Amazon employees also signed a letter protesting against the company’s decision to sell to the police.

The American Civil Liberties Union (ACLU) has also called on Amazon to stop marketing the product. In July 2018, the organisation tested the software’s accuracy by uploading mugshots to it.

According to the ACLU, the software falsely matched 28 members of Congress to mugshots of other people who had been arrested for a crime.
