
Algorithmic Justice League: Gender Shades

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face? The Gender Shades project evaluates the accuracy of AI-powered gender classification products.

This evaluation focuses on gender classification as a motivating example to show the need for increased transparency in the performance of any AI products and services that focus on human subjects. Bias in this context is defined as practical differences in gender classification error rates between groups.


1,270 images were chosen to create a benchmark for this gender classification performance test.


The subjects were selected from 3 African countries and 3 European countries. The subjects were then grouped by gender, skin type, and the intersection of gender and skin type.
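
To make the grouping concrete, here is a minimal sketch of how image metadata might be organized by gender, skin type, and their intersection. The column names and sample rows are assumptions for illustration, not the actual Gender Shades benchmark data.

```python
# Illustrative sketch of intersectional grouping with pandas; the metadata
# columns and sample rows are hypothetical, not the real benchmark.
import pandas as pd

benchmark = pd.DataFrame([
    {"image": "img_0001.jpg", "country": "Senegal", "gender": "female", "fitzpatrick": "VI"},
    {"image": "img_0002.jpg", "country": "Iceland", "gender": "male",   "fitzpatrick": "II"},
    {"image": "img_0003.jpg", "country": "Rwanda",  "gender": "female", "fitzpatrick": "V"},
    # ... one row per image, 1,270 in the full benchmark
])

print(benchmark.groupby("gender").size())                   # by gender
print(benchmark.groupby("fitzpatrick").size())              # by skin type
print(benchmark.groupby(["gender", "fitzpatrick"]).size())  # intersection
```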


Gender Labels

Gender was limited to female and male categories because the evaluated products provide binary sex labels for their gender classification features. The evaluation inherits these sex labels and this reduced view of gender, which is in reality a more complex construct.


The dermatologist-approved Fitzpatrick skin type classification system was used to label faces as Fitzpatrick Types I, II, III, IV, V, or VI.


Then faces labeled Fitzpatrick Types I, II, and III were grouped into a lighter category and faces labeled Fitzpatrick Types IV, V, and VI were grouped into a darker category.
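
The binary grouping described above can be sketched as a simple mapping; the function name and label strings are assumptions for illustration.

```python
# Map Fitzpatrick skin type labels to the binary lighter/darker categories
# described above. The function name is illustrative.
LIGHTER_TYPES = {"I", "II", "III"}
DARKER_TYPES = {"IV", "V", "VI"}

def skin_group(fitzpatrick_type: str) -> str:
    """Return 'lighter' for Types I-III and 'darker' for Types IV-VI."""
    if fitzpatrick_type in LIGHTER_TYPES:
        return "lighter"
    if fitzpatrick_type in DARKER_TYPES:
        return "darker"
    raise ValueError(f"Unknown Fitzpatrick type: {fitzpatrick_type!r}")

assert skin_group("II") == "lighter"
assert skin_group("V") == "darker"
```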


Three companies - IBM, Microsoft, and Face++ - that offer gender classification products were chosen for this evaluation based on geographic location and their use of artificial intelligence for computer vision.


While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let's explore.


All companies perform better on males than on females, with an 8.1% - 20.6% difference in error rates.


All companies perform better on lighter subjects as a whole than on darker subjects as a whole, with an 11.8% - 19.2% difference in error rates.


When we analyze the results by intersectional subgroups - darker males, darker females, lighter males, lighter females - we see that all companies perform worst on darker females.


IBM and Microsoft perform best on lighter males. Face++ performs best on darker males.


IBM had the largest gap in accuracy, with a difference of 34.4% in error rate between lighter males and darker females.
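
The kind of subgroup comparison behind these numbers can be sketched as follows. The prediction records below are hypothetical stand-ins for a product's actual output, not the evaluation data.

```python
# Compute per-subgroup error rates and the largest gap between subgroups.
# The records are hypothetical; real results come from each product's output.
from collections import defaultdict

# Each record: (skin_group, true_gender, predicted_gender)
records = [
    ("lighter", "male",   "male"),
    ("darker",  "female", "male"),    # misgendered
    ("lighter", "female", "female"),
    ("darker",  "male",   "male"),
    # ... one record per benchmark image
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, predicted in records:
    subgroup = (skin, gender)
    totals[subgroup] += 1
    if predicted != gender:
        errors[subgroup] += 1

error_rates = {sg: errors[sg] / totals[sg] for sg in totals}
gap = max(error_rates.values()) - min(error_rates.values())
print(error_rates)
print(f"Largest subgroup error-rate gap: {gap:.1%}")
```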


IBM Watson leaders responded within a day after receiving the performance results and are reportedly making changes to the Watson Visual Recognition API. Official Statement.


Error analysis reveals 93.6% of faces misgendered by Microsoft were those of darker subjects.


An internal evaluation of the Azure Face API is reportedly being conducted by Microsoft. Official Statement. Statement to Lead Researcher.


Error analysis reveals 95.9% of the faces misgendered by Face++ were those of female subjects.
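
This style of error analysis looks only at the misclassified faces and asks what share of them belong to each group. A hedged sketch, with made-up records standing in for real results:

```python
# Of all misgendered faces, what fraction were darker-skinned or female?
# The records here are illustrative, not the actual evaluation results.
records = [
    {"gender": "female", "skin_group": "darker",  "correct": False},
    {"gender": "female", "skin_group": "lighter", "correct": True},
    {"gender": "male",   "skin_group": "darker",  "correct": True},
    {"gender": "female", "skin_group": "darker",  "correct": False},
    # ... one record per benchmark image
]

misgendered = [r for r in records if not r["correct"]]
darker_share = sum(r["skin_group"] == "darker" for r in misgendered) / len(misgendered)
female_share = sum(r["gender"] == "female" for r in misgendered) / len(misgendered)
print(f"Share of errors on darker subjects: {darker_share:.1%}")
print(f"Share of errors on female subjects: {female_share:.1%}")
```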


Face++ has yet to respond to the research results, which were sent to all companies on Dec 22, 2017.


At the time of evaluation, none of the companies tested reported how well their computer vision products perform across gender, skin type, ethnicity, age, or other attributes.


Inclusive product testing and reporting are necessary if the industry is to create systems that work well for all of humanity. However, accuracy is not the only issue. Flawless facial analysis technology can be abused in the hands of authoritarian governments, personal adversaries, and predatory companies. Ongoing oversight and limits on the contexts in which these systems are used are needed.


While this study focused on gender classification, the machine learning techniques used to determine gender are also broadly applied to many other areas of facial analysis and automation. Face recognition technology that has not been publicly tested for demographic accuracy is increasingly used by law enforcement and at airports. AI-fueled automation now helps determine who is fired, hired, promoted, granted a loan or insurance, and even how long someone spends in prison.


For interested readers, authors Cathy O'Neil (Weapons of Math Destruction) and Virginia Eubanks (Automating Inequality) explore the real-world impact of algorithmic bias.


Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices - the coded gaze - of those who have the power to mold artificial intelligence.


We risk losing the gains made by the civil rights movement and the women's movement under the false assumption of machine neutrality. We must demand increased transparency and accountability.


Learn more about the coded gaze - algorithmic bias - at www.ajlunited.org.



