Discriminatory bias in software algorithms

Following a presentation about “Societal Responsibility in Internet Business Innovation” that I recently gave at the Responsible Innovation Conference 2015, a fellow attendee drew my attention to the NY Times article “When Algorithms Discriminate” from July 9th, 2015. The article briefly summarizes the results of a number of studies, each of which exposed race, gender or other discriminatory biases in search engine results.

One study by researchers from Carnegie Mellon University revealed that Google’s advertising algorithm showed ads for high-income jobs much more frequently to men than to women. In a separate study, researchers from Harvard University showed that, when using a US-based IP address, searching for distinctively black-American personal names significantly increased the chances of being shown Google Ads for criminal record background check services, compared to searching for typically white-American names. In a third study, researchers at the University of Washington exposed gender stereotyping in Google Images search results for “C.E.O.”, which yielded 89% images of men and only 11% images of women, even though 27% of CEOs in the US are women.
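
To make the methodology behind such audit studies concrete, here is a minimal sketch of the underlying statistical idea: simulate two demographic groups of browsing profiles, count how often a given ad is shown to each, and test whether the difference could plausibly be chance. The counts below are invented purely for illustration and are not data from the actual studies, whose experimental pipelines are considerably more elaborate.

```python
# Hypothetical ad-audit tally: rows are simulated male / female profiles,
# columns are [high-income job ad shown, some other ad shown].
# All counts are invented for illustration.
from scipy.stats import chi2_contingency

observed = [[1852, 8148],   # simulated male profiles
            [318,  9682]]   # simulated female profiles

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")
if p_value < 0.01:
    print("The ad is shown at significantly different rates to the two groups.")
```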

As the NY Times article points out, it is difficult to identify exactly why the algorithms produced these discriminatory results. The results from Google Ads, for instance, depend strongly on the instructions the advertiser gave regarding the target demographic for a particular advertisement. Other possible factors that might bias an algorithm’s results include the attitudes and biases of its programmers, of which they themselves might not even be aware. A benign example of such an implicit bias is the finding that a state-of-the-art deep-learning-based computer vision algorithm associated ‘school bus’ with the colours yellow and black, because the US-derived image database it was trained on contained primarily the yellow-and-black school buses typical of the US.
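
The school-bus finding is easy to reproduce in miniature. The toy sketch below uses entirely synthetic data (nothing from the actual study): every ‘school bus’ training example happens to be yellow, so a simple classifier trained on mean-colour features learns “yellow means school bus” and assigns a high bus probability to any yellow object, such as a taxi.

```python
# Toy illustration of a spurious colour association learned from biased data.
# Features are mean RGB values in [0, 1]; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Every "school bus" example is clustered around yellow (high R, high G, low B);
# non-bus examples have uniformly random colours.
buses     = np.clip(rng.normal([0.9, 0.8, 0.1], 0.05, size=(200, 3)), 0, 1)
non_buses = rng.uniform(0, 1, size=(200, 3))

X = np.vstack([buses, non_buses])
y = np.array([1] * 200 + [0] * 200)          # 1 = school bus

model = LogisticRegression().fit(X, y)

yellow_taxi = np.array([[0.9, 0.8, 0.1]])    # yellow, but not a bus
print("P(school bus | yellow object) =",
      round(model.predict_proba(yellow_taxi)[0, 1], 3))
```

Because colour alone perfectly separates the classes in this biased training set, the model never needs to learn anything about shape, which is exactly the failure mode the school-bus anecdote describes.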

The main take-home message from these studies is that search engines, and the other information-processing algorithms we interact with online, hold strong editorial power over the information we encounter. It is therefore important to develop tools and research methods that can help make the behaviour of these algorithms more transparent. Interestingly, an “outline [for] an empirical methodology to analyse the representation of topics in search engines” was published in a paper in “First Monday” at almost exactly the same time as the NY Times article.
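
For a flavour of what such a transparency method might look like in practice (this is my own illustrative sketch, not the methodology outlined in the First Monday paper), one could compare the composition of a labelled set of search results against an external baseline and ask whether the gap is statistically credible, here using the CEO figures quoted above:

```python
# Compare the share of women in a hand-labelled set of image-search results
# against an external baseline (27% of US CEOs are women) with a binomial test.
# The sample size of 100 results is an assumption for illustration.
from scipy.stats import binomtest

women_in_results = 11     # 11% of results showed women
total_results = 100
baseline_share = 0.27     # share of US CEOs who are women

test = binomtest(women_in_results, total_results, baseline_share)
print(f"observed share = {women_in_results / total_results:.0%}, "
      f"p = {test.pvalue:.3g} under a {baseline_share:.0%} baseline")
```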
