House of Lords vs. Algorithm Overlords!

As a researcher who has spent some time working on AI and robotics, I naturally tend to notice when AI gets discussed in the news. Over the last couple of years, social media and software companies have all started to invest heavily in AI research: Google’s purchase of the robotics company Boston Dynamics, efforts at Facebook, Microsoft and Apple, and of course IBM’s Watson and Deep Blue. Partially in response to this, research institutes (e.g. the Future of Life Institute, the Future of Humanity Institute and the Machine Intelligence Research Institute (MIRI)) and well-known scientists and industrialists (e.g. Stephen Hawking, Max Tegmark, Elon Musk and Bill Gates) have launched various campaigns and given media interviews to raise their concerns about the possible extinction-level threat posed by the rise of superintelligent AI.

For my part, I am very sympathetic to the core argument that AI researchers need to consider not only the technical possibilities but also the societal ramifications of their work, basically the Responsible Research and Innovation (RRI) for ICT agenda. I am, however, less concerned about the imminent rise of human-level (or higher) general AI and more concerned with the impact of algorithms that already mediate much of our (online) lives. I also think that if we want to seriously engage policy makers and the wider public in the discussion about moral and ethical responsibility in IT development, including robotics and AI, we will gain much more traction by focusing on things that are affecting us now, rather than on possible threats that may arise at some unknown time in the future.

Given this context, I was pleased to discover the following question last week in the call for comments for the House of Lords “Inquiry into online platforms and the EU Digital Single Market”:

“Q11. Should online platforms have to explain the inferences of their data-driven algorithms, and should they be made accountable for them?”

Data-driven algorithms are an increasingly important element in determining the customer experience on online platforms. These algorithms filter and rank the information presented to the user and decide where it is placed on the interface, both of which affect the likelihood that a customer will notice and interact with it. The high volume of data available online means these algorithms are vital for enabling users to find relevant information, be it search results, news stories or product offers. Consumer decisions, ranging from who to vote for down to which music to listen to, are all influenced by the information people are exposed to. This is the basis of advertising, as well as of state propaganda.

A lack of transparency about the way in which algorithms manage this information introduces the potential for abusive manipulation. This can take the form of censorship, such as suppressing negative comments about the platform provider, or of anti-competitive business practices, such as the alleged manipulation by Google of search results to rank its own products higher.

Accountability for algorithmic inferences, or the lack thereof, also affects the development process behind the algorithms. In the current environment, where platforms are not accountable for algorithm behaviour, there is little incentive to focus on the interpretability of algorithmic processes. Because of the large number of parameters these algorithms use, even the engineers who constructed the system are often unable to explain why the algorithm makes specific decisions. This problem is magnified further in adaptive systems that learn from continuously evolving example data sets (e.g. deep learning).

All data-driven systems are susceptible to bias, based on factors such as the choice of training data set. Since the dominant online platforms are US based, it is likely, for instance, that training data will generate biases that reflect US culture. Google’s deep-learning-based image classification system, which I mentioned in a previous post, is a nice example, with its implicit assumption that school buses are always the US-standard yellow and black. The other example I mentioned last time, where Google’s ads were shown to include significantly more ads for criminal background checks when an African-American name was entered in the search rather than a White-American name, is obviously more likely to have direct negative repercussions. Since searching on a person’s name is a common practice when evaluating job applicants, this AdWords bias has the potential to subconsciously promote racial discrimination in employment practices.

There is no reason to assume any deliberate discriminatory intent on the part of the developers of the AdWords algorithm. The algorithm is probably data-driven, based on statistics of Google searches correlated with the names. The recommendations it generates, however, are likely to influence user choices, in this case further increasing the probability that a criminal background check is requested when an African-American name is searched. This in turn reinforces the correlation that caused the algorithm to make the discriminatory recommendation in the first place. In this way, small initial biases can become self-reinforcing and magnify themselves.
As demonstrated by the racially discriminatory behaviour of the AdWords algorithm, even supposedly neutral algorithms based purely on observations of internet usage statistics are not value-neutral. Rather, they tend to reinforce an existing status quo, which may not be in the interest of the values society is striving for.
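To make this feedback loop a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not how Google’s system actually works; it simply simulates a greedy, data-driven ad selector that decides, per name group, whether to show a “background check” ad based on the click statistics observed so far, with the added assumption that repeated exposure nudges users towards clicking. All the names and numbers (click rates, seed counts) are made up for illustration.

```python
# Toy sketch of a self-reinforcing bias in a data-driven ad selector.
# The simulation tracks expected values, so the outcome is deterministic.

BASE_CLICK = 0.06      # underlying click propensity, identical for both groups
EXPOSURE_NUDGE = 0.06  # extra propensity from having been shown the ad often
RIVAL_CTR = 0.07       # click-through rate of the competing, neutral ad

# observed [clicks, impressions]; the only difference between the groups
# is a small bias in the seed data
stats = {"group_A": [9.0, 100.0], "group_B": [5.0, 100.0]}

def simulate(group, searches=5000):
    shown = 0
    for i in range(1, searches + 1):
        clicks, impressions = stats[group]
        # greedy, data-driven decision: show the ad only if it currently
        # looks better than the rival ad according to the observed statistics
        if clicks / impressions > RIVAL_CTR:
            shown += 1
            exposure = shown / i                  # how often this group has seen the ad so far
            expected_click = BASE_CLICK + EXPOSURE_NUDGE * exposure
            stats[group][0] += expected_click     # accumulate expected clicks
            stats[group][1] += 1
    return shown / searches

for g in ("group_A", "group_B"):
    share = simulate(g)
    ctr = stats[g][0] / stats[g][1]
    print(f"{g}: ad shown on {share:.0%} of searches, observed CTR {ctr:.3f}")

# group_A starts marginally above the rival ad, keeps being shown, and the
# exposure nudge props its observed CTR up; group_B starts marginally below,
# is never shown the ad, and its statistics never get a chance to change.
```

The important design choice in this sketch is that the selector is purely greedy on observed statistics, with no exploration and no notion of fairness, which is exactly the situation in which a small seed bias gets locked in.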

On the topic of automated decision making by algorithms, the UK Data Protection Act’s ‘principle 6’ specifies that “the right of subject access allows an individual access to information about the reasoning behind any decisions taken by automated means. The Act complements this provision by including rights that relate to automated decision taking. Consequently:

  • an individual can give written notice requiring you not to take any automated decisions using their personal data;
  • even if they have not given notice, an individual should be informed when such a decision has been taken; and
  • an individual can ask you to reconsider a decision taken by automated means.

These rights can be seen as safeguards against the risk that a potentially damaging decision is taken without human intervention.” Importantly, however, these rights arise only if “the decision has a significant effect on the individual concerned”. This caveat means that virtually all of the filtering/recommendations made by online platforms are currently considered to be exempt.

An important factor that needs to be considered, however, is the sheer number of algorithmic decisions we are increasingly exposed to. Even if no single decision by any of these algorithms violates the “protection against having significant decisions made about an individual by wholly automated means”, the accumulated effect can be difficult to estimate.
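To get a feel for the scale of this accumulated effect, here is a small back-of-the-envelope sketch. The numbers are purely illustrative assumptions on my part (150 algorithmically mediated micro-choices per day, each nudged by one percentage point), not measurements of any real platform.

```python
from math import comb

CHOICES_PER_DAY = 150  # assumed number of algorithmically mediated micro-choices per day
BASELINE_P = 0.50      # chance of picking the promoted option without any nudge
NUDGED_P = 0.51        # chance with a one-percentage-point nudge

def expected_extra_choices(days):
    """Expected number of extra 'promoted' choices caused by the nudge."""
    return days * CHOICES_PER_DAY * (NUDGED_P - BASELINE_P)

def prob_majority_promoted(p, n=CHOICES_PER_DAY):
    """Probability that more than half of n independent choices go the
    promoted way, if each does so with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(f"extra promoted choices per year:   {expected_extra_choices(365):.0f}")
print(f"majority-promoted day, no nudge:   {prob_majority_promoted(BASELINE_P):.3f}")
print(f"majority-promoted day, with nudge: {prob_majority_promoted(NUDGED_P):.3f}")

# No single choice is noticeably affected, yet over a year the nudge accounts
# for hundreds of decisions, and the share of days on which the promoted
# option wins the majority of choices shifts measurably.
```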

Here’s a Halloween-themed thought: the next time you use a search engine or an automated news feed, or get interested in a recommendation from an online platform, ask yourself this: who is the puppet, and who is pulling the strings?

