What makes an algorithmic system trustworthy?

In an age of ubiquitous data collection, analysis and processing, what determines the trustworthiness and perceived ‘fairness’ of a system that heavily relies on algorithms?

Over the last couple of weeks, Facebook has repeatedly had to defend itself against criticism of the editorial decisions behind the stories that appear in its ‘Trending Topics’ and ‘News Feed’ features.

While the initial story in Gizmodo and much of the subsequent coverage in outlets like the Guardian and the BBC focused on the role of Facebook’s human editors, who adjust which news stories are presented to users by inserting stories into the selections made by Facebook’s algorithm, the performance of the algorithm itself has drawn far less attention.

With a claimed user base of more than 1.5 billion active Facebook users and a growing role as a primary channel for news distribution, a close investigation of editorial practices at Facebook and scrutiny of alleged partisan censorship is clearly important. Based on the statement from Facebook’s General Counsel, Colin Stretch, however, it appears that the internal investigation Facebook conducted in response to these allegations focused only on the conduct and procedures of the human Trending Topics team.

“As soon as we heard of these allegations, we initiated an investigation into the policies and practices around Trending Topics to determine if anyone working on the product acted in ways that are inconsistent with our policies and mission. We spoke with current reviewers and their supervisors, as well as a cross-section of former reviewers; spoke with our contractor; reviewed our guidelines, training, and practices; examined the effectiveness of our oversight; and analyzed data on the implementation of our guidelines by reviewers. We also talked to leading conservatives, to gain valuable feedback and insights.” – Colin Stretch, Facebook General Counsel

Surely a proper investigation of the fairness of Facebook’s news editorial process should also include a close look at the algorithm that makes the primary selection?

One of the few pieces that delved deeper into the algorithmic side of the process was Kalev Leetaru’s Forbes article “Is Facebook’s Trending Topics Biased Against Africa And The Middle East?”, in which he analysed the 928 RSS feeds that the Trending Topics algorithm apparently uses to identify major news stories that could interest Facebook users, and the 1,000 trusted news outlets that it uses to verify identified stories. This analysis led Leetaru to conclude that Facebook provides an intensely western-centric view of world news, as he strikingly visualized in the map below.

Map of number of outlets per country found in Facebook’s trusted news sources list (Credit: Kalev Leetaru)

Why is it that the investigations into possible bias in Facebook’s news selection system have focused so heavily on the human editorial team and not on the selection algorithm? What is it about the idea of a service being rendered by an algorithm that makes it seem more trustworthy than a service rendered by humans? Is it the romantic notion, frequently expressed by Star Trek’s Spock, that humans might allow emotions to cloud their judgement, resulting in irrational behaviour? Or a pseudo-magical belief in the infallibility of digital systems?

If we look at the way in which Facebook promotes its Trending Topics and News Feed to users on the “How does Facebook determine what topics are trending?” and “How does News Feed decide which stories to show?” help pages, the key selling point is the idea that ‘it is based on data’. The implication is that the algorithms are more ‘fair’ and reliable because the output is computed from data that is fed into the system. But human judgement is also based on data. Unless one believes in some kind of divine inspiration, the human brain is just as much a data processing system as any digital computer running an algorithm (the philosophically inclined may now wish to initiate a discussion about Determinism and Free Will).

If anything makes an algorithm-based system more trustworthy than a human-based one, surely it cannot simply be the use of data alone; rather, it comes down to audit-ability. An algorithm is a piece of code that can be inspected and analysed: all the inputs used in the process are identifiable, and every element that goes into the decision-making process can be revealed. If we know the equation, we can follow the chain of logic that leads from the inputs to the output. Providing this trustworthiness to the subjects of the algorithm’s decisions, however, clearly requires transparency. This reasoning is at the heart of legal protections, such as Principle 6 of the Data Protection Act: “The right of subject access allows an individual access to information about the reasoning behind any decisions taken by automated means”.
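To make the idea of audit-ability concrete, here is a minimal, purely illustrative sketch of what an auditable scoring algorithm might look like. The feature names, weights, and the linear scoring rule are all invented for this example; the point is simply that every input, weight, and intermediate contribution is recorded, so a subject of the decision could trace the full chain of logic from inputs to output.

```python
# Hypothetical sketch: an auditable scoring function in which every
# input, weight, and intermediate value is recorded in an audit trail.
# The signals and weights below are invented for illustration only.

def score_story(features, weights):
    """Return a relevance score together with a full audit trail."""
    audit_trail = []
    total = 0.0
    for name, weight in weights.items():
        value = features.get(name, 0.0)
        contribution = weight * value
        # Record exactly how this input contributed to the output.
        audit_trail.append((name, value, weight, contribution))
        total += contribution
    return total, audit_trail

# A toy 'trending' score built from three made-up signals.
weights = {"shares": 0.5, "comments": 0.3, "recency": 0.2}
features = {"shares": 120.0, "comments": 40.0, "recency": 0.9}

score, trail = score_story(features, weights)
for name, value, weight, contribution in trail:
    print(f"{name}: {value} x {weight} = {contribution}")
print(f"total score: {score}")
```

A real news-ranking system would of course be vastly more complex, but the contrast is instructive: a transparent rule like this can be audited line by line, whereas a model with hundreds of parameters or learned components offers no such simple chain of reasoning.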

The reality of the user experience is, however, often far removed from this transparency. When using online services, users are generally given next to no information about the algorithms, or even the data, that are used, and are instead expected to trust the service provider blindly. In part this is due to the commercial interests of service providers, who are competing with each other and regard their algorithms as key intellectual property in this contest. Increasingly, however, there is also the problem that the complexity of the algorithms, which can now involve many hundreds of parameters and may incorporate machine-learning elements, can make it very challenging indeed for companies to fully understand, let alone explain, their own systems.

Some indication of the kinds of concerns raised by the increasing use of such complex, data-driven and machine-learning algorithms can be seen in the recent White House report “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights”, published in May 2016, which specifically focused on the problem of avoiding discriminatory outcomes.

The report is well worth reading for anyone interested in a clear analysis of the potential sources of “intentional or implicit [unintentional] biases that may emerge from both the data and the algorithms used as well as the impact they may have on the user and society.” In its introduction, the report summarizes the “challenges of promoting fairness and overcoming the discriminatory effects of data” by grouping them into two categories:

1) Challenges relating to data used as inputs to an algorithm

  • Poorly selected data
  • Incomplete, incorrect, or outdated data
  • Selection bias
  • Unintentional perpetuation and promotion of historical biases

2) Challenges related to the inner workings of the algorithm itself.

  • Poorly designed matching systems
  • Personalization and recommendation services that narrow instead of expand user options
  • Decision-making systems that assume correlation necessarily implies causation
  • Data sets that lack information or disproportionately represent certain populations

“To avoid exacerbating biases by encoding them into technological systems” the White House report recommends the development of “a principle of ‘equal opportunity by design’—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.”

For a somewhat chilling account of the discriminatory real-world consequences that can occur when this principle of ‘equal opportunity by design’ is not implemented, I also recommend ProPublica’s analysis of the algorithmically derived ‘risk assessment scores’ used in many parts of the US justice system.

To return to our original question: what determines the trustworthiness and perceived ‘fairness’ of a system that heavily relies on algorithms? I propose the answer is transparent audit-ability. What do you think?

In September 2016 we will commence work on the EPSRC-funded project “UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy”, which will look at all of the issues above in much greater detail. A large part of this work will involve user group studies to understand the concerns and perspectives of citizens. So if you have any thoughts on the topic, do get in touch or leave us some comments below.


