Alan Turing Institute workshop on Algorithm Society

On February 17th and 18th the Alan Turing Institute held a two-day ‘scientific scoping workshop’ on Algorithm Society with the tag-line: “If data is the oil of the 21st century then algorithms are the engines that animate modern economies and societies by providing reflection, analysis and action on our activities. This workshop will look at how algorithms embed in and transform economies and societies and how social and economic forces shape the creation of algorithms.”

The workshop started with three talks covering FinTech (by Prof. Donald MacKenzie), human attitudes/expectations and willingness to use/trust algorithmic decisions (by Berkeley Dietvorst) and a proposal for a “Machine Intelligence Commission” to investigate and interrogate algorithm bias and compliance with regulations (by Geoff Mulgan).

From a CaSMa perspective, the most interesting elements of these talks were Berkeley Dietvorst’s results showing how people’s exaggerated expectation of near-perfect outputs from algorithms leads them to under-use algorithmic results once they have seen examples of the algorithm failing. Even when the algorithm performed statistically better than the human, with a higher probability of correct outputs, and people had observed failures by the human as well as by the algorithm, they still usually favoured the human. In a sense, people exhibit a greater willingness to forgive human error than algorithm error. The exact reason for this is not yet clear: is it because people assume that humans will learn from their mistakes and improve, because they empathise and come up with possible reasons to ‘explain away’ human error, or something else? The experiment did not test the effect of providing users with an explanation of the algorithm’s outputs. It did, however, show that when people are given the ability to modify the algorithm’s output, i.e. to combine it with their own judgement, they are more willing to use the algorithm, even when the degree of modification is restricted (see the sketch below). These results may have important implications for the way in which algorithms are used in devices like self-driving cars.
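As a purely illustrative, minimal sketch (in Python) of what such ‘restricted modifiability’ could look like, the interface below caps how far a user’s judgement can move the final answer away from the model’s output; the function name, the clipping rule and the numbers are assumptions for illustration, not Dietvorst’s actual experimental set-up:

    def adjustable_forecast(model_output: float,
                            human_judgement: float,
                            max_adjustment: float) -> float:
        # Combine an algorithmic forecast with human judgement, but cap
        # how far the human can move the final answer from the model.
        low = model_output - max_adjustment
        high = model_output + max_adjustment
        return min(max(human_judgement, low), high)

    # Example: the model predicts 70, the human believes 90, but the
    # interface only allows an adjustment of +/-10, so the answer is 80.
    print(adjustable_forecast(70.0, 90.0, 10.0))  # 80.0

The design choice mirrors Dietvorst’s finding: even a tightly bounded ability to adjust the output appears to be enough to make people willing to use the algorithm at all.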

From a policy perspective, Geoff Mulgan’s proposal of a Machine Intelligence Commission was obviously very interesting. At its core is the observation that there is currently a lack of institutions for making sure that the public interest is served by developments in Big Data (e.g. health-care data pooling), algorithms and other digital developments. How can the public get a more nuanced understanding of these systems? How can we safeguard the algorithmic pluralism that gives freedom of choice, and the associated dynamic society, against system lock-in? Does regulation need to be embodied in a person, someone who can be held responsible, in order to gain public acceptance, even if algorithms are doing the work of testing other algorithms? An interesting observation in this regard was that the aviation industry is perhaps the strongest example of international cooperation and successful regulation: in stark contrast to internet companies, which will often hide the fact that their systems were hacked for fear of losing customer trust, the aviation industry has strict regulations requiring that any failure or near miss must always be reported to the aviation community.

Following the talks, the workshop split into discussion groups for two sets of discussions. The first set had groups on the topics of: Sharing Economy – Work; Health; and FinTech. The second set had groups on the topics of: Ethics and Governance; Reconfiguration; Sorting, Matching, Ranking, Representations and Interactions; and Implications of delegating to algorithms.

For the first set I joined the discussion on Sharing Economy/Work and, for the second, Ethics and Governance. In both cases the discussion ranged somewhat beyond the specific topic. Among the issues raised were the two sides of algorithms in auditing and assuring compliance with regulations. On the one hand, the fact that algorithmic processing requires machine-readable data, usually (still) in pre-specified formats, often results in a cleaner ‘data trail’ for auditing (see the sketch after this paragraph). On the other hand, complex algorithms, especially adaptive ones whose behaviour changes with exposure to data, can be difficult to audit cleanly and to certify as compliant with regulation. The transparency required for accountability, however, also carries the risk of exploitation by those ‘gaming the system’.
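To make the ‘cleaner data trail’ point concrete, here is a minimal Python sketch under the assumption of a hypothetical decision function: because the decision runs on machine-readable inputs, every call can be appended to a log in a pre-specified format that an auditor can later inspect; the field names and the scoring rule are illustrative assumptions, not any system discussed at the workshop:

    import json
    import time

    def audited_decision(decide, inputs, log_path="audit_log.jsonl"):
        # Run an algorithmic decision and append a machine-readable
        # record of its inputs and output to an append-only audit trail.
        output = decide(inputs)
        record = {"timestamp": time.time(), "inputs": inputs, "output": output}
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return output

    # Example with a made-up scoring rule.
    score = audited_decision(lambda x: x["income"] / max(x["debt"], 1),
                             {"income": 42000, "debt": 7000})

Note that this only works cleanly for fixed algorithms; for adaptive ones the log would also have to capture the model state at the time of each decision, which is exactly where auditing becomes hard.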

A recurring element at various points in the discussion was the need to be able to deal with ‘messy bits’, such as differences between the way in which a system was originally intended to be used and the way in which people actually use it, or the fact that a lack of trust in a system may lead people to act on a modified version of an algorithm’s output (see Berkeley Dietvorst’s presentation above).

More directly related to the topic of ‘Work’, there was mention of an algorithmic working class, e.g. Mechanical Turk workers who label data sets for the training of algorithms; of workplace monitoring, e.g. sensors to monitor time spent at the desk and computer activity logs to monitor web-browsing behaviour at work; and of the question of who decides the optimisation parameters when an algorithm is used to optimise a workflow, e.g. production efficiency vs. worker burn-out.

One topic that was discussed in more detail was the role of ‘reputation systems’. There are discrepancies between the intended granularity of ranking systems and their actual use: effectively, eBay rankings are binary, with 5 stars meaning good and anything else terrible. There are also difficulties for newcomers joining an established platform that uses ratings, since people do not trust listed items with no ratings (the sketch below illustrates one common technical work-around). Associated with this is the problem of platform lock-in when ratings are non-transferable from one platform to another, a problem for which there are some work-arounds, such as freelance programmers using their ratings on Stack Overflow, rather than hiring-platform ratings, as evidence of their skills. Another issue related to ratings is the power asymmetry that arises when, for instance, Amazon Mechanical Turk workers have to maintain performance ratings but the employers who list jobs do not get rated, which in practice has resulted in the creation of off-platform community sites where ‘turkers’ rate employers.
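One standard work-around for the ‘no ratings, no trust’ cold-start problem, sketched below in Python, is to blend each seller’s observed ratings with a platform-wide prior, so that newcomers start at a neutral score rather than an untrusted zero; the prior values here are illustrative assumptions, not figures from the workshop:

    def smoothed_rating(ratings, prior_mean=4.0, prior_weight=5):
        # Bayesian-style smoothing: pull the observed average towards a
        # platform-wide prior, so a few ratings move the score only a little.
        total = sum(ratings) + prior_mean * prior_weight
        count = len(ratings) + prior_weight
        return total / count

    print(smoothed_rating([]))         # 4.0   -- a new seller starts at the prior
    print(smoothed_rating([5, 5, 1]))  # 3.875 -- real ratings shift the score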

Many more issues were raised; a short summary of some of them follows:
  • Individual/Community resistance to algorithm systems
  • Legislation – investigating use of algorithms & social impact
  • Public policy values – outsourcing to users of platforms – reputation system
  • Social forms of regulation, peer-to-peer regulation – not well studied yet
  • Changing nature of trust relationship
  • Need for awareness of international context
  • Fair-trade data science
  • How do we know which algorithms are running?
  • Understanding issues like the political force of platforms
  • Loops and rank/sort/match awareness
  • Subversion/gaming/going outside the frame
  • Methodological issues – validation, verification and empirical testing
  • Design dimensions: information asymmetry, closed vs. cooperative, range of responsive algorithms
  • Roles of ethics, organizational, legal expertise in design process – two-way dialogue
  • Develop & articulate “difficult cases” e.g. delegation to algorithms in health supporting delivery at multiple organizational levels
  • Mapping how policy and law relate to algorithms what is adequate – where are there anomalies?
  • Better understanding/regulation of data commons
