Human Rights and Technology. The 2030 Agenda for Sustainable Development

The United Nations-mandated University for Peace (UPEACE) recently launched a call for papers to contribute to an e-book “that examines how the uses of current technologies, and the development of new ones, can contribute to guarantee and protect human rights within the 2030 Agenda for Sustainable Development framework. Hence, considering that every Sustainable Development Goal aims to protect one or more human rights, and that states will rely on the use of technological innovations and global interconnectedness to implement the 2030 Agenda, we are looking for articles that explore one or more of the following topics:

  1. The role of technology in promoting a culture of peace in the 2030 Agenda.
  2. The achievement of gender equality (SDG #5) through the use of technology.
  3. Technology as a tool to overcome discrimination in the 2030 Agenda.
  4. The role of technology in achieving economic, cultural and social rights in the 2030 Agenda.
  5. The role of technology in achieving civil and political rights in the 2030 Agenda.
  6. The role of the private sector in the promotion and protection of human rights and the implementation of the 2030 Agenda.”

In response to this call we submitted two abstracts, which are now in the blind peer review process. By June 30th we should find out whether either of them has been judged a suitable match for the envisioned e-book.

Support framework for Conscientious Leakers and Hackers in the fight against institutional corruption
by Y. Hatada & A. Koene

Corruption is a contagious infection that weakens and kills sustainable development and breeds human rights abuses. Institutional frameworks like democratic government bodies and the separation of powers between the executive, legislative and judiciary branches of government are not sufficient to protect against corruption if a lack of transparency prevents genuine, direct public scrutiny.

Revelations by conscientious leakers like Edward Snowden have had major impacts on national and international debates about surveillance and the protection of personal data. Other leaks, like the Panama Papers, revealed how an estimated $1 trillion is siphoned off from developing economies into tax havens. Importantly, the public nature of these revelations has pressured world leaders to finally take the actions against tax evasion that the OECD has been struggling to push forward for many years.

To properly harness the potential of conscientious leakers and hackers it is necessary to provide them with the technological means to safely engage in this important activity with minimized risk to their person.

In this chapter we will discuss the means by which conscientious leakers can be established as structural antibodies to counteract the infection of corruption.

To do this we envision a framework of support functions, maintained under the auspices of an internationally recognized neutral institution like the ICC or a new specialized UN institution, including:

  • an encrypted online dead-drop system, similar to the technology used by Wikileaks;
  • a guide for ‘how to leak information safely’ that is made accessible via an anonymizing access system;
  • an ICC/UN ‘sanctuary passport’ that places the submitter of leaked information under ICC/UN jurisdiction, similar to diplomatic immunity;
  • a team of qualified, impartial analysts who can evaluate submitted information and provide guidance and support for the publication of leaked information when it is found to be in the interest of human rights.

Biases in social media information filtering and recommendation algorithms
by A. Koene & Y. Hatada

People increasingly rely on social media for communication and as their primary source of information. The flow of information on these sites is increasingly mediated by filtering and recommendation algorithms that select and prioritize the messages presented to users. Although critical in shaping the user experience, these algorithms and their effects on the information flow are generally opaque to users.

This lack of transparency gives rise to a number of concerns, ranging from compliance with regulations, e.g. anti-discrimination rules, to depriving users of agency over their online experience and potentially exposing them to manipulation or other anti-competitive practices. In the case of machine-learning-based algorithms, the lack of transparency also exposes system providers to the risk of inadvertently violating regulations. Unless special precautions are taken, learning algorithms adjust their behavior to optimize results for the majority cases and treat minorities as outliers. To the algorithm, minority populations are indistinguishable from noise, which it discounts in order to improve its fit to the ‘real’ data.

In the context of global development this becomes further problematic when one considers that most of the major social media companies are based in a small region of the US, in and around Silicon Valley, and are therefore likely to use very similar, culturally biased data sets for their algorithm training and evaluation procedures.
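The majority-optimization effect described above can be illustrated with a deliberately minimal sketch. The dataset and the “learner” here are hypothetical: a system that is rewarded only for overall accuracy on an imbalanced population can score well in aggregate while serving the minority group not at all.

```python
# Hypothetical imbalanced population: 95% majority group (label 0),
# 5% minority group (label 1).
data = [0] * 950 + [1] * 50

# A learner that optimizes only overall accuracy converges on
# always predicting the majority label.
majority_label = max(set(data), key=data.count)
predictions = [majority_label] * len(data)

overall_acc = sum(p == y for p, y in zip(predictions, data)) / len(data)
minority_recall = sum(
    p == y for p, y in zip(predictions, data) if y == 1
) / 50

print(overall_acc)      # 0.95 -- looks good in aggregate
print(minority_recall)  # 0.0  -- the minority group is entirely misclassified
```

The point of the sketch is that the aggregate metric (95% accuracy) hides the complete failure on the minority group, which is why auditing such systems requires per-group evaluation rather than a single headline score.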

On the other hand, given the proper levels of system transparency and a proactive effort to design systems that counteract existing discrimination patterns, algorithm-based information systems can help to overcome deep-seated discrimination within, and between, communities.

In this chapter we will discuss methods for assessing biases in social media algorithms, the consequences these biases may have on users, and potential remedies to counteract the negative effects.

Go on, leave us a reply!