The Privatization of Human Rights: Illusions of Consent, Automation and Neutrality

While exploring the Internet and Human Rights Resources Center at the Internet Society, I encountered a highly informative report from the Global Commission on Internet Governance, published by the Centre for International Governance Innovation (CIGI) in January 2016, on the extent to which the management of individuals’ fundamental rights, such as privacy and free speech, is in the hands of corporations.

The report presents an excellent overview of the ways in which a small set of companies, by controlling the web platforms where most people spend the majority of their time online, have become major actors in determining the state of human rights online.

One large contributing factor to the human rights impact of web platforms is the business model of the companies themselves and the way in which they collect, manage and use users’ personal data. Much of this data is gathered through automated tracking and metadata that most users are not consciously aware of. It is then processed with Big Data analytics that sift through individuals’ data to profile them, both for advertising purposes and to ‘personalize’ the companies’ services.

When Max Schrems made a data ‘subject access request’ to Facebook in 2011, the data he received (some 1,200 pages) included every ‘Like’ he had ever clicked. A 2013 study by Kosinski, Stillwell and Graepel showed that “Likes alone can accurately predict a range of highly sensitive personal attributes”, discriminating between homosexual and heterosexual men in 88% of cases, African American and Caucasian American in 95% of cases, and Democrat and Republican in 85% of cases.
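To give a sense of how this kind of profiling works in practice, here is a minimal sketch in Python (using scikit-learn) of predicting a binary attribute from a user-Like matrix. It is not the study’s actual pipeline, and the data below is randomly generated placeholder data, but the shape of the technique is the same: reduce the sparse Like matrix to a few components and fit a simple linear classifier.

```python
# Minimal sketch (not the study's actual pipeline): predicting a binary
# attribute from a sparse user-Like matrix, in the spirit of Kosinski et al.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 users x 5,000 pages, 1 = user 'Liked' the page.
likes = sparse_random(1000, 5000, density=0.01, random_state=0, format="csr")
likes.data[:] = 1.0
attribute = rng.integers(0, 2, size=1000)   # e.g. a self-reported binary trait

# Reduce the sparse Like matrix to dense components, then fit a linear model.
model = make_pipeline(
    TruncatedSVD(n_components=50, random_state=0),
    LogisticRegression(max_iter=1000),
)

# With real Like data (rather than this random placeholder), accuracies like
# the 88%/95%/85% figures quoted above become plausible for some attributes.
scores = cross_val_score(model, likes, attribute, cv=5, scoring="roc_auc")
print(f"mean AUC: {scores.mean():.2f}")
```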

Content that users take for granted as neutral, such as search results or friends’ updates, is in fact filtered and prioritized by algorithms whose workings are kept secret. Users do not know why Facebook decides they prefer one friend’s updates over another’s. Search engines “restrict or modify search results for many…commercial and self-regulatory reasons, including user personalization and enforcement of companies’ own rules about what content is acceptable to appear on their services” (MacKinnon et al. 2014). Since it is not clear how those decisions are made, it is not clear whether or how this editorial power of the platforms represents an “arbitrary interference with correspondence” or an interference with the “freedom of expression and imparting of information and ideas through any media” as protected by Articles 12 and 19, respectively, of the Universal Declaration of Human Rights.
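To illustrate what “prioritized by algorithms whose workings are kept secret” can mean in practice, here is a deliberately toy sketch of a feed-ranking function. The signals and weights are entirely invented; the point is simply that the user only ever sees the sorted output, never the scoring logic behind it.

```python
# Illustrative sketch only: how an opaque ranking function might decide which
# friends' updates a user sees first. The signals and weights are invented.
from dataclasses import dataclass

@dataclass
class Update:
    friend: str
    recency_hours: float      # how old the post is
    past_interactions: int    # how often the user engaged with this friend
    advertiser_boost: float   # paid promotion, invisible to the user

def score(update: Update) -> float:
    # Hypothetical weights; a real platform's are secret and far more complex.
    return (2.0 * update.past_interactions
            - 0.5 * update.recency_hours
            + 10.0 * update.advertiser_boost)

feed = [
    Update("Alice", recency_hours=1, past_interactions=3, advertiser_boost=0.0),
    Update("Bob",   recency_hours=2, past_interactions=9, advertiser_boost=0.0),
    Update("Shop",  recency_hours=5, past_interactions=0, advertiser_boost=1.0),
]

# The user only ever sees the sorted result, never the scoring logic.
for u in sorted(feed, key=score, reverse=True):
    print(u.friend, round(score(u), 1))
```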

Unfortunately, no legal statutes directly require companies to comply with international human rights standards. They are instead required to comply with national laws, which should themselves be consistent with human rights conventions. At present, however, there are clear gaps in the national standards guiding these companies and in the ways national laws apply to internationally operating platforms. Terms of service, especially for ‘free’ platforms, do not incorporate concepts such as ‘necessity and proportionality’, which moderate intrusions into the right to privacy under human rights law.

The “Ranking Digital Rights” project and the Electronic Frontier Foundation’s (EFF) annual “Who Has Your Back?” report map the impact that large Internet platforms can have on individuals’ fundamental rights.

Analysis of the standard terms of agreement of Google (including YouTube), Facebook, Yahoo, Twitter and Amazon showed that the terms give the providers unfettered rights to access, delete and edit user data, including location data, and to share user data with unspecified third parties (for example, advertisers). None of the providers has a clear deletion policy for user data or metadata, with the limited exception of Twitter, whose privacy policy states that log data is deleted within 18 months.

Especially troubling is the intrusion into communications that are usually expected to have a high level of privacy, such as email. Google’s terms of service for Gmail, for example, place no restrictions on its ability to scan email content, which potentially includes:

  • communications between journalists and sources;
  • communications protected by attorney-client privilege; and
  • communications between medical practitioners and patients discussing sensitive medical data.

It has been suggested that popular online service providers should be designated “information fiduciaries,” thereby creating obligations — similar to those of lawyers or doctors — not to use the information entrusted to them for outside interests (Balkin 2014).

There is, of course, an elephant in the room that we haven’t addressed yet: all of these ‘terms of service’ are presented to users so that ticking the ‘I agree’ box is legally treated as informed consent. On this view, it is argued, there can be no human rights problem, since users willingly chose to accept the terms of service.

For most users, however, the reality is closer to an illusion of consent, produced by the Hobson’s choice of either agreeing to the ‘terms of service’ or not using the platform at all. Given the dominant positions of such a small number of companies, opting out would mean a real loss of social connections and of an important channel of expression. On top of this there is, of course, the well-known problem of the length and comprehensibility of the language used in the ‘terms of service’, which we will not go into here other than to refer to an earlier post we did on this topic back in June 2015.

It is important to recognize, however, that governments have also played a role in manoeuvring corporations into the position of arbiters of human rights.

On the one hand, the Snowden revelations confirmed a long-standing suspicion that national intelligence agencies are actively seeking (and, through programs like PRISM, getting) access to the masses of personal data held by corporations.

On the other hand, the erosion of ‘mere conduit’ protections and the ever-increasing liability of platforms for hosting illegal content, such as images of child abuse or copyright-infringing material, have pushed platform providers into making decisions to remove or moderate content. Following the Court of Justice of the European Union’s ruling that Google must respect individuals’ “right to be forgotten” by removing links to historic web content from search results, Google was forced to create a system to handle such complaints. To date, that system has handled more than 250,000 requests relating to 900,000 URLs (Williams 2015).

Neither Google nor any other large platform provider actively sought this work. They had it thrust upon them by a mixture of inaction by states and ad hoc court decisions. But, having taken on the job, Google should be applying rule-of-law principles (open justice, conflict of interest, transparency, appeal). Sadly, platform providers have so far mostly shied away from such transparency, opting instead to encourage what Sarah T. Roberts terms a “collective hallucination that these things are done by a machine rather than people, perpetuating a myth of the Internet as a value-free information exchange with no costs.” As Roberts (2015) describes, the reality is quite different, involving tens of thousands of staff, often subcontracted through outsourcing platforms such as Mechanical Turk or Upwork, who remove abusive content, including hard-core pornography and beheadings, from users’ news feeds: “Companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them…well over 100,000…about twice the total head count of Google and nearly 14 times that of Facebook.”

What needs to happen?

States need to review and, if necessary, reassert their human rights obligations in the online environment, rather than rely on ad hoc mediation of these rights by private companies.

Companies need to differentiate between private and public communications in their terms, and to limit their intrusion into private communications to what is necessary, proportionate and pursuant to a legitimate aim. Rather than treating all types of user data as homogeneous (and fair game), policy makers need to recognize that not all data is created equal and that certain types of communications, such as legally privileged, intimate or confidential information and emails, need to be kept away from prying eyes, even those of the platform providers. Meanwhile, other types of communications that are inherently ephemeral in nature should automatically expire and be deleted from the platform providers’ systems (see Figure 5 of the report for a possible model; a rough sketch follows below).
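By way of illustration only (the categories, retention periods and code below are our own invention, not the report’s model), a differentiated retention policy of the kind argued for above might look something like this:

```python
# A minimal sketch of a differentiated retention policy. The data classes,
# retention periods and structure are illustrative assumptions only.
from datetime import datetime, timedelta, timezone
from enum import Enum

class DataClass(Enum):
    PRIVILEGED = "privileged"    # e.g. lawyer-client or doctor-patient email
    PRIVATE = "private"          # ordinary one-to-one communications
    EPHEMERAL = "ephemeral"      # e.g. location pings, presence data
    PUBLIC = "public"            # public posts

# Hypothetical policy: what may be scanned, and how long it is retained.
POLICY = {
    DataClass.PRIVILEGED: {"scannable": False, "retention": None},
    DataClass.PRIVATE:    {"scannable": False, "retention": None},
    DataClass.EPHEMERAL:  {"scannable": False, "retention": timedelta(days=30)},
    DataClass.PUBLIC:     {"scannable": True,  "retention": None},
}

def should_delete(data_class: DataClass, stored_at: datetime) -> bool:
    """Return True if a record has outlived its retention period."""
    retention = POLICY[data_class]["retention"]
    if retention is None:
        return False
    return datetime.now(timezone.utc) - stored_at > retention

old_ping = datetime.now(timezone.utc) - timedelta(days=45)
print(should_delete(DataClass.EPHEMERAL, old_ping))   # True: past the 30-day limit
print(POLICY[DataClass.PRIVILEGED]["scannable"])      # False: off-limits to analytics
```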

For more details and further expansion on these issues, see the original text at: https://www.cigionline.org/publications/privatization-of-human-rights-illusions-of-consent-automation-and-neutrality

Go on, leave us a reply!