Have you ever actually read the terms and conditions before signing up to a website or ordering something online? These long, wordy documents are a form of consumer protection designed to make sure we are fully informed when we agree to an online contract. They are supposed to ensure we are making a conscious decision to sign up to a service with full knowledge of the consequences.
In the shadow of the ongoing debate over the Investigatory Powers Bill, a debate whose rhetoric has been framed predominantly in terms of anti-terrorism and national security, the National Crime Agency is currently busy with its own internal ‘future scoping’ exercise to examine the UK law enforcement community’s efforts regarding the interception of communications and associated data. At the heart of this exercise is the question of identifying the boundaries of acceptability for such interceptions, boundaries that delimit ‘policing by consent’ in the fight against serious and organized crime in a democratic society.
At the heart of current online consumer protection is the concept of informed consent, whereby the prospective consumer makes a conscious decision to sign up to a service with full knowledge of, and consent to, the consequences of doing so. Even under the newly signed EU General Data Protection Regulation, which will go into effect in 2018, this will not fundamentally change. For anyone who has ever used a commercial internet service, however (and this includes policy makers), it is glaringly obvious that there is a fundamental flaw in this approach: the assumption that the consumer has a good understanding of the contract being entered into.
While exploring the Internet and Human Rights Resources Center at the Internet Society, I encountered a highly informative report, published in January 2016 by the Global Commission on Internet Governance (GCIG), on the extent to which the management of individuals’ fundamental rights, e.g. privacy and free speech, is in the hands of corporations.
The report presents an excellent overview of how a small set of dominant companies, by controlling the web platforms where most people spend the majority of their time online, have become major actors in determining the state of human rights online.
According to the General Data Protection Regulation (GDPR), information society services that wish to process any personal information relating to a child under the age of 16 will require parental/guardian consent. The GDPR is the European Commission’s tool for unifying data protection across the EU, and there are plans for it to be adopted in 2018. In the most recent GDPR draft released by the European Council, the age limit below which parental consent is mandatory has been raised from 13 to 16 years. The implications for children’s digital rights are not well understood and, at the moment, nobody knows whether this regulation will protect children or, on the contrary, make them more vulnerable. What is certain is that, until now, minimal consultation to incorporate children’s voices has taken place and, consequently, children’s digital rights are not being treated with the respect or seriousness they deserve.
In celebration of Data Protection Day (also known as Data Privacy Day), please join us for the launch of our #AnalyzeMyData campaign on Twitter. Through this campaign we hope to increase public awareness of the ways in which data is used/misused and establish an evidence base of public opinion on these issues that can be used to support future policy discussions around improved guidelines and regulations for data access consent.
For those of us who might not be in the UK, or who have too many other things to think about, a brief reminder: Care.Data is the name of the UK programme that aims to bring together into a central database the patient data currently held in distributed form across the country at each separate GP surgery.
Starting some time in the middle of last week, much of the social media news coverage has been dominated by the so-called ‘positivity app’ Peeple, which proposes to let people give ratings about other people, and by the outright negative response it has elicited from the vast majority of people (including us). Since any such endeavour obviously steps into a massive “ethics minefield”, CaSMa was naturally drawn to looking into this a bit more.
This week Facebook launched its bid to capture and build the market for personal digital assistant services (for now available only to select groups of people in San Francisco). Facebook’s ‘M’ interacts with the user via the Facebook Messenger app but, as with its competitors Siri (Apple), Now (Google), Cortana (Microsoft) and Echo (Amazon), the serious work is done through cloud services.
The history of human experiments often focuses on biomedical research and the gradual changes in acceptable practice and ethical considerations. But another class of human experiments that has had its own share of controversies is the study of human behaviour.
Internet-Mediated Human Behaviour Research (IMHBR) is primarily defined by its use of the internet to obtain data about participants. While some of this research involves active participation, with research subjects directly engaging in the study, for example through online surveys or experimental tasks, many studies instead take advantage of “found text” in blogs, discussion forums or other online spaces, analyses of hits on websites, or observation of other types of online activity such as search engine histories or logs of actions in online games.