UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy

Contrary to popular opinion, young people care about their personal data and want a more transparent digital world, one they can trust. For example, little is known about how Amazon is able to tailor advertisements and recommend products that are actually interesting to potential online customers, or how Facebook decides which news items its users may be more inclined to read. The mechanisms that support this filtering of information and products are obscure, and internet users would like to know more about them, including possible biases in their behaviour, and, more importantly, to have some control over these recommender systems.

This project aims to work closely with young people to better understand how aware ‘digital natives’ are of algorithmic bias, along with their attitudes, main concerns and recommendations when interacting with such systems. This information will help us to better understand the way young people interact with these systems and to identify youth-led solutions for teaching critical thinking about digital information systems. We will apply a range of engagement tools and methodologies, including focus groups, workshops and youth ‘juries’, to facilitate discussion, reflection and a deeper understanding of youth online behaviour and youth-led software solutions.

Also central to this project is the development of ‘fairer’ algorithms with young people and other non-experts as well as with experts. To do this, we intend to run a series of hackathons or workshops in which expert programmers and young people will work together to produce and evaluate new, fairer algorithms that generate more transparent and less ‘creepy’ outputs. We will also run an ‘ethicon’: an innovative approach in which scientists and ethicists work together to produce ethically justifiable algorithm design. Through this process the project will identify a set of principles that will contribute to the development of a platform allowing users to have more control over existing online filtering systems. In other words, this project will provide citizens with the skills and tools to better judge when and how much to trust the information they are given, to understand the digital identity that algorithms create from users’ personal data, and to protect users’ privacy and online security.

This project will provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders. The toolkit will include educational materials and resources to support young people’s understanding of online environments, as well as to raise awareness among online providers of the concerns and rights of young internet users. This work matters to young people, and to society as a whole, in ensuring that trust and transparency are not missing from the internet.

The results will be widely disseminated to a variety of audiences, ranging from peer-reviewed academic journals to interested community groups such as secondary schools and youth clubs.

For more information and updates about the UnBias project, see the UnBias blog pages and/or follow the project on Twitter.

This project is funded by an EPSRC grant under the Trust, Identity, Privacy & Security in the Digital Economy (DE TIPS) call.

The project is a collaboration between the University of Nottingham, the University of Oxford and the University of Edinburgh.

The Principal Investigator and Co-Investigators on this project are:
Prof. Tom Rodden (University of Nottingham)
Prof. Derek McAuley (University of Nottingham)
Prof. Marina Jirotka (University of Oxford)
Dr. Michael Rovatsos (University of Edinburgh)
Dr. Ansgar Koene (University of Nottingham)
Dr. Elvira Perez (University of Nottingham)
Dr. Helena Webb (University of Oxford)
