On March 1st I participated in a debate on Digital Ethics organized by the Digital Enlightenment Forum (DEF). The debate was a follow-up to previous discussions at the DEF in 2015 and brought together lawyers, engineers, economists, social scientists and philosophers to discuss challenges and possible frameworks for digital ethics that might help people, organizations, businesses and societies deal with the fast and complex ways in which digital technologies are impacting human lives. What follows is an abbreviated summary of the event. A more complete version is available from the DEF website.
The debate started with a series of statements from invited panellists addressing two questions: what is Digital Ethics, and what is at stake? The former was briefly broken down into the need to identify whether Digital Ethics refers to ‘ethics that has gone digital’, ‘ethics for digital systems (with or without humans in the loop)’, or an ‘ethics for digital systems themselves’. The latter, however, required more thorough unpacking, with seven key points highlighted.
- Disempowerment and the loss of human agency. Here the focus was placed on the rise of ‘digital feudalism’: the digitally facilitated agendas of neo-liberal globalization are weakening the notion of autonomy and of respect for norms of equality, fairness, justice and democratic practice through the transfer of power from nation states to multinational corporations, on one side, and to international institutions such as the European Court of Justice and the WTO, on the other, with little democratic control over either. A key indicator of this ‘digital feudalism’ is the imbalance of power between service users and service providers, which is causing individuals to lose agency over their digital selves: a recurring topic of concern in our CaSMa work.
- Rethinking our fundamental rights. Linked to the above-mentioned loss of democratic checks and balances is an associated crisis in human rights. Here it was argued that the digital age is laying bare inherent conflicts and contradictions between the fundamental rights to security, to privacy and human dignity, and to freedom of expression and information, which suggests a need for a fundamental rethinking of human rights. Along with this, it is necessary to accept that there can be no ‘total security’. For security, as for rights like privacy, it is best to think in terms of a continuum and thresholds, with appropriate checks and balances. All are evolving concepts that change through history and depend on social and cultural factors. One such change is the blurring between public and private spaces through digital technologies. Nevertheless, this should not be construed as the ‘end of privacy’ but simply as a recognition of the contextual and multi-levelled nature of privacy.
- Algorithms as the cornerstone of digital technologies. This area of discussion clearly resonated strongly with the new ‘UnBias’ project we are starting in September. With the all-pervasiveness of algorithms in digital technologies, they are the medium that steers human behaviour. In the hands of the neo-liberal corporate system, algorithms prescribe which information and which adverts to show users, enforcing compliance with the corporate agenda: ‘disempowerment by design’.
- The ethics of platforms and ecosystems. Social media is becoming the medium of human connectedness, especially among the young. This puts those who operate the online platforms in an extremely powerful position. They are becoming the curators of public, and private, discourse and values, and yet the ethical implications of this have received little attention so far.
- Benefits and costs. This issue was raised primarily in relation to the apparent disconnect between the high, but deferred and remote, costs (e.g. intrusion of privacy, security infringements, monetisation of personal data) that people are apparently willing to pay for the relatively minor, but immediate, benefit of interacting online with their social circle. This behaviour is reinforced by the way in which the costs are often not known, or not understood as costs at all, making it impossible for people to make informed choices.
- Ethical constraints on innovation. On this point, the discussion turned towards the oft-expressed assumption that ethics is a hindrance to innovation, and the need to change this view: to think of ethics instead as an innovation challenge that drives technologists to develop solutions that reinforce ethical behaviours.
- Ethics and law. The discussion on this point turned on the positions of ethics and law relative to each other. The general sense was that ethics should come before regulation, as a way to challenge what is being produced. In this view, laws codify the ethical choices we make based on the type of society we aim to achieve. To this end we must create institutions that embody ethical practices.
For the second half of the event, the debate turned towards Potential Solutions and Directions. What is to be done? How are we to progress in this debate? What sort of world do we want to leave to our children? Once again this was unpacked into the following items:
- A contemporary virtue ethics as a perspective for digital ethics. Because it emphasises the importance of achieving contentment in life, as well as the importance of personal judgement, virtue ethics was proposed as offering valuable perspectives for a new digital ethics.
- Re-imagining justice for the digital era. Starting from the observation that justice is the cornerstone of democracy, it was argued that “we should continue to draw on Europe’s long tradition of constructive technology assessment, based on dialogue between multiple stakeholders, so as to ensure that we talk to citizens and not about them. Nor should individual users/citizens be left to shoulder the responsibility alone. Some responsibility has to be shifted back to service providers themselves, so that they too share the burden.”
- Data traceability and accountability. The concept discussed on this topic was the need for policies on data ethics for businesses and governments to articulate that “consent needs to be transferable and to travel with the data wherever possible. Data should only be able to be bought and used if its previous origin, purpose and consent mechanism are known.”
- Guidelines and codes of conduct. Guidelines and codes of conduct can be useful in translating principles into day-to-day business practice. Having guidelines at an early stage provides a frame of reference for technologists to interact with stakeholders, which the law does not provide. Thus, we need to think in terms of an iterative approach: experimental guidelines framing early interactions with stakeholders, which in turn lead to robust and practical guidelines as solutions come to market.
- New system architectures. The new digital ethics should not be confined to today’s solutions. It has to be robust enough to also encompass the new and disruptive technologies and system architectures that are on the horizon.
- Beyond compliance. Too often ethics is seen as being simply about compliance, ending up with meaningless checklists. Compliance is linked with ideas of ownership that no longer fit the digital world. Today, privacy relates not just to data about me but also to data about people like me. In the age of big data, what matters is not just personal data but collective data – data about others. We can be the subject of actions based on data about us as part of a collective rather than as individuals. Hence, we need to think in terms of ‘collective privacy’: at present the law has huge difficulties coping with such concepts. Focusing on the ethical dimension should help us to move beyond these compliance issues to a more rounded view of ethics in the digital world.
Whilst it is too early to think in terms of detailed prescriptions, the Workshop identified a number of potential solutions and directions that may be considered promising:
- Think of ethics as an innovation challenge and encourage technologists to come forward with solutions that reinforce ethical online habits and behaviours.
- Promote inter- and multi-disciplinary approaches and multi-stakeholder dialogue.
- Promote measures to make systems more transparent and accountable: for example, restoring data traceability by making consent transferable, and enforcing new accountability mechanisms and policies at corporate level.
- Adopt an iterative approach to guidelines and codes of conduct, so as to provide technologists with an ethical framework at an early stage in the development cycle.
- Move beyond compliance issues to a more rounded view of ethics in the digital world.
- Try to make ethical approaches future-proof, so that they are able to accommodate new and disruptive technologies and system architectures on the horizon.