Robotics and AI, Commons Select Committee inquiry

The Science and Technology Committee is currently undertaking an inquiry into robotics and artificial intelligence (the deadline for written submissions was April 29th) as part of the continuing national strategy for Robotics and Autonomous Systems (RAS) innovation. In 2012 the UK Government identified RAS as one of the ‘Eight Great Technologies’, leading to the establishment of a ‘RAS Special Interest Group’ and the RAS national strategy in 2014. In 2015 the Special Interest Group published The UK Landscape for Robotics and Autonomous Systems, and the Engineering and Physical Sciences Research Council launched a UK-RAS network.

What follows is the response that was submitted by Ansgar Koene and Yohko Hatada.

    1. The implications of robotics and artificial intelligence on the future UK workforce and job market, and the Government’s preparation for the shift in the UK skills base and training that this may require.

      1. With the exception of jobs that are fundamentally defined by social, human-to-human interactions, robotics and AI are set to encroach on all sectors of work that consist of predefined activities; in other words, activities where the person doing the job is not engaged in creative ‘out of the box’ thinking.
      2. As a consequence, it will be necessary to re-evaluate the value, respect and social status that the UK, as a society, assigns to jobs that depend on technical/intellectual skills, such as office work, compared to jobs that depend on human/empathetic skills, such as nursing or teaching.

       

    2. The extent to which social and economic opportunities provided by emerging autonomous systems and artificial intelligence technologies are being exploited to deliver benefits to the UK.
      1. Robotics is now at a stage where just about anything that happens in predictable controlled environments, like a factory or warehouse, can be fully automated. Importantly, the flexible multi-functionality of modern robotics systems means it is becoming increasingly feasible to justify the purchase/lease and installation costs even for businesses with relatively rapidly changing products.
      2. Semi-controlled environments, like roads with clearly defined rules of behaviour and relatively well-defined paths for achieving a goal, are also rapidly coming within the realm of fully autonomous systems. Progress towards full deployment of robotic systems such as autonomous vehicles must, however, be contingent on stricter safety requirements, because these systems will interact with an unsuspecting public that does not yet know what to expect from robots.

       

    3. The extent to which the funding, research and innovation landscape facilitates the UK maintaining a position at the forefront of these technologies, and what measures the Government should take to assist further in these areas.
        While the UK undoubtedly has a strong position in robotics and AI research at this time, with much of the leading work on deep learning coming from the UK, this strong position is heavily dependent on continued participation in international and European research networks. EU funding through the Horizon 2020 programmes provides larger and longer-term funding, typically 3-5 years, whereas UK research council funding typically covers 2-3 year projects; this longer horizon gives early-career researchers the space to establish a track record. EU projects are also important because their international-consortium and academic-corporate collaboration structure provides vital opportunities for young post-doctoral research fellows to build international networks and networks that extend beyond academia.

       

    4. The social, legal and ethical issues raised by developments in robotics and artificial intelligence technologies, and how they should be addressed.
        1. Social issues:
        2. Establishing self-identity in the 21st-century cyber-society requires a new kind of personal development. In the pre-internet, unconnected era, previous generations did not necessarily need to treat an understanding of the wider world as part of their self-identity. In the dramatic societal, economic, technological and political paradigm shift that is likely to accompany the robotics/AI revolution, it is increasingly difficult to establish self-identity: ‘who I am’, ‘what I want to be’, ‘what kind of social system I most value’, and so on, since our own and others’ decisions and actions inevitably and directly affect us and our future. This makes it hard to feel confident in pursuing, with true passion and conviction, the relationship between individuals and society that one wants to create in the future. To do so, it is further necessary to know what one wants to act on and engage with in current society and the state. What does it mean to be an individual, a citizen, an international citizen, a global citizen? What are the relationships between the individual, society, the state, international society and the globe? These questions must be answered in order to engage in building a cyber international society through cyberspace and real-world experience, observation and information.
        3. Individual growth is enhanced by free (online/offline) lifelong learning programs.
          In a society where traditional ‘production’ jobs are performed by robots and office-type ‘service’ jobs are performed by AI, education has to change away from the top-down industrial model of ‘training for a job’ to a more bottom-up, individual-interest-driven model. Such a bottom-up, interest-driven learning model should promote self-understanding so that individuals can identify the work, and its relationship to their life, that generates a sense of connectedness to their ‘self’ and true fulfilment in contributing to society.
          In recognition of the rapid pace of technological development and the dynamics of people’s individual development, it will be necessary for this learning/education to be provided as a free, lifelong (online) system.
        4. The need for a Universal Basic Income (UBI) type system.
          The future is currently unpredictable and unstable, perhaps even unsustainable. Research advances and innovations, technology industries, societal responses, cultural differences, economic consequences, national political responses, international relations, global environmental effects: all of these are being affected by the current digital revolution. If the disappearance of large-scale industrial employers is to be met with increased entrepreneurship, people will need a UBI-style safety net to enable them to develop creative and humane contributions to society. True creative entrepreneurship (not pseudo-entrepreneurial serfdom to ‘sharing’ economy platforms) requires a safety net that enables exploration, experimentation and ‘failing forward’ (i.e. using past failures as learning experiences for achieving success).
        5. Why the short-lived ‘sharing’ economy style of pseudo-entrepreneurship will not generate mass jobs.
          AI and robotics are driving a new technological revolution. As with previous such revolutions, like the industrial revolution, whole sectors of jobs are being made redundant and it is not at all obvious where new jobs could come from. ‘Sharing’ economy style pseudo-entrepreneurship of the kind pushed on Uber drivers is clearly not a structural answer, since even those jobs are of a type that will also be automated, e.g. by self-driving cars.
        6. Consequence of failing to provide mass job creation mechanisms.
          A society in which industrial and service sector production are based on robotics and AI could yield broad benefits if the wealth resulting from this automation is shared. If, however, the benefits are concentrated in companies and the capital of their shareholders, there is a real danger of social disintegration and conflict, pitting a capital-owning rich class against an unemployed, destitute class, with an accompanying rise of a criminal black economy and many other chaotic consequences.

       

        1. Legal issues:
        2. On a general level, the introduction of autonomous systems into common use clearly raises a whole host of legal issues, especially if the system uses machine learning methods to adapt its behaviour to its environment. A detailed analysis of issues around liability and other general legal aspects, in the European context, was provided by the RoboLaw project, an EU FP7 science-in-society project that ran from 2012 to 2014. The final report, “D6.2 Guidelines for Regulating Robotics”, is available to download from the project website robotlaw.eu.
        3. A specific issue that will require attention is automated decision-making by (AI) algorithms about matters concerning individuals. The UK Data Protection Act’s ‘principle 6’ specifies that “the right of subject access allows an individual access to information about the reasoning behind any decisions taken by automated means. The Act complements this provision by including rights that relate to automated decision taking. Consequently:
          • an individual can give written notice requiring you not to take any automated decisions using their personal data;
          • even if they have not given notice, an individual should be informed when such a decision has been taken; and
          • an individual can ask you to reconsider a decision taken by automated means.

          These rights can be seen as safeguards against the risk that a potentially damaging decision is taken without human intervention.” Importantly, however, these rights arise only if “the decision has a significant effect on the individual concerned”. With the increasing prevalence of smart algorithms for service personalisation and other information processing, an important factor to consider is the sheer volume of algorithmic decisions to which citizens are exposed. Even if no single decision by any of these algorithms violates the ‘protection against having significant decisions made about an individual by wholly automated means’, the accumulated effect can be significant, similar to concerns about the cumulative effect of things like tobacco advertising (a simple numerical sketch of this accumulation is given after this list).

        4. Not exactly a ‘legal’ issue, more of a ‘law enforcement’ issue, is the rise in cybercrime and the changed implications of criminal hacking when the system being hacked is a robot rather than a computer server. According to the 2016 Verizon Data Breach Investigations Report, such attacks have risen 48% year-over-year and are likely to continue to rise. Combined with the notoriously poor state of cybersecurity in IoT devices, the introduction of robotic systems into the general population is almost guaranteed to result in some of them being compromised by cyber-criminals. Unlike a hacked internet service, however, the physical embodiment of a robotic system means that the potential harm extends beyond mere loss of data to direct physical, or even life-threatening, harm.
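To make the ‘accumulation’ concern in point 3 above concrete, the following is a minimal, purely illustrative sketch. It is not drawn from the Act or from any real system: the number of decisions and the per-decision effect are invented figures. It simply shows how many individually insignificant automated decisions can add up to a significant overall effect on one person.

```python
import random

random.seed(1)

# Assumed, invented figures for illustration only.
N_DECISIONS = 10_000       # automated personalisation/filtering decisions one person is exposed to
PER_DECISION_BIAS = 0.01   # tiny average disadvantage per decision, well below any 'significant effect'

def automated_decision(bias: float) -> float:
    """Effect of one automated decision on the individual: small, noisy, seemingly negligible."""
    return random.gauss(-bias, 0.5)

effects = [automated_decision(PER_DECISION_BIAS) for _ in range(N_DECISIONS)]

print(f"Average effect of a single decision: {sum(effects) / N_DECISIONS:+.4f}")
print(f"Accumulated effect of {N_DECISIONS} decisions: {sum(effects):+.1f}")
# No single decision has a 'significant effect', but the accumulated total does,
# which is exactly the case the Act's safeguard does not cover.
```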

       

      1. Ethical issues:
      2. Sophisticated data-driven algorithms are an increasingly important element in determining the customer experience when using online platforms. These algorithms, which are likely to soon incorporate AI methods, filter and rank the information that is presented to the user and determine where it is presented. The high volume of data available online means these algorithms are vital for enabling users to find relevant information. Lack of transparency about the way in which algorithms manage this information, however, introduces the potential for abusive manipulation. This can take the form of censorship, such as suppressing negative comments about the platform provider, or anti-competitive business practices, such as the alleged manipulation by Google of ranking its own products higher in search results (a minimal sketch of such hidden ranking bias is given after this list).
      3. Due to the large number of parameters used by AI algorithms, even the engineers who construct the systems struggle to explain specific algorithm outcomes. This is even more so in the case of adaptive systems that learn from continuously evolving example data sets, as is the case with AI/machine learning systems. We do know, however, that all data-driven systems are susceptible to bias based on factors such as the choice of training data sets, which are likely to reflect subconscious cultural biases (a second sketch after this list illustrates this). In the case of data-driven AI systems that adapt and learn from their interaction with people, the systems can also become susceptible to abuse by online groups, as was recently shown by the Microsoft chat-bot Tay, which was twisted into uttering racist and offensive messages by a small group of cyberbullies who spotted an opportunity.
      4. The use of ethical research frameworks that consider the broader societal implications of research and innovation, such as the EU-supported Responsible Research and Innovation agenda, is of vital importance in the future development of robotics and AI. This is, in essence, the message of the Open Letter warning about the possible negative consequences of AI that was signed by Stephen Hawking, Elon Musk and many others (including Dr Hatada and Dr Koene) [http://futureoflife.org/ai-open-letter].
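The first sketch below is a minimal, hypothetical illustration of the ranking concern in point 2. It is not any platform’s real algorithm: the item data, relevance scores and the `SELF_PREFERENCE_BOOST` weight are all invented. It simply shows how a hidden term in an otherwise relevance-based scoring function can quietly push a provider’s own, less relevant service to the top.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float          # relevance to the user's query, 0..1 (invented values)
    owned_by_platform: bool   # does the item belong to the platform operator?

# Made-up candidate results for some query.
items = [
    Item("Third-party price comparison", 0.92, False),
    Item("Platform's own shopping service", 0.55, True),
    Item("Independent review site", 0.80, False),
]

SELF_PREFERENCE_BOOST = 0.5   # hidden weight, invisible to users and regulators

def score(item: Item) -> float:
    """Visible part: relevance. Hidden part: a boost for the platform's own items."""
    return item.relevance + (SELF_PREFERENCE_BOOST if item.owned_by_platform else 0.0)

for rank, item in enumerate(sorted(items, key=score, reverse=True), start=1):
    print(rank, item.title, f"(score {score(item):.2f})")
# The platform's own, less relevant service now ranks first; without transparency
# about the scoring function, such manipulation is very hard to detect from outside.
```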
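The second sketch, again hypothetical and with invented data, illustrates the training-data concern in point 3: a deliberately crude ‘model’ that simply reproduces the majority outcome in its historical training data will also reproduce any bias embedded in that history.

```python
from collections import Counter

# Invented historical decisions: (years_of_experience, group) -> past outcome (1 = accepted).
# 'group' stands in for any attribute that correlated with past human bias.
training_data = [
    ((5, "A"), 1), ((6, "A"), 1), ((4, "A"), 1), ((3, "A"), 0),
    ((5, "B"), 0), ((6, "B"), 0), ((4, "B"), 0), ((7, "B"), 1),
]

def train_majority_by_group(data):
    """Crude 'learning': predict, for each group, the majority label seen in the past."""
    labels_by_group = {}
    for (_experience, group), label in data:
        labels_by_group.setdefault(group, []).append(label)
    return {group: Counter(labels).most_common(1)[0][0]
            for group, labels in labels_by_group.items()}

model = train_majority_by_group(training_data)
print(model)
# -> {'A': 1, 'B': 0}: equally experienced candidates receive different predicted outcomes,
# because the skew in the historical data has been learned as if it were signal.
```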
