“You cannot play God then wash your hands of the things that you’ve created. Sooner or later, the day comes when you can’t hide from the things that you’ve done anymore.” (Adama, Battlestar Galactica)
AI and robotics are not new. For many years, centuries even, humanity has been fascinated with the concept of non-human intelligence. The idea of inanimate objects becoming conscious, behaving like humans, or being sufficiently skilled to perform tasks ordinarily performed by humans can be traced back to Homer and Plato and, in more recent years, to people such as Karel Čapek, Isaac Asimov, Alan Turing, Philip K. Dick, Arthur C. Clarke and Iain M. Banks.
It is only recently, however, that “the dream is finally arriving” (Bill Gates) and the building blocks for this type of technology – increased processing power at reduced cost, better sensors, better algorithms and programming, and vast amounts of data – have become sufficiently developed to be of practical use. AI- and robot-based services are pervasive and disruptive: Siri, Alexa and Cortana are now practically members of the family; drivers routinely use adaptive cruise control, automatic parking and adaptive suspension systems without a second thought; and robots assist surgeons performing complex and difficult surgeries and remove the need for military personnel to defuse bombs or carry out high-risk missions. The current pace of development shows no signs of slowing and experts estimate that:
- 16% of all U.S. jobs will be replaced by AI over the next decade (Forrester) – indeed, Bill Gates recently suggested that a tax on robots may be one way to deal with the consequences of large numbers of job losses caused by AI and robotics;
- by 2020, 85% of customer interactions will be managed without a human (Gartner); and
- the market for AI will grow from $420 million in 2014 to $5.05 billion by 2020 (Markets and Markets).
To date, companies that develop, test, sell and/or use AI and robotics technology have not been subject to significant government or regulatory control or interference. While this approach has allowed rapid technological progress, it is not without its critics: in late 2014, Elon Musk called for regulatory oversight to “make sure we don’t do something very foolish” as, in his view, AI is “our biggest existential threat”.
It is against this background that a number of governments and regulatory bodies are reviewing the sector and considering what changes may need to be made to law and regulation to ensure that users of robotics and AI are protected while allowing the technology to develop as organically as possible (for example, see: here for recent UK activity in relation to the government’s imminent proposal for rules applicable to driverless cars, here for the UK government’s recent announcement of a review into AI, and here for recent US government policy in the same sector).
The Legal Affairs Committee of the European Parliament recently advised that the European Parliament should intervene in the sector and create rules applicable to the development and use of robotics and AI. It suggested that the following items should be considered as relevant to “create a robust European legal framework” (and endorsed the content of a draft report to the Legal Affairs Committee in 2016 addressing these and other issues):
- a new agency should be established to provide technical, ethical and regulatory guidance and expertise;
- an ethical code of conduct should be created to regulate accountability for the impact of robotics and to ensure that AI and robotics creations operate in compliance with legal, safety and ethical standards;
- harmonised rules should be adopted throughout the EU on driverless cars; and
- the European Parliament should consider whether it would be appropriate to create specific legal “electronic person” status for sophisticated robots so that those “sophisticated autonomous robots … [have] specific rights and obligations, including … making good any damage that they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.”
The European Parliament plenary adopted the report on 17 February 2017 and asked the EU Commission to propose rules on robotics and AI.
Unsurprisingly, it is the last of these suggestions that made the headlines in the immediate aftermath of the Committee’s announcement. It is not a new question, however. Whether or not robots should be liable for their own actions has been the subject of much discussion and debate and a couple of real-life examples exist:
- in 2015, the Swiss police “arrested” or “confiscated” a robot, Random Darknet Shopper, for its purchase of fake goods and ecstasy pills on the Darknet. Ultimately, neither the robot nor its human developers were prosecuted because the robot was part of an art installation meant to explore the deep web and the prosecutors deemed the purchase a “reasonable means for the purpose of sparking public debate about questions related to the exhibition”; and
- in 2016, the Russian police arrested and attempted to handcuff a robot, Promobot, at a political rally in Moscow. The robot had been recording the opinions and reactions of the crowd and apparently co-operated with the police.
While both of those examples seem humorous, the tendency to anthropomorphise robots is unhelpful when considering questions of ownership, authority and liability in relation to their activities; a robot – at its simplest – is a machine driven by software code and data.
Viewed on that basis, all robots have a human or corporate owner and operator and that owner should ultimately retain some level of responsibility for its creation. Therefore, it is highly unlikely that we will have to recognise robots as having legal personality in their own right, rip up the rulebook and start again from scratch when legislating for AI and robotics.
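The point that even an “autonomous” robot is ultimately a machine executing choices made by its human developer can be illustrated with a minimal, purely hypothetical sketch (this is not the actual Random Darknet Shopper code; the catalogue, budget and function names are invented for illustration):

```python
import random

# Hypothetical shopping bot: every "autonomous" decision below is the
# product of parameters supplied in advance by a human developer.
CATALOGUE = ["book", "jeans", "handbag", "trainers"]  # chosen by the developer
BUDGET = 100  # weekly spend limit, also chosen by the developer

def weekly_purchase(seed):
    """Pick one random item from the developer-defined catalogue."""
    rng = random.Random(seed)  # even the randomness source is developer-chosen
    return rng.choice(CATALOGUE)

# The bot's behaviour is unpredictable in its detail but entirely
# determined by human inputs - a basis for tracing responsibility
# back to the owner/operator rather than to the machine itself.
print(weekly_purchase(42))
```

However surprising an individual purchase may seem, nothing in the sketch exists that a human did not put there, which is the intuition behind holding the owner or operator, rather than the machine, responsible.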
New issues of interpretation may arise, new causes of action may need to be developed, and new regulations may need to be introduced in certain sectors. However, existing legal rules regarding product liability, ownership and use of intellectual property rights, ownership and use of data, contract law and tort law have proved flexible enough to deal with new technology in the past and will remain relevant in the age of robots.
At a practical level, to reduce the risk of legal uncertainty regarding the use of this disruptive technology, providers and users of this technology should consider, and contractually address, the following issues when using AI and robotics:
- who (individual or corporation) has made the AI system and/or robot available for use and/or purchase?
- what industry is the AI system and/or robot to be used in?
- is it a regulated industry and thus subject to approvals by a regulatory oversight body prior to sale and/or use (e.g. medical devices require approval prior to being sold)?
- are there industry wide standards that need to/should be adhered to (e.g. ISO 10218-2:2011 Robots and robotic devices — Safety requirements for industrial robots, ISO 13482:2014 Robots and robotic devices — Safety requirements for personal care robots and other standards under the responsibility of ISO/TC299, the ISO technical committee on robotics)?
- will it be used by consumers or in a B2B context?
- what control does the user have over the training, operation and use of the AI system and/or robot?
- does the manufacturer seek to distinguish between the types of use that may be made by the user? If yes, to what extent does the manufacturer accept liability for each use type? In what circumstances does it seek to exclude or limit its liability?
- does the manufacturer allow the user to alter the coding? Can the user teach the AI system and/or robot? Is the AI system and/or robot capable of machine learning? If so, the provider and user should consider whether, and if so when, the user should accept liability in relation to its training and/or operation of the AI system and/or robot;
- what outputs will be created by the AI system and/or robot? Who owns the intellectual property in and to such outputs? Who can use such outputs and for what purposes?
- will the AI system and/or robot collect personal data? How and where will such data be stored? Who can access it and use it and for what purposes?
- who has the authority/power to override or switch off the AI system and/or robot? In what circumstances?
For a more in-depth look at the legal aspects of AI and robotics, please see our White Paper.