What?

On 12th January, MEPs voted for a set of regulations to be drafted to govern the use and creation of robots and artificial intelligence, hot on the heels of the UK government setting up a commission to look at the issues surrounding artificial intelligence. Across continents, the law is unclear, differs from jurisdiction to jurisdiction and is likely to evolve in this area. Charlotte Walker-Osborn, Head of Technology, Media and Telecoms Sector, with input from Christopher Chan, Intellectual Property Partner, both at global law firm Eversheds Sutherland, give us a brief perspective on the current status.

What is AI?

Artificial intelligence is the simulation of human intelligence processes by computer systems and other machines. These processes include machine learning (essentially the acquisition of data and of rules for using that data), reasoning (the use of those rules to reach conclusions) and an element of self-correction.
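To make that definition concrete, below is a minimal, purely illustrative sketch in Python of its three elements: acquiring data, deriving and using a rule, and self-correcting as new data arrives. The names and the simple threshold rule are our own assumptions for illustration, not drawn from any legal or regulatory definition.

    # Illustrative only: a toy "machine learning" loop showing the three
    # elements of the definition above -- acquisition of data, use of a
    # derived rule to reach a conclusion, and self-correction.

    def learn_threshold(samples):
        """Derive a simple rule (a decision threshold) from labelled data."""
        positives = [value for value, label in samples if label]
        negatives = [value for value, label in samples if not label]
        # Rule: treat as positive anything above the midpoint between the
        # average negative and the average positive observation.
        return (sum(positives) / len(positives) +
                sum(negatives) / len(negatives)) / 2

    # 1. Acquisition of data.
    training_data = [(0.2, False), (0.3, False), (0.8, True), (0.9, True)]

    # 2. Use of the derived rule to reach a conclusion.
    threshold = learn_threshold(training_data)
    print(0.7 > threshold)  # the system "concludes" that 0.7 is positive

    # 3. Self-correction: a new observation shifts the learned rule.
    training_data.append((0.55, False))
    threshold = learn_threshold(training_data)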

In late 2016, in the UK, the Commons' Science and Technology Committee published a report on robotics and artificial intelligence (AI). The report recommended that a standing Commission on Artificial Intelligence be established to examine the social, ethical and legal implications of recent and potential developments in AI. On 12th January, MEPs on the European Parliament's legal affairs committee passed Mady Delvaux's report on robotics and AI. As a result, the European Parliament will vote on draft proposals in February for the creation of specific regulation around the use of robots and AI.

So what?

Many countries' laws are playing catch-up with the changing risk profile and the social and other impacts which result from the use of AI and robots. It will be important to watch the status of legal and policy changes in order to assess your own company's potential legal coverage and/or liability, whether you are "selling" AI/robots or adopting their use.

Machine-generated ideas: who owns the intellectual property?

So, pending further regulation, where are we currently in relation to intellectual property and AI in the UK and beyond? Given the differing legal systems, this article touches upon the position in just three key countries of interest: the UK, the US and Japan, along with some discussion of the European position.

Copyright, AI and the law

Starting with the position in the UK, the Copyright, Designs and Patents Act 1988 sets out that: "In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken", and that "computer-generated" means "the work is generated by computer in circumstances such that there is no human author of the work". There is currently little clarity (whether in case law or otherwise) as to what these necessary arrangements are, so ownership is not clear-cut. It is arguable that the organisation which set up the rules for the system has made the arrangements necessary for the creation (and is therefore the owner), but this is not a definitive conclusion. Many other countries have similar provisions and are similarly struggling to give effect to what this means in terms of ownership.

In the US, copyright law does not envisage ownership of work generated by a machine, but the law has recently addressed whether such work is eligible for copyright. US Copyright Office rules state that it "will register an original work of authorship, provided that the work was created by a human being." Generally, absent a written agreement, the author of a work owns the work. In 2016, in response to a US court ruling in a copyright infringement case involving a monkey that had taken a selfie using a camera a British photographer had set up, the US Copyright Office updated its rules to clarify that "copyright law only protects 'the fruits of intellectual labor' that 'are founded in the creative powers of the mind'." The rules listed specific examples of works that do not qualify for US copyright protection, including "works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author." While it follows that a work created solely by a machine is not eligible for US copyright, US law is silent on the ownership of a work created solely or jointly by a machine. Assuming the machine's contribution to a joint work with a person could qualify for US copyright, would the owner of the machine jointly own that work with the other person, or would the other person be the sole owner? It would seem reasonable that the owner of the machine would be a joint owner of the work, but there is no explicit US statute or case law dictating copyright ownership for work generated by a machine.

Over in Japan, the government's intellectual property task force stated in 2016 that Japan's existing copyright law did not cover creations produced by AI. The Japanese government is putting new measures in place in 2017 to seek to provide protection in this area.

Are inventions created by AI systems patentable?

Patents are also relevant in the field of AI. If a machine invents something new, can it be patented? Turning to the US first, the law envisages an individual as the inventor who contributes to the conception of an invention and, as yet, there is no concept of a computer being able to conceive of a patentable invention. While the term "individual" appears to exclude companies or legal entities from being named an inventor, "conception" is defined by the US Supreme Court as "the complete performance of the mental part of the inventive act" and "the formation in the mind of the inventor of a definite and permanent idea of the complete and operative invention as it is thereafter to be applied in practice." Under current US law, a machine is unlikely to be named an inventor, since it is not an "individual" and the "conception" standard appears to contemplate inventorship by a person rather than a machine. However, there is no specific prohibition on patenting inventions created by AI, and no US court has yet ruled on the issue.

In the UK, for over a decade, there has been discussion as to whether inventions conceived using computers can gain patent protection. The Patents Act 1977 expressly carves out from patent protection inventions implemented by computer programs to the extent that they "relate to that thing as such". Traditionally, that has meant that only certain types of patent application involving computer systems will be granted, and these need to make a certain "technical" contribution. If this hurdle is overcome, the Act sets out that the inventor is the deviser of the invention, albeit that there can be joint inventors. So, arguably, AI inventions of a certain type are patentable, but there are barriers to patentability to be overcome. Indeed, whether robots can create something which is patentable, and own it, is subject to debate by the European Parliament following the vote mentioned above.

Japan is already home to numerous AI patents and, as at November 2016, was reported to have more patents in this area than any other country in the world.

Liability for Acts and Omissions of Robots and AI

Turning away from intellectual property ownership for a moment, the legal position as to who is liable for the acts and omissions of robots and AI-delivered outcomes paints a similar story. Clearly, if robots and/or AI are operating in a "connected" environment, there are increased security and hacking risks (not uncommon to any internet-enabled technology). You are unlikely to be surprised that the law in many territories is unclear in relation to an "owner", manufacturer and/or user's liability for acts and omissions by robots and AI. 2017 is the year in which many countries are seeking to generate legislation which will give at least some framework in these areas. For example, the 2017 report from Mady Delvaux examines whether robots should have legal rights and be given legal status as an "electronic person", as well as whether a robot can be held liable for accidents. There is much talk about whether robots should have a "kill switch" so that they could be switched off if need be (a minimal sketch of the software concept appears after the list below). The EU report sets out some proposed principles, which include:

A robot may not injure a human being or, through inaction, allow a human being to come to harm. You can see immediately that this is likely to be abused: for example, drones have already been used in warfare, and the use of robots would give a much bigger advantage to warring nations.

A robot must obey the orders given it by human beings except where such orders would conflict with the above. This seems strange given robots may be given their own legal status. It will be interesting to keep track of further discussion in this area.

A robot must protect its own existence as long as such protection does not conflict with both of the above.
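As for the "kill switch" mentioned above, the report does not prescribe any technical design. Purely as a hypothetical illustration, in software terms it could be as simple as an externally settable flag that the robot's control loop must check before every action; the names and loop structure below are our own assumptions, not drawn from the report.

    import threading

    # Hypothetical sketch of a software "kill switch": a flag a human
    # operator can set at any time, which the control loop checks before
    # each action and which halts the robot once set.
    kill_switch = threading.Event()

    def control_loop(actions):
        for action in actions:
            if kill_switch.is_set():
                print("Kill switch engaged: halting.")
                return
            print("Performing action:", action)

    # A human operator (or regulator) could stop the robot from another
    # thread or process by calling: kill_switch.set()
    control_loop(["move", "grasp", "release"])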

In an Annex to the report, liability is looked at in some detail. The report sets out that “civil liability for damage caused by robots is a crucial issue which also needs to be analysed and addressed at Union level in order to ensure the same degree of efficiency, transparency and consistency in the implementation of legal certainty throughout the European Union for the benefit of citizens, consumers and businesses alike”. The report continues that whatever legal solution is applied to the civil liability for damage caused by robots in cases other than those of damage to property, “the future legislative instrument should in no way restrict the type or the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the aggrieved party, on the sole grounds that damage is caused by a non-human agent”. The report comments that: “in principle, once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy, so that the greater a robot's learning capability or autonomy, and the longer a robot's training, the greater the responsibility of its trainer should be; notes, in particular, that skills resulting from “training” given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot's harmful behaviour is actually attributable; notes that at least at the present stage the responsibility must lie with a human and not a robot”. This is clearly going to be a very difficult area to legislate for if a dividing line is to be struck – can or will liability be separated from the human at a certain point?

Linked to the above, it will be very interesting to see the position taken by the insurance industry in relation to AI and robotics as both the technology and the law develop. The report itself suggested that a potential solution "to the complexity of allocating responsibility for damage caused by increasingly autonomous robots could be an obligatory insurance scheme, as is already the case, for instance, with cars; notes, nevertheless, that unlike the insurance system for road traffic, where the insurance covers human acts and failures, an insurance system for robotics should take into account all potential responsibilities in the chain". Insurers have been grappling with this issue in relation to driverless vehicles for some time, and many of the same principles apply more widely. There are a number of recommendations in the report which are worth reading, whether your business is "selling" AI, making robots for sale or intending to use them in its operations.

International aspects

The report itself promotes the need for strong international cooperation on the societal, ethical and legal challenges of AI and on setting regulatory standards.

Conclusions

It is clear that 2017 is the year in which numerous countries across the world will look to grapple with AI and update their laws to deal with it more comprehensively. The UK, US, EU and Japan have all indicated that they will look at the legal implications of AI (including in relation to intellectual property and liability) in 2017. The UK will have the added complexity of Brexit: if new EU laws seek to deal with AI, the UK may look to keep those (if implemented before the UK leaves the EU) or may need to apply its own.

In relation to commercial arrangements where an organisation is procuring AI systems or consultancy from a third-party provider, or is offering such services, our current conclusion is that it is essential to give clarity in the contract(s) as to the parties' intentions around ownership, licensing and exploitation, as well as product and other potential liabilities. We recommend that anyone working in this area (if they are not already doing so) carefully considers the position with their legal team(s) before entering into those contracts.