Breaking AI down into separate parts is difficult in practice. For the purposes of this article we have used the following split:
- the untrained algorithm is the raw software;
- data is then used to train the algorithm;
- machine learning is the ability of the algorithm to learn by itself: the machine, rather than a human, writes the 'code'. In its simplest form this is again just software. It is fiendishly clever software, but it is still just software;
- the trained model is the untrained algorithm combined with the new 'code' written by the machine. Again, this is software (a minimal code sketch illustrating the split follows this list).
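To make the split concrete, here is a minimal Python sketch. It is illustrative only: the library, toy dataset and variable names are our own assumptions, not anything described in this article. The point it shows is that the 'code' the machine writes is simply the learned parameters held inside the trained model, which remains ordinary software.

```python
# Illustrative sketch of the four-part split used in this article.
# Library (scikit-learn) and data are hypothetical choices, not the article's.

from sklearn.linear_model import LogisticRegression

# 1. The untrained algorithm: raw software, written by humans.
untrained_algorithm = LogisticRegression()

# 2. The data used to train the algorithm (a toy dataset).
training_inputs = [[0.0], [1.0], [2.0], [3.0]]
training_labels = [0, 0, 1, 1]

# 3. Machine learning: the fitting step, where the machine, not a human,
#    'writes' the new code in the form of learned parameters.
trained_model = untrained_algorithm.fit(training_inputs, training_labels)

# 4. The trained model: the original algorithm plus the machine-written
#    parameters. Still just software.
print(trained_model.coef_, trained_model.intercept_)
print(trained_model.predict([[2.5]]))
```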
Under English law, no one can own data. There are no rights in data, only rights in relation to data: to be protected, the data in question must fall within a specific class, such as confidential information, personal data or database rights. There is no general ownership of data, so to protect data it must be practically and contractually controlled.
From a legal point of view, therefore, all rights and requirements must be explicitly set out in the contract, including usage, return and deletion requirements. If any area is not covered, you will not be protected.
The untrained algorithm + machine learning + trained model
Copyright is the main form of IP protection for software under English law, and it applies to AI software in the same way it applies to any other form of software.
The key issue here is who owns the trained model. Although the trained model will be protected by copyright, it is not clear who owns those rights, because the model has been created in part by the machine itself rather than by a human.
The same difficulty arises for works created by AI.
IP and ownership in AI creations
Who owns works created by AI? Under English law, IP rights assume that the creator, inventor or author is a human. However, that is not the case here.
The Copyright, Designs and Patents Act 1988 does refer to computer-generated works. It provides that where a work is computer-generated, the author is taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.
What counts as undertaking the necessary arrangements? Unfortunately there is not enough case law to clarify what this means in the context of AI, so it is not clear what arrangements an organisation should make to ensure it owns AI creations.
As a consequence, traditional IP rights cannot be relied on, and standard IP clauses will need to be revised to deal explicitly with who owns what, what rights are granted and what is classed as infringement.
So for data, the trained model and AI-created works alike, the traditional rules do not apply.
Tied in with the issue of ownership is the question of liability.
Who is liable when AI goes wrong?
Let us take the example of Tesla, whose vehicles have been involved in two similar fatal crashes since 2016. In both cases the vehicle failed to see a lorry crossing its path and drove into it, shearing off the top of the car and fatally injuring the driver. Should Tesla be liable for these crashes? And at what point should a driver no longer have any liability for what the car is doing?
At the moment the Department of Transportation in the USA adheres to the automation standards set out by the SAE, which run from "level 0" (no automation) to "level 5" (full automation). It is accepted that Tesla's Autopilot system is no more than level 2 or 3 on this scale, both of which require the driver to remain in control of the vehicle when driving. So from a public law perspective at least, Tesla is not being held liable for the two crashes if, as appears to be the case, the drivers were not in control of the vehicles at the time they crashed: it is accepted that the drivers should have been in control. However, Tesla continues to push the envelope and has promised to produce full driverless automation within the next year. As and when that happens, new liability questions will be posed.
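For reference, the SAE scale can be summarised as a simple data structure. The sketch below is ours, not the SAE's or Tesla's text: the level names reflect how the standard is commonly summarised, and the helper function is a hypothetical encoding of the article's point that at level 3 and below the driver remains responsible for the vehicle.

```python
# Sketch of the SAE driving-automation levels as commonly summarised.
# The helper function is hypothetical, for illustration only.

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # Human does all the driving.
    DRIVER_ASSISTANCE = 1       # A single assist feature, e.g. cruise control.
    PARTIAL_AUTOMATION = 2      # Steering and speed assist; driver stays in control.
    CONDITIONAL_AUTOMATION = 3  # System drives; driver must take over on request.
    HIGH_AUTOMATION = 4         # No driver needed within a limited domain.
    FULL_AUTOMATION = 5         # No driver needed anywhere.

def driver_must_remain_in_control(level: SAELevel) -> bool:
    """At levels 0-3 the human driver retains responsibility for the vehicle."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

# Autopilot is generally regarded as level 2, so the driver remains liable.
print(driver_must_remain_in_control(SAELevel.PARTIAL_AUTOMATION))  # True
```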
With fully driverless cars operating at level 5 on the SAE scale, there will certainly be an argument that some notion of personhood or human agency should be ascribed to the driving software, in its capacity as the party responsible for the vehicle on the road. But what happens if there is a crash and the driverless software is at fault? Back in the legal world, such disputes will have to be resolved by reference to legal persons. Since the software has no legal personhood beyond its ability to drive a car, and cannot by itself provide a remedy recognisable in law, the law will have to find another way of resolving such disputes.
So, for the time being at least, one has to assume that, from a practical perspective, whoever owns the software will also be liable if it, in its AI capacity, causes legal harms that require a remedy.