AI, smart robots and automation are a tangible reality, as discussed in a previous post. Although the law has not yet fully caught up with them, they should already be addressed in medium- to long-term technology contracts.

Here are the top five contract issues to consider when dealing with services based on robots or artificial intelligence:

Service levels and failure

Broadly speaking, current service level models are devised to incentivise suppliers to avoid ‘low grade’ issues that might arise if staff do not follow proper processes. This is because human beings are by definition fallible, and will be more or less efficient depending upon a large number of factors.

The same principles apply to liability limits and exclusions, as well as to confidentiality, data protection, security and audit provisions, all of which largely assume that human fallibility can be kept in check through supervision.

This is not the case with AI-based services, where failures, when they happen, are more likely to be catastrophic than minor. AI-based systems tend either to work at a demonstrable accuracy level or to fail in a significant way, falling well below the relevant standard; it is far less likely that such systems will degrade by small margins in the way that human-provided services might.

We are not saying by any means that the above clauses will no longer be relevant, but they will have to be adapted to reflect the different failure modes of AI services. Market-standard liability caps may need to be revisited, which need not mean a standstill in negotiations.

Such issues will probably be addressed by more detailed insurance provisions, with the relevant costs factored into the pricing models from the outset. In this context, the compulsory insurance scheme proposed by the EU Parliament in its Draft Report to the Commission on Civil Law Rules on Robotics may be of interest.

Ownership of trained AI and data feeding

Many machine learning-based AI systems improve over time. How those improvements come about, and who will own them, will have to be regulated in the contract, and current standard IPR clauses may not suffice. For instance, we already have an AI-based contract due diligence system in use at DLA Piper.

That system can be trained to automatically find examples of particular types of clauses that we might want to find as part of a typical corporate due diligence exercise – such as any clauses prohibiting assignment or transfer of the contract, or rights for the counterparty to terminate upon a change in control or ownership of the target.

To train the system, it needs to be shown an array of examples of the types of provisions it should search out. However, it would be equally easy to corrupt the system with bad data: if it is repeatedly given "bad examples" (told, say, that indemnity clauses are assignment clauses), then instead of improving the system, you would gradually make it worse, as the sketch below illustrates.
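By way of illustration, here is a minimal Python sketch, using invented clause snippets and an off-the-shelf scikit-learn classifier rather than any real due diligence product, of how such a system learns from labelled examples, and how mislabelled examples corrupt it:

```python
# Minimal sketch (hypothetical data): a toy clause classifier, and how
# mislabelled training examples ("bad data") corrupt it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training snippets: assignment clauses vs. indemnity clauses.
clauses = [
    ("This agreement may not be assigned without prior written consent.", "assignment"),
    ("Neither party may assign or transfer its rights under this contract.", "assignment"),
    ("Any purported assignment in breach of this clause is void.", "assignment"),
    ("The supplier shall indemnify the customer against third-party claims.", "indemnity"),
    ("Each party agrees to indemnify the other for losses arising from negligence.", "indemnity"),
    ("The indemnifying party shall hold the indemnified party harmless.", "indemnity"),
]
texts, labels = zip(*clauses)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["The customer may not assign this agreement."]))
# -> ['assignment']

# "Pollution": retrain after repeatedly telling the system that indemnity
# clauses are assignment clauses. The corrupted model now labels every
# clause, including genuine indemnities, as an assignment clause.
bad_labels = ["assignment" for _ in labels]
polluted = make_pipeline(TfidfVectorizer(), MultinomialNB())
polluted.fit(texts, bad_labels)
print(polluted.predict(["The supplier shall indemnify the customer."]))
# -> ['assignment']
```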

This type of machine learning approach is common to many AI-based systems, so the possibility of pollution with bad data is a general one. Extending this basic example to a standard supplier/customer relationship, the supplier may give a customer access to an AI system (a fraud detection tool, say), and the customer might then, albeit inadvertently, feed the system with bad examples.

If the supplier uses that system as a shared platform to service multiple customers, one customer feeding in bad data would make the system worse not only for that customer, but for all other customers using the same AI system. The supplier may want to guard against this with specific provisions on the quality of the data feed that the customer will provide, as sketched below.
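In practice, such contractual data-quality provisions may be backed by technical safeguards on the supplier side. A minimal sketch, continuing the hypothetical classifier above, of a quality gate that quarantines suspect examples before they reach a shared model:

```python
# Minimal sketch: a supplier-side quality gate for a customer's training
# feed. Examples whose label contradicts the current model's confident
# prediction are quarantined for human review instead of being ingested
# into the shared model.
def screen_feed(model, feed, threshold=0.6):
    accepted, quarantined = [], []
    for text, label in feed:
        probs = model.predict_proba([text])[0]
        predicted = model.classes_[probs.argmax()]
        if probs.max() >= threshold and predicted != label:
            quarantined.append((text, label))  # likely mislabelled
        else:
            accepted.append((text, label))
    return accepted, quarantined

# A feed containing one mislabelled example (an indemnity clause tagged as
# an assignment clause) should end up in the quarantine list for review.
feed = [
    ("The assignee shall assume all rights under this agreement.", "assignment"),
    ("The supplier shall indemnify the customer against all losses.", "assignment"),
]
accepted, quarantined = screen_feed(model, feed)
print(quarantined)
```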

Audit and technology

Customers often ask for audit rights because of particular regulatory obligations that apply within their business sector – for instance, a bank may incur substantial sanctions from its regulators if it cannot audit and monitor the work of its services providers.

Such monitoring is easier in the traditional sourcing environment, where a supplier can be audited mainly through a review of documents, reports and procedures: any work done by a human can relatively easily be checked by another human. In this new context, it is much harder to work out how the AI system is working, and how it evolves, over the life of the service.

If a machine learning-based system has formulated its own pattern-matching approaches to determine the probability of a given action being the correct response to particular inputs, human auditors will not necessarily be able to derive or follow the underlying logic, or reassure themselves in the way they might by interviewing workers to check their training and competency. It may well be that audit teams will need to include forensic IT experts alongside the traditional accountants and audit professionals.
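That said, the learned parameters are not entirely opaque. As a very simple illustration of the kind of inspection a forensic IT expert might add to an audit, one can at least extract which terms the hypothetical clause classifier sketched earlier weighs most heavily for each class:

```python
# Minimal sketch: inspecting the trained (unpolluted) model from the earlier
# example by listing its highest-weighted terms per class - one small piece
# of evidence an IT-literate auditor could review.
import numpy as np

vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["multinomialnb"]
terms = np.array(vectorizer.get_feature_names_out())
for cls, log_probs in zip(classifier.classes_, classifier.feature_log_prob_):
    top_terms = terms[np.argsort(log_probs)[-5:]]
    print(cls, "->", ", ".join(top_terms))
```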

HR and knowledge transfer

It is accepted practice, at least in places where the Transfer of Undertakings Directive or similar legal principles apply, that employees involved in providing a service that is to be outsourced may transfer to the supplier upon the commencement of service provision.

Generally, when a customer transfers its employees to a services supplier, it will expect that, on termination of the services, those of the supplier's employees who were providing them will transfer either back to the customer or onward to a replacement supplier. This is aimed at ensuring that the customer can continue the services directly (or with third parties) to the same standards and with the benefit of the relevant know-how.

Where an AI is involved in service provision, some or possibly all of the employees previously providing the services within the customer organization may have become redundant during the period of service supply. There may therefore be few, if any, employees to transfer back to the customer or onward to a new supplier, with a resulting loss of know-how.

Upon contract termination, if the AI system is licensed software, it may well remain with the supplier, along with the experience and machine learning it has developed during the provision of the services. It is therefore important to address at the outset of the contract how information can be re-imported into a new AI system, so as to accelerate its period of learning. Exit provisions are accordingly becoming more relevant, and also need to address who will own the IPR in the data generated through the training the customer has given the robot.
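At its simplest, such an exit deliverable might be an export of the customer-generated training corpus in a portable format and, where the licence permits, of the trained model itself. A minimal sketch, continuing the hypothetical example above:

```python
# Minimal sketch: exporting exit deliverables from the earlier example.
import json
import joblib

# The labelled examples contributed by the customer, in a portable format
# that a replacement AI system could re-learn from.
with open("training_corpus.json", "w") as f:
    json.dump([{"text": t, "label": l} for t, l in clauses], f, indent=2)

# The trained model itself - only exportable if the contract's IPR and
# licence provisions allow it, which is precisely the point to negotiate.
joblib.dump(model, "trained_model.joblib")
```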

Data protection and informed consent

Artificial intelligence and smart robots pose some obvious data protection concerns (and we will address such topics in more detail later in this series of articles). Such concerns take on a new relevance once we take into account the substantial sanctions that may be applied under the new European General Data Protection Regulation.

The main concerns stem from the fact that any AI system is by definition based on the processing of large volumes of data. Such data may initially not be personal data within the meaning of the Regulation, but can become personal data (ie attributable to a specific person), or even sensitive data, as a result of deep pattern matching techniques and other processing that an AI might perform.
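A trivial illustration of that risk, with invented data: records that carry no names can become attributable to specific individuals once combined with other available data sharing the same quasi-identifiers.

```python
# Minimal sketch (invented data): apparently anonymous usage records become
# personal data once joined with an auxiliary dataset on quasi-identifiers.
import pandas as pd

usage_logs = pd.DataFrame({            # no names: looks non-personal
    "postcode": ["EC4A", "W1D"],
    "birth_year": [1980, 1975],
    "late_night_activity": [True, False],
})
public_register = pd.DataFrame({       # auxiliary, openly available data
    "name": ["A. Smith", "B. Jones"],
    "postcode": ["EC4A", "W1D"],
    "birth_year": [1980, 1975],
})
reidentified = usage_logs.merge(public_register, on=["postcode", "birth_year"])
print(reidentified[["name", "late_night_activity"]])
```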

This may result in data being processed in a manner for which consent had not been granted, and without any other relevant justification applying, or beyond the boundaries set by an earlier consent. Furthermore, the AI may end up making its own decisions about how the data is managed, thus departing from the purposes laid down by the data controller, who remains ultimately responsible for the processing.

Furthermore, depending on the complexity of the system and the ability to detect “unusual” activity, it may be harder to determine when an AI-based system is being hacked, with a consequent data breach. All such issues will have to be carefully addressed both in designing how an AI will function and what technical controls can be applied, and in any agreement between parties involved in using that AI to process data.

Last but not least, and this is a rather pervasive point, the parties should carefully determine who is responsible for what, including where one party's performance depends on another's, particularly given the range of parties that may incur liability when dealing with smart robots or artificial intelligence. We will further address AI and liability in a future article in this series.