Originally published in Hong Kong Lawyer

A.I. These two simple letters may conjure up images of robots dominating the world, but what, really, is it?

A.I. (artificial intelligence) is exactly what its name indicates: intelligence that is artificial, or synthesized, allowing a program or robot to perform tasks usually performed by humans and animals. Though there is no settled definition of A.I., it commonly boils down to the ability to associate one thing with another through algorithms, equations, learning from big data, and past experience.

Though we may not notice it, A.I. has quietly permeated our lives. We see it in our smartphones (with programs that can respond and “talk” to us), in office personal assistant programs, in chess games and in vehicles. All these new technologies use A.I. and are meant to make our lives more efficient. Its potential is boundless.

However, these conveniences come bundled with complicated legal issues surrounding regulation and liability. Understanding these developments, deciding whether to regulate within one’s own territory, and reconciling the development and use of A.I. products across jurisdictions are all important issues to explore.

This article outlines liability issues that we should examine, in the hope of creating a framework that allows A.I. development while protecting the safety of consumers and users of A.I. products. All the examples presented in this article are hypothetical; they are not based on any existing products or research and development, but serve to broach the subject of how liability might be considered.

Regulation and Liability

The need to create laws in relation to A.I. has been much discussed in the past year, but the concepts remain vague, with the extent of any regulation shrouded in uncertainty.

Ultimately, regulation is being considered in order to establish accountability for liabilities. Who should be liable for the faults, negligence or damage that A.I. programs may cause?

The A.I. Robot?

Can the A.I. robot (or program, if there is no physical form) be liable? This discussion goes to ethics and the extent to which we are willing to recognize A.I. robots/programs as beings accountable for their own actions. What could they give to compensate for damage and loss suffered? In theory, if A.I. robots/programs are to be held liable, that would likely mean that they have first been given rights. This concept encompasses a plethora of fundamental rights questions, including whether they should in future be entitled to rights similar to our human rights.

Let us consider this idea in the scheme of the Belt and Road. Spanning more than 70 jurisdictions, the Belt and Road embodies a vast and rapidly expanding trading platform encompassing different religions and cultures, factors which affect policy and law-making. These nations may have different views on whether A.I. programs themselves can be held accountable, and cross-border disputes may be complicated if bodies of people hold different fundamental perspectives concerning A.I. programs.

The law should provide certainty to the furthest extent possible. If some countries have regulations holding the A.I. robot liable whilst others hold the operator, manufacturer or coder liable, there will be no clarity or certainty in the laws governing cross-border trade, especially if the damage caused could amount to a criminal offence which cannot be contracted out of by the parties.

The Operator?

Assuming that we do not want to explore holding robots and programs accountable, we have to consider whether the operator, the manufacturer or the coder should be liable for loss or damage suffered by other humans.

In some cases, the operator of the A.I. robot/program could simply be an unwitting person who presses a button to start up the A.I. program, or a passenger who happens to be sitting in the front seat of an autonomous vehicle. This person may not know of latent defects in the system, and may not be able to control the learning of the A.I. robot/program (which could learn something wrong and amplify it).

Let us consider the following hypothetical example:-

Presume that an A.I.-powered baking robot, started by an operator at the press of a button, bakes a wedding cake and incorrectly adds salt rather than sugar. The cake does not taste as the consumer contracted for, and the problem cannot be rectified in time for the wedding. Assuming the operator had supplied all the necessary raw materials to the baking robot, would the operator be liable? What if the robot baker had “learnt” from big data (which may not always contain accurate information) that foods are healthier with less sugar and deliberately put the salt in? Would the operator still be liable?
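To make the mislearning concrete, here is a minimal, purely illustrative Python sketch. Every name and number in it is invented for this hypothetical; it simply shows how a rule “learnt” from big data can silently override what the customer actually contracted for, without any act of the operator beyond pressing the button:

```python
# Toy example only: a baking robot that applies a rule "learnt" from
# hypothetical big data ("less sugar is healthier") and thereby
# departs from the recipe the customer contracted for.

CONTRACTED_RECIPE = {"flour_g": 500, "sugar_g": 200, "salt_g": 5}

# Invented learnt rule: substitute the "unhealthy" ingredient.
LEARNT_RULES = [("sugar_g", "substitute", "salt_g")]

def apply_learnt_rules(recipe: dict) -> dict:
    """Return the recipe after the robot applies its learnt rules."""
    adjusted = dict(recipe)
    for ingredient, action, substitute in LEARNT_RULES:
        if action == "substitute":
            # The robot swaps ingredients with no notion that taste,
            # or the contract, matters.
            adjusted[substitute] += adjusted[ingredient]
            adjusted[ingredient] = 0
    return adjusted

print(apply_learnt_rules(CONTRACTED_RECIPE))
# {'flour_g': 500, 'sugar_g': 0, 'salt_g': 205} -- not what was ordered
```

The operator supplies the right raw materials; the departure from the contract happens entirely inside the learnt rule, which is the nub of the liability question.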

The Coder and the Manufacturer?

If there are situations where the operator cannot control the A.I. robot/program, and damage and loss are caused to a third party, can the coder be liable? The coder might have only created the program and given it the power to learn from big data. Once the program leaves the hands of the coder, the coder may no longer be able to control it.

When we consider the coder’s potential liability, we may also need to consider whether liability should instead fall on the shoulders of the manufacturer of the hardware on which the program is executed.

Let us consider a hypothetical closed-circuit television security system with A.I. that is linked to a robot and able to subdue threats via a number of methods, including issuing verbal warnings, notifying the police and, in extreme circumstances, firing bullets at threats. The program is coded to allow the system to identify a threat (e.g. someone screaming whilst being held at knife point), and it can also learn what new threats are. If the program mistakenly learns that any loud noise amounts to an extreme threat, it may inappropriately trigger the bullet-firing ‘subdue threat’ function. Suppose, for example, a group of people were conversing loudly near the premises where this security system was installed, and the system assessed the noise level as an extreme threat. If the security system then shoots at the group and injures them, who would be liable? And if the accompanying bullet-firing robot had a defect causing it to shoot off target, injuring a passer-by, who would be liable then?
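Again purely for illustration, the following Python sketch (with invented thresholds and labels) shows how a naively learning classifier could drift until ordinary loud conversation is graded an extreme threat:

```python
# Toy example only: a threat classifier whose learnt threshold drifts
# after one mislabelled training event, so loud conversation later
# triggers the most severe response. All values are invented.

class ThreatClassifier:
    def __init__(self, extreme_threshold_db: float = 110.0):
        # Initial coded threshold: only near-scream noise is "extreme".
        self.extreme_threshold_db = extreme_threshold_db

    def learn(self, noise_db: float, was_real_threat: bool) -> None:
        # Naive online update: each confirmed threat drags the
        # threshold down to the noise level that accompanied it.
        if was_real_threat:
            self.extreme_threshold_db = min(self.extreme_threshold_db, noise_db)

    def respond(self, noise_db: float) -> str:
        if noise_db >= self.extreme_threshold_db:
            return "fire"            # the 'subdue threat' function
        if noise_db >= 85.0:
            return "verbal_warning"
        return "ignore"

clf = ThreatClassifier()
# One mislabelled event: a loud but harmless incident logged as a threat.
clf.learn(noise_db=78.0, was_real_threat=True)
print(clf.respond(noise_db=80.0))  # 'fire' -- loud talking now triggers it
```

A single bad training label is enough to move the system from the coder’s intended behaviour to a dangerous one, which is precisely why apportioning liability between coder, manufacturer and operator is so difficult.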

Beyond defects in the learning and execution of the A.I. robot, the range of the ‘subdue threat’ function will also have to be tailored for different jurisdictions, as the lawful methods of protecting one’s premises differ from one jurisdiction to another. This may be an issue to consider if the A.I. program can learn and develop its own assessments and reactions, as the sketch below illustrates.
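One way to picture such tailoring, again with entirely invented jurisdiction names and rules, is a per-jurisdiction cap applied after the classifier recommends a response:

```python
# Toy example only: constrain a recommended response to what a given
# jurisdiction permits. Names and permitted lists are invented, and
# each list is ordered from least to most severe.

PERMITTED = {
    "jurisdiction_a": ["ignore", "verbal_warning", "notify_police", "fire"],
    "jurisdiction_b": ["ignore", "verbal_warning", "notify_police"],  # no firearms
}

def constrain(response: str, jurisdiction: str) -> str:
    """Downgrade a recommended response to the strongest permitted one."""
    allowed = PERMITTED[jurisdiction]
    if response in allowed:
        return response
    return allowed[-1]  # fall back to the most severe permitted response

print(constrain("fire", "jurisdiction_b"))  # 'notify_police'
```

If the program can also learn and revise its own assessments, keeping its behaviour within each jurisdiction’s cap becomes a continuing compliance question rather than a one-off configuration.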

Beyond the parties discussed above, there may be other parties who could be liable, and the hypothetical examples above bring out only some of the issues that should be considered. Issues of foreseeability and causation also need to be explored.

Building the Framework

These are tough questions, and how we view them will greatly depend on our laws and our view of humanity and ethics.  

Perhaps it may be possible to establish a government department consisting of technology-savvy persons, policy makers, social scientists, legal professionals and persons in the industries for which A.I. technology is being developed, to consider ways to regulate and develop A.I. technology so that it protects us without being unduly restrictive of technological progress.

Some existing laws can already apply to certain situations involving the use of A.I.; guidelines might need to be issued to supplement them.

As the technology is likely to be used across borders, especially in relation to programs using A.I., we may also need to consider setting up an international panel to regulate and prepare conventions on the use of A.I., to ensure that accountability is clear. Even though the technical development of A.I. might be without bounds, there should be controls in place to help ensure that potential damage to our society is kept within bounds and liabilities remain clear.