Who is liable when AI fails? Issues of liability for autonomous systems are not new.

In the 1980s, the Therac-25 radiation therapy machine delivered at least six massive overdoses of radiation to cancer patients, with fatal or near-fatal results.  The accidents were caused, in part, by errors in the machine's software.  There were also flaws in the design of the system, and further problems were introduced when the system was upgraded at various hospitals.  The correct apportionment of liability is still debated today.

So far, in Australia, the legislative focus has been on regulating liability for limited and specific forms of AI, such as autonomous cars and drones.  There is no legislation dealing with liability for damage caused by AI generally.

Tortious liability

In the absence of legislation in this space, if an AI system fails (and there is no contract apportioning liability), the courts will likely assess liability using traditional duty of care concepts.

Take a situation where an AI program in an MRI machine determines, based on past patterns, that a person has breast cancer.  But it is wrong.  Who do you sue?

Assuming a duty of care arises, the answer will depend on what has gone wrong. 

Where an AI machine is designed to think differently to a human, and to reach conclusions that a human would not, how is a court to judge whether the machine has acted reasonably?

It is important to remember that just because something bad happens, it does not follow that the AI system is at fault.  Difficulties arise, for example, where AI is embedded in a product: has the error arisen from a defect in the product itself, or from the AI system?

In relation to the MRI example, some of the possible liability options are as follows:

  • The person who wrote the AI program – if there is flawed programming.
  • The person who manufactured the MRI machine – if there is a product defect.
  • The operator of the MRI machine – if the machine is operated badly.
  • The person who owns the data used by the AI program – if there is bad or incomplete data.
  • No-one – it is your problem for relying on a machine for such advice.
  • Some combination of the above.

Some academics have suggested that the owner of the AI system should be held strictly liable for the actions of the system.  This is similar to the position taken under animal liability law.  The owner of a “wild” animal is strictly liable for any injury or damage caused by that animal.  

There have also been suggestions to give AI systems a form of legal personhood, similar to that of a corporation.  This would allow an AI system to sue and be sued, to enter into contracts, and to own and dispose of property, among other things.

Often, an AI machine provides information to a human, and it is the human who makes the ultimate decision.  In such circumstances, does the human decision-maker break the chain of causation if the AI machine's output is wrong?

Liability for not using available AI

The functionality of AI systems is improving rapidly. 

In the future, it may be negligent not to use AI systems where they are available.

By way of example, many law firms currently use AI to assist in the document discovery process, with final review and sign-off done by a lawyer.  There may (soon) come a time when a law firm will be negligent if it does not use the AI discovery tools available to it.