
An autonomous vehicle, fully operated by an Artificial Intelligence (“AI”) program, crashes and injures an innocent bystander.  Who can the blameless victim sue?

A company wants to include an AI program as a member of its Board.  Can the company do this when the Corporations Act requires that directors be adult individuals?

One possible solution to both scenarios would be to grant the AI program legal personality or “personhood”. This is something that Estonia (the birthplace of Skype and home to NATO’s cyber-defence centre) has been investigating for a number of years. The Baltic nation has emerged as a leader in new technologies, introducing a paperless government, internet voting and e-residency. It was also the first country to declare Internet access to be a human right.

Legal personality is an artificial concept.  It is generally defined as the capacity to hold and exercise rights, and to incur and perform obligations. With it comes the ability to hold property, and to sue and be sued.  You, the reader, have legal personality as a natural person.  Certain non-human entities also have legal personality.

By the late 19th century, governments had recognised corporations as legal persons. This means that, despite not being composed of flesh and blood, a corporation can hold property and bind itself contractually under its common seal. A corporation can also incur liability to other persons, including tortious and criminal liability.

Take Apple as an example. It is a legal entity, with rights and obligations similar to those of a citizen: it has the right to defend itself in court and the right to free speech. If Apple has legal personality, should Siri also qualify for that status?

Granting Siri legal personality would acknowledge its autonomy, as every act performed by Siri would be in the name of Siri, and not Apple (or Siri’s human creators). It would also avoid the difficulty of identifying the various contributors to an AI program and then apportioning rights and responsibilities amongst them.

If an AI program has full legal personality, then it can own property, enter into contracts, operate a bank account, commence legal proceedings, and employ people to assist it. The AI program can create, own, buy and sell intellectual property. But with these rights come responsibilities. 

There is an important distinction between a company with legal personality and an AI program with legal personality. A company can act only through its representatives, and its decisions are ultimately made by human beings. An AI program with personhood, by contrast, has no representative acting on its behalf: it acts autonomously, based on how it has been programmed.

Granting legal personality to AI programs raises a myriad of issues. If an AI program becomes a defendant but holds no assets, what property could claimants look to for damages? The situation may resemble that of an underfunded company, where claimants seek to ‘pierce the corporate [or, in this case, electronic] veil’ and obtain judgment against the natural persons behind the company or, here, the AI system.

One proposed solution to this potential lack of assets is a compulsory levy on relevant stakeholders (i.e. those investing in and profiting from AI programs), which would fund a compensation pool for victims injured by a malfunctioning AI program. Spreading the cost of liability across a wider group in this way would promote the development of new technologies by minimising the cost to the individuals behind an AI program, while still enabling claimants to recover.

Recently, the European Commission’s Expert Group on Liability and New Technologies – New Technologies Formation (“NTF”) published its report, “Liability for Artificial Intelligence and Other Emerging Digital Technologies”, which considered, among other things, whether AI programs should be granted legal personality. The report found that doing so is unnecessary for the purposes of liability. The NTF preferred the current approach of attributing risks to natural persons (or existing legal persons), with any gaps filled by laws directed at individuals.

In addition to proposals for granting legal personality to AI programs, commentary has emerged around “robot rights”: the concept that people may owe moral obligations to their machines (such as AI programs), analogous to human rights. Would it be a crime to disconnect or delete an AI program that has legal personality?