Robot liability insurance may become compulsory in the future under new plans by the EU to classify robots as “electronic persons” with their own legal liability.

The recent draft European Parliament motion calls for an “obligatory insurance scheme” in order to solve the complex problem of allocating responsibility for damage caused by increasingly autonomous robots. Comparison is made with compulsory motor insurance, although rather than covering human acts and failures, the proposed obligatory scheme is primarily envisaged to compel robot manufacturers to take out insurance for the autonomous robots they produce. The draft also calls for the creation of a fund to ensure claimants are adequately compensated for damage in cases where no insurance cover exists.

The key issue identified in the report is that the current understanding of legal liability may become insufficient in future, as robots become more autonomous and are no longer “simple tools in the hands of other actors”. In this regard, the report suggests that robots could be held liable for damage caused when they malfunction. The draft also calls for the creation of a central register of autonomous robots and even suggests that robots should have intellectual property rights and make pension and social security contributions.

Some of these more far-fetched ideas have led to the report being criticised by some commentators as “lunacy”, “too early” and “too complicated”. There are also fears about unemployment and wealth inequality, and concerns that the development of robotics could be stunted by overly intrusive EU legislation. The report certainly uses bold language, declaring that “humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of [AI] seem poised to unleash a new industrial revolution”. It even suggests that Isaac Asimov’s “Three Laws of Robotics”, as developed in his short story Runaround, should be given some kind of legal basis. These laws state that “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” However, the report fails to explain how the Three Laws of Robotics might actually be incorporated into the legal systems of EU member states.

The report was filed by the Luxembourg MEP Mady Delvaux and is available at http://tinyurl.com/grqzxsw. In the short term it is unclear whether the motion will pass. In the medium term it is also uncertain whether and when EU legislation will cease to apply to the UK as ‘Brexit’ goes ahead. However, potential developments such as these will need to be monitored regularly, since the UK insurance market is the largest in Europe.