In this two-part analysis, Michaela Herron, Senior Associate, considers the complex area of liability for autonomous and driverless cars. Part 1 looks at recent developments in this ever-growing area, the newly enacted Automated and Electric Vehicles Act 2018, and the application of criminal law offences to autonomous and driverless cars. Part 2 will examine the existing product liability framework, its relationship with the framework under the Automated and Electric Vehicles Act 2018, and its application to autonomous and driverless cars.
The first reported fatal accident involving a pedestrian and an autonomous car occurred on Monday 18 March 2018. A pedestrian was pushing her bike across a quiet road in Arizona at around 10pm when she was fatally struck by a self-driving Uber test vehicle, a modified Volvo. The car allegedly failed to slow down or swerve to avoid hitting her.
This accident came almost two years after the widely reported Tesla accident on 7 May 2016 which led to the death of the driver of the Model S. It is understood that the driver was using the car in autopilot mode, which can control the car during motorway/highway driving. However, the car's sensor system allegedly failed to distinguish a large white 18-wheel truck and trailer crossing the highway against the bright spring sky. The car attempted to drive at full speed under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
In January 2017, Tesla Motors announced that its investigation of the car found no defect in the system that caused the accident and said Tesla's autopilot-enabled vehicles did not need to be recalled. The investigation by the US National Highway Traffic Safety Administration (NHTSA) concluded that the camera failed to recognise the white truck against the bright sky, but essentially found that the driver was not paying attention to the road. It was determined that he had set his car's cruise control at 74 miles per hour about two minutes before the crash and that he should have had at least seven seconds to notice the truck before crashing into it. Neither the autopilot nor the driver hit the brakes. The agency held that although the autopilot did not prevent the accident, the system performed as it was designed and intended, and therefore did not have a defect.
A subsequent investigation by the US National Transportation Safety Board (NTSB) concluded with a final report in October 2017. The NTSB reached broadly the same conclusions as the NHTSA and absolved Tesla of responsibility. In other words, it was the driver's fault.
Within a week of the death of the pedestrian in Arizona, Uber had reportedly reached a settlement with her family. Several agencies are still investigating the collision and it remains to be seen what their final reports will say. What is interesting is how quickly any potential claim was apparently settled by Uber. It is particularly unusual in circumstances where no reports or investigations attributing blame or liability could yet have been concluded. It may be that, following Volvo's announcement in October 2015 that it would bear liability for any injuries arising from its self-driving cars, early settlement by Uber is mandated as part of its operations with Volvo, with Uber in turn being indemnified by Volvo. However, this is mere speculation. Further, the desire to settle the claim quickly and to effectively mute prolonged media coverage of the accident may also have been driven by fears that fallout from the accident would stall the development of autonomous vehicles.
Following the crash, Uber announced that it had suspended its autonomous testing programme across North America. The company did not renew its permit to test in California, which expired at the end of March 2018, and confirmed that it would not renew its licence until the investigation into what happened in Arizona has been completed.
According to preliminary results of the NTSB's investigation, if the emergency brake system had been enabled, it would have reacted about 1.3 seconds before impact. Whether that would have been enough time to prevent the crash was not addressed. Full results of all the investigations are awaited.
Developments in the UK – the Automated and Electric Vehicles Act 2018
By 2020 it is anticipated that advanced driver assistance systems (ADAS) will be commonplace. Forms of ADAS already exist in many models of car, for example emergency braking, lane assistance, cruise control, and self-parking. As the technology develops it will become more sophisticated and able to deal with more complex situations, eventually reaching the point where cars become driverless. By 2025 it is predicted that we will see truly driverless cars. In March 2018 the European Commission announced new legislation mandating that all vehicles be equipped with autonomous emergency-braking systems and forward-collision warning systems. By 2035 the Department for Transport expects the industry to be worth £50 billion to the UK economy.
On 19 July 2018, the Automated and Electric Vehicles Act 2018 (AEVA) was enacted, although it has not yet commenced (its commencement date is subject to appointment by the Department for Business, Energy and Industrial Strategy). One of the Government's aims with the AEVA is to make the transition to ADAS, and then to advanced driverless-technology vehicles, as smooth as possible. In summary, the AEVA amends the existing compulsory third-party insurance framework by extending it to cover the use of automated vehicles, in addition to dealing with electric and hydrogen-powered vehicle charging.
For the purposes of this article, the approach to insurance is the most relevant issue. Section 2(1) of the AEVA provides that where:
- an accident is caused by an automated vehicle when driving itself on a road or other public place in Great Britain,
- the vehicle is insured at the time of the accident, and
- an insured person or any other person suffers damage as a result of the accident,

then "the insurer is liable for that damage".
Previous drafts of the Bill did not include the qualification "on a road or other public place in Great Britain". This is an odd late-stage amendment as it makes it entirely unclear what the insurance and liability position is if, for example, the accident occurs on a private road or on someone's private estate. Will such accidents not be insured at all? What if a car reversing out of your driveway malfunctions, continues to reverse across the street into the neighbour's garden, and crashes into their car or house? The additional language was presumably included to mirror section 143 (insurance against third-party risks) and section 145 (requirements in respect of policies of insurance) of the Road Traffic Act 1988 (the RTA). Under those provisions, there is no obligation on drivers to insure their car for use on private land, nor any obligation for insurance policies to cover private land. The AEVA therefore follows this approach.
"Road" is not defined in the AEVA. However, the RTA defines it as:
“(a) in relation to England and Wales, means any highway and any other road to which the public has access, and includes bridges over which a road passes, and
(b) in relation to Scotland, means any road within the meaning of the Roads (Scotland) Act 1984 and any other way to which the public has access, and includes bridges over which a road passes.”
This would therefore seem to exclude private driveways, private roads, and roads on private estates.
However, in a 2014 ruling (Case C-162/13, Damijan Vnuk v Zavarovalnica Triglav d.d.), the ECJ held that under Directive 72/166/EEC (the 'Motor Insurance Directive'), mandatory motor insurance must cover any motor vehicle in its normal use, in any location.
In November 2017, the court in judicial review proceedings granted a declaration in principle that certain sections of the RTA limiting the requirement of insurance to the use of a vehicle on a road or other public place are incompatible with EU law following the ECJ's decision in Vnuk. Amendments to the legislation were required, but the court stated that to set aside any part of the domestic legislation would cause chaos. In addition, the court was not prepared to read words into the legislation to refer to the Directive, and the scope of the judgment in Vnuk was unclear. An EC consultation has since taken place and the Commission is considering legislative amendment of the Directive. There does not appear to have been any further update from the Commission or the Government on resolving this point, but judges are apparently giving the RTA a broader interpretation in some cases. It is therefore interesting that the AEVA adopts the same wording, which leads to the same confusion and potential incompatibility with EU law as in the RTA.
"Damage" is defined as death or personal injury, and any damage to property other than: the automated vehicle itself; goods carried for hire or reward in or on that vehicle or in a trailer (whether or not coupled) drawn by it; or property in the custody of, or under the control of, (i) the insured person or (ii) the person in charge of the automated vehicle at the time of the accident.
Under the AEVA, there will be a "single insurer" approach, meaning the insurer of the automated vehicle effectively steps into the shoes of the vehicle manufacturer. This means that injured parties can continue to claim against a motor insurer and will not face having to make potentially complex product liability claims against vehicle manufacturers. Under section 5, the new regime is completed by giving the insurer a right of recovery against the manufacturer, closing the liability loop. Section 2(7) explicitly states that the imposition of liability on the insurer does not affect any other person's liability in respect of the accident.
The adoption of a single insurer model is a welcome change in direction from the Government which, in consultation last year, seemed to indicate that drivers would be expected to buy a clumsy combination of compulsory motor insurance and top up product liability cover.
According to AEVA, insurers will therefore be liable by default for death, personal injury or damage to certain property which stem from accidents caused by automated vehicles in self-driving mode, provided the vehicle is insured at the time of the accident. A vehicle is "driving itself" if it is operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual. Insurers would be free to try to recover the cost of damages pay-outs from the vehicle manufacturers.
Under section 2(4), the liability of the insurer or owner of the vehicle for damage to property caused by, or arising out of, any one accident involving an automated vehicle will be limited to the amount for the time being specified in section 145(4)(b) RTA (the limit on compulsory insurance for property damage).
Contributory negligence (by both the "driver" and other road users) is a key issue when considering liability arising from accidents involving automated vehicles. It is specifically addressed in section 3 of the AEVA, which specifies that the amount of liability arising under section 2 is subject to whatever reduction under the Law Reform (Contributory Negligence) Act 1945 would apply to a claim. The section goes on to state that the insurer or owner of the automated vehicle will not be liable under section 2 to the person in charge of the vehicle where the accident was wholly due to that person's "negligence in allowing the vehicle to drive itself when it was not appropriate to do so". The insurer's liability could also be limited where an injured party is in any way responsible for the accident or the damage caused by it.
Section 4 of the AEVA provides for exclusions of liability where unauthorised alterations have been made to a vehicle or there has been a failure to update its software. An insurance policy is permitted to exclude or limit the insurer's liability under section 2(1) for damage suffered by an insured person if the accident arose as a direct result of (a) alterations to the vehicle's operating system made by the insured person, or with the insured person's knowledge, that are prohibited under the policy, or (b) a failure to install software updates to the vehicle's operating system that the insured person is required under the policy to install or to have installed. Except as provided for under this section, liability cannot be limited or excluded by a term of an insurance policy or in any other way.
These proposals are consumer-friendly as they allow injured parties quick recovery of damages, with the full attribution of liability occurring in the background. However, the exceptions to automatic liability in section 4 could cause problems in future where cover is not provided because the owner, through no fault of the injured party, failed to update the software. One has to wonder how software updates will work in practice and whether a grace period will be permitted within which the update must take place. How would this work for serious software updates related to safety issues? In those circumstances a grace period may not be appropriate and the update may be required urgently. To avoid software-related issues or accidents, it could be argued that driverless cars could, or should, be temporarily disabled until the software update has been completed. It remains to be seen how car manufacturers and software developers will approach this issue.
Criminal liability – Road Traffic Accidents and Offences
In Part 2 of this series we will consider the applicability of product liability laws to autonomous vehicles, but a similarly important issue is how criminal law, and specifically road traffic offences, will apply to autonomous vehicles and, where a vehicle is totally autonomous with no driver, who will ultimately be culpable for road traffic violations. What happens if the autonomous vehicle runs a red light or is caught speeding? Should the car owner/passenger (there being no driver) be held responsible for such offences? Who would pay the fine? Would there even be a fine? Where would the points on a licence go? Would the manufacturer be fined? Or the software developer? Could a driver be guilty of a criminal or road traffic offence if the car began to break the law and he failed to assume control of the car and prevent the further commission of an offence?
Under the Road Traffic Act 1988 the "user" is generally liable for the car's actions, but clearly this will require reconsideration in the context of self-driving cars. The applicability of criminal law and road traffic offences is going to require careful consideration. Autonomous vehicles cannot be excluded from the requirement to obey road safety laws, but how those laws will be policed and enforced in the absence of any human involvement is a difficult issue and one which existing legislation does not provide for.
Furthermore, what kind of licence will be required to operate entirely autonomous vehicles? Will drivers still be required to take a standard driving test when, in wholly autonomous vehicles, they will essentially be passengers? Will the legal age for driving still be appropriate? What training should be mandated for users of autonomous vehicles, and who will conduct it? In addition, certain road traffic offences, such as driving while using a mobile phone, would seem obsolete in their application to autonomous vehicles. Would it be permissible for the "driver" to be under the influence of alcohol or drugs?
The other criminal issue which comes to mind is the liability of hackers of driverless cars. Given their reliance on software and technology, it is possible that driverless cars will attract hackers. How will hackers be held responsible? Is the existing legislation sufficient to capture this group of perpetrators?
The Law Commission of England and Wales, in collaboration with the Scottish Law Commission, is to conduct a joint review of British driving laws, with the aim of determining who should be blamed for road accidents caused by driverless cars and criminalising hackers who target autonomous vehicles. Insurance companies have allegedly already told the Government that they will refuse to pay out if autonomous vehicles run up speeding fines. The Law Commission is to look at "how to allocate civil and criminal responsibility where there is some shared control in a human-machine interface" as well as the creation of new criminal offences for "novel types of conduct and interference" with, or hacking of, driverless cars. The Law Commission expects to publish the scoping paper for the consultation before the end of 2018. Although noting that data protection, theft and cyber security, and land use policy are integral to delivering effective policy, those issues are predominantly outside the scope of the study.
As autonomous vehicle technology advances at a seemingly exponential rate, there remains much to be done in a legislative context to prepare for the future. The AEVA is the first legislative step the Government has taken to pave the way for autonomous vehicles, and it will certainly not be the last.