The Situation: The United Kingdom is positioning itself as the "go-to" location to develop, test, and drive automated vehicles, but questions remain as to how its existing product liability regime should respond to this developing technology.
The Plan: The UK Government expects to see fully self-driving cars on UK roads by 2021. It has already enacted legislation to provide an insurance model for automated vehicles. Earlier this year, it commissioned a detailed and wide-ranging review of applicable regulations with the aim of ensuring legislation keeps pace with technological developments.
Looking Ahead: We can expect there to be a number of legislative changes in the coming years to reflect and facilitate innovation in the area of driverless cars, which may serve as an example to other European countries.
The diverging European regulations on automated driving, together with the European Commission's report on the Product Liability Directive 85/374/EEC, were addressed in an earlier Jones Day Commentary. This Commentary looks more closely at the United Kingdom.
UK Legal Regime Is Better Developed Than Most
The UK Government indicated its commitment in 2015 to developing a light-touch, minimally regulated approach to the testing and development of these technologies, in a bid to position the United Kingdom as a premium global location for their development. It has announced that it expects to see fully self-driving cars on UK roads by 2021, and automated vehicles form a key part of the "Future of Mobility Grand Challenge" commenced in July 2018 as part of the UK Government's Industrial Strategy.
Attempts are being made to ensure that legislation keeps pace with technological change and that it facilitates rather than impedes innovation. Real-world testing of self-driving vehicles on public roads is already possible, and plans are in place to allow testing without a safety driver. Legislative changes introduced earlier this year allow drivers to use technology like remote control parking on British roads.
The Automated and Electric Vehicles Act 2018—which introduces an insurance framework for these types of vehicles—received Royal Assent on 19 July 2018 but has not yet come into force. Under the Act, an insurer will have default liability for death, personal injury and property damage (other than damage to the automated vehicle itself or goods carried for hire) resulting from an accident caused by an automated vehicle when driving itself, with certain carve-outs. The Act envisages that, if there is an accident, the compensation route for the injured party would be through the insurer, and the onus would be on the insurer to recover from the responsible party (potentially including vehicle manufacturers) under existing common and product liability laws.
Product Liability Regime in the United Kingdom
In March 2018, the Law Commission began a far-reaching review of the United Kingdom's legal framework for driverless vehicles, looking at everything from road traffic legislation to product liability.
As things stand, vehicle manufacturers and other industry participants have to consider potential liability for defective products under the Consumer Protection Act 1987 ("CPA"), as well as in the tort of negligence, in contract and for breach of statutory duty (which are not examined in this Commentary). Like its counterparts in France and Germany, the CPA is based on the Product Liability Directive 85/374/EEC ("Directive"). It remains to be seen whether, following its withdrawal from the European Union, the United Kingdom will adopt a model which keeps it in step with EU product law, including any changes made to the Directive following the European Commission's report covered in our earlier Commentary.
Currently, the CPA imposes a strict liability regime on manufacturers, importers and "own branders" to pay compensatory damages to persons who suffer personal injury or damage to noncommercial property as a result of a defective product. The application of these principles to automated vehicles and the related technology gives rise to complex issues which will need to be addressed. For example:
A product is defective where "the safety of the product is not such as persons generally are entitled to expect." One of the key anticipated benefits of this technology is a reduction in accidents (suggesting the safety of the average human driver might be too low a comparator), but it is not clear where on the spectrum (between human driver and the notional "infallible" driver) the standard will be set.
It is a defense to demonstrate that the state of scientific and technical knowledge at the time did not allow discovery of the defect. In an area of rapidly changing technology, the scope of this defense will need constant reassessment; its application to software incorporating self-learning algorithms will be of particular interest.
It is not clear how the law will deal with situations in which autonomous-vehicle software takes a deliberate action which inflicts a particular type of damage in order to avoid another (potentially more serious) incident, particularly when that choice occurs as a result of the vehicle operating correctly in accordance with its coding.
With consumer expectations high, but knowledge and understanding of these complex new technologies low, proper training, together with explicit and detailed warnings clearly brought to the attention of users, will be of critical importance.
It is hoped that the Law Commission will make recommendations which deal with these issues, but if not, we can expect them to be a source of future litigation.