It may seem straight out of the pages of a science fiction novel, but Uber’s recent pilot of its self-driving fleet suggests that driverless cars may be with us sooner than we think. With the new possibilities comes the potential for risks that we as lawyers, and society generally, have never faced before. Will the law need to be more hands-on before driving can be hands-off?

A recent government consultation paper (Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies) defines driverless cars as cars “that can drive themselves for some or all of the journey without human intervention”. The technology is developing along two different paths: advanced driver assistance systems (ADAS) and automated vehicle technology (AVT). With ADAS, only some functions are automated; the driver must still actively monitor the system’s performance and retains ultimate control. This is a concept we are already familiar with and includes Tesla’s Autopilot driver assistance system, which allows a car to automatically stay in lane, adjust its speed relative to other vehicles and change lanes. AVT seeks to remove the need for any driver intervention, creating a truly ‘driverless’ car. This is the technology that companies such as Google and Uber are developing.

With Uber’s pilot, driverless cars are now available to the public. Since mid-September 2016, select customers making journeys in downtown Pittsburgh, Pennsylvania, between 7am and 10pm have been collected by one of Uber’s driverless Ford Fusion Hybrids. The cars are fitted with mounted cameras, lidar and radar modules and antennas providing GPS positioning, using lasers to map out the car’s surroundings and enabling it to “see” and avoid obstacles. For now, however, each car is accompanied by two Uber technicians: one sitting behind the wheel to take over when needed and one to monitor the car’s performance.

Driverless cars raise interesting questions that governments and lawyers will need to address.

In the event of an accident, the law currently focuses on the actions of the driver. If the car’s computer has complete control of the vehicle, the law may consider that fault should lie with the car and therefore with the car’s manufacturer. Manufacturers will resist this, and is it appropriate for the car “operator” (the person who formerly would be regarded as the driver) to abdicate all responsibility? If the operator holds a UK driving licence and sees the potential for an accident, should they stop the car? What if they are not able to do so? Driverless cars may also need software updates to be downloaded to maintain optimum performance and, to ensure that cars are safe, it may be necessary to impose liability on owners who fail to install the most recent software.

The idea of a “driver” (i.e. the person operating the vehicle) will change, and this may cause challenges. A stated advantage of driverless cars is that they will give access to people who currently cannot drive, including children and people with certain health conditions. In theory, this would allow cars to be operated by people whom the law does not currently consider capable of controlling a car and who could not be held to the same standard as a current driver. Driverless cars may also be used by people under the influence of alcohol or drugs, in conflict with current criminal law. However, if it remains technically possible for the operator to intervene, it seems likely that the law would require such a person to have sufficient training in the particular operating system and to retain, at all times, the capability to intervene. Ironically, such legal responsibilities may only be avoided where the operator has no ability to intervene at all. At that point, notions of driving or operating a car would become irrelevant; all those travelling in such cars would simply be passengers.

The law governing car accidents has developed over many years to adapt to human decision making and human failings. For example, for the offence of causing death by dangerous driving under the Road Traffic Act 1988, the test is whether the driving falls far below what would be expected of a “competent and careful driver”. How does the law judge a machine’s decision making, and do we know enough about its failings?

There is also the issue that what a computer “decides” (i.e. how it reacts in a given situation) is not determined in the moment but set in advance by programmers. This creates a moral dilemma in a situation where an accident is inevitable. If, for example, a driverless car could not avoid hitting one of two pedestrians in its path, who should the programmers decide to save? Similarly, should the car prioritise the safety of its passengers over other road users?
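To make the point concrete, the short Python sketch below is a purely hypothetical illustration, not a description of any manufacturer’s actual software: the ordering of the rules, chosen by programmers long before any journey, is what determines how the car will react. Every name and rule in it is invented for illustration only.

```python
# Hypothetical sketch only: it illustrates that a driverless car's "decision"
# is a rule its programmers fixed in advance, not a judgement made in the
# moment. It does not reflect any real manufacturer's software.

def choose_manoeuvre(obstacles, occupants_at_risk):
    """Return an avoidance action based on pre-programmed priorities.

    obstacles: list of dicts such as {"type": "pedestrian", "avoidable": False}
    occupants_at_risk: True if braking or swerving would endanger the passengers
    """
    # Priority 1: brake if every obstacle can still be avoided.
    if all(o["avoidable"] for o in obstacles):
        return "emergency_brake"

    # Priority 2: when harm is unavoidable, the programmers must have decided,
    # ahead of time, whether the car protects its passengers or other road
    # users. This line embodies the moral dilemma discussed above.
    if occupants_at_risk:
        return "protect_passengers"            # one possible pre-set choice
    return "minimise_harm_to_pedestrians"      # the alternative pre-set choice


# Example: an unavoidable pedestrian ahead, passengers not endangered by braking.
print(choose_manoeuvre([{"type": "pedestrian", "avoidable": False}], False))
```

Whichever branch the code takes, the outcome was settled when the software was written; the “decision” in the moment is simply the execution of that earlier choice.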

There are also security concerns. Driverless cars may need to record and store data to help determine fault in the event of an accident, which raises data protection issues, and there is evidence that the technology behind driverless cars can be hacked.
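As a purely illustrative sketch (no real standard or manufacturer’s format is implied), the kind of record such a car might keep could look like the following. It shows why the data is useful for establishing fault, because it captures who was in control and where, and why it is also sensitive, because it amounts to a precise movement history.

```python
# Hypothetical illustration of the kind of record a driverless car might keep
# to help determine fault after an accident. No real format is implied.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentRecord:
    timestamp: datetime        # when the event was logged
    latitude: float            # precise position: useful for fault-finding,
    longitude: float           # but also personal and therefore sensitive data
    speed_mph: float           # vehicle speed at the time
    control_mode: str          # "automated" or "operator", i.e. who was "driving"
    operator_intervened: bool  # whether the human operator took over

record = IncidentRecord(datetime(2016, 9, 14, 8, 30), 40.4406, -79.9959,
                        24.0, "automated", False)
print(record)
```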

These issues are pressing because the technology is still highly experimental and, as Uber has admitted in its pilot, accidents are inevitable. In early testing, Uber required users to sign a waiver accepting that riding in the car “may involve the potential for death, serious injury, and/or property loss”, exempting Uber from liability. Such a waiver would be prohibited under English law, where liability for death or personal injury caused by negligence cannot be excluded by contract.

The UK Government’s response so far

The UK Government has expressed a commitment to developing and testing driverless cars in the UK, and the recent public consultation by the Department for Transport and the Centre for Connected and Autonomous Vehicles, entitled “Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies”, considers whether this can be supported by the current regulatory regime.

The report identifies ADAS (i.e. driver-monitored systems) as likely to come to market in the near future and therefore proposes, for now, changes only to the insurance framework. These would extend compulsory motor insurance to require drivers to hold product liability cover, covering injuries to themselves and to third parties caused by a fault with the car. Motorists and manufacturers would then rely on existing rules of product liability to determine fault.

However, the proposals rely on a qualified driver remaining behind the wheel and being able to take control, so most of the current law, for example the prohibition on drink-driving, still applies.

The consultation paper acknowledges that further changes will be necessary, but notes that the course of development, and therefore what will be required, is difficult to predict. It recommends a rolling programme of reform, with the Government and lawyers working with the industry as more advanced products come to market to identify the challenges and implement solutions.

It looks like driverless cars will have to get even closer to market before we see how the law will adapt.

The Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies consultation paper can be found at: https://www.gov.uk/government/consultations/advanced-driver-assistance-systems-and-automated-vehicle-technologies-supporting-their-use-in-the-uk.