The English prefix auto- is derived from the Ancient Greek αὐτός, meaning 'self', 'same' or 'spontaneous'. Fittingly, the story of technological development in the automotive industry is one of a long march towards automation. The latest, but probably not the last, chapter in this story is the advent of the driverless car. A trending topic of debate in this area is whether our current liability laws are up to the task of handling the potential consequences of the driverless car. And if they are not, to what extent, and how, do they need to be reshaped or recast to provide appropriate legal certainty and protection, without stifling innovation?

In our previous article on robotics law (see Rabota, roboti: towards a new definition for a new age of robotics) we emphasised the need to ask, when discussing each new advancement in robotics and AI, if that advancement is really any different to what has gone before. This starting question seems especially relevant when thinking about whether fundamental changes to the liability law landscape are necessary to deal with a new technology. So, with this in mind, we should first try to assess whether the driverless car is really so different to some of the other game-changing automotive technologies that are now fundamental to the driving experience.

The seatbelt, the airbag, anti-lock braking and cruise control were all designed to improve automotive safety, and all were implemented without requiring any radical reshaping of our liability laws. This is not to say there has been no litigation in relation to each such technology; it is just that our liability systems seem to have adapted and developed to deal with them. With each of these devices and technologies, it has usually been clear which category of liability laws would apply to cover the resulting damage caused. Cruise control is driver-operated, so damage resulting from its operation would usually be a matter of personal liability for the driver (perhaps tortious if the driver has been negligent, or criminal if the actions constitute dangerous driving). A defective seatbelt or airbag would typically be a product liability issue for the manufacturer. So it would follow that a driverless car crashing and causing injury due to a defect would also be a product liability issue – section 2 of the Consumer Protection Act 1987 would operate to impose strict liability on the manufacturer of a driverless car whose safety was not such as “persons generally are entitled to expect”.

Such an analysis is possibly overly simplistic, in that it ignores the fact that the more complex the technology, the larger the gap between people's expectations of it and the reality. People using driverless cars may, at least initially, have unrealistic expectations, or claim they did not (or could not) fully appreciate the cars' safety features. Equally, manufacturers may claim that driver-operators could (and should) intervene to avoid damage more often than they do. There are likely to be ‘grey areas’ where the contributing causes of damage are unclear. This problem is usually most acute while a new technology is still nascent; the risks to manufacturers should reduce as people become more familiar with it, but at the outset we can expect a lack of clarity and, accordingly, the potential for dispute.

Does this potential for uncertainty as to liability distinguish the arrival of the driverless car from the advent of other game-changing automotive developments? The Department for Transport (DfT) doesn’t seem to think so. As part of its recent review of laws and regulations relating to driverless cars, it said: "for cars with high automation, we consider that the situation would not be significantly different to the current situation with technologies such as ABS [anti-lock braking] and ACC [adaptive cruise control], where malfunctioning can cause collisions and injuries. It is anticipated that the regime of strict manufacturer liability would continue to apply.”

It is important to note that the DfT's remarks above relate to cars with high, rather than full, automation. In the former case, technological failures trigger 'failsafe' features which warn the driver that they need to take back control of the vehicle. In contrast, the fully automated driverless cars of the type Google is demonstrating often have no human presence at all and need to be able to ‘sense, plan and act’ for themselves. This is a key distinction from what has gone before: all the other developments and technologies in this industry have involved, or at least permitted, a degree of driver oversight or control. If there is no human presence in the car, this is clearly not possible.
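For the uninitiated, a ‘sense, plan and act’ system can be pictured as a continuous control loop. The sketch below (in Python) is purely illustrative: every function name is hypothetical, and it is no more than a caricature of a real autonomous driving stack, included only to show where the human driver has disappeared from the loop.

```python
import time

def read_sensors():
    """Hypothetical stand-in for the vehicle's cameras, radar and lidar."""
    return {"obstacle_ahead": False, "speed_mph": 28.0}

def plan_action(world):
    """Hypothetical planner: decides what to do with no human in the loop."""
    if world["obstacle_ahead"]:
        return "brake"
    return "maintain_speed"

def apply_controls(action):
    """Hypothetical actuator interface: steering, throttle, brakes."""
    print(f"Executing: {action}")

if __name__ == "__main__":
    for _ in range(3):                  # a real vehicle loops continuously
        world = read_sensors()          # sense: build a model of the world
        action = plan_action(world)     # plan: choose an action independently
        apply_controls(action)          # act: carry the decision out physically
        time.sleep(0.1)
```

The point of interest for lawyers is that every step in this loop is performed by the vehicle itself; there is no stage at which a driver is warned or asked to intervene.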

It is this distinctive feature of fully automated driverless cars that warrants discussion. They store and use information, make decisions independently and physically act upon those decisions. This means that, as tangible physical objects autonomously navigating public spaces, fully automated vehicles are capable of causing real-world damage independently of their manufacturers, owners or operators. Who is liable for such damage? In a world in which the dividing lines between things and humans have historically been clearly drawn, the traditional legal doctrines used to apportion blame to legal persons, such as product liability and the law of negligence, do not appear to provide obvious solutions. One answer might be to assign some form of legal personality to automated vehicles: a recognition that they can cause damage, and loss, in their own right. Does attributing legal personality to intelligent, self-determining vehicles help to allocate liability?

Without an accepted, established set of practical enforcement methods, any attribution of legal personality to robotic systems is purely theoretical. What would be needed is an effective way to cover liability in the event of legal claims against robots or their owners by those who suffer damage or loss. Perhaps the best route through these issues is the route around them – we can avoid the question of the legal liability of a 'person' (be that human, corporation or robot) altogether by taking existing constructs we use to cover damage and loss and applying them to this context. These range from insurance and strict liability, to no-fault liability for robot-to-robot collisions and trust-backed liability schemes. The application of such systems to cover liability for driverless cars requires law and society to recognise that our interactions with robots equate, to a greater or lesser degree, to our interactions with humans and companies. In the case of insurance, for example, it would be the liability of the car itself that is insured, rather than that of its manufacturer or owner as under traditional motor policies.

A possible solution could rest in Sweden’s current model. The compensation and accident-prevention functions of insurance are separated under the Swedish system, entitling victims to be compensated by insurers and allowing insurers to decide whether to pursue product liability claims on the basis of objective evidence (including data from ‘black box’-type devices fitted in vehicles). The Swedish system is not without drawbacks and is heavily reliant on state funding, meaning that transplanting the concept to other jurisdictions may prove challenging.

If driverless cars are to succeed as a technology, safety is paramount. As product liability law is likely to form the basis for apportioning fault, manufacturers may be reluctant to enter, much less create, a market for autonomous vehicles with a product that does not meet objective safety and quality standards. Equally, insurers need to be satisfied that driverless cars are safer than the ordinary variety before they can be comfortable insuring the risk. Since the victims of any accidents will be compensated by the insurer, the insurer needs to be confident that the technology will produce fewer accidents and therefore fewer payouts. This safety standard could be measured statistically, or against an objective benchmark such as ‘the best human driver’.
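By way of a deliberately simplified illustration of what ‘measured statistically’ might mean in practice, an insurer could compare accident rates per million miles driven across fleets. All figures in the Python sketch below are invented for the example:

```python
def accidents_per_million_miles(accidents, miles):
    """Normalise accident counts so fleets of different sizes can be compared."""
    return accidents / (miles / 1_000_000)

# Invented figures, for illustration only.
human_rate = accidents_per_million_miles(accidents=4_200, miles=2_000_000_000)
driverless_rate = accidents_per_million_miles(accidents=3, miles=5_000_000)

print(f"Human-driven: {human_rate:.2f} accidents per million miles")   # 2.10
print(f"Driverless:   {driverless_rate:.2f} accidents per million miles")  # 0.60
if driverless_rate < human_rate:
    print("Driverless fleet is safer than the human baseline on this measure")
```

A real actuarial assessment would of course account for severity of accidents, driving conditions and the statistical significance of the (currently small) driverless mileage base.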

As with any new technology, the impact of driverless cars on public consciousness is key; accidents will be inevitable, and those resulting in death are likely to feature prominently in the media and in public reaction. A comparison with passenger aircraft is useful here – statistically, air travel is significantly safer than driving a car, but the relative infrequency of air disasters, and the large number of people potentially affected by one, attracts significant media coverage and has a deep-rooted effect on society. By contrast, road users generally accept that car travel comes with inherent risks, which means the majority of accidents go unreported. It is doubtful that accidents involving driverless cars would be afforded this luxury.

In the meantime, governments trialling driverless cars are busy reviewing the fitness for purpose of existing legislation to cover some of the potential issues posed by this new technology. The UK Government recently announced that projects across London, Bristol, Coventry and Milton Keynes will host the testing of driverless cars on public roads from 1 January 2015. The Bristol project aims to investigate whether autonomous vehicles can improve road safety and ease urban congestion. The consortium behind the project includes the insurance group AXA, and there is an emphasis on measuring the reaction of the public, as well as on discussing issues of insurance and liability. Sooner or later, governments and manufacturers will need to confront the liability issues arising from intelligent, autonomous, self-determining vehicles, and in doing so protect the public without unduly stifling creativity and innovation in this rapidly emerging field.