The United States Senate Committee on Commerce, Science, and Transportation has approved the AV START Act, allowing manufacturers of autonomous cars to produce and test up to 80,000 vehicles a year over the next three years. The bill exempts them from certain current safety standards so that technology firms and the car industry can innovate and develop these technologies more rapidly, particularly artificial intelligence and the components needed to process the huge amounts of data an autonomous car requires to operate safely and efficiently. This development is progressing considerably faster than is generally thought: several manufacturers claim they will be able to launch autonomous vehicles within three or four years. After data collection and work by federal regulators, the US expects to have the safety criteria for this means of transport finalized in the next few years.

However, legal regulation is not evolving at the same pace as the technology. This is partly due to a fear of over-regulating an emerging market and thereby undermining its potential for growth and adoption, but also to the aim of providing consumers with sufficient protection in a very delicate environment. Fascinating and far-reaching legal questions arise in relation to this revolutionary model. The movie "I, Robot" comes to mind when one reflects on the dilemma that cars with artificial intelligence will face when deciding how to manage an unavoidable accident. Will it be legal for the occupant of such a vehicle to be under the influence of alcohol? And what is the limit of inherent human liability, given that the car still has a steering wheel?

Apparently, some governments are guiding this sector toward self-regulation while aiming to strike a fundamental balance that ensures reasonable safety standards. What or who counts as the "driver" of these vehicles requires definition, as does who is liable in the event of an accident: the manufacturer of the autonomous car, or the person being transported in it (given that he or she supervises the vehicle's actions)? The vulnerability of these cars to cyberattacks and hacking must also be taken into account, which explains why one major manufacturer has announced an alliance with start-ups specializing in cybersecurity. Manufacturers will bear a great deal of responsibility for ensuring that precautions are taken to prevent malicious attacks on the relevant computer systems. Furthermore, when analyzing accidents, insurance companies will need access to the personal data generated by drivers; such information falls under personal data protection regulations and will require legislative development.

The Spanish government supports the Amsterdam Declaration, the first EU text on cooperation in automated and connected driving, which seeks to establish a common European framework stipulating the necessary technical and legal requirements for the sector by 2019. In addition, in 2015 the General Traffic Directorate established a regulatory framework for testing automated vehicles on public roads (Instruction 15/V-113).

In 2016, the Spanish Public Prosecutor's Office stated that new regulations on civil liability and insurance were needed, as it foresaw a future shift in liability from the drivers of the car to the companies that manufacture it, create its software, or even develop its cartographic maps. In any case, it is also important to analyze these measures from an overall economic standpoint, because an inefficient liability model could disincentivize the development of this type of technology.

In any case, safety is one of the essential keys to this matter, if not the most important one. According to the California State Government (based on tests performed in the state last year), autonomous car prototypes made a mistake every three hours on average. Manufacturers performing tests must file a detailed report every January on the number of errors detected and their causes. The reports must identify situations where the driver had to take control of the car due to hardware or software failures, or because the driver sensed there was some type of problem. Overall, the nine US companies involved in this type of project reported 2,578 incidents in 2016.
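To put figures such as "a mistake every three hours" in context, the following minimal sketch shows how a disengagement rate might be derived from report-style data. The company names, mileage, incident counts, and assumed average test speed are hypothetical placeholders for illustration, not values taken from the actual California reports.

```python
# Minimal sketch: deriving disengagement rates from report-style data.
# All company names and figures below are hypothetical placeholders,
# NOT values from the actual California disengagement reports.

reports = [
    # (company, disengagements reported, autonomous miles driven)
    ("CompanyA", 124, 635_000),
    ("CompanyB", 182, 4_100),
    ("CompanyC", 28, 550),
]

AVG_TEST_SPEED_MPH = 25  # assumed average test speed (illustrative)

for company, disengagements, miles in reports:
    per_1000_miles = 1000 * disengagements / miles
    hours_driven = miles / AVG_TEST_SPEED_MPH
    hours_between = hours_driven / disengagements
    print(f"{company}: {per_1000_miles:.2f} disengagements per 1,000 miles, "
          f"roughly one every {hours_between:.1f} hours")
```

Under these illustrative assumptions, a statement like "one mistake every three hours" is simply the ratio of autonomous hours driven to disengagements reported, which is why the annual reports must record both the incidents and the circumstances in which they occurred.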