Self-driving or “autonomous” vehicles may well be one of the greatest technological advancements of our day. The technology has enormous potential to save lives, make travel more efficient, and benefit the environment. Here at Weil’s Product Liability Monitor, we have kept a close eye on developments in this exciting area; for a few of our recent posts, see here, here, and here. In those posts, we discussed some of the legal implications of “driverless” technology and how this is yet another instance where technology gets out in front of the law. The current regulatory patchwork certainly needs to catch up, and the National Highway Traffic Safety Administration (“NHTSA”) appears to have taken a big step in that direction in a recent letter to Google relating to its self-driving cars. In that letter, available here, the agency said the artificial intelligence system operating Google’s self-driving car could be considered the “driver” under federal law. As Reuters reported, this is “a major step toward ultimately winning approval for autonomous vehicles on the roads.”
According to NHTSA’s letter, the agency “agree[s] with Google its SDV [self-driving vehicle] will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years.” The letter continues, “If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving. In this instance, an item of motor vehicle equipment, the SDS [Self-Driving System], is actually driving the vehicle.”
The Agency’s decision to treat Google’s operating software as the “driver” has significant implications. As a recent report in the Washington Post, aptly titled “Google’s driverless cars are now legally the same as a human driver,” explained:
“The decision by the National Highway Traffic Safety Administration marks a huge moment for Google and the rest of the auto industry as it races to build the first fully autonomous motor vehicle. While most other carmakers are building their vehicles with steering wheels, brake pedals and other machinery in mind, Google imagines that its robot car will have none of these things.
That raised questions about how the government would view those cars. In November, Google filed a letter with NHTSA asking as much, calling for greater clarity about the word ‘driver’ and what federal requirements Google might be subject to as a result, ranging from rearview mirrors to turn signals.”
By way of example, NHTSA’s Federal Motor Vehicle Safety Standards (FMVSS) contain provisions requiring that the “operator” of a vehicle use turn signals; that a “driver” be able to switch between low- and high-beam headlights; that a vehicle have a “driver-operated accelerator control system”; that the transmission shift position sequence (e.g., “park,” “reverse,” “drive”) be “displayed in view of the driver”; and that the vehicle include brake pedals and parking brakes that operate independently of one another by “hand or foot control.” The list goes on and on, and it is easy to see how confusion can quickly arise when these regulatory standards are applied to a vehicle that is not being operated by a human.
While equating Google’s software with the “operator” or “driver” — and agreeing that the car itself can control the brakes (since the vehicle lacks “hand or foot” control) — certainly eases some of the regulatory hurdles, there are still other requirements that NHTSA suggested Google “may wish to petition [the agency] for exemption” from (e.g., a requirement that a turn signal “pilot indicator” be visible to the driver when a turn signal is activated).
NHTSA’s regulatory support is obviously a significant step, but there are, of course, many liability questions that remain to be answered. A recent article in the Washington Post — “The big question about driverless cars no one seems able to answer” — contains a thorough discussion of just some of the potential liability issues that will need to be sorted out. These include:
Who is responsible for a driverless car crash? The auto manufacturer?
Will auto insurance requirements change? Will auto insurance rates change?
While NHTSA has weighed in at the federal level, what about state laws that are inconsistent or non-existent? What if all 50 states define “driver” in different ways? What if a state (as California is currently proposing) passes legislation requiring that all self-driving cars be “driven” by a licensed driver behind a physical steering wheel at all times?
Will there be an impact on who can obtain a driver’s license?
What if a human driver is in a “driverless” car and sees that the vehicle is about to get into an accident? Does the human bear any responsibility for attempting to “take control” or “reasonably prevent” the accident if (unlike in the Google driverless car) there is some ability for a human to take over the controls?
What will be the impact of warnings by automakers to drivers that the driver remains responsible for maintaining control of the vehicle at all times?
How should the law treat “drivers” of autonomous vehicles who would not be able to “drive” but for the assistance of the new technology (e.g., the elderly or the blind)?
These are just some of the issues that regulators, legislatures, and ultimately the courts will likely have to grapple with as “driverless” cars get closer to reality. We will be sure to monitor and report on developments here on our blog.