Last month’s Product Liability Monitor post, “Driverless Cars and the Law,” asked, “What happens when a driverless car is involved in an accident?” How will courts react when a serious, even fatal, accident occurs? These questions became more immediate when a Tesla Model S, driven by its owner, Joshua Brown, ironically an advocate of Tesla’s “Autopilot” feature, was involved in a fatal crash while operating in Autopilot mode.
According to a Tesla company blog post, the vehicle, while operating in Autopilot mode, collided with a semi-trailer that suddenly crossed perpendicularly in front of it. Ordinarily, Autopilot uses a front-mounted camera to “see” and avoid cross traffic. Tesla says that neither the driver nor the Autopilot system applied the brakes because neither detected the white tractor-trailer against the background of the brightly lit sky. The Model S then drove under the trailer, which severed the top half of the vehicle, killing Brown.
More recently, a Tesla Model X SUV hit a concrete median strip and rolled over, reportedly while the Autopilot system was engaged. Tesla says it cannot confirm this, however, and a police report indicates that the driver will be charged.
One of Tesla’s selling points is its excellent reputation for safety. In 2013, the Model S achieved the highest overall vehicle safety score of any car the NHTSA has ever tested. The first fatal accident involving an Autopilot-equipped vehicle was a blot on Tesla’s otherwise enviable record.[i] Tesla’s Autopilot feature was released in October 2015 and has since been closely scrutinized by government regulators and rival carmakers. Tesla’s system is notable as the first with semi-autonomous driving capability, allowing drivers to remove their hands from the steering wheel entirely. Although the Autopilot system is still in “beta testing,” thousands of vehicles equipped with it are already in service.
In its blog post, Tesla emphasizes that Autopilot is disabled by default and that, before it can be enabled, the driver must explicitly acknowledge that the system is “in a public beta phase,” that it “requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while it is activated. Further, whenever Autopilot is engaged, it reminds the driver to “[a]lways keep your hands on the wheel [and] [b]e prepared to take over at any time.” Tesla explains that Autopilot frequently checks that the driver’s hands are on the steering wheel and audibly and visually alerts the driver if their hands are not detected. The car gradually slows down until the driver’s hands are detected on the wheel again.
So far, federal safety regulators’ response has been measured. They have not issued a recall or a public bulletin advising owners not to drive the vehicles.[ii] Instead, on June 29, 2016, the NHTSA opened a preliminary investigation into the design and performance of Tesla’s Autopilot function.
To date, no lawsuits have been filed in connection with such accidents, but as the number of autonomous vehicle systems in service grows, so will the body of incidents in which the technology is implicated. As a result, new liability theories against manufacturers will likely evolve, based on defective software design, failure to warn or to adequately warn of the systems’ capabilities and limitations, and other grounds. Claims may be made that the systems degrade drivers’ ability to act quickly and effectively when necessary.[iii] Driverless car technology will surely bring forth new law to address it, as “horseless carriage” technology did over a century ago.