Artificial intelligence (“AI”) and autonomous vehicle technologies (“AVT”) have the potential to redefine how the aviation industry operates. While the operational changes that these technologies will bring are being widely explored, the legal issues raised by their rapid introduction into the industry are not. In this two-part series, we look at applications for AI in aviation and their effect on the legal liability and regulation of those who use them. See Part 1 here.
What are the legal issues?
The most interesting legal issue surrounding these technologies will not emerge unless and until a robot or other type of machine becomes self-aware. At that point, the world will have to deal with many ethical and philosophical questions that are well beyond the scope of this article. Many countries and governmental entities, however, are already on the road to regulating other aspects of AI.
Even though many of the emerging legal issues are starting to be recognized, one thing is certain: The law will significantly lag, not anticipate, the legal issues brought forth by this rapidly evolving technology because AI computes faster than legislatures can act. We can say that with metaphysical certainty because the law is always slow to adapt to technology, especially technologies changing as quickly as AI and AVT. That lag is where attorneys, who find themselves dealing with these issues, will earn their money and make a little legal history.
Regulation, Insurance and Public Policy
The European Union has created a resolution called the European Civil Law Rules in Robotics, which is the first step on the road to the regulation of AI in the EU. While the resolution is not binding, it raises intriguing policy questions that lawyers everywhere must consider.
For example, for accidents caused by AI applications or AVTs, should the liability regime be (a) “no-fault”; (b) negligence-based; or (c) proportionate to the level of instructions provided by humans and the degree of autonomy given to the machine or application? Should the use of these technologies be complemented by a mandatory insurance scheme or a national or worldwide victim compensation fund? Should “smart streaming” (a standardized method of sharing data among transport infrastructure) be mandated, or permitted to be a factor in setting insurance rates? Proving causation in a fault-based system will be an extraordinary challenge: machine learning systems can quickly evolve beyond their direct programming, and when they do, attributing fault will be no simple matter.
The EU resolution also delves into a broad number of ethical and public policy considerations such as:
(1) protecting humans against “manipulation” by robots;
(2) ensuring equal access to these technologies;
(3) “avoiding the dissolution of social ties”; and
(4) protecting human liberty and privacy.
These important policy issues are, however, well beyond the scope of this article, which is focused on the emerging legal issues in this area, rather than public policy questions that will be debated for years.
The Legal Risks
There are many obvious legal issues that will apply to every industry, not just to aviation. Among these are:
- Labor issues. People are going to be displaced (and displeased). This is going to cause problems under every existing union labor agreement. Don’t forget, however, that new technologies almost always create new jobs and careers.
- Your AI may illegally discriminate. AI is in large part about pattern recognition and algorithms. The “trained” algorithms may unintentionally discriminate against job or credit card applicants because they mirror the illegal preferences of their creators and users. Someone will have to decide whether to emphasize qualifications over diversity, and such choices will have consequences that could be illegal.
- Privacy and “predictive analytics” will be ongoing issues. Facial recognition and other AI technologies will make it possible to track anyone moving through public spaces and to create a data stream of those movements that could be used, just like cell phone records, for a wide variety of good and bad reasons. Additionally, the huge data sets necessary to train algorithms require collecting information from millions of consumers. This collection and aggregation of information already allows machines to analyze and predict what movies we watch, and they may soon be able to predict individuals’ movements, romantic preferences, and financial decisions.
- Intellectual property theft. Developing and training AI algorithms is expensive, difficult work, and companies investing in these applications must be vigilant against both insiders and competitors who might wish to steal or otherwise reverse engineer proprietary AI systems.
- Intellectual property ownership. Your AI might create intellectual property. Who owns it (the answer is not obvious if you bought the AI from a third party and there is no work for hire agreement or patent assignment in place)?
- Cyber threats. AI can and will be hacked; data will be dumped; people may be injured, robbed or killed as a result. Lawyers need to consider all of these threats and how to insure against them or, if you are a plaintiff’s attorney, exploit them.
The big immediate issues in aviation, however, are going to be: (1) theft of customized AI algorithms by competitors, cyber criminals, or nation states; (2) liability for accidents involving AI or AVT; and (3) liability for privacy violations. Here, however, we focus on just one of those legal risks – accident liability.
No matter how good this technology becomes, it will never be infallible. Accidents will happen. Moreover, even perfect technology will not be able to avoid certain accidents, such as a person darting out into the road from behind a barrier; a tree falling on a moving car; or a meteorite striking an airplane. No matter how large the “big data” set of experience becomes, there will always be a new failure mode that no one has seen or could have reasonably considered. AI may also be forced to make impossible ethical choices, such as (in the event of an equipment failure) whether to crash-land the plane in a remote area or attempt to land it on a highway where the landing will cause certain deaths on the ground, but fewer total deaths than if the plane crashed. Such cases will be a boon for the plaintiffs’ bar, but the bane of the defendants.
Just consider this thought experiment. Approximately 40,000 people die each year on American highways. If everyone used AVTs, 30,000 of those deaths could be prevented. That is a remarkable benefit to society. Nevertheless, there could still be 10,000 wrongful death cases.
The inevitability of certain accidents will not be even a speed bump for the plaintiffs’ bar, nor will the statistical arguments made above persuade juries when they decide liability and damages; in the average juror’s mind, these algorithms are still more closely identified with science fiction than with daily life. Until the law (and public understanding) catches up with this technology and creates either strict liability regimes or victim compensation funds, lawyers need to plan for these risks and advise clients on how to mitigate them. The obvious answer is insurance, even though insurance companies will not have the data to price coverage accurately. A serious enough accident could have existential consequences for a company (and its insurance carriers) and impact the assets of company directors and officers. Lawyers – especially transactional lawyers – suddenly have to answer some very hard questions with uncertain answers.
Managing Legal Risk
Although the liability risks pose challenges, there are steps that can be taken to manage those risks. Specifically:
- Insurance coverage. As noted above, insurance is an obvious—if somewhat novel—first step. Brokers, underwriters and others in the insurance market are wrestling with these issues. Having a frank discussion with your broker or underwriter now regarding emerging technologies and coverage seems particularly prudent given the rapid pace of change in the aviation industry.
- Understanding the threat landscape. Talking to engineers and technologists can be challenging for lawyers, but as products grow increasingly complex, it is imperative that attorneys develop a detailed understanding of concepts like AI, machine learning, big data, and autonomous vehicle technologies. In addition, consider the full spectrum of risks—operations are an obvious starting point, but IP, privacy, information security, international law, and HR-related risks should not be ignored.
- Get involved early. Counsel should either get involved or be brought in early in the design and development of planned AI or automation projects. Such early involvement not only helps ensure a deeper level of understanding of the technology to be deployed, but it also provides an opportunity for early detection of legal risks that may be difficult to detect once a product is launched (such as built-in bias in a machine learning algorithm). Additionally, an early assessment of legal risks can help management measure the true risks and costs of launching a new product, platform, or system.
- Follow and attempt to influence the regulatory action. Federal and state regulators are watching this space closely, and monitoring those regulators makes tremendous sense. Influencing the direction of the law is equally important. If you are active with an industry body, getting involved in lobbying efforts could also prove very beneficial.
The legal profession has been here before. New technologies like VCRs, mobile phones, GPS, the internet, and even aircraft have posed novel legal challenges that took years to address. AI and AVT will be no different. And like earlier generations of technologies, the profession will (eventually) figure out how to regulate them. Shame on you if you get hit by an AVT train you can see coming.