The hopes, expectations, and fears surrounding AI are growing day by day. Hopes and expectations include safer roads through autonomous cars and better healthcare through improved disease diagnosis. Fears include losing jobs to AI or to robots without human operators. The AI around us today still seems fairly primitive, but it is increasingly prevalent. It does not take much imagination to see the huge disruptive potential it will no doubt have on our day-to-day lives in the not-so-distant future.

"AI" encapsulates a broad range of concepts and can be defined in multiple ways, but the following definition has merit: a field of computer systems that are able to perform tasks traditionally requiring human intelligence, without requiring additional programming. AI systems can learn from observations and data and act autonomously.

With businesses in all sectors increasingly exploring and relying on AI, what are the key legal issues businesses deploying AI need to be aware of? No doubt, the key legal issues are not confined to one area of law but span the whole spectrum. In terms of the most novel issues, intellectual property, product liability, discrimination and privacy would need to be shortlisted.

Intellectual property

Machine learning means that AI will typically go through phases of creating, testing and selecting. That is, it will not just generate one output, but create a number of potential outputs, test them, and then select the best output based upon what it has "learned". It is these latter two phases of testing and intelligent selection that were not anticipated by most existing IP legislation dealing with machine-generated output. For the time being, AI is still largely a useful "tool" for humans. Works created through "use" of AI may still fit under existing IP legislation, which often presumes a human being as the author of a work.
But increasingly, the question will arise as to how to protect content that is autonomously "created" by AI systems, independent of humans. If a work is of sufficient quality that it would be protected had a human created it, then denying protection because AI created it does not seem logical. Should ownership vest in the AI company that developed the underlying algorithm? Or should it vest in the party that instructs the AI, and perhaps inserts variables? For the time being, contractually agreeing IP ownership is the most sensible approach. Going forward, it will be interesting to see how legislation in this area develops.

Product liability

If a faulty product or poor service is delivered by traditional, non-AI means, the laws are fairly well established on determining where liability should lie. The situation becomes less clear where AI is involved. Due to the complexity of AI systems, allocating liability raises many questions, with court proceedings likely requiring costly expert advice. The AI developer may not know why the AI "selected" an output. Could there have been a fault with the algorithm? Could the data set have been corrupt? Was there insufficient data for the AI to make an informed decision? Could the user have misguided the AI somehow? While the reduction of human involvement will in many ways reduce the risk of error, ironically, liability risks for manufacturers might actually increase. Product liability laws may also leave room for uncertainty. Can an AI action be regarded as a "defect" of the AI product as such?

Discrimination

Similar to the issues surrounding product liability, the lack of transparency in how an AI algorithm reaches its output could raise discrimination issues. The IT industry has been in the press lately over its lack of diversity, particularly among software developers. Could a developer of an AI system somehow be influencing how the AI makes its selections?
AI systems that may come under scrutiny here include systems that categorize people, whether for employment purposes, medical purposes, or for generating a criminal profile. With AI increasingly able to detect sensitive information such as sexual orientation or health risks from a mere image, care is needed to ensure AI systems comply with anti-discrimination requirements. Where this is not the case, would it be a defense for a decision maker simply to state that they were using an industry-standard AI program to make their decision?

Privacy

AI relies heavily on data processing and will in many ways clash with core data protection principles (such as data minimization and purpose specification). In the EU, the General Data Protection Regulation provides safeguards in automated decision making and profiling, such as a right to obtain an explanation of the decision and to challenge it. This would appear difficult for AI systems, where decisions are based on the AI's own complex learning process. Obtaining informed consent could also be a challenge if the AI developer cannot explain how the AI will autonomously use data. Further risks arise in relation to the right to be forgotten: can AI fulfill the requirements where it has already learned from the data? It is difficult to assess how practical the right to be forgotten could be for AI.

Where to now?

Regulators have the option of being proactive in creating legal frameworks now, or reactive by addressing issues as they arise. In Japan, the IT Strategic Headquarters is considering legal responsibility for autonomous car accidents, and a Japanese IP taskforce is considering IP issues in AI-created works. Similarly in the EU, the European Commission is currently assessing how AI issues may be addressed, following a request for legislation by the European Parliament, especially regarding civil law liability.
Given the cross-border reach of AI, one would hope for international cooperation and regulatory harmonization. Until laws and regulations have caught up with the technology, businesses would be prudent to take a forward-thinking approach, anticipating potential legal issues and appropriately allocating risk and responsibility.

AI and legal services delivery

As for legal services, we anticipate that the development of AI will have a major impact on how legal services are delivered in the future. While there is certainly an element of hype in the market, we too have to tackle issues such as the ones above. In the short to medium term, that involves deploying AI-enhanced tools where they are ready for market, eg, launching eBrevia and Relativity globally, and preparing our data infrastructure now for the possibilities to come. Looking out three to five years, we see a world where many legal business lines have been redesigned from the ground up to embed AI into the workflow. We will be deploying machine-learning-enabled judgement at scale. What will that ultimately look like? We do not know yet. How will we train and retain the lawyers of the future? We are thinking hard about that. What we do know is that we do not have all the answers. We are engaging with our clients, industry, regulators, academia and even our competitors to figure it out. We see it as a shared challenge, and more importantly a shared opportunity. If you want to learn more about what we are doing, please get in touch.