The Counsel Connect+ roundtable examined the ethics of AI development from several angles, identifying the challenges from regulatory, policy and business perspectives.

There is much commentary about how the advent of artificial intelligence (AI) is going to change the world, in some exciting and some challenging ways. So, what is AI? And what does it mean if you are sitting in the general counsel's (GC) hot seat, determined to support your company's success while keeping on the right side of the line?

The important thing to understand about AI in its current form is that it is not a replacement for human intelligence, empathy, intuition or genius. It is, in a sense, neither artificial nor intelligent. It is pattern recognition, something that has been around for thousands of years. What is different is that patterns can now be recognised across extraordinary quantities of data, sometimes seemingly unrelated, and at extraordinary speed, driven by complex algorithms and unprecedented processing power.

Below are some examples of the sorts of questions a GC might be asked, or might be asking.

Market analytics systems will be increasingly capable of predicting market reactions to products and pricing, and will be able to drive key product and marketing decisions. If you unwittingly buy the same market analytics system as your closest competitor, and if the datasets those algorithms analyse are broadly the same, it is likely that you will make very similar product, marketing and pricing decisions to those of your competitor. What are the competition law implications of that? If you try to acquire exclusive access to a major dataset that vastly increases the sophistication of your market analysis relative to that of your competitor, is your behaviour monopolistic in its effect?

As data becomes the driver of competitive performance for virtually every type of economic activity in every sector, questions of data access and data sharing will assume much greater importance in commercial relationships. The GDPR regulates this for personal data, but there will be enormous quantities of non-personal industrial data of huge potential value to a range of economic actors and, in many cases, to society. Do you know how to value data, both to your own organisation and to another? How confident are you in contracting data relationships? To what extent would data trusts help to regulate data relationships?

You buy a fantastic AI-based system to manage your plant. With self-learning algorithms, it continually improves efficiency and output. One day it goes wrong, causing a major pollution incident or a workplace accident. Where does the liability lie? It is your plant. The management system came from a supplier, but the data on which it has based its "decisions" is partly yours and partly from external sources. Did the incident happen because of faulty data in one of the data sources? If so, how do you find out which one? Were the algorithms badly written? Or was the system poorly implemented in your plant?

As you embed AI-driven robotics and management systems into your operations, you need fewer and fewer people. Other employers in your area are doing the same. Everyone is convinced that as technology squeezes existing employment, it also creates new opportunities. How quickly will those new jobs come? How well will the workforce that you and others have made redundant adapt to the new kinds of jobs that might be created? How will society (and, as a result, politics) respond if you are seen to be thriving by employing fewer people? Can the responsibility of a company in these circumstances be limited to fulfilling existing contractual terms with the employee? Or do you have a wider responsibility, acting with others, to help ensure that the workforce is fit for the future? If so, how?

These are just a few of the questions that you might face.