In our October Legal Update we commented on how regulators and policy makers were now engaging at a practical level with some of the challenges presented by artificial intelligence (AI) and that it was clear that underpinning some of the seemingly granular points were larger issues of ethics and accountability.
The European Commission has determined that Europe will be at the forefront of developing a workable ethical framework for AI and is forging ahead in this space. In June 2018 the Commission announced the appointment of a High-Level Expert Group on Artificial Intelligence (the HLEG), consisting of representatives of academia, business and civil society. The HLEG was tasked with making recommendations on how to address mid- and long-term challenges and opportunities related to artificial intelligence, and with preparing draft ethics guidelines. In late December the HLEG delivered its draft of the guidelines (the Guidelines).
Trustworthy AI
In preparing the Guidelines, the HLEG's approach has been to focus on the key question: ‘how can we trust such “thinking” technologies?’, stating that:
“Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.”
By “trustworthy” the HLEG means AI which:
- Respects fundamental rights, applicable regulation and core principles and values, ensuring that the purpose of the AI is to benefit humans, rather than some other end; and
- Is technically robust and reliable since, even with the best of intentions, technological failings or unforeseen deficiencies can still cause unintentional harm.
The approach of the Guidelines is practical. They do not set out an exhaustive list of core values and principles, but instead focus on how such ethical concepts can be developed and reliably embedded in AI systems.
The Guidelines follow a threefold framework, drilling down from high-level questions of policy in Chapter 1, through the challenges of implementation in Chapter 2, to operational questions in Chapter 3. Looking at these in turn, some interesting and thought-provoking points are raised, a few of which are highlighted below.
Core Objectives and Principles
A core purpose enshrined by the Guidelines is that AI should be “human-centric”: AI is not an end in itself; it should be built and deployed to benefit humans. Building on this, the Guidelines stress that the possible effects of any AI on human beings and the common good should always be prospectively evaluated. The Guidelines also note that particular attention should be paid to situations involving more vulnerable groups, such as children, persons with disabilities or minorities, or to situations with asymmetries of power or information, such as between employers and employees, or businesses and consumers.
Core concepts to be embedded would include fundamental rights, societal values and the ethical principles of “Beneficence” (i.e. do good), “Non-Maleficence” (i.e. do no harm), Autonomy of humans, Justice, and Explicability. The first two of these seem self-explanatory as concepts (although doubtless there is huge scope for debate as to what they mean in any practical application). However, the final three merit some further discussion.
When looking at Autonomy and respect for the individual the Guidelines note that:
“Autonomy of human beings in the context of AI development means freedom from subordination to, or coercion by, AI systems. …. If one is a consumer or user of an AI system this entails a right to decide to be subject to direct or indirect AI decision making, a right to knowledge of direct or indirect interaction with AI systems, a right to opt out and a right of withdrawal”
This could give rise to interesting questions, for example, around appropriate boundaries for “nudge” type behavioural design.
In commenting on Justice they note that, as well as avoiding active bias and discrimination:
“the positives and negatives resulting from AI should be evenly distributed, [so as to avoid placing] vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination”
This acts as encouragement for wide horizon scanning: teams must consider outcomes from a broad perspective, focussing not only on the specific objective intended for a particular target consumer but also on how that outcome could sit in a wider context of benefit and detriment. This may be a quantum shift for some industries.
Looking at Explicability, they note that both technological and business model transparency matter from an ethical standpoint, so that, as well as the issues of audit and traceability noted below:
“Business model transparency means that human beings are knowingly informed of the intention of developers and technology implementers of AI systems...Explicability is a precondition for achieving informed consent from individuals interacting with AI systems”
This will be a key point to consider for online consent models and application forms.
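As a purely illustrative sketch of what this might mean for an online consent flow, the Python snippet below records whether AI involvement was disclosed before consent was captured, and refuses to treat undisclosed consent as informed. The field names and validation rule are our own assumptions, not anything prescribed by the Guidelines.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical record of consent to AI-assisted decision making."""
    user_id: str
    ai_disclosure_shown: bool  # was the user told an AI system is involved?
    purpose_explained: str     # plain-language statement of what the AI is for
    consent_given: bool
    timestamp: str

def capture_consent(user_id: str, disclosure_shown: bool,
                    purpose: str, agreed: bool) -> ConsentRecord:
    # Following the Guidelines' logic, consent given without disclosure
    # of the AI's involvement cannot be "informed", so reject it.
    if agreed and not disclosure_shown:
        raise ValueError("Consent cannot be informed without AI disclosure")
    return ConsentRecord(
        user_id=user_id,
        ai_disclosure_shown=disclosure_shown,
        purpose_explained=purpose,
        consent_given=agreed,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```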
The Guidelines also flag that developers should always be aware that, while bringing substantive benefits, AI can also have a negative impact, and should remain vigilant for areas of critical concern.
Implementation
Chapter II gives some very interesting guidance on how AI can be made to realise these goals, tackling them from an ethical purpose perspective while also looking at technical robustness, and at both technical and non-technical measures to embed and reinforce ethical objectives.
It is stressed that these questions need to be front of mind from the earliest design phase, including ensuring diversity when setting up the teams that will develop, implement and test the product. There is also a focus on the participation and inclusion of stakeholders such as customers and employees in design and development.
In this Chapter the Guidelines also consider the testing environment and the potential applications of the system and touch on the importance of informing stakeholders about the AI system’s capabilities and limitations, allowing them to set realistic expectations.
“Traceability” of an AI system's workings and the consequences of its decisions is seen as a key element of robust implementation. The Guidelines recommend that AI systems should be auditable, particularly in critical contexts or situations. To the extent possible, systems should enable individual decisions to be traced to their various inputs: data, pre-trained models, etc.
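By way of illustration only, the sketch below shows one way such traceability might be approached in practice: each automated decision is appended to an audit log alongside the model version, a reference to the training data, and a fingerprint of the inputs. The record fields, file format and the credit-decision example are hypothetical assumptions, not requirements drawn from the Guidelines.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per automated decision (illustrative fields only)."""
    timestamp: str           # when the decision was made
    model_version: str       # which trained model produced it
    training_data_ref: str   # pointer to the dataset / pre-trained model used
    input_hash: str          # fingerprint of the inputs, for later matching
    inputs: dict             # the raw inputs (or a redacted subset)
    output: str              # the decision reached

def record_decision(model_version, training_data_ref, inputs, output,
                    log_file="audit_log.jsonl"):
    """Append a traceable record of a single AI decision to an audit log."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        training_data_ref=training_data_ref,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        inputs=inputs,
        output=output,
    )
    with open(log_file, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: logging a (hypothetical) credit decision alongside its inputs
record_decision(
    model_version="credit-model-1.4.2",
    training_data_ref="dataset-2018-q3",
    inputs={"income": 42000, "postcode_region": "SE"},
    output="declined",
)
```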
The Guidelines also look at wider questions of organisational culture and accountability, recognising that the success of any such ethical project (which will stand or fall on many ongoing micro-decisions) will depend on tone from the top, internal buy-in, training, active responsibility, and a commitment to reviewing lessons learned and to ongoing development.
Lastly, the Guidelines stress that those developing or operating AI must be mindful that there will often be fundamental tensions between these many different objectives (transparency can open the door to misuse; identifying and correcting bias might conflict with privacy protections). Trade-offs will be needed, but should be made consciously and after careful evaluation, and should be communicated, recorded and re-evaluated in light of operational experience and outcomes.
Operation
Chapter III becomes yet more granular, encouraging stakeholders to adopt an assessment list for Trustworthy AI when developing, deploying or using AI, adapting it to the specific use case in which the system is being used. However, the HLEG also warns that:
“an assessment list will never be exhaustive, and that ensuring Trustworthy AI is not about ticking boxes, but about a continuous process of identifying requirements, evaluating solutions and ensuring improved outcomes throughout the entire lifecycle of the AI system”
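For illustration, an assessment list of this kind could be held as simple structured data and extended per use case, so that the base questions travel with every deployment while sector-specific ones are layered on top. The questions below are hypothetical placeholders of our own, not the HLEG's actual list; a minimal sketch in Python:

```python
# A minimal sketch of a use-case-adaptable assessment list.
# The questions are hypothetical placeholders, not the HLEG's own list.

BASE_ASSESSMENT = [
    "Have the system's possible effects on individuals been prospectively evaluated?",
    "Can individual decisions be traced back to their inputs and model version?",
    "Are users informed that they are interacting with an AI system?",
]

USE_CASE_QUESTIONS = {
    "consumer_credit": [
        "Has the model been tested for bias across protected groups?",
        "Is there a route for applicants to contest a decision?",
    ],
}

def assessment_for(use_case: str) -> list[str]:
    """Combine the base list with any use-case-specific questions."""
    return BASE_ASSESSMENT + USE_CASE_QUESTIONS.get(use_case, [])

for question in assessment_for("consumer_credit"):
    print("- " + question)
```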
Takeaways
The Guidelines are well expressed and do not require vast technological knowledge to understand (far simpler to read than many a science fiction novel, with many helpful practical examples and illustrations). They make an extremely interesting read (click here for a copy) and would be well worth reviewing by anyone in an industry deploying, or planning to implement, AI technology.
It is from guidelines such as these that future regulation will emerge and develop, and businesses would be well advised to think ahead about how best to ensure that they are developing with, rather than against, the grain of ethical and regulatory thinking.