On 8 April 2019, the European Commission’s High-Level Expert Group on AI (“HLEG AI”) published its ethics guidelines for trustworthy artificial intelligence (the “Guidelines”). The Guidelines follow an earlier first draft in December 2018 and incorporate more than 500 comments received through an open consultation. The Guidelines aim to provide a framework for achieving “Trustworthy AI”: artificial intelligence that is (i) lawful, (ii) ethical, and (iii) robust. In particular, the Guidelines focus on offering guidance on fostering and securing ethical and robust AI.

Background

In 2018, the European Commission set out its vision for AI, which supports “ethical, secure, and cutting-edge AI made in Europe”. To support the implementation of this vision, the European Commission created the HLEG AI, a group of 52 experts in the field of AI tasked with drafting ethics and policy guidelines on AI as well as investment recommendations. While the Guidelines are not intended to be legally binding, they aim to establish a set of guiding principles that can assist developers and deployers in achieving Trustworthy AI.

Foundations of Trustworthy AI

The Guidelines articulate the fundamental rights and ethical principles that are crucial to establishing Trustworthy AI. While primary and secondary EU legislation such as the EU Treaties and the GDPR establish legally binding rules by which AI must abide, Trustworthy AI should also follow the fundamental rights enshrined in the EU Treaties, the EU Charter, and international human rights law. In particular, the Guidelines identify four ethical principles, called the Ethical Imperatives, derived from fundamental rights, which are crucial to ensuring that AI systems are developed, deployed, and used in a trustworthy manner. These Ethical Imperatives are: (i) respect for human autonomy, (ii) prevention of harm, (iii) fairness, and (iv) explicability. The Guidelines foresee potential tensions between these principles and recommend that methods of accountable deliberation be established to resolve any conflicts.

Realising Trustworthy AI

The Guidelines offer seven key requirements (the “Requirements”), which follow from the Ethical Imperatives and are intended to assist developers, deployers, end-users, and broader society in implementing Trustworthy AI. The Requirements cover the principles of human agency, privacy, accountability, environmental and social well-being, transparency, diversity, and technical robustness. In particular, the Guidelines stress the interplay between the various Requirements and their application throughout the entire life cycle of AI systems.

The Guidelines also set out technical and non-technical methods which companies can use to implement the Requirements successfully. Technical methods include procedures incorporated into the architecture of AI systems and ‘X-by-design’ compliance with ethical values and rules, while non-technical methods consist of regulations, codes of conduct, and further checks and balances which should be evaluated on an ongoing basis. Crucially, the Guidelines recommend transparency and communication with stakeholders throughout an AI system’s life cycle in order to develop additional methods of ensuring compliance with the Requirements.

Assessing Trustworthy AI

The Guidelines include a pilot version of a non-exhaustive Trustworthy AI assessment list (the “Assessment List”), intended to operationalise Trustworthy AI and assist companies in assessing their compliance with the Ethical Imperatives and Requirements set out above. The Assessment List is primarily addressed to developers and deployers of AI, and applies in particular to AI systems that directly interact with users, as these present the highest risk of falling foul of the Ethical Imperatives. The Assessment List allows companies to test their compliance with the principles of Trustworthy AI through various questions and considerations relating to the Requirements. While the Assessment List does not offer concrete answers to its questions, the HLEG AI hopes it will encourage reflection on how Trustworthy AI can best be operationalised and which steps companies should take. Because the Guidelines aim to promote the ethical handling of data, the Assessment List goes much further than mere compliance with existing legal requirements.

Conclusion and Recommendations

The HLEG AI foresees great opportunities for AI in fields such as climate change, sustainable infrastructure, health, and education, and believes the EU has a unique vantage point from which to exploit these opportunities. Yet there are also concerns, chief among which are potential violations of fundamental rights, democracy, and the rule of law. With these Guidelines, the HLEG AI aims to provide guidance on how best to develop and deploy AI which is lawful, ethical, and robust. While the Guidelines do not impose legal obligations, companies are advised to study and implement them in depth in order to prepare for what comes next in terms of regulation. Crucially, the Guidelines’ Assessment List allows companies to evaluate their current position and identify key areas for change. In particular, companies are given a chance to implement ethical considerations which are not yet legally binding, and thus gain a head start on their competitors. Given the European Commission’s vision for the future of AI, it would not be surprising to see these Guidelines enshrined in law in the near future.

The Assessment List is still in its pilot version, and the HLEG AI intends for it to be further developed in close collaboration with stakeholders across the public and private sectors. As such, the HLEG AI invites all stakeholders to pilot the Assessment List and provide feedback on its implementation and relevance. The HLEG AI intends to use this feedback to propose an updated list to the European Commission in early 2020.