Noteworthy Developments in the AI Guidelines

In our January Update we commented on the draft guidelines for Trustworthy Artificial Intelligence developed by the EU’s High Level Expert Group on Artificial Intelligence (the HLEG and the Guidelines, respectively). As explained in our report, the objective of the HLEG and the draft Guidelines was to set European standards for “Trustworthy AI”, meaning AI which is lawful, ethical, and technically robust and reliable. Following extensive feedback from more than 500 contributors, a revised version of the Guidelines was published in late June. Click here for a copy of the revised Guidelines.

The broad approach and structure of the Guidelines are substantially unchanged from the December draft, as described in our January Update. The Guidelines retain a practical focus on how ethical concepts can be developed and reliably embedded in AI systems. However, a number of areas have been further developed in light of the feedback exercise.

Core components of Trustworthy AI

The core components of Trustworthy AI have now been split into three elements. Throughout an AI system’s entire life cycle, it must be:

  • Lawful (complying with applicable law and regulation);
  • Ethical (ensuring adherence to ethical principles and values); and
  • Robust, both from a technical and a social perspective.

The revised Guidelines do not address the law and regulation limb, focusing instead on the latter two limbs. They note in this regard that there may be tensions between law and ethics, especially in periods of rapid change, and that legislators and policy makers will need to monitor the adequacy of existing regimes. The need for global policy initiatives is also flagged:

“Just as the use of AI does not stop at national borders, neither does its impact.”

The revised Guidelines now expand on the concept of robustness by specifically including “social robustness” alongside technical robustness. By this they mean that due consideration must also be given to the social context and environment in which AI operates, so as to avoid unintended harm.

Core Ethical Principles

The core ethical principles touched on in the draft guidance have now been refined into four core principles:

  • Respect for human autonomy: AI systems should not “manipulate, condition or herd people” and should leave “meaningful opportunity for human choice”;
  • Prevention of harm: systems should not cause or exacerbate harm, including collective harm to groups or intangible harm to social, cultural and political environments. As in the initial draft, it is stressed that particular attention should be paid to power and information asymmetries;
  • Fairness: both substantive (in terms of ensuring equal and just distribution of benefits and costs, free from bias) and procedural (entailing ability to contest and seek redress for decisions made within AI systems); and
  • Explicability: processes should be transparent, capabilities and purposes openly communicated and, so far as possible, decisions explicable to those directly and indirectly affected.

In the original draft Guidelines there was an additional fifth core concept of “beneficence” (i.e. a requirement actively to do good). This stipulated that AI systems “should be designed and developed to improve individual and collective well-being”. Whilst this concept remains an aspiration (and the scope for AI to address global concerns such as environmental damage is given significant attention), it has been removed as a positive requirement. In addition, there is greater recognition of the need for proportionality, with the Guidelines noting that differing applications and sectors may call for different approaches to risk. As with the draft Guidelines, it is stressed that situations involving more vulnerable groups and those historically disadvantaged must be given particular attention when applying these principles.

Implementation and operation

In Chapter II the guidance on how AI can be made to achieve these objectives at a practical level has been refined, but much of the substance remains largely unchanged. However, additional attention is given to (a) social and cultural impacts and (b) effects on democracy, reflecting current hot topics and concerns.

Assessment List

Chapter III of the Guidelines encourages stakeholders to adopt an ethics assessment list when developing, deploying or using AI. In July the European Commission launched a pilot phase for testing the revised ethics checklist. The HLEG is conscious that implementation of the Guidelines will need to be adapted to particular applications, and a sector-by-sector approach may be needed. It is therefore inviting stakeholders using the checklist in the pilot phase to contribute feedback. A revised checklist is planned for early 2020. Click here for the July pilot phase announcement, including guidance on how to submit feedback.