As part of its Digital Single Market initiative, the European Commission is putting forward a European approach to artificial intelligence and robotics. It deals with technological, ethical, legal and socio-economic aspects to boost the EU's research and industrial capacity and to put AI at the service of European citizens and the economy.
In this article we take a look at the details of a recent Communication from the EU Commission to the Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Building Trust in Human-Centric Artificial Intelligence.
The EU wants to build an AI regulatory environment in its own image - one based on a set of fundamental values complemented by a strong and balanced regulatory framework. The groundwork was completed in December last year when the High-Level Expert Group on AI delivered its draft Ethics Guidelines for Trustworthy AI. The guidelines focus on human-centric and trustworthy AI that yields products operating in a traceable and accountable manner, based on a principle of ethics by design.
With the latest Communication, the EU is now moving to the next stage - a targeted piloting phase to ensure that the ethical guidelines for AI development and use can be implemented in practice.
EU AI strategy
AI can benefit the whole of society and the economy. It is a strategic technology that is now being developed and used at a rapid pace across the world. Nevertheless, AI also brings with it new challenges for the future of work, and raises legal and ethical questions. To address these challenges and make the most of the opportunities which AI offers, the Commission published a European strategy in April 2018. The strategy places people at the centre of the development of AI — human-centric AI. It is a three-pronged approach to:
- Boost the EU’s technological and industrial capacity and AI uptake across the economy
- Prepare for socio-economic changes, and
- Ensure an appropriate ethical and legal framework
AI and ethics
The EU’s view is that the ethical dimension of AI is not a luxury feature or an add-on:
“...it needs to be an integral part of AI development. By striving towards human-centric AI based on trust, we safeguard the respect for our core societal values and carve out a distinctive trademark for Europe and its industry as a leader in cutting-edge AI that can be trusted throughout the world.”
The Commission explains that trustworthy AI is based on the following key requirements:
- Human agency and oversight – AI systems should support human agency and fundamental rights and include appropriate control measures, including adaptability, accuracy and explainability.
- Technical robustness and safety – AI systems need to be reliable, secure and resilient, with a fallback plan in case of problems. Their decisions should be accurate and reproducible.
- Privacy and data governance - Privacy and data protection must be guaranteed at all stages of the AI system’s life cycle.
- Transparency – The decisions of AI systems should be traceable and it should be possible to log and document those decisions. Explainability mechanisms should be pursued. Explanations of the degree to which an AI system influences and shapes the organisational decision-making process, design choices of the system, as well as the rationale for deploying it, should be available.
- Diversity, non-discrimination and fairness – AI systems should be designed so as to avoid harm arising from inherent bias, incomplete data and poor governance models.
- Societal and environmental well-being - The impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole.
- Accountability - Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their implementation.
Pilot and next steps
To ensure the ethical development of AI in Europe in its wider context, the Commission is pursuing a comprehensive approach with its pilot, including in particular the following lines of action, to be implemented by the third quarter of 2019:
- Launch a set of networks of AI research excellence centres through Horizon 2020.
- Set up networks of digital innovation hubs focussing on AI in manufacturing and on big data.
- With Member States and stakeholders, the Commission will start preparatory discussions to develop and implement a model for data sharing and making best use of common data spaces, with a focus notably on transport, healthcare and industrial manufacturing.
Further, the Commission has doubled its investments in AI under Horizon 2020 and plans to invest EUR 1 billion annually from Horizon Europe and the Digital Europe Programme.
The EU’s AI plan is ambitious and rigorous in approach. No doubt critics will attack it for being too slow and cumbersome. It could also be criticised for potentially widening the existing gap between the advances in AI being made in the EU and in other jurisdictions. However, one could reasonably argue that the long game being played by the EU is a canny approach. In fact, it could trump its US and Chinese competitors in the long run, particularly in a world of consumers who are increasingly privacy-savvy and conscious of the downsides of owning and using products that need to be fed large amounts of data, including personal data. In such a world, trust, not speed, is likely to be the ultimate driver of innovation.