- On 19 February 2020, the European Commission issued a White Paper setting out proposals for regulating artificial intelligence. The Commission invited stakeholders to submit their comments by 19 May 2020.
- The Commission is promoting a robust European legal framework to regulate AI as the way forward for developing sound and trustworthy AI technologies. It proposes a clear definition of AI to determine the scope of the framework's application while accommodating technical progress.
- The Commission is also considering shortages of human skills and competences in the field. It will shortly present an agenda, which may include support for sectoral regulators to enhance their AI skills and to effectively implement rules relevant to AI.
- The White Paper proposes a risk-based approach which differentiates between AI applications to ensure that regulatory intervention is proportionate. The determining factor is whether the AI application is deemed “high risk” based on two criteria: (a) the sector in which the AI is employed poses high risk, such as healthcare or the public sector; (b) the application is used in a manner that would give rise to a high risk, such as a life-or-death situation or one where biometric technologies are used.
The Commission also suggests that mandatory legal requirements should apply in such “high risk” AI situations, covering: training data; data and record-keeping; information to be provided; robustness and accuracy; human oversight; and specific requirements for particular AI applications, such as biometric identification. To ensure legal compliance, the Commission further proposes a system of testing, inspection and certification.
AI applications not considered high risk could continue to be governed by existing legislation and by a voluntary labelling scheme. Under this concept, operators would voluntarily choose to subject themselves either to the mandatory high-risk requirements or to a similar set of rules established for the purposes of the voluntary scheme. In return, the operator would be awarded a quality label for the AI application.
- Existing EU rules and values will be promoted in international discussions on AI matters, such as respect for fundamental rights, including human dignity, pluralism, inclusion, non-discrimination, and the protection of privacy and personal data.
- Addressing legal uncertainty in this field is at the forefront of the aims set out in the White Paper. The Commission identified the key requirements for establishing trustworthy AI as: human agency and oversight; technical robustness; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability. A number of these requirements are already reflected in current regulatory regimes; however, transparency, traceability and human oversight are not covered by existing legislation and require further assessment.
- The Commission is pressing for a clear European regulatory framework that would build trust among consumers and businesses in this area. This would complement existing rules, such as those on fundamental rights and freedoms, consumer protection, and product safety and liability, which should continue to apply to AI systems. In certain areas, such as the opacity of AI systems, further analysis is required to determine whether current legislation is sufficient, whether it can be enforced, or whether new rules should be enacted.
- Concerns arise from certain characteristics of AI technologies, including opacity, complexity, unpredictability and partially autonomous behaviour, which may make it harder to monitor compliance with existing EU laws (such as the General Data Protection Regulation) or to enforce them effectively. A person who has suffered harm might find it difficult to obtain redress or to gain effective access to evidence under existing EU and national liability legislation.
- The Commission concludes that new rules may be required to specifically address AI developments. The White Paper offers the following suggestions for improving the legal framework regulating AI: (a) adjusting or clarifying existing EU and national laws, such as liability rules, to ensure their effective application and enforcement; (b) catering for stand-alone software, and resolving the debate on whether it should be treated as a good or a service for product liability purposes; (c) recognising that the integration of software into products can change how those products function, giving rise to new risks that existing legislation may not adequately address; (d) revisiting the approach under current EU product safety rules, which places responsibility only on the producers of products and their components; (e) acknowledging that the meaning of “safety” in “product safety” is different when AI is in use, a distinction current laws do not capture. The risks posed by cyberthreats or software updates, for example, should also be considered and regulated by law.
- Finally, the Commission emphasises that an EU-wide approach is key. Concern has been voiced about the fragmentation of Member States' legal regimes in this area, given the various approaches emerging, such as Germany's proposed five-level risk-based system of regulation, Denmark's data ethics seal, and Malta's voluntary certification system for AI. The Commission is convinced that a common approach at EU level would ensure that companies benefit from seamless access to the single market and remain competitive in the global market.
The White Paper is available at: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en