The European Commission recently published its Communication on Artificial Intelligence, which outlines its strategy for leveraging Europe's position to influence the development of the technology. The paper covers investment in AI, increasing the availability of data, adapting to anticipated changes in the jobs market, and ensuring that legal and ethical systems are equipped to deal with this change. Product manufacturers will be interested to hear that the Commission does not favour the introduction of "AI laws", but, in line with the approach recommended by the House of Lords (covered here) in their recent paper on AI, we can expect to see developments in specific areas.

The Commission recognises that a wholesale overhaul of legislation is not required to accommodate AI. The EU already has a well-developed legal framework in place, in particular its high standards for safety and product liability and the upcoming General Data Protection Regulation. However, the Commission also recognises that AI-powered products may introduce new and unforeseen safety risks. The current approach has been to rely on voluntary standards, which has provided an effective and nimble means of responding to rapid change. Nevertheless, the Commission intends to review the current product safety and liability frameworks to ensure they are fit for purpose. It remains an open question whether liability laws need to be adjusted to accommodate AI. Both the Commission and the House of Lords recommend a more detailed examination of the current liability framework to make sure it is capable of dealing with tomorrow's challenges.

At this stage, one area where the Commission considers that further work is required is the ability to understand how AI systems make decisions. Both the Commission and the House of Lords agree that trust is key. Stories of "black box" AI that cannot be interrogated have caused many commentators to ask how liability will be established in the event of an incident involving AI. Yet the Commission is clear that it wants humans to understand how decisions are made. This will be a technical challenge for developers given the complexity of AI systems, and, as the Commission concedes, legislative change alone will not be sufficient. It will require funding and research, which the Commission has committed to provide.

The Communication builds on the announcement, earlier in April, of a pact between 24 Member States and Norway to cooperate on the development of a Europe-wide AI strategy. We look forward to a regulatory environment that strikes the right balance between supporting innovation and protecting consumers, while ensuring international consistency so that all stakeholders operate on a level playing field and jurisdictions are not drawn into competing over regulations.

The Communication sets out a timetable for key developments:

  • develop draft AI ethics guidelines by the end of 2018
  • issue a guidance document on the impact of technical developments on the Product Liability Directive by mid-2019
  • publish an in-depth report on the liability and safety frameworks for AI, the Internet of Things and robotics by mid-2019
  • support research into the development of explainable AI (2018–2019)