
Regulation-bot: the rise of AI legislation


31 July 2018

With the fourth industrial revolution fast approaching, artificial intelligence (AI) remains a consistently hot topic on the Lexology hub. As practitioners around the globe debate the pros and cons of AI in their various industries, international authorities face the daunting task of regulating an evolving digital landscape – and our Lexology authors have been keeping subscribers up to date on everything they need to know.

Although AI means something different in every sector, it generally refers to machine-learning technology used to complete tasks that previously required the skills or intellect of human beings. What’s more, it completes these tasks more cheaply, more quickly and without human error. It comes as no surprise, then, that AI-based products are being rolled out across a plethora of industries. However, because the technology is advancing so rapidly, regulation remains in its early stages.

In April, 24 EU member states signed the EU Declaration of Cooperation on Artificial Intelligence. William Fry outlines what the declaration entails for employers specifically, since AI may be used to tackle issues around recruitment, gender equality, disability and age-related disputes. The potential for AI is great, but the European Union is determined to implement prudent governance. In fact, France, the United Kingdom and the European Commission have each issued strategies for managing AI development amid global competition. McCarthy Tétrault LLP reports that despite their differences, all three strategies aim to boost productivity and public funding, while prioritising high ethical standards.

Across the pond, the US Food and Drug Administration (FDA) and US Congress have also taken steps to address industry concerns regarding modern healthcare. With AI now prevalent in the digital health market, the authorities are concerned that some products fall outside the legislative definition of a ‘device’ and therefore outside the scope of FDA regulation. Jones Day outlines the key takeaways for companies using AI-based medical software while the FDA works to develop an efficient regulatory framework. In addition, in this K&L Gates podcast the firm discusses the prominent legal issues surrounding the use of AI systems to administer medical care – and what the technology could hold in store for the industry.

Outside healthcare, the future of AI looks somewhat more alarming for those stepping into the unknown. Behavioural premium pricing, automated decision making and so-called ‘robo-advice’ demonstrate the great strides that AI has already made in financial services, but the current lack of regulation raises a number of concerns. Womble Bond Dickinson (UK) LLP warns that companies need to keep their heads amid the excitement by remaining mindful of consumer and market protection.

The potential for fully autonomous AI has also raised concerns among IP practitioners and rights holders. Taylor Wessing warns that where machine learning is used for creative tasks, various copyright issues can arise. Not only is there a risk that machines will infringe third-party copyright, but it is also unclear how disputes over the ownership of copyright in AI-generated works will be settled. Indeed, Hogan Lovells asks whether IP law is ready for AI, considering that under current UK patent law an ‘inventor’ is still defined as a person. Could this definition be extended to include AI programmes working of their own accord? Or will it be that behind every great machine lies an accountable individual?

Ultimately, it boils down to whether AI can – or should – be given legal rights and duties. For Nishith Desai Associates, rights and accountability remain a grey area, even as firms celebrate a marked increase in the use of AI for everyday tasks.