There is a passage in the prologue to Pedro Domingos’ fantastic book on machine learning, “The Master Algorithm”. In it he outlines a day in the life of the average commuting worker, framed by their interactions with an environment rich in learning algorithms. The startling message of those few pages is not the wonderful innovations that now support our daily existence but the ever-present, pervasive nature of their reach. Machine learning, and to a much lesser extent artificial intelligence, has well and truly landed. The challenge for lawyers and their clients is how to navigate a legal and regulatory environment that is playing catch-up, while simultaneously steering innovators along a path to protection that may be paved with gaps. Are developers and first movers aware of the present and future challenges of exploiting and protecting their innovations?
It is reasonable to argue that some of the issues arising out of this brave new world are manageable from an existing legal perspective, for instance making the seller of a 3D printer liable for the quality of the products printed. But who should be responsible when a 3D printer is used to print a product protected by a patent or a design: the consumer who prints the product (and will patent holders realistically sue their own customers?), or the person who supplied the design file? And who is responsible if the design file was produced by a machine?
So long as innovative algorithm-based goods and/or services get on to the market and drive revenue back to the innovator without an adverse market reaction, is there any need to question existing legal compliance and/or protection strategies?
Yes. Put simply, it’s not smart commercial strategy to dive into a market with an innovative product without understanding the legal risk associated with exploitation as well as being adequately informed on how that innovation can be appropriately protected.
So what does all that mean?
Until recently, lawyers could reliably advise on the exploitation of most client technology in the context of a regulatory regime involving a mix of consumer, data privacy and other sector-specific statutes. That advice would be supported by an IP/technology protection strategy built around a licensing arrangement: a suite of contracts exploiting a portfolio of registered and/or unregistered IP assets, e.g. patents, copyright in software and so on.
Now, depending on the scope of an innovator’s new algorithm/machine learning products, we may need to revisit this analysis and also focus on new and related challenges:
- A protection and commercialisation strategy that takes account of the limitations of the current IP legislative regime, e.g. who is the author of copyright in code produced by a machine, and who owns that code where our current laws require the author to be human?
- Increased focus at the product development stage on regulatory compliance, e.g. privacy by design in the context of data analytics
- The challenge of in-built bias in products and/or services based on machine learning, and its impact not only on regulatory compliance but also on risk management, e.g. discrimination
- An appropriate strategy and legal toolbox for managing liability, e.g. if the code in a product determines an outcome and that outcome causes harm, who is liable, and is it possible to procure adequate insurance at a reasonable price to cover that risk?
Fantasy or Reality?
As mentioned above, the use of machine learning is all around us and very much a part of our daily lives. Day by day, the black boxes deployed in those systems become increasingly complex and sophisticated, while the legal landscape remains largely static.
The world of AI and machine learning is a fascinating one, but from a legal and risk perspective it is, to an extent, a daunting one.