"AI doesn't just belong to a few tech giants in Silicon Valley": these were the words of Google Cloud's chief scientist for AI, Fei-Fei Li, speaking in March 2018 at a panel discussion on the impact of AI. Whilst companies such as IBM, Microsoft and Google have been at the forefront of AI for a number of years, many organizations across many different industries, are now looking to jump on the bandwagon, as AI continues to permeate the public consciousness. In response to the gathering momentum behind AI, thought-leaders in AI are calling for a careful look at how we should prepare. As Fei-Fei Li put it, we need to "really study the profound impact of AI to our society, to our legal system, to our organizations, to our society to democracy, to education, to our ethics." In 2017 the UK Government commissioned a Select Committee to consider the economic, ethical, social and legal implications of AI. Against this backdrop, we ask whether or not the UK's intellectual property laws are ready for AI, and look at what businesses can do to prepare.

Before we can answer this question we need to consider what is meant by "AI". Popular culture has provided us with multiple examples of AI, such as "Samantha" from the 2013 film Her and "Ava" from the 2015 film Ex Machina. Both are intelligent computer systems capable of thought and consciousness. For many, this idea of computers that have the ability to reason, communicate and perform like a human is the epitome of AI. Yet AI can also include computing advances that extend a human's ability to sense, learn and understand. An AI platform that is able to work through and analyse data in order to establish new data points or patterns can enhance a human activity. In the healthcare industry, for example, AI is being developed to analyse huge volumes of data to understand patient symptoms and provide suggested treatment options. In the automotive industry, autonomous vehicles are being developed to navigate roads and avoid other cars or pedestrians, to enable humans to move from A to B. Yet there is an important difference between the "Samantha" and "Ava" examples and the AI platforms just described. The former are completely autonomous, whilst the latter require collaboration with, or intervention by, a human.

A purely autonomous form of AI could involve computers interacting with other computers and making decisions or carrying out functions without any human involvement. This form of AI, where, crucially, there is no human agency at any stage, could pose problems for traditional legal structures such as intellectual property laws. For example, an AI that develops an invention without any human involvement would be the "inventor" of that invention. Yet under UK patent law an inventor is defined as a person. Could this stretch to include an AI? Further, should it? Presently a person who discloses a novel and inventive invention to the state is granted a 20-year monopoly. This is referred to as the "patent bargain": it rewards the hard work and dedication that is often invested in devising inventions. Yet AIs can already process considerably larger amounts of data at far faster rates than humans, and that gap will only widen. Autonomous AIs could therefore arrive at their own inventions without any of the serendipity or hard work behind human invention. Is it appropriate for the state to reward such an AI with a 20-year monopoly for its invention? Would the answer be different if the AI develops a patentable invention with huge benefits for society?

But let's not get carried away. Undoubtedly AI is coming, but is it here yet? The consensus is that it is not. Whilst enormous progress has been made in AI and machine learning (examples of which we have discussed in relation to the healthcare and automotive industries), we are still some way off creating fully autonomous AIs. The decision-making behind a human/AI collaboration is not fully autonomous, because it is limited by the parameters provided in the initial computer program design and algorithms. In this scenario, where the AI amounts to human-written software code, arguably the software programmer could qualify as the "inventor" of any patentable outcome. But should that always be the case? Is it right that the software programmer should be able to patent that invention, even if the invention was an unintended and unforeseeable result of the AI? Should a person benefit from the patent monopoly if the invention was derived from analysing large amounts of public data? Developments in AI, whether fully autonomous AIs or human/AI collaborations, prompt important questions about who should benefit from the patent monopoly.

Similarly, what happens if an AI output is a work protected by copyright? For example, a piece of music, an artwork, or even a new algorithm. Under UK copyright law, the author of a computer-generated literary, dramatic, musical or artistic work is the person "by whom the arrangements necessary for the creation of the work are undertaken". "Computer-generated" is defined as "a work generated in circumstances such that there is no human author of the work". Under UK copyright law, therefore, the software programmer would most likely be the author and first owner of a copyright work generated by an AI. The position will, however, be more complicated and unclear in relation to more sophisticated AI that involves human collaboration and input at various stages of its development: for example, teams of programmers and multiple companies working on the design of algorithms and determining and providing the data sets to be analysed. In that scenario there may be multiple joint owners. Moreover, is it right that the output of an AI should be protected by copyright at all? Copyright only protects "original" literary, dramatic and artistic works. Originality under UK copyright law means that sufficient "skill, labour and judgment" has been expended. Does the output of an AI involve the right kind of skill and labour to be afforded protection? Is it sufficient that skill and labour was involved in writing the algorithm that generated the AI? Another interesting question is whether the position would be different in other EU countries. Copyright protection for computer-generated works is not currently harmonised in Europe, and the EU test for originality, whether or not the work is the "author's own intellectual creation", requires a human author. The UK Government may well choose to diverge from the rest of Europe on protection for AI-created works post-Brexit.

The example of an AI that infringes IP has been cited as another possible challenge to current IP laws, the argument being that we do not have the legal infrastructure to hold an AI accountable for infringement. Certainly there may be enforcement issues if the IP infringement is committed by a fully autonomous AI. However, the present sophistication of AI is such that there is likely to be a person, whether a designer, implementer, tester or user, behind the controlling software. It should still be possible to hold that "ultimate person(s)" accountable under current laws. Whilst it may be harder to trace the ultimate person(s), such a scenario does not yet challenge the fundamentals of IP infringement law. Arguably the UK common law system is well set up to deal with technological developments, since case law can evolve to accommodate new factual scenarios. However, as developments in AI progress, the dividing line between AIs that have a human agent and AIs that are fully autonomous will blur. It may not be so easy to work out whether an AI's decision to infringe or create IP is ultimately attributable to a human.

Although the UK Government's Select Committee has been considering the legal implications of AI, its focus is on safety and regulation rather than intellectual property laws. Pending changes in the law to clarify the position on ownership of AI-created IP, and given the likelihood of litigation in this area, organisations should check now that any new agreements relating to the development and use of AI clearly state which party or parties will own any protectable IP resulting from the AI.