Nothing challenges the effectiveness of data protection law like technological innovation. You think you have cracked a technology-neutral framework, and then along comes the next evolutionary step to rock the boat. It happened with the cloud. It happened with social media, with mobile, with online behavioural targeting and with the Internet of Things. And from the combination of all of that, artificial intelligence is emerging as the new testing ground. Twenty-first-century artificial intelligence relies on machine learning, and machine learning relies on...? You guessed it: data. Artificial intelligence is essentially about problem-solving, and for that we need data, as much of it as possible. Against this background, data privacy and cybersecurity legal frameworks around the world are attempting to shape the use of that data in a way that achieves the best of all worlds: progress and protection for individuals. Is that realistically achievable?

At a practical level, sourcing the data required for machine learning is the first battleground. The volume of data available is not a problem in itself, given the exponential growth of our digital interactions. But in many cases, the ability to crunch the necessary data rests with those who provide services to the organisations that control the data. In European data protection jargon, those developing artificial intelligence are often processors rather than controllers. The limited decision-making power of processors over the use of data can be a serious handicap. To what extent can a vendor of technology services to a hospital use patient data to develop more effective services? Should a cloud provider be entitled to access data it does not own to enhance its offering? The potential benefits of these activities can be substantial, but they may not be directly enjoyed by the controller. However, with the right level of openness, cooperation and creativity, it should be possible for those vendors to use the insights gained from providing their services while still retaining their role as processors.
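
By way of illustration only, here is a minimal Python sketch of one such approach: the vendor keeps aggregate, cohort-level insights about how its service is used and suppresses any cohort small enough to single out an individual. The `aggregate_feature_usage` helper and the `MIN_COHORT` threshold are invented for this example, and whether a given arrangement actually keeps a vendor within the processor role is ultimately a contractual and legal question; the sketch only illustrates the data-minimisation idea.

```python
from collections import Counter

# Hypothetical example: aggregate_feature_usage and MIN_COHORT are invented
# for this sketch. The processor keeps only cohort-level counts and
# suppresses cohorts small enough to single out an individual.
MIN_COHORT = 10

def aggregate_feature_usage(records):
    """records: dicts like {"user_id": ..., "feature": ...}.
    Returns per-feature reach, suppressing small cohorts."""
    counts = Counter()
    seen = set()
    for r in records:
        # Count each (user, feature) pair once: we measure reach, not volume.
        key = (r["user_id"], r["feature"])
        if key not in seen:
            seen.add(key)
            counts[r["feature"]] += 1
    return {feature: n for feature, n in counts.items() if n >= MIN_COHORT}

# Usage: the under-threshold "beta-tool" cohort never leaves the pipeline.
sample = [{"user_id": i, "feature": "export"} for i in range(25)]
sample += [{"user_id": i, "feature": "beta-tool"} for i in range(3)]
print(aggregate_feature_usage(sample))  # {'export': 25}
```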

A knottier legal issue is the lawful ground for processing personal data in the development of artificial intelligence. The uneasy relationship between consent, contractual necessity and legitimate interests comes firmly to the fore here. Obtaining consent for something so difficult to understand is never going to be straightforward. Justifying the processing as necessary for the performance of a contract involving the data subject offers only a very narrow margin. So, as with many other everyday uses of personal information, we are left with the wobbly option of relying on legitimate interests, which is not in itself sufficient when dealing with special categories of data, such as data concerning health or biometric data, both of great relevance to applications of artificial intelligence. The key thing to remember is that the legitimate interests ground places the onus on those wishing to exploit the data: however clever and useful the outcome of that exploitation, the processing must not place an intolerable burden on people's right to privacy.

And as technology becomes increasingly complicated, so does the law. A worrisome legal complication in this respect is the uncertain interpretation of the European right not to be subject to a decision based solely on automated processing that significantly affects an individual. Although the provision is stated as a right, regulators are adamant that it should be read as a requirement for explicit consent, unless the decision fits within the contractual-necessity exemption or is authorised by EU or Member State law.

As a result, much of the scope for allowing machines to make decisions affecting people will be linked to how significant those decisions are to our lives. Shopping recommendations generated by algorithms? No big deal. Eligibility for a certain school, a career-defining promotion or life-saving medical treatment? Get a human involved pronto. Whether humans themselves will be able to make the right decisions without blindly relying on machines is perhaps one of the big questions of our time.
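
To make that routing rule concrete, here is a minimal, illustrative Python sketch; the `Significance` categories, the `decide` function and the review queue are all assumptions made for this example, not a prescribed compliance pattern. The point is simply that a decision with significant effects is never issued by the model alone.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional, Tuple

class Significance(Enum):
    TRIVIAL = auto()       # e.g. shopping recommendations
    SIGNIFICANT = auto()   # e.g. school places, promotions, medical treatment

@dataclass
class Decision:
    subject_id: str
    outcome: str
    decided_by: str  # "model" or "human"

def decide(subject_id: str, model_outcome: str, significance: Significance,
           human_review_queue: List[Tuple[str, str]]) -> Optional[Decision]:
    """Route a model output: release it, or hold it for human review."""
    if significance is Significance.TRIVIAL:
        # Low-stakes output can flow straight through.
        return Decision(subject_id, model_outcome, decided_by="model")
    # Significant effects: park the model's suggestion until a human signs off.
    human_review_queue.append((subject_id, model_outcome))
    return None

# Usage: the review queue, not the model, is the source of significant decisions.
queue: List[Tuple[str, str]] = []
decide("u1", "recommend: running shoes", Significance.TRIVIAL, queue)
decide("u2", "deny: loan application", Significance.SIGNIFICANT, queue)
print(queue)  # [('u2', 'deny: loan application')]
```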

Ironically, assessing the impact of technology on our privacy and identifying the right safeguards may end up being done more accurately by machines in the not-too-distant future. Until then, our principal job will be to embed privacy and cybersecurity practices in the development of artificial intelligence involving personal data. New legal principles such as data protection by design and by default should guide this process whilst allowing for pragmatism and common sense, as the sketch below illustrates.
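
In engineering terms, "by default" can be read quite literally: the most privacy-protective settings are the ones a pipeline starts with, and any relaxation has to be an explicit, documented choice. The `TrainingDataConfig` object and its `relax()` method below are invented for this sketch; they are not a real library API or a prescribed compliance mechanism.

```python
import dataclasses
from dataclasses import dataclass

# Hypothetical example: TrainingDataConfig and relax() are invented for
# this sketch to illustrate data protection by design and by default.

@dataclass(frozen=True)
class TrainingDataConfig:
    pseudonymise: bool = True    # strip direct identifiers by default
    retention_days: int = 30     # the shortest retention the pipeline supports
    fields: tuple = ()           # fields must be opted in, not opted out
    relaxation_reason: str = ""  # empty while the defaults are untouched

    def relax(self, reason: str, **overrides) -> "TrainingDataConfig":
        """Departing from the privacy defaults requires a documented reason."""
        if not reason:
            raise ValueError("a documented reason is required to relax defaults")
        return dataclasses.replace(self, relaxation_reason=reason, **overrides)

# Usage: the zero-argument constructor is the most protective configuration.
cfg = TrainingDataConfig(fields=("age_band", "region"))
debug_cfg = cfg.relax("model debugging", retention_days=90)
```

This is not a machine v. human battle. It is a defining moment which requires a sense of responsibility and a long-term view. Future generations will thank us if the way in which we develop artificial intelligence today pursues the true value it can deliver while respecting data protection principles.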