Earlier this year, the chairman of the UK's National Cyber Management Centre warned that "a major bank will fail as a result of a cyber attack in 2017". Scarily, this does not seem so far-fetched. Indeed, it is predicted that cyber crime costs will reach US$2 trillion by 2019¹.

An increasingly connected world and rapid technological innovation are creating broader and more diverse opportunities for cyber attacks. The use of Artificial Intelligence ('AI') by companies to detect and counter such attacks is becoming increasingly commonplace. With cyber security related regulatory changes on the horizon, this article explores the legal issues associated with using AI for cyber security purposes.

Scope of liability

Businesses are increasingly at risk of cyber security breaches, with one in four detecting a breach in the last year². The results of a cyber attack can be devastating. The most costly single breach identified in the UK government's Cyber Security Breaches Survey was £3 million³. Customers are often also affected. Last year, hackers stole £2.5 million from 9,000 Tesco Bank customers in a raid the UK's Financial Conduct Authority described as "unprecedented". The impact of such breaches is only set to increase as attacks become more sophisticated and new regulations come into force which require companies to do more to protect their customers. Furthermore, the "internet of things" means the attack surface is both expanding and diversifying.

The current scope of liability is wide and includes:

  • contractual and tortious liability to individuals seeking compensation for damage and/or distress caused by the unlawful acquisition, disclosure and/or use of their personal information,
  • criminal or regulatory sanctions for failure to comply with legal obligations to keep information and networks secure, or, in some cases, to respond appropriately in the event of a cyber attack,
  • reputational damage flowing from adverse media coverage, publication of investigatory reports by regulatory authorities and any requirement to notify customers of the attack.

Further, with expected changes in cyber security law, the scope of regulatory liability is set to significantly increase. The UK government has confirmed⁴ that it will implement the General Data Protection Regulation (the 'GDPR')⁵ and the European Network and Information Security Directive⁶, more colloquially known as the 'Cyber Security Directive', into UK law, regardless of the UK's decision to leave the EU.

In what is seen as a "step-change" in risk, the GDPR will introduce fines of up to €20 million or 4 per cent of annual worldwide turnover (whichever is the greater) for data protection breaches – far exceeding the current maximum of £500,000. Further, the Cyber Security Directive requires certain public and private entities identified as "operators of essential services" to implement appropriate levels of network and information security and obliges them to report details of cyber attacks to the authorities. With both pieces of legislation coming into force in May 2018, now is the time to upgrade your data security practices.
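To put the new ceiling in perspective, the short Python sketch below works through the "greater of €20 million or 4 per cent of worldwide turnover" calculation; the turnover figure used is invented purely for illustration.

```python
# Minimal sketch of the GDPR maximum-fine calculation described above.
# The turnover figure below is invented purely for illustration.

def max_gdpr_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Greater of EUR 20 million or 4% of annual worldwide turnover."""
    return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

if __name__ == "__main__":
    turnover = 2_000_000_000.0  # hypothetical EUR 2bn worldwide turnover
    print(f"Maximum GDPR fine: EUR {max_gdpr_fine_eur(turnover):,.0f}")  # EUR 80,000,000
    print("Current UK maximum: GBP 500,000")
```

For any organisation with annual worldwide turnover above €500 million, the 4 per cent limb exceeds the €20 million floor, so the exposure scales with the size of the business.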

AI in cyber security

The military and defence sectors have traditionally been the largest sources of funding for AI research. This is set to continue as allegations of Russian interference in the 2016 US presidential election continue to reverberate around the world.

In October 2016 a White House report on AI⁷ (the 'White House Report') identified cyber security as an area that will greatly benefit from the technology. Traditionally, cyber security technology has been rule or signature based, meaning that guarding against a particular threat relies on prior knowledge of that threat's structure, source and operation, leaving it unable to prevent previously unseen threats. By contrast, AI can be used reactively, in response to threats, or proactively, by identifying a system's vulnerabilities and taking action to prevent or mitigate future attacks.
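To make the contrast concrete, the following is a minimal, hypothetical sketch (in Python, with invented hashes) of a purely signature-based check: a payload is flagged only if its signature is already on a known-bad list, so a genuinely new threat passes straight through.

```python
# Hypothetical sketch of a purely signature-based check; the hashes are invented.
import hashlib

KNOWN_BAD_SIGNATURES = {
    "3f786850e387550fdab836ed7e6dc881de23001b",
    "89e6c98d92887913cadf06b2adb97f26cde4849b",
}

def is_known_threat(payload: bytes) -> bool:
    """Flags a payload only if its hash matches a previously recorded signature."""
    return hashlib.sha1(payload).hexdigest() in KNOWN_BAD_SIGNATURES

# A previously unseen attack has no signature on file, so it is not detected.
print(is_known_threat(b"previously unseen attack payload"))  # False
```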

Machine learning allows AI security tools to detect threats, including those that have not previously been encountered, by identifying shared characteristics within families of threats. AI is especially effective at filtering out threats that exhibit known patterns. These patterns may be highly complex and would otherwise require a large amount of repetitive work on the part of a human cyber security expert to identify. In this way, AI can greatly reduce the expert's workload. This preliminary filter enables security teams to focus on new threats or undiscovered symptoms of known attacks.
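As a rough illustration of that preliminary filter, the sketch below assumes scikit-learn is available and uses invented feature vectors and labels; a classifier trained on samples from a known attack family can flag a new sample that shares the family's characteristics, even though that exact pattern has never been seen before.

```python
# Rough sketch of an ML-based preliminary filter.
# Assumes scikit-learn; features, values and labels are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each sample: [connection rate, failed logins, payload entropy]
X_train = [
    [10, 0, 2.1], [12, 1, 2.3],       # benign traffic
    [900, 40, 7.8], [750, 35, 7.5],   # samples from a known attack family
]
y_train = [0, 0, 1, 1]                # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A new sample that shares the family's characteristics is flagged for review,
# even though this exact pattern has never been seen before.
new_sample = [[820, 38, 7.6]]
print(model.predict(new_sample))        # [1] -> escalate to a human analyst
print(model.predict_proba(new_sample))  # confidence scores useful for triage
```

In practice the feature set, model and thresholds would be far richer, but the division of labour is the same: the machine handles the repetitive pattern-matching, while the expert handles the judgement calls.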

However, relying on AI-driven cyber security tools is not without risk. Critics have pointed out that AI technology is (as yet) unable to conduct risk assessments in the same way as a human can. For example, an AI-driven cyber security system that is rolled out across thousands of businesses will not know the impact one server has on a particular business compared with another server. If two threats appear at once, AI will be unable to focus remediation efforts on what really matters. In this context, human input will remain indispensable in driving the appropriate response to individual risks. Critics are also concerned by the sheer volume of "false alarms" raised by the AI component. They argue that experts, numbed by the constant stream of false alarms, will end up dismissing large numbers of potential threats.

Perhaps the greatest cause for concern is the use of AI technology by cyber criminals, with many hackers using AI-driven tools to imitate human behaviour. According to the Financial Times, gangs are getting around systems designed to catch automated attempts to gain access to websites by simulating how humans log on with fake mouse movements or varying typing speeds⁸. In addition, AI is being used by hackers to automate the process of finding cracks in a system's security and for predicting passwords. Using AI-driven security tools to spot AI-driven attacks may be the only way to effectively counter this new threat.

Legal issues

Delegating decision-making to an autonomous machine brings with it a raft of legal issues. The European Parliament recently considered such issues in its draft report containing recommendations to the Commission on Civil Law Rules on Robotics (the 'Report'). Regardless of whether the substance of the Report becomes EU law, or whether the UK follows its recommendations, the Report provides a helpful indication of what future regulation might look like.

AI raises the fundamental question of whether robots should possess legal status. In response to this question, the Report calls for the creation of a specific legal status for a "smart robot", which would possess its own distinct rights and obligations. It calls for a common European definition of a "smart robot" which should take into consideration the following characteristics:

  1. The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (interconnectivity) and the analysis of this data.
  2. The capacity to learn through experience and interaction.
  3. The extent to which the robot is supported by a human.
  4. The capacity to adapt its behaviours and actions to its environment.

As a result, depending on the level of human interaction, AI-driven cyber security tools may be considered "smart robots" under future European legislation.

Machine liability

If a cyber security system has its own legal status, to what extent can that system be held liable for its acts and omissions? The current scope of liability, as set out earlier in this article, does not contemplate a machine being responsible, in whole or in part, for an individual's loss.

To take a simple example of a tortious claim (ignoring any contractual element) – a bank suffers a cyber attack and an individual loses £5,000 from his account. The "smart robot" utilised by the bank's cyber security provider to protect the individual's account failed to prevent the attack. The human input was minimal – the "smart robot" did not flag the threat to the cyber security expert. The relevant algorithm has become opaque, so it is impossible for the expert to trace the "smart robot's" series of decisions in order to identify a reason for the breach. How can the chain of causation be established when a large stretch of it was dictated by a machine-taught "smart robot" whose "instructions" came from an impossibly broad range of sources? The Report recognises that current rules on liability are insufficient to cope with a scenario like this and calls for new rules under which a machine can be held – partly or entirely – liable for its acts or omissions.

Interesting points raised by the Report include suggestions that:

  • any legal solution applied to "smart robots'" liability should not restrict, in any way, the type or extent of damages that may be recovered by the aggrieved party, nor should it limit the forms of compensation that may be offered to them,
  • any future legislative instrument should impose strict liability on the "smart robot" for damage caused by it, meaning the claimant would only need to establish a causal link between the harmful behaviour of the robot and the damage suffered by the injured party (and would not be required to establish negligence on the part of the robot),
  • future legislation should contain an obligation on the "producer" (in our circumstances, the cyber security provider) to take out insurance for any "smart robots" it uses,
  • a European agency for robotics and AI should be formed,
  • a compensation fund should be created to ensure that damages can be provided in cases where no insurance cover exists.

Regardless of the proposals of the Report, we are a long way from a legislative landscape that reflects its content. Until such legislation is enacted in the UK, it is unclear whether a person suffering loss at the hands of a "smart robot" would be able to recover that loss from the robot's developer or anyone else.

Data protection

Developing cyber security technology with the ability to take autonomous decisions often relies on the input and analysis of large quantities of data from different sources. Much of this data may be personal (i.e. information relating to an individual), which brings the technology within the realm of data protection rules. This triggers a variety of data protection compliance challenges, including providing appropriate notice to users and ensuring data is only used for the purposes for which it was initially collected.

That said, a cornerstone of data protection law is the obligation to implement "appropriate" security to protect personal information. With technological advancements presenting new cyber risks, "appropriate" may now include adopting AI-driven cyber security tools to counter threats. This creates a natural tension with other aspects of data protection law: implementing AI-driven security tools may satisfy requirements to keep data secure, but may not comply with the requirement to provide users with clear and effective notice. Striking the right balance, particularly with one eye on the increased fines under the GDPR, is a significant challenge for companies.

Top tips: Implementing AI-driven cyber security tools

  1. Carry out due diligence on cyber security providers and seek warranties that the product complies with current and future regulatory frameworks impacting this area (e.g. the GDPR, the Cyber Security Directive and the new e-Privacy Regulation⁹).
  2. Consider the following in any services contract with an AI cyber security provider:
    • Adapting service levels and the definition of service failure to reflect the different standards, the points of reference used to assess those standards, and the types of failure likely to occur at the hands of a machine. Service levels and the definition of service failure are generally drafted with human error in mind. Machine error, although rarer than human error, tends to have more extreme consequences.
    • Including specific standards as to the quality of the data fed to a machine. As with any AI-driven technology, a machine's performance improves as it is fed with more data. If it is corrupted with bad data (e.g. data fed to it by another customer of the cyber security product), the product may not work as intended.
    • Requiring a separate liability cap to cover any breaches caused by the AI component in any agreement with a supplier. This would likely be significantly higher than other liability caps to reflect the more extreme consequences of machine error.
    • If the customer has audit rights in relation to the supplier, amending these in light of the fact that traditional audit processes may no longer be sufficient. A human will be incapable of "checking" an AI system's series of decisions. It may be that, in addition to accountants and audit professionals, IT forensic experts will need to be included in any audit process.
    • Amending termination provisions. If the AI system is licensed software, the intellectual property rights in the data generated by your company and fed to the system may well remain with the supplier post-termination. It is therefore important to address in the contract how to reimport such information into a new AI system, so as to reap the rewards of a complete period of learning.
  3. Always back up your data to mitigate data loss resulting from a cyber attack.
  4. Ensure your cyber insurance covers the use, misuse and malfunction of AI-driven technology.