On 18 October 2017, WP29 published proposed Guidelines on two key aspects of the GDPR – namely: (i) personal data breach notification; and (ii) profiling and automated decision making.

We set out the key points and takeaways in this post.

Data Breach Notification

The proposed Guidelines clarify how the Data Protection Authorities (DPAs) expect organisations to apply the personal data breach reporting requirements in practice, and adopt a fairly broad definition of “personal data breach” which covers not only a breach of the confidentiality or integrity of personal data, but also a loss of its “availability”. In particular:

  • A personal data breach may arise from the non-availability of a system which contains personal data (e.g. following a DDoS or malware attack), even on a temporary basis. This will be a cause for concern for breach reporting and security as much as any compromise (unauthorised exfiltration, deletion or alteration) of the data itself.
  • Guidance is provided on where notification may not be required, for example where: (i) the compromised data is already in the public domain; (ii) the data is securely encrypted; or (iii) availability is not compromised because the controller has access to other sources of the data. Note, however, that the Guidelines clarify that encryption alone does not necessarily remove the need to notify: there have been many reported incidents in which hackers have infiltrated networks and obtained the keys to encrypted data.
  • The “awareness” that triggers the 72-hour breach reporting obligation arises where the organisation has “a reasonable degree of certainty” about the facts pertaining to a personal data breach – this permits time to investigate before the clock starts, but investigations should happen quickly, with a decision formed soon thereafter as to whether a breach has occurred.
  • Notification in phases is encouraged if not all information is available at the outset (i.e. within 72 hours), with the ability to modify or supplement a notification as more information comes to light.
  • The requirement in Article 28(3) to ensure contractual arrangements with a data processor include breach reporting is re-emphasised, but the Guidelines clarify that the processor is expected to report a breach to the controller “immediately”. On this basis, controller-processor agreements should be updated to ensure “immediate notification” provisions are in place.

Profiling and Automated Decision Making

The Guidelines seek to clarify the distinction between (i) profiling; (ii) automated decisions based on profiling; and (iii) solely automated decisions which produce a significant effect on data subjects, as regulated by Article 22 GDPR:

i. Profiling

  • ‘Profiling’ is broadly defined as any automated processing of personal data which leads to an evaluation of the personal aspects of an individual (whether or not there is human intervention).
  • When an organisation is ‘profiling’, the Guidelines explain in clear terms how regulators expect it to: (i) apply the principles of lawful, fair and transparent processing; (ii) minimise the data being used; (iii) ensure the accuracy of the underlying data sets; and (iv) clearly explain to individuals, and respect, their right to object under Article 21 GDPR.
  • Practically, organisations should bear in mind that wherever profiling takes place they are expected to:
    • provide a clear explanation of the factors involved, sufficient for data subjects to understand what goes into any profiling activity (whether solely automated or otherwise) (principle of transparency); and
    • ensure the technology used to run these arrangements does not create unfair or discriminatory outcomes (principle of fairness).

ii. Solely automated decisions which produce a significant effect on data subjects (Article 22)

  • The Guidelines make clear that where an algorithm is run as part of a wider decision-making process that does involve human review, the initial profiling will not be regulated under Article 22. This is a helpful clarification.
  • A fairly broad definition is adopted of a decision that has a significant effect on a data subject. It covers not only decisions that have an obvious impact on an individual (e.g. obtaining credit or insurance), but also decisions that are not intended to have a particular impact on an audience yet may indirectly cause harm to a sub-set of that audience (for example, advertising online gambling could be harmful to gambling addicts).
  • On the degree of human intervention required for decision making to fall outside the scope of Article 22, the Guidelines clarify that “any intervention needs to be meaningful” and not “just a token gesture”.

iii. General guidance on automated decision making

  • The Guidelines suggest that the entitlement to conduct automated decision making where ‘necessary for the performance of a contract’ should be construed narrowly – i.e. only where no other form of decision making can take place.
  • Where automated decisions are being made, the Guidelines stress the importance of carrying out frequent assessments of the data sets to check for any bias, and of developing ways to address any prejudicial elements in order to protect data subjects.

Both proposed Guidelines are open for consultation until 28 November 2017.