Welcome to our data protection bulletin, covering the key developments in data protection law from March 2022.
- US-EU agree principles for data transfer mechanism to replace Privacy Shield
- ICO publishes guidance on use of video surveillance
- European Data Protection Board publishes guidelines on Codes of Conduct as tools for transfers
- ICO publishes new chapter of its anonymisation guidance
- Internal EDPB guidance could address inconsistencies in GDPR enforcement
- EDPB publishes guidelines on dark patterns in social media platforms
- Online Advertising Programme
- New digital identity law and regulator
- New proposals to address security risks to public telecoms networks and services
- UK law firm fined £98,000 after data breach
- Clearview AI fined €20,000,000 for GDPR breach involving facial recognition on public web sources
- Facebook fined $19 million for breaching GDPR
- Caixabank fined €2,100,000 for a violation of Articles 6 and 7(4) GDPR
- Local authority held not to be vicariously liable for breaches of GDPR by employee on a frolic of their own
- UK High Court to allow children's privacy law case against TikTok to proceed for now
- Judicial Review application by unvaccinated data subject challenging the lawfulness of the Covid Cert Check NI app denied
US-EU agree principles for data transfer mechanism to replace Privacy Shield
On 25 March 2022, the European Union and the US announced that they had reached an agreement in principle on a new framework for transatlantic data flows. This follows many months of legal uncertainty after the landmark ruling in Data Protection Commissioner v Facebook Ireland Ltd, Maximillian Schrems and others (Case C-311/18) ("Schrems II") in July 2020, which declared the previous EU-US Privacy Shield invalid on the basis of two main flaws with the Privacy Shield as a transfer tool, namely that:
- US legislation regulating the access and use of personal data by intelligence services does not provide protections for personal data that are essentially equivalent to legislation in the EU (and, since Brexit, the UK); and
- Individuals whose data was transferred to the US under the Privacy Shield were not provided with sufficient means of redress in respect of the actions of US intelligence agencies - in particular, those individuals did not have an equivalent right to an effective remedy before an independent and impartial tribunal. The three key requirements for an effective method of redress are that it must be: (a) a body validly established by law; (b) sufficiently independent from the Executive; and (c) authorised to require appropriate corrective action.
At present, the exact details are still being agreed. However, according to a joint statement by the White House and the European Commission, the new scheme will see an Executive Order (a declaration of the US president which has the force of law) under which the US government will commit to:
- enhancing privacy and civil liberties safeguards and oversight of intelligence authorities;
- introducing a set of binding safeguards to limit access to data by intelligence authorities to what is necessary and proportionate to protect defined national security objectives; and
- establishing a new two-tier redress mechanism to investigate and resolve complaints from Europeans relating to access to their personal data by intelligence authorities.
In respect of this last issue, the proposed redress mechanism will include the establishment of a specific Data Protection Review Court, consisting of non-government officials who have the authority to independently adjudicate claims and provide appropriate remedies, as required.
The replacement deal is likely to be tested by fresh legal challenges once finalised. From the factsheets published, it appears that organisations will need to be signed up to the Privacy Shield framework in order to benefit from the new transfer mechanism. Organisations that have maintained their Privacy Shield registrations will therefore be well-placed to benefit.
Of particular interest will be the question of whether the Executive Order would provide adequate and long-lasting solutions to the issues raised by Schrems II. The key question is whether the proposed Data Protection Review Court will satisfy the three limbs identified above or whether it will fall short of being an independent and impartial tribunal.
European Commission president, Ursula von der Leyen spoke at a joint press conference with US president Joe Biden stating that the new deal "will enable predictable, trustworthy data flows between the EU and the US, safeguarding privacy and civil liberties." In further comments, von der Leyen suggested that the new deal manages "to balance security and the right to privacy and data protection."
Max Schrems, the privacy lawyer and campaigner responsible for the claim that invalidated the previous Privacy Shield, was among the first to criticise the announcement. He responded on Twitter, stating: "This failed twice before. What we heard is another ‘patchwork’ approach but no substantial reform on the US side. Let’s wait for a text but my [first] bet is it will fail again."
Only time will tell, as the legal documents are drawn up and the new transfer mechanism is implemented, whether this regime will appropriately address the issues raised by Schrems II. Watch this space.
ICO publishes guidance on use of video surveillance
The ICO has published guidance on the processing of personal data by video surveillance systems (the "Surveillance Guidance").
The Surveillance Guidance outlines how data protection principles can be complied with when using certain surveillance systems, including traditional CCTV, automatic number plate recognition, body worn video, drones, facial recognition technology, dashcams and smart doorbell cameras. The Surveillance Guidance applies only to public and private organisations, and so does not apply to surveillance systems used by individuals on their private property.
The key takeaways from the Surveillance Guidance are set out below and provide a useful tool for any business looking to use surveillance technologies.
- Surveillance systems should only be utilised if they are a necessary and proportionate response to the issue being addressed.
- Before using surveillance systems, a data protection impact assessment ("DPIA") must be performed for any type of processing that is likely to result in a high risk to individuals such as processing special category data, large-scale public monitoring, and monitoring individuals in the workplace.
- As with all data processing, a lawful basis is needed for processing data. The most appropriate ground is likely to be either legitimate interests or reliance on a public task (to the extent the processing is carried out by a public authority in the public interest or under official authority).
- In terms of fair and lawful processing, in places where individuals have a heightened expectation of privacy, surveillance should only be used in the most exceptional circumstances, where it is necessary to deal with very serious concerns.
- Finally, people must be informed when they are in an area where a surveillance system is in operation. Such information should be positioned at a reasonable distance from the places monitored, so that individuals can recognise the circumstances of surveillance before entering the area.
Updated guidelines on the use of Codes of Conduct as transfers tools
On 4 March 2022, the European Data Protection Board ("EDPB") published the final version of its adopted guidelines (the "CoC Guidelines") on using codes of conduct ("Codes") as appropriate safeguards for international transfers of personal data under the EU GDPR. The CoC Guidelines were first published in July 2021 and have since been updated. They are intended to clarify key areas relating to the use of Codes, including: a) content requirements; b) the adoption process; c) the different actors involved in setting up Codes; and d) the guarantees to be provided by Codes.
Under Article 46 of the EU GDPR, controllers and processors are required to put in place appropriate safeguards for international transfers of personal data. Codes are an appropriate safeguard for this purpose. In order to offer sufficient protection, a Code should set out the rules and obligations which must be complied with by the data importer to ensure that personal data continues to be adequately protected in a third country. The CoC Guidelines contain a checklist of elements to be included in a draft Code, covering the level of protection required by the EU GDPR in light of the ruling in Schrems II.
An interesting feature of the CoC Guidelines is that the data exporter does not itself have to comply with any Code it relies on to transfer personal data to a third country, by virtue of the fact that it is itself subject to the EU GDPR.
For a Code to be adopted, it must first be approved by a competent supervisory authority in the EEA and must then be recognised by the European Commission through an implementing act. There are slightly different procedures for Codes which are intended to be used by exporters from more than one EU Member State, which the CoC Guidelines refer to as "transnational codes". These transnational codes require an opinion from the EDPB in addition to the other steps. Once established, each Code will be monitored by a monitoring body with an establishment in the EEA, accredited by a competent supervisory authority.
ICO publishes new chapter of its anonymisation guidance
The ICO has now published Chapter 4 of its draft guidance on anonymisation, pseudonymisation and privacy enhancing technologies (the "Draft Guidance"). In our June 2021, October 2021, and February 2022 bulletins we discussed the first three chapters of the Draft Guidance respectively.
This new fourth chapter covers the accountability and governance measures that are required for the anonymisation of data. The Draft Guidance explains that a governance structure should be put in place covering how a controller plans for anonymisation, how it identifies and mitigates anonymisation risks, how it ensures effectiveness of anonymisation and how it ensures it complies with all applicable legislation. Responsibility for this structure should sit with someone of sufficient seniority, who has an understanding of the circumstances of the anonymisation process and relevant technical and legal considerations, to ensure compliance throughout the organisation.
The Draft Guidance explains that organisations should consider the following key points:
- A Data Protection Impact Assessment ("DPIA") should be used to structure and document relevant decisions;
- Organisations should understand the purpose and method for anonymisation and be able to identify different risks associated with different purposes for and methods of anonymisation;
- Organisations should work with other organisations likely to be processing or disclosing other information that could change the effectiveness of the originating organisation's anonymisation. For example, if two authorities are releasing anonymised data relevant to the same dataset, the risk of re-identification of an individual should be jointly understood;
- Staff should be appropriately trained so that they have a clear understanding of anonymisation techniques, risks involved and how to mitigate these risks.
The Draft Guidance confirms that even where personal data is anonymised, organisations should remain transparent about their processing. This can be achieved by explaining how and why personal data is being anonymised; saying what safeguards are in place; being clear about any risks; and describing publicly the reasoning for publishing anonymous information.
Internal EDPB guidance could address inconsistencies in GDPR enforcement
March saw the EDPB disclose internal guidance on the practical implementation of "amicable settlements" in the context of GDPR enforcement ("AS Guidance"). Although this AS Guidance was adopted in November 2021, until now it had not been officially published and disclosure only arose because of a freedom of information request. The AS Guidance was published to address inconsistencies in the implementation of amicable settlements across the EU in cases that originated as a complaint from a data subject. This is useful for businesses as it gives an indication of when supervisory authorities ("SAs") should initiate an amicable settlement process rather than proceed with more severe enforcement action in the context of a data breach.
Amicable settlements are not defined by the GDPR but, as recognised by the AS Guidance, most Member States see amicable settlements as a process of alternative dispute resolution. The requirements and conditions that govern this process largely depend on the law and policy of each Member State. The AS Guidance explains that many SAs use "amicable settlements" when dealing with complaints, but there are diverse variations across SAs and their interpretations due to domestic legislative differences.
The AS Guidance outlines general criteria which could guide SAs in initiating an amicable settlement procedure. These criteria are:
- There is a likelihood that the case can be resolved amicably;
- Only a limited number of data subjects are affected;
- There are no recognisable systemic failures;
- The data breach is incidental or accidental;
- The case involves the processing of a limited amount of personal data;
- The effects of the violation are not serious in duration or nature; and
- Further violations are unlikely.
The AS Guidance also suggests that there are public interest and 'broader societal significance' components which may help to determine whether or not the amicable settlement procedure is appropriate. Helpfully, the AS Guidance provides a map of the steps that may be used by SAs when deciding whether cases are suitable for an amicable settlement. The AS Guidance stresses that the checklist is not a simple 'yes/no' flowchart but a series of key decision stages. This may be of particular interest to organisations concerned as to whether their engagement with an SA will lead to an amicable settlement or not.
The AS Guidance also provides a list of countries in which an amicable settlement procedure is not possible because it conflicts with national legislation. These include France, Spain, Portugal and Poland, among a number of others listed in Annex 2 of the AS Guidance.
EDPB publishes guidelines on dark patterns in social media platforms
On 14 March 2022, the European Data Protection Board ("EDPB") adopted draft guidelines for designers and users of social media platforms on how to assess and avoid "dark patterns" on social media interfaces that infringe on GDPR requirements ("Dark Pattern Guidelines").
The Dark Pattern Guidelines define dark patterns as interfaces and user experiences on social media platforms that lead users into making unintended, unwilling and potentially harmful decisions regarding the processing of their personal data. More specifically, the Dark Pattern Guidelines define six categories of dark patterns:
- Overloading: presenting users with a large quantity of requests, information, options or possibilities in order to prompt them to share more data or unintentionally allow personal data processing which is likely to be outside of the expectations of the data subject.
- Skipping: designing an interface or user experience in a way that users forget or easily overlook all or some of the data protection aspects.
- Stirring: affecting the choices users would make by appealing to their emotions or using visual nudges.
- Hindering: obstructing or blocking users in their process of becoming informed or managing their data by making actions hard or impossible to achieve.
- Fickle: the interface design is inconsistent and unclear, making it hard for the user to navigate the different data protection control tools and to understand the purpose of the processing.
- Left in the dark: an interface is designed in a way to hide information or data protection tools or to leave users unsure of how their data is processed and what kind of control they might have over it.
The key problem which the Dark Pattern Guidelines intend to address is whether the processing is fair under Article 5(1)(a) of the GDPR. The techniques identified all, in slightly different ways, represent a threat to the fairness of the processing and thus to compliance with the GDPR. There is a related concern as to whether any of the techniques identified above would be compatible with data protection by design and by default under Article 25 of the GDPR.
The Dark Pattern Guidelines break the above techniques down into 15 specific dark pattern behaviours and consider how these behaviours are displayed in the life cycle of a social media account. Best practice guidance is available in the Dark Pattern Guidelines and focusses on transparency, shortcuts to privacy information, and making the information easy to understand through examples and clear wording.
The Dark Pattern Guidelines can be found here and are subject to public consultation until 2 May 2022.
It has been a busy month for the Department for Digital, Culture, Media & Sport ("DCMS"). It published a consultation to tackle a lack of transparency and accountability in paid-for online advertising, known as the Online Advertising Programme ("OAP"); announced plans to introduce new legislation to make digital identities more secure; and launched a consultation to address security risks to public telecoms networks and services. We have provided a summary of the key details below.
Online Advertising Programme
Online advertising has grown rapidly and has fundamentally redrawn how businesses and consumers interact. The industry now plays a critical role in the monetisation of the internet and the ability of consumers to access myriad online services. The OAP will work in conjunction with the measures being introduced through the forthcoming Online Safety Bill in relation to tackling fraudulent paid-for advertising.
The DCMS identifies that the harms associated with online advertising can broadly be divided into: (1) harmful content of adverts; and (2) harmful placement or targeting of adverts (especially when the harmful content is targeted at vulnerable groups). A lack of transparency and accountability are labelled as the two core factors driving the prevalence of (1) and (2). In particular, the opaque, complex and automated nature of the online supply chain, and a lack of consistency in holding actors to account under existing regulatory frameworks, are identified as contributing factors. A number of options for regulatory reform have been outlined in the OAP including:
- A self-regulatory approach which would involve relying on the Advertising Standards Authority's existing regulation through the Committee of Advertising Practice Code and the new Online Platforms and Network Standards codes;
- A statutory regulator to backstop more fully the self-regulatory approach which would have powers to enforce the self-regulatory codes in existence or in development through tougher sanctions on actors who do not comply; or
- A fully statutory approach to ensure appropriate measures are put in place and enforced so that all actors in the supply chain address both legal and illegal online advertising harms.
New digital identity law and regulator
The DCMS has announced plans to introduce new legislation to make digital identities more trustworthy and secure, following the digital identity and attributes consultation (which was published last year). Specifically, it will allow individuals to set up digital IDs on a voluntary basis, accessible online or via a phone app, which can be used in person or online to prove their identity instead of relying on traditional physical documents. Different types of digital IDs will be able to be created for different purposes, limiting the personal information required to be disclosed for the particular purpose.
Under the planned legislation:
- the legal validity of digital forms of identification will be confirmed as being equal to physical forms of identification;
- a legal gateway will be created to allow trusted organisations to carry out verification checks to validate a person's identity against official data held by public bodies; and
- a robust and secure accreditation and certification process will be established so organisations can prove they are meeting the security and privacy standards needed to use digital identities.
A new "Office for Digital Identities and Attributes" will also be established to oversee the security and privacy standards for digital identification.
The DCMS has confirmed that digital identities will not be compulsory even once the legislation comes into force.
The DCMS response to the digital identity and attributes consultation is available here.
New proposals to address security risks to public telecoms networks and services
The DCMS has announced that it has launched a new consultation and survey on proposals for new security regulations ("Regulations") and a code of practice ("Code of Practice") to address security risks to public telecoms networks and services. The government will be using its new powers under the Telecommunications (Security) Act 2021 to make the Regulations and Code of Practice, which will be relevant to all public telecoms networks and service providers.
The draft Regulations cover obligations including (but not limited to): ensuring that network providers understand and record the risks of security compromises to network architecture and act to reduce them; protecting network management workstations from exposure to incoming signals and the wider internet; and protecting tools that enable the monitoring or analysis of the use or operation of UK networks and services. The Regulations apply to the entire supply chain to make sure appropriate security reviews are undertaken across the entire network.
Once implemented, the Regulations and Code of Practice will have a significant impact on public telecoms providers. Providers will need to assess their current security arrangements and incur the cost of ensuring that their networks are secure and that a high level of security is maintained.
For more information on the consultation click here.
UK law firm fined £98,000 after data breach
Tuckers Solicitors LLP ("Tuckers") has been fined £98,000 by the Information Commissioner’s Office (the "ICO") as a result of a data breach caused by ransomware, during which hackers accessed 24,000 court bundles some of which contained sensitive personal data such as witness statements and medical reports, 60 of which were released on the dark web.
Following an investigation, the ICO issued a Monetary Penalty Notice, the first arising from a ransomware attack, having concluded that Tuckers had contravened Article 5(1)(f) GDPR, citing "data security contraventions" and "inadequate" technical and organisational measures to protect sensitive data.
Specifically, the ICO noted that the firm had not implemented multi-factor authentication ("MFA") for remote access to its systems, failed to encrypt personal data and had a history of failing to implement adequate patch management and apply its data retention policies to ensure data minimisation. This was notwithstanding that, at the time of the incident, the firm was aware that its security did not meet the standard of the NCSC's Cyber Essentials scheme.
Clearview AI fined €20,000,000 for GDPR breach involving facial recognition on public web sources
Three data subjects lodged complaints with the Garante, the Italian supervisory authority, against Clearview A.I. Inc. ("Clearview") regarding Clearview's processing of their personal data without their consent. In addition, two "organisations committed to defending the privacy and fundamental rights of individuals" submitted reports on the activities of Clearview, referring to prior decisions of supervisory authorities in Germany and Sweden.
Clearview conducts facial recognition on public web sources and is therefore a data controller. Its technology is used by law enforcement agencies to assist with identifying criminals, and Clearview’s terms state that it is the responsibility of its customers to "verify that the use of this product is legitimate in light of the local regulations applicable to it." Clearview has highlighted that it neither offers its services in the EU nor monitors behaviour in the EU. It is based in the US and has no branches in the EU. It also made clear that it is compliant with US law, being the country in which it operates, and that it would be impossible to take into account all existing laws throughout the world. In addition, Clearview argued it was akin to a search engine (such as Google), and as such “… since Google's search engine is presumed to comply with European laws because Google is established in the EU and offers its services to users in the EU, if the Regulation were also found to apply to Clearview, the processing of the complainant's data should be considered lawful”.
The Garante made the following findings in relation to Clearview:
1. It not only collected personal data but also biometric data (through the process of converting photographs);
2. It breached:
a. Article 5(1)(a) GDPR which requires "compliance with the principles of lawfulness, fairness and transparency when processing the data subject's data";
b. Article 5(1)(b) GDPR which "provides for compliance with the principle of purpose limitation";
d. Article 6 GDPR as it had no lawful basis for processing personal data. Its reliance on the legitimate interest exemption was misguided. Garante noted reliance on this exemption "cannot but be at odds with the rights and freedoms of the persons concerned, and in particular with the serious threat to the right to privacy, the prohibition of automated processing and the principle of non-discrimination inherent in the processing of personal data such as that carried out by the Company";
e. Article 9 GDPR as a result of its "processing of special categories of data (with reference to biometric data)";
f. Articles 12, 13, 14 and 15 GDPR as the data subjects "had to repeat their requests for access several times before receiving a reply from Clearview, despite the fact that the contact channels indicated on the company's website (online form and e-mail address dedicated to privacy requests) had been used." Moreover, "Clearview, in order to process requests for access, has asked the interested parties to provide identification, such as an identity document, which is excessive in relation to the objective pursued" as there were no "reasonable doubts" as to their identity. Further, Clearview did not provide timely, full, up to date, "precise and transparent communication" to the data subjects;
g. Article 27 GDPR by not having its representative in an EU territory;
3. Its violations were akin to mass surveillance and were therefore very serious in nature. They were not isolated events and continued even after "service was no longer offered to customers established in the European Union." With this in mind, the Garante directed Clearview to do the following:
i. "prohibit the processing of: i) further collection, by means of web scraping techniques, of images and related metadata concerning persons who are on Italian territory; ii) prohibit any further processing of common and biometric data processed by the Company through its facial recognition system concerning persons who are on Italian territory.";
ii. Delete the "aforementioned data, without prejudice to the obligation to provide timely feedback to requests to exercise the rights" given under Articles 15-22 GDPR, "which may have been received in the meantime from interested parties";
iii. Designate within 30 days "a representative in the Italian territory to act as interlocutor, in addition to or instead of the data controller, with the interested parties in order to facilitate the exercise of their rights.";
iv. Provide "adequately documented feedback, within thirty days of notification of this measure, of the initiatives taken to implement the above order" and to provide details of "measures put in place to facilitate the exercise of the rights of the persons concerned."; and
v. Pay a cumulative sum of €20,000,000 for contravening Articles 5(1)(a), (b) and (e), 6, 9, 12, 13, 14, 15 and 27 GDPR.
Facebook fined $19 million for breaching GDPR
The Irish Data Protection Commissioner ("DPC") has fined Meta Platforms Inc., the parent company of Facebook, $19 million for violating GDPR, following its investigation into a series of data breach notifications lodged in 2018; hackers are believed to have gained access to roughly 50 million Facebook user accounts.
The DPC concluded that Facebook had breached Articles 5(2) and 24(1) GDPR by failing to have "appropriate technical and organisational measures" which would have allowed it to "readily demonstrate the security measures that it implemented in practice to protect EU users' data". Article 5(2) GDPR requires that companies demonstrate compliance with the requirements to ensure EU residents' personal data is processed "in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’)", while Article 24(1) requires data controllers to implement "appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with" the law.
In response to this fine, Meta issued a statement saying that the penalty related to "record keeping practices from 2018 which have since been update[d], not a failure to protect people's information".
Caixabank fined €2,100,000 for a violation of Articles 6 and 7(4) GDPR (link to original article)
The Spanish Supervisory Authority, the AEPD, has fined Caixabank €2,100,000 for a violation of Articles 6 and 7(4) GDPR, following complaints from customers that the bank had asked them to consent to terms for the processing of their personal data through pre-ticked boxes. If customers did not consent to these terms, the bank would charge them a €5 per month fee for the maintenance of their bank account.
The bank argued that this fee was not a charge but rather a necessary fee (from which it sometimes granted exemptions) to allow it to provide its services to customers, and was therefore an essential element of the contract. The bank further argued that Article 7(4) GDPR was not applicable, since the terms of the contract did not impose a condition and consent for the processing of personal data was not a precondition for contracting with the bank.
In the AEPD's view, linking an exemption from fees to the provision of consent for the processing of personal data resulted in consent not being given freely, on the basis that not giving consent resulted in the payment of maintenance fees and was therefore detrimental to the data subject.
The AEPD rejected the bank's argument that these charges were to be considered an inherent element of the contract. The AEPD highlighted that they were at odds with the national law regarding payments for bank services (Real Decreto-ley 19/2017 de cuentas de pago básicas, traslado de cuentas de pago y comparabilidad de comisiones), which requires that fees for basic bank accounts need to be freely agreed upon between the customer and the bank.
The AEPD therefore found that the two legal bases for the lawful processing of personal data on which the bank had relied (i.e. consent and performance of a contract), were not made out, and issued a €2,000,000 fine against the bank for infringing Article 6 GDPR by imposing conditions based on obtaining consent for the processing of personal data which were not necessary for the performance of a contract in breach of Article 7(4) GDPR. In addition, the bank was fined an additional €100,000 for requesting this consent through pre-ticked boxes, in breach of Article 6(1) GDPR.
Local authority held not to be vicariously liable for breaches of GDPR by employee on a frolic of their own
In a recent decision, the High Court revisited the principles governing vicarious liability set out by the Supreme Court in Various Claimants v Morrisons Supermarkets [2020] AC 989 (which we covered here) with reference to the handling of sensitive data by an employee ("RB") of a local authority.
RB worked for the Defendant, Luton Borough Council's social services department, as a Contact Assessment Worker, meaning she had responsibility for supervising and assessing contact sessions in circumstances where she was under a legal duty (principally, under the Children Act 1989) to safeguard the child's wellbeing. As part of this role, RB had access to the Defendant's social services records held on the Defendant's computer system. This included the records of the Claimant, who had made a complaint against her ex-husband, RB's current partner.
RB accessed a number of records at work relating to the Claimant's police report against her ex-husband and likely took photographs of the documents on her mobile phone and printed a document containing the information. These pictures were then sent or shown to the Claimant's ex-husband who told others within the community. The Claimant subsequently became concerned for her safety and alleged that she was suffering anxiety and distress.
RB was arrested and charged with the offence of unauthorised access to computer material, contrary to section 1 of the Computer Misuse Act 1990. She pleaded guilty and was sentenced to three months imprisonment, suspended for 12 months.
The Claimant, Ms Ali, brought proceedings against the Defendant alleging that it was vicariously liable for RB’s actions, which it was common ground had breached the Claimant’s rights under the GDPR, at common law and under the Human Rights Act 1998.
The Court considered the authoritative test set out in Morrisons for determining vicarious liability in cases of employment, where Lord Reed drew a distinction between cases where the employee was engaged, however misguidedly, in furthering his employer's business, and cases where the employee is engaged solely in pursuing his own interests: on a "frolic of his own". The Court also had regard to Lord Reed's observations in Morrisons that cases involving sexual abuse have not followed the authoritative test but have instead focused on different factors, such as the wrongdoer's misuse or abuse of authority over victims for whom they had some responsibility or who were entrusted to their care.
The Claimant sought to distinguish the decision in Morrisons, arguing that the fact that the primary purpose of RB’s job was the safeguarding and welfare of vulnerable persons meant that it was appropriate to apply, by analogy, the principles which had been developed in the sexual abuse cases.
The High Court rejected this argument, concluding that the different approach adopted in the sexual abuse cases was a "principled" one which focuses on the fact that the wrongdoer is the very person to whom the Defendant has entrusted the care, custody or education of the victim. The Court also considered it pertinent that, in this case, RB's accessing of the data relating to the Claimant and her children formed no part of the work which she was engaged by the Defendant to carry out. In Morrisons it could at least be said that the employee was unlawfully disclosing data which he had been tasked with processing lawfully; RB, by contrast, was not tasked in any shape or form with either accessing or disseminating the information in question.
The Court found that RB's actions were carried out solely in pursuit of her own agenda, namely divulging information to the Claimant's husband, with whom she had a relationship. The fact that there was a safeguarding element to her job only served to underline how plainly she was not engaged in furthering her employer's business. The disclosure of the data to the husband was to the detriment of the Claimant and her children, whose safety and interests, as users of the Defendant's services, it was part of RB's core duties to further and protect. She was, therefore, on a frolic of her own, and the claim failed.
UK High Court to allow children's privacy law case against TikTok to proceed for now
The UK High Court has, for now, allowed the continuation of a representative claim (i.e. an opt-out class action) against TikTok which alleges that the social media platform has been responsible for the unlawful processing of children's data. The former Children's Commissioner for England, Anne Longfield, brought the representative action on behalf of the Claimant, a girl under the age of 16, and other children residing in the UK or the European Economic Area who had accounts with or used TikTok since May 2018.
This case was stayed pending the outcome of the UK Supreme Court's decision in Lloyd v. Google [2021] UKSC 50 (which we covered in detail here), which, as readers will recall, was similarly a representative claim, albeit alleging breaches of Section 13 of the Data Protection Act 1998 (the "DPA"). In that case, the Supreme Court dismissed the claim, holding that a representative action could not be pursued because damages for non-trivial breaches of s13(1) of the DPA require proof that financial loss or distress was suffered by each member of the class; it was not enough to show that each member of the class had suffered a "loss of control" of their personal data. The Claimant has sought to distinguish Lloyd, arguing that the GDPR specifically provides for damages to be recovered for "non-material damage", including loss of control, and that the claim can therefore be pursued as a representative action.
Nicklin J dismissed the Claimant's claims against various Defendants domiciled outside the jurisdiction due to procedural errors on the part of the Claimant's solicitors. However, whilst expressing doubt as to whether there was any merit in the attempts to distinguish Lloyd, he accepted that the claim gave rise to a serious issue to be tried and allowed it to proceed, so that the points raised by the Claimant in this regard can be fully argued at an upcoming hearing listed to hear a strike-out application by a UK-based Defendant.
Judicial Review application by unvaccinated data subject challenging the lawfulness of the Covid Cert Check NI app denied
The High Court of Justice in Northern Ireland, Queen's Bench Division, denied an application for judicial review brought by Darren Williams challenging the lawfulness of the Covid Cert Check NI app (an app which checks COVID-19 vaccination status before granting entry to public places). Mr Williams' application was denied on the basis that, as someone who had not been vaccinated, he could not have been subject to the data processing of which he complained and therefore lacked standing to challenge the app's usage.
However, the Court did highlight that the question of whether an app's data processing is necessary is one of substance, but equally one which might most appropriately be considered by elected representatives, who are best placed to assess the public health interests in question.