Lexology GTDT Market Intelligence provides a unique perspective on evolving legal and regulatory landscapes. This interview is taken from the Privacy & Cybersecurity volume discussing topics including government initiatives, M&A risks and cloud computing within key jurisdictions worldwide.

GTDT: What are the key regulatory developments in your jurisdiction over the past year concerning cybersecurity standards?

Christian Duvernoy, Anne Vallery and Itsiq Benizri: Protection of personal data has been a concern for at least 25 years in the European Union (EU) and in Belgium, and a fundamental right in the EU for almost 10 years. Cybersecurity is a more recent concern, given the vital role digital networks and information technology systems and services play in society nowadays. Data protection laws include cybersecurity requirements, such as the obligation to keep personal data secure and transfer it only if it is adequately protected, but there are also specific cybersecurity laws that apply to critical infrastructure, even if it is not used to process personal data.

2018 marked a turning point in terms of cybersecurity in the EU. Until then, cybersecurity rules in the EU were by and large composed of a patchwork of national laws. Cybersecurity is now addressed more systematically at EU level through a number of regulations and directives intended to promote the single internal market. EU rules in some cases harmonise national rules and in other cases provide an overlay on top of them, enabling companies to take a common approach to cybersecurity. Each EU member state is responsible for operational application and enforcement of the EU cybersecurity requirements and for how it organises the public authorities responsible for these tasks. As far as substance is concerned, EU cybersecurity instruments impose new, far-reaching requirements. Given the substantial changes under way, businesses are very busy implementing compliance programmes for the new EU cybersecurity standards.

Against this background, the key regulatory developments in 2018 were the deadlines for applicability of the General Data Protection Regulation (GDPR) and the Directive on Security of Network and Information Systems (NISD) in May, progress in the adoption of the European Electronic Communications Code (EECC), the Cybersecurity Act and, to a lesser extent, the e-Privacy Regulation.

The GDPR constitutes a regulatory big bang for data protection and cybersecurity. It impacts all private and public-sector entities. There are, however, broad exceptions: the GDPR does not apply to foreign policy, national security or criminal law enforcement. A separate Directive known as the ‘Law Enforcement Directive’ governs the data protection and data breach notification responsibilities of public authorities that process personal data in the context of criminal law. This Directive also had to be transposed by all EU member states by May 2018. Belgium transposed it in mid-July, together with specific rules to implement the GDPR. Also in May 2018, the Council and the EU Parliament reached a political agreement on the final text of a regulation bringing the EU institutions’ data processing activities into line with the GDPR. The final text of this regulation still needs to be adopted, but it is expected to apply as of autumn 2018.

The GDPR gives regulatory authorities significantly more power and is meant to promote consistent and effective enforcement across the EU. National data protection authorities (DPAs) now have stronger investigative and enforcement powers, with the ability to impose high fines for infringements of the GDPR. They are empowered to carry out data protection audits and they can require companies to provide access to their premises, their IT equipment and all personal data as necessary for the performance of their tasks. DPAs can order companies to bring processing operations into compliance with the GDPR, including imposition of a temporary or permanent ban on specific processing operations. Non-compliance with a DPA’s order may be subject to a fine of up to €20 million or 4 per cent of the company’s total worldwide annual turnover, whichever is higher. In December 2017, Belgium adopted a new law to give all these powers to its national DPA, which is now called Autorité de Protection des Données or Gegevensbeschermingsautoriteit. However, a controversial issue is whether the DPA’s budget will be increased so that it can effectively exercise its enhanced powers, since the government has maintained that restructuring of the DPA is possible without a budget increase.

The GDPR also sets up a mechanism (one-stop shop) to identify a lead DPA to supervise a business established in several EU member states. A new ‘consistency mechanism’ creates a procedure to ensure consistent application of the GDPR among national DPAs. The European Data Protection Board (the EDPB) replaces the previous article 29 Working Party (WP29) and includes all EU DPAs, the European Data Protection Supervisor who supervises the EU bodies’ processing activities, and a representative of the European Commission (EC). The EDPB acts as an advisory body and is responsible for adopting binding decisions where DPAs disagree (enforcing the ‘consistency mechanism’) or fail to follow its advice.

The GDPR requires all companies that process personal data to implement appropriate technical and organisational measures – such as pseudonymisation and encryption – to ensure a level of security that is appropriate to the risk. Importantly, this includes ensuring such security when transferring personal data outside of the EEA, including adopting appropriate measures where the transfer is made to a recipient country that has not been found to provide adequate protection by the EC. The GDPR also requires companies to notify a security breach that is likely to result in a risk to the ‘rights and freedoms’ of the individuals concerned.

The new Belgian Data Protection Act adopted mid-July to implement specific rules under the GDPR leaves the GDPR security requirements unchanged. Non-compliance with these obligations may result in a fine of up to €10 million or 2 per cent of the company’s total worldwide annual turnover, whichever is higher. In practice, though, DPAs will have to consider the gravity of and reaction to any data breach to ensure that the fines they impose are proportionate.

The Belgian DPA already published some guidance on the right to data portability, the identification of the lead authority and the requirement to appoint a Data Protection Officer (DPO) to help companies comply with the GDPR. The Belgian DPA also published guidance on the requirement to maintain internal records of processing activities, recommending that such records be maintained by all companies, even though the GDPR contains certain exceptions for those with fewer than 250 employees. From a cybersecurity point of view, the most interesting guidance from the Belgian DPA is the most recent, published in April 2018, on data protection impact assessments (DPIAs). The GDPR requires companies to carry out such assessments where a type of processing is likely to result in a high risk to the rights and freedoms of individuals. WP29 had already published guidelines on DPIAs in October 2017. These guidelines set out the criteria to identify a ‘high risk to the rights and freedoms of individuals’. While these criteria were still quite abstract, the Belgian DPA’s guidelines go a step further and provide lists of processing activities that do or do not require a DPIA. For example, the Belgian DPA considers that a DPIA is always required for the large-scale processing of personal data of vulnerable individuals, such as children, for a different purpose than the one for which the data was collected. A DPIA is also required for large-scale processing of personal data where an individual’s behaviour is observed, collected, established or influenced, including for advertising purposes, in a systematic manner and using automated means. The Belgian DPA considers that a DPIA is not required for the processing of personal data that is necessary to comply with a legal obligation, subject to a legal definition of the purposes of the processing, the categories of personal data processed, and guarantees to prevent abuse, unlawful access or transfer.
Another example of processing activity that does not require a DPIA according to the Belgian DPA is the processing of personal data that is necessary and exclusively restricted to the administration of employees’ salaries, provided that such personal data is only shared with recipients who are authorised for this purpose and is not kept longer than necessary for the purposes of the processing.

Second, the NISD, which regulates cybersecurity standards for essential infrastructure at EU level for the first time, also took effect in May. Not all member states have transposed the NISD into national law yet. For example, France has only partially transposed the NISD, while transposition is still in progress in Belgium, Ireland and Spain. The NISD applies only to designated private and public-sector entities responsible for essential infrastructure in specific sectors. It does not extend to governmental information systems other than those used to operate essential infrastructure.

The NISD seeks to achieve a high common level of network and information system security across the EU; however, member states can impose more stringent requirements. Each member state must adopt and communicate its cybersecurity strategy to the EC. The strategy is supposed to define the objectives, appropriate policy and regulatory measures to achieve this high level of security of the relevant Network and Information Systems (NIS). However, in contrast to the GDPR, the NISD applies only to Operators of Essential Services (OES) in specific sectors including energy, transport, banking, financial market infrastructure, healthcare, drinking water supply and distribution, and digital infrastructure (ie, internet exchange points, domain name system service providers and top-level domain name registries). By November 2018, member states must identify OES that are established on their territory based on the criteria provided by the NISD. They are required to consult with each other where a company provides an essential service in several member states. The NISD also applies to Digital Service Providers (DSPs), that is, online marketplaces such as eBay or Amazon, online search engines such as Google or Bing, and cloud computing services. For these operators, member states cannot go beyond NISD obligations in terms of security and notification requirements. The NISD does not apply to companies providing public communications networks or publicly available electronic communications services, since these infrastructures are covered by the e-Privacy Directive and will be covered in the future by the EECC.

Companies falling within the scope of the NISD have two obligations that are similar to those under the GDPR. First, they must implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. This includes preventing and minimising the impact of incidents affecting the security of their NIS to ensure their continuity. Second, companies covered by the NISD must notify incidents impacting their services without undue delay to the competent authority. A competent authority under the NISD can require OES and DSPs to provide the information needed to assess their security, including documented security policies and evidence of effective implementation such as a security audit. Competent authorities can also issue binding instructions to OES to remedy deficiencies in their operations. Member states must adopt effective, proportionate and dissuasive penalties for infringements of the NISD. Although fines will likely vary, given the importance of the infrastructure at issue they can be expected to be significant. For example, the UK NIS Regulations provide that fines could be as much as £17 million or 4 per cent of global turnover.

The NISD requires member states to designate one or more competent authorities to monitor the application of the Directive; a body responsible for coordinating NIS issues and acting as a single point of contact for cross-border cooperation at EU level; and a Computer Security Incident Response Team (CSIRT) that is meant to ensure the effective capacity to deal with incidents and ensure efficient cooperation at EU level. One entity can play all three roles – organisation of national cybersecurity agencies is left to national law. The NISD also promotes active cooperation among cybersecurity agencies at the EU level. A Cooperation Group, composed of representatives from member states, the EC and the European Union Agency for Network and Information Security (ENISA), is established and meant to exchange best practices, discuss cybersecurity preparedness and evaluate national NIS security strategies, where requested. The Cooperation Group met for the first time in February 2017 and has held seven meetings to date. A network composed of national CSIRT representatives is meant to exchange information on CSIRT services and operations and issue guidelines for convergence of practices. ENISA is responsible for assisting the member states and the EC by providing its expertise and facilitating the exchange of best practices.

In Belgium, the Council of Ministers approved the draft implementing law for the NISD in March 2018 and, four months later, it announced that the draft would be introduced into Parliament. This draft is not publicly available at the time of going to press, but the Council of Ministers outlined its main features in a press release. The Belgian law will set up general and sector-specific rules. Existing agencies have been restructured to make the Belgian Centre for Cybersecurity (BCC) the central authority for cybersecurity in Belgium, and to give its Computer Emergency Response Team (CERT.be) the CSIRT responsibilities, meaning that CERT.be will receive the operational responsibility for coordinating the response and providing support in case of a cyber incident. There will also be sector-specific authorities in charge of the implementation of the law in their specific sectors, under the coordination of the BCC. Some sector-specific CSIRTs may also be created. Incidents with a significant impact must be notified to CERT.be, the sector authority or the Belgian Crisis Centre, which is responsible for addressing crisis and emergency situations at national level in Belgium. Three levels of control of OES are contemplated: control at any time by sector-specific inspection services; annual internal audits; and external audits every three years by a certified body. Sector authorities will be able to impose administrative and criminal sanctions for non-compliance with the law. By the end of 2018, sector-specific authorities should identify OES in concert with the BCC and the Belgian Crisis Centre. Providers of essential services will have to adapt their information security policies by the end of 2019 and implement them by the end of 2020.

Belgium has already launched several programmes to inform the public about cybersecurity risks and to promote solutions (eg, the BCC’s October 2017 anti-phishing campaign reached more than two million people). A National Emergency Cyber Plan prepared by the BCC and the government was adopted in late April 2017. The BCC provided more than 30 training sessions to some 400 officials in 2017. CERT.be offers its services on a 24/7 basis to companies and key sectors.

Third, there are also specific instruments that impose cybersecurity requirements on providers of public communications services. The current instrument is the e-Privacy Directive. Either national DPAs (eg, the Commission Nationale de l’Information et des Libertés in France) or a communications or network regulator (eg, the Belgian Institute for Postal Services and Telecommunications (BIPT)) are responsible for enforcing this Directive. It imposes requirements on providers of publicly available electronic communication services concerning data security and data protection that are similar to, but more specific than, those of the GDPR; it also requires notification in case of network security risks or data breaches.

In January 2017, the EC published a draft e-Privacy Regulation that would replace the Directive and be lex specialis to the GDPR. Although the goal was for this regulation to enter into force at the same time as the GDPR, it has become bogged down in the legislative process, in part owing to uncertainty about the overlap between the GDPR and the e-Privacy Regulation, as well as to controversial proposals concerning the regulation’s very broad scope, its very restrictive approach to cookies and the elimination of businesses’ legitimate interests as a legal ground for the processing of electronic communications data. As a result, it is still unclear when the final text will be adopted.

The EC’s proposal would extend application of e-privacy rules to ‘over-the-top’ content providers, such as VoIP, text messaging and email providers. However, it would replace the personal data protection provisions in the current e-Privacy Directive with a reference to the corresponding obligations under the GDPR, and extend them to cover the information that the service provider can obtain from the user’s terminal equipment. Although the EC’s proposal maintained the e-Privacy Directive rules on data security and breach notification, the European Parliament proposed to eliminate them, since it views such provisions as adding little to the framework provided by the GDPR, the NISD and the EECC, the text of which was agreed in June 2018 by the EU Parliament and the Council to update EU telecommunications regulation.

The EECC would set out, in a single unified Directive rather than in the differing instruments with varying scope that exist today, the rules that apply to companies providing public communications networks or publicly available electronic communications services, including over-the-top services, irrespective of whether they process personal data. In particular, the EECC includes the cybersecurity rules that are currently laid down in the e-Privacy Directive (ie, taking appropriate technical and organisational measures to appropriately manage the risks posed to the security of networks and services; informing users potentially affected by a particular and significant threat of a security incident of any possible protective measures or remedies they can take and, where appropriate, of the threat itself; and notifying security incidents that have a significant impact on the operation of networks or services without undue delay to the competent authority). The EC, taking account of ENISA’s opinion, may adopt decisions detailing the technical and organisational security measures to be adopted by providers, and the format and procedures applicable to the incident notification requirements. Since the EECC is a Directive, all these rules must be transposed into national law by the member states. ENISA will facilitate the coordination of member state rules to avoid diverging national requirements.

Last, in June 2018 the European Council adopted its negotiating position for discussions with the European Parliament on the final text of the proposed EU Cybersecurity Act. The proposed Act would upgrade ENISA into a permanent EU agency for cybersecurity; it currently operates on a fixed-term mandate that would otherwise require periodic renewal. The Act would also create an EU-wide certification framework for information and communications technology products and services. ENISA would get more resources, in terms of both staff numbers and budget, and take on additional responsibilities, such as organising annual pan-European cybersecurity exercises, advising member states on implementation of the NISD, and supporting and promoting EU policy on cybersecurity certification.

ENISA is intended to be a centre of excellence and a resource for cybersecurity in the EU. The new European cybersecurity certification schemes for ICT products, services and processes are intended to increase trust and security by attesting compliance with specified cybersecurity requirements. These certification schemes are also meant to address barriers in the single market caused by the existence of different national certification processes. The details of these certification schemes and requirements will be important to network and data service operators, including cloud computing service providers.


GTDT: When do data breaches require notice to regulators or consumers, and what are the key factors that organisations must assess when deciding whether to notify regulators or consumers?

CD, AV & IB: Under the GDPR, companies must generally notify any data breach to the DPA within 72 hours where it is likely to result in a risk to the rights and freedoms of individuals. These risks include discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, reputational damage, or any other significant economic or social disadvantage; disclosure of sensitive personal data (identifying the race, ideology, health, sex life or criminal record of an individual); profiling data (work performance, creditworthiness, personal interests); personal data of vulnerable individuals (eg, children); or a disclosure that simply prevents the future exercise of control over personal data. Risks are viewed as higher under the GDPR where processing concerns big data.

Companies must also disclose a breach directly to the individual concerned without undue delay where that breach is likely to result in a ‘high risk’ to the individual’s rights and freedoms. Companies do not have to disclose a breach to the individual concerned where the data affected by the breach was protected (eg, through encryption); or if they have taken subsequent measures which ensure that a high risk is no longer likely to materialise; or if such disclosure would involve a disproportionate effort. In the latter case the company must, however, issue a public statement disclosing the breach. Companies that process personal data on behalf of another company must notify the breach to their customers. DPAs also have the authority to order companies to communicate a personal data breach to the individual concerned in appropriate circumstances, such as where the DPA views the breach as resulting in a ‘high’ risk and not just a risk.

WP29 has already published some guidelines to help companies understand the breach notification requirements. These guidelines recommend that companies take into account specific criteria when assessing whether there is a risk or a high risk. These criteria include the type of breach; the nature, sensitivity and volume of personal data; how easy it is to identify individuals; the severity of the consequences of the breach for individuals; special characteristics of the individual concerned and of the company in question; and the number of affected individuals. The guidelines also specify when a data controller or processor should be deemed to be aware of a breach, since that triggers the countdown to the deadline for notifying the breach, if required. Guidance on the identification of these risks can also be provided by approved Codes of Conduct (CoCs) or certifications, or by EDPB guidelines. CoCs are likely to be of particular interest, since they could offer a degree of compliance comfort on a cost-efficient basis. It will take some time before the EDPB adopts guidelines on this issue or before companies obtain approval for CoCs or certifications. In the meantime, companies should err on the side of caution, notifying breaches to the DPA and discussing with the DPA whether they should also notify the breach to the individuals concerned.

Under the e-Privacy Directive, providers of publicly available electronic communications services must notify all personal data breaches to the competent authority within 24 hours and to subscribers or other affected individuals, where the breach is ‘likely to adversely affect’ their personal data or privacy, without undue delay. An EC implementing regulation specifies that the assessment of whether a personal data breach is likely to have an adverse effect must take into account the nature and content of the data (eg, financial information, sensitive data, or location data, log files, browsing histories, email or itemised call lists); likely consequences of the breach (eg, identity theft, fraud, reputational damage or other types of harm); and the circumstances of the breach, for example where the data has been stolen or is in the possession of an unauthorised third party. Subscriber notification is not required where the data was encrypted and the encryption keys have not been compromised by a security breach. The e-Privacy Directive also requires service providers to inform subscribers of a ‘particular risk’ of a breach of network security, even where this is not in the control of the service provider, with an indication of the measures that can be taken to protect against this risk. For example, service providers that offer publicly available electronic communications services over the internet should inform users and subscribers of measures they can take to protect the security of their communications, for instance, by using specific types of software or encryption technologies.

As already mentioned, the e-Privacy Regulation is not expected to include cybersecurity rules, since these have been moved to the EECC. Member states must ensure that companies providing public communications networks or publicly available electronic communications services notify the national competent authority without undue delay of a breach of security that has had a significant impact on the operation of their networks or services. To determine the significance of the impact of a security incident, providers of public communications networks and publicly available electronic communications services should take into account the number of users affected, the duration of the incident, the geographical spread of the area affected, the extent to which the functioning of the network or service is affected, and the extent of impact on economic and societal activities. The competent authority in one member state may inform the competent authorities in other member states. It can also inform the public, or require the providers to do so, where it determines that disclosure of the incident is in the public interest. It is for each member state to decide which national authority will be responsible (eg, the communications regulator or the DPA).

Under Belgian law there is a twofold notification obligation: providers of publicly available electronic communications services must inform their subscribers and the BIPT of a particular risk of breach of network security. In 2014, the BIPT issued a decision that specifies when and how operators must notify it of a security incident. Service providers must also inform the Belgian DPA of a breach of personal data, which in turn notifies the BIPT. They must inform their subscribers of a breach of personal data where the criteria in the EC implementing regulation described above are met.

Under the NISD, companies must notify incidents impacting their services to the competent authority (in Belgium, the BCC). OES must notify incidents having a ‘significant’ impact on the continuity of their services to the authority, while DSPs must notify incidents having a ‘substantial’ impact on their services. Criteria to determine whether an incident is significant or substantial include the number of users affected, the duration of the incident and its geographical spread and, for substantial incidents, the extent of the disruption to the functioning of the service and the extent of the impact on economic and societal activities. Since these criteria are quite broad, it is expected that they will be refined through practice. There are also provisions for voluntary notification of incidents by entities that are not listed as OES or DSPs. After consulting with the company, the notified competent authority may inform the public about an incident where public awareness is necessary to prevent or deal with it or, with regard to DSPs, where disclosure of the incident is in the public interest.

Separate voluntary notification of a data breach to CERT.be is also possible in Belgium, pending transposition of the NISD. This can provide the basis for benefiting from CERT.be’s expertise to limit the damage caused by the breach and avoid future incidents. It also allows CERT.be to identify trends for cyber incidents and further develop specific solutions at the national level.


GTDT: What are the biggest issues that companies must address from a privacy perspective when they suffer a data security incident?

CD, AV & IB: As in other areas, an ounce of prevention is worth a pound of cure. Companies should make sure that they continuously invest in their own data security and stay abreast of cybersecurity developments as well as their evolving legal obligations.

If an incident occurs, whether an attempted data breach or a successful breach, the first concern should be to identify the scope of the incident and to make sure that the network is secured. An incident response team should be set up to assess the situation, prevent expansion of an identified data breach, secure relevant information systems and then try to recover the data concerned. Private sector advisers as well as public authorities can support companies in this process.

In a second step, the company will need to determine whether it is under an obligation to notify a data security incident to the competent authority or affected individuals. Even where there is no such obligation, notification may be advisable in many cases. As described above, companies face broad notification obligations under the GDPR and the NISD – in particular where a data breach creates risks for individuals, has a significant impact on continuity of services for an OES, or a substantial impact on services for a DSP.

Third, despite the EDPB’s guidelines, it may not be easy to identify when a company should be deemed to be aware of a breach. This means that companies may face difficulties identifying the deadline for notifying the breach.

A fourth and closely related issue consists of appropriately preparing to be able to make relevant notifications within the short time periods that will apply. The GDPR provides that companies must notify the security breach to the authority within 72 hours. If this time frame is not feasible, companies should notify the breach without undue delay and provide a reasoned justification for the delay. The NISD provides for OES and DSP notification without undue delay, rather than specifying a fixed time limit.

It is not enough to notify the breach; certain information must be gathered and provided to the regulators. The GDPR provides that notification to the authority must at least describe the nature of the personal data breach, including the number and categories of data subjects and personal data records affected; provide the DPO’s contact information; describe the likely consequences of the personal data breach; and describe how the controller proposes to address the breach, including any mitigation efforts. If all of this information is not available immediately, it may be provided successively as long as there is no undue delay. Collecting and providing the relevant information within 72 hours may be challenging, hence the need for teams that are prepared and can follow appropriate policies on who should be contacted within the company, how the authority should be notified, and how communication with customers should occur.

Under the NISD, less information is required, but the notification should include sufficient detail to enable the competent authority or the CSIRT to determine any cross-border impact and, in the case of DSPs, the significance of that impact.

GTDT: What best practices are organisations within your jurisdiction following to improve cybersecurity preparedness?

CD, AV & IB: Given that the GDPR and NISD only took effect in May 2018, it is too early to assess how broadly best practices are followed by companies operating only or primarily in the EU, let alone in a single member state such as Belgium, which has had only limited mandatory breach notification requirements to date. Still, a survey of more than 1,000 companies across a broad variety of sectors in the EU and the US, conducted in April 2018, showed that most companies considered a lack of preparedness for the breach notification requirements to create the greatest risk of fines and regulatory action. At the same time, most companies considered this by far the most difficult obligation to comply with. As a result, only a quarter of companies had a high level of confidence in their ability to comply with the GDPR breach notification requirements by late May 2018. Most companies that believe they can comply with the GDPR data breach notification rules either have an incident response plan that enables timely notification or have the security technologies in place to detect breaches quickly.

Best practices to improve cybersecurity preparedness include internal procedures for reporting breaches, an internal response plan, an incident response team and training of personnel. Large companies view a written information security policy as best practice; it is also useful for documenting existing data security measures when notifying data processing operations to the DPA. A few companies have implemented additional measures, such as regularly testing their breach notification plans.

Best practice also includes more general compliance with international cybersecurity standards. This is reflected in CoCs developed by cloud service providers, which are already sensitised to cybersecurity requirements by the nature of their business. Thus, in 2017, Alibaba Cloud, Fabasoft, IBM, Oracle, Salesforce and SAP voluntarily developed a CoC for the cloud computing industry, working together with the EC and WP29. TrustArc, Google Cloud and Cisco joined the initiative in March 2018. This Code includes cloud service providers’ commitments to ensure security and to notify breaches to competent authorities. The Code also includes a governance structure intended to support its effective implementation.

Another organisation, the Cloud Infrastructure Services Providers in Europe (CISPE), consisting of more than 20 primarily European cloud infrastructure providers, created its own CoC in September 2016. The latest version published on CISPE’s website is dated January 2017. This Code goes beyond GDPR requirements, as it guarantees that participating organisations will give customers the option to have their data stored and processed entirely within the EEA. Almost 100 services have already been declared under this Code; they can all be found on CISPE’s online public register. WP29 gave substantial feedback to CISPE in February 2018, making suggestions to improve the Code, which it said made a positive first impression. CISPE’s objective is to have the Code approved by the EU DPAs and to make it the benchmark standard for the industry. CISPE is therefore revising the Code in light of the comments received.

In Belgium, the BCC published a reference Cyber-guide in February 2018, in cooperation with the Belgian Cyber Security Coalition, a cross-sector collaboration between public authorities, companies, professional organisations and universities. The coalition had previously released a kit for internal cybersecurity awareness in February 2017, a Cyber Security Guide for SMEs in December 2016, and a Cyber Security Incident Management Guide at the end of 2015, containing guidance on how to prevent, detect, communicate and handle cybersecurity incidents. Best practices have also been developed by a joint private-sector and academic initiative led by the Federation of Belgian Enterprises, the International Chamber of Commerce in Belgium, Microsoft, Ernst & Young and the cybercrime centre of the University of Leuven. This group published the Belgian Cyber Security Guide in 2014. The guide is intended to convince businesses of the importance of being prepared for cyber threats and offers advice adapted to a company’s size and role (eg, large national companies trading internationally; medium-size retailers with an online presence; SMEs in the accounting sector; or Belgian start-ups). The guide lists 10 ‘security key principles’ and 10 ‘must-do security actions’, including user education and awareness, protection of information, updating systems, use of mobile device security, enforcement of safe surfing rules and access to information on a ‘need-to-know’ basis. A checklist helps companies assess whether they are sufficiently equipped for cyber incidents.

In the financial services sector, the Belgian National Bank (BNB) has focused on increasing cyber resilience by improving risk management and intensifying the use of internal tests within companies to assess the level of cybersecurity preparedness. In December 2015, the BNB adopted a circular on management of cyber risks that is binding on the Financial Market Infrastructures (FMIs) that are established in Belgium. The circular entered into force in January 2016. The Bank for International Settlements and the International Organization of Securities Commissions published similar recommendations on this topic in June 2016. The BNB also contributed to the publication of the European Bank Authority’s recommendations on outsourcing to cloud service providers.

The BNB carries out controls to determine whether FMIs established in Belgium comply with the circular. In particular, the circular requires companies to adopt a security plan that covers data integrity, notes reporting rules and expectations, contains criteria to identify critical activities and resources, specifies training measures, and sets out internal control measures where subcontractors are employed. FMIs must create a governance system to keep the complexity of their IT systems at a manageable level and avoid harm to data security. FMIs must also evaluate personnel with access to data for their integrity, reliability and knowledge of data security. The BNB expects to be informed rapidly and adequately of any incident that has a serious impact on data security or on operational continuity for critical activities of FMIs. Additionally, an internal report on an FMI’s critical activities, services and resources must be kept up to date. In practice, the BNB has already conducted a number of inspections to verify compliance with the regulatory framework and the appropriate management of IT systems regarding cyber risks. Finally, the BNB is involved in sector-specific initiatives in the field of cyber risks. For example, it contributes to the creation of a framework for red teaming (ethical hacking) in the context of the Belgian Financial Sector Cyber Advisory Council and the European Central Bank’s Cyber Security Strategy.

GTDT: Are there special data security and privacy concerns that businesses should consider when thinking about moving data to a cloud-hosting environment?

CD, AV & IB: Moving data to a cloud-hosting environment may create additional risks for data security and privacy because of the data transfers required and the reliance on external systems. On the other hand, specialised cloud-hosting service providers will have invested heavily in data security, since this is core to their business and the services they provide, whereas for companies active in other sectors of the economy data security is simply a cost and a compliance obligation. A cloud-hosting provider may well offer a more robust platform in terms of data security than its customer could or would build itself. Nevertheless, the GDPR imposes additional requirements on the use of external processors by data controllers. Companies thinking about moving personal data to a cloud-hosting environment should consider two threshold issues: where is the cloud server located, and how will the data be protected?

With regard to data security in the cloud, companies must ensure that their cloud service provider provides sufficient guarantees that it has implemented appropriate technical and organisational measures to ensure that processing will meet GDPR requirements. To that end, companies must conclude a binding contract with their cloud service provider, specifying, among other things, that it will assure the level of data security required under the GDPR. Again, adherence to an approved CoC or an approved certification can be used by cloud service providers to demonstrate compliance.

The Belgian DPA issued an opinion regarding the use of cloud service providers in October 2016. The DPA recommends identifying potential risks prior to moving data into a cloud. Access of the cloud service provider to the data should be limited to a strict minimum. The employees of the cloud service provider should sign a confidentiality clause with their employer. The cloud service provider should also contractually commit not to share the data with third parties, except where it uses subcontractors (which must observe towards the cloud service provider the same obligations as the cloud service provider has towards the data controller). Moreover, the contract between the data controller and the cloud service provider should state that any data breach must be immediately reported to the data controller. More generally, the DPA advises that companies should track every party’s precise involvement in the processing operation to determine liability in case of a data breach. Both the data controller and the cloud service provider can be held responsible under the Belgian Data Protection Act where they do not take appropriate measures to ensure data security.

The location of data is very important, as confirmed in the Belgian DPA’s opinion. Under the GDPR, companies are prohibited from transferring personal data outside the EEA unless the recipient country has been found to provide adequate protection by the EC, or the company has adopted appropriate alternative instruments to commit to adequate protection. Such instruments can consist of Standard Contractual Clauses (SCCs) published by the EC; Binding Corporate Rules (BCRs) for intra-company transfers (ie, an internal CoC, approved by a DPA, defining a company’s policy for data transfers from businesses to their affiliates located outside the EEA); or an approved certification or adherence to an approved CoC where the controller or processor located outside the EEA submits to jurisdiction and legal process in its home country to ensure enforceability of data subjects’ rights.

Transfers to the US can rely on participation by the US entity in the Privacy Shield (more than 3000 US companies rely on this instrument to date), but this instrument has been challenged in court proceedings that are still pending. An advocacy group brought a challenge to the Privacy Shield before the European Court of Justice (CJEU) in September 2016, claiming its provisions do not adequately protect the personal data of EU citizens.

The EC released the report on its first annual review of the Privacy Shield in mid-October 2017. The report concluded that the Privacy Shield continues to ensure an adequate level of protection for personal data transferred from the EU to the US, but it made some recommendations to ensure the continued proper functioning of the framework. WP29 published its main findings on the review in late November 2017. WP29 recognised the US’s efforts to implement the Privacy Shield, but identified several concerns (in particular, the failure to appoint an independent ombudsperson). WP29 expects these concerns to be addressed at the latest by the second joint review. It said it is ready to take appropriate action, including bringing the Privacy Shield decision before national courts so that they can refer it to the CJEU for a preliminary ruling, if no remedy exists by that time.

In early July 2018, the EU Parliament passed a non-binding resolution asking the EC to suspend the Privacy Shield unless the US is fully compliant by 1 September 2018. The Parliament’s concerns include the failure of the US to appoint an ombudsperson, the re-authorisation of US intelligence agencies to collect information on non-US individuals located outside the US under section 702 of the Foreign Intelligence Surveillance Act, and the adoption of the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which allows US law enforcement agencies to access personal data stored abroad. Only the EC can suspend the Privacy Shield (unless it is struck down by the CJEU, as happened to Safe Harbor, which the Privacy Shield replaced), and it is unlikely to do so. Indeed, the EC reacted to the Parliament’s resolution by saying that it would continue to work to keep the Privacy Shield running. Nevertheless, the instrument is under considerable pressure.

SCCs have also been challenged in court proceedings, pending since October 2017. The Irish High Court agreed that there are well-founded concerns about the protection of personal data transferred to the US pursuant to SCCs and referred the case to the CJEU for a preliminary ruling on their validity.

Companies should therefore identify the location of the cloud service provider’s servers to determine whether additional instruments should be put into place to ensure adequate data protection. If so, they are likely to have to rely either on the EC’s SCCs or on appropriate certification or CoCs, since BCRs only apply to intra-company transfers. Non-compliance with transfer restrictions will be subject to fines under the GDPR of up to €20 million or 4 per cent of a company’s worldwide annual turnover, whichever is higher.

GTDT: How are governments in your jurisdiction addressing serious cybersecurity threats and criminal activity?

CD, AV & IB: The EU Cybercrime Directive came into force in August 2013 and member states were required to transpose it into national law by September 2015. In April 2016, the EC issued a statement saying that implementation of cybersecurity measures (prevention) is the first line of defence, but that there should also be effective investigation and prosecution of cybercrime.

The Cybercrime Directive established four main offences: (1) illegal access to information systems; (2) illegal system interference; (3) illegal data interference; and (4) illegal interception. The Directive also makes it a criminal offence to produce, sell, procure for use, import, distribute or otherwise make available software designed or adapted primarily for committing these offences. This includes making available a computer password, access code or similar data by which an information system can be accessed with the intention of committing any of these offences. Member states must take the measures required to ensure that these offences are punishable by effective, proportionate and dissuasive criminal penalties.

The Cybercrime Directive also provides rules on increased cooperation between competent authorities. Member states must ensure that they have an operational national point of contact and that they make use of the existing network of available operational points of contact. Member states must also have procedures in place so that urgent requests for assistance receive a response within eight hours, indicating at least whether the request will be answered, and in what form and time frame.

At the EU level, Europol, the EU agency for law enforcement cooperation, set up the European Cybercrime Centre (EC3) in 2013 to strengthen the law enforcement response to cybercrime in the EU. EC3’s approach is based on three prongs: forensics, strategy and operations.

Although Belgium was one of the first member states to establish criminal offences for cybercrimes in its Criminal Code in 2000, the EC warned Belgium in December 2016 that it had not implemented the Cybercrime Directive properly, giving Belgium two months to comply before escalating the case. In May 2017, the Belgian government announced an increase in human resources within the Regional Computer Crime Units and the Federal Computer Crime Unit of the police to tackle cybercrime more efficiently, signalling that Belgium understands that this issue requires significant attention and resources. In early July 2017, Belgium finally transposed the Cybercrime Directive, imposing higher sanctions for some of the four offences that already existed under the Belgian Criminal Code (ie, hacking, cyber sabotage, cyber fraud and cyber forgery). Sanctions for internal and external hacking are up to five years in jail and/or penalties of up to €400,000. Sanctions for providing the means to hack a system are up to three years in jail and/or penalties of up to €800,000. Sanctions for instructing someone to hack a system are up to five years in jail and/or penalties of up to €1.6 million. There are also sanctions for knowingly possessing, disclosing, sharing or otherwise using hacked data (up to three years in jail and/or penalties of up to €800,000). Sanctions for cyber sabotage are up to five years in jail and/or penalties of up to €800,000. Sanctions for cyber forgery and cyber fraud are unchanged, at up to five years in jail and/or €800,000. All of these sanctions can be doubled where the offence is committed less than five years after a conviction for the same offence. The Belgian Code of Criminal Procedure also provides authorities with several instruments to investigate cybercrime offences, including the interception and tracing of electronic communications, the identification of users of electronic communications services, the seizure of data, and network searches.

GTDT: When companies contemplate M&A deals, how should they factor risks arising from privacy and data security issues into their decisions?

CD, AV & IB: M&A deals can heighten the risk of a cyberattack by creating an attractive target: the proposed deal itself. Since such deals involve the sharing of very large volumes of commercially sensitive information (eg, bid prices, confidential business information gathered and stored in data rooms, negotiation strategies, the acquisition decision itself, potential synergies and future expansion plans), companies involved in the deal should make sure that due diligence repositories and the deal process are well protected.

More substantively, it is very important that companies include cybersecurity and data protection checks in their due diligence programme. This means that the acquirer should check whether the target complies with Belgian and/or EU data protection and cybersecurity laws. As the 2017 Verizon/Yahoo transaction demonstrates, the risk is that the acquirer takes on potentially significant liability for past infringements through the deal (Verizon paid US$350 million less for the acquisition after Yahoo disclosed, months after the initial purchase agreement, two breaches affecting more than one billion accounts). Even if the parties are not aware of a past data breach, the acquirer should investigate the security policies that the target has put in place to determine whether they are adequate or will require reinforcement when the deal closes. A prominent example of the risks in this regard was the 2014 acquisition of Viator, a tour booking company, by TripAdvisor. Viator experienced a cyberattack two weeks after the deal closed, resulting in notification of approximately 1.4 million customers that their personal information, including payment card data, might have been compromised, and leading to a 4 per cent fall in the value of TripAdvisor’s stock. Finally, the acquirer should investigate what data security provisions are in place between the target and third parties. Contracts with data processors, including cloud service providers, should be reviewed, as should, more generally, agreements with other suppliers that give rise to data flows or create potential exposure for data security.

International M&A deals can involve the transfer of large amounts of personal data outside the EEA. Companies should make sure that they process such data in full compliance with the GDPR and its Belgian implementation. As discussed above, they must adopt appropriate protective instruments to transfer personal data outside the EEA, unless the country of the deal partner has obtained an adequacy decision from the EC.

The Inside Track

When choosing a lawyer to help with cybersecurity, what are the key attributes clients should look for?

Clients should look for lawyers who understand their business and how it works. Good cybersecurity lawyers also understand the threats to IT security and how authorities expect companies to deal with them. They are not only GDPR experts, but they also understand the interaction between all EU cybersecurity instruments. They focus on practical but comprehensive solutions, from prevention to response to cyber incidents. Because cybersecurity covers highly technical issues, companies should look for lawyers who can communicate cybersecurity risk in business terms. Broad sector and geographic reach are also important assets.

What issues in your jurisdiction make advising on cybersecurity and privacy complex or interesting?

Legal requirements concerning cybersecurity are developing rapidly in the EU. Advising in an area that is changing quickly but is of key strategic significance is challenging but fulfilling. With the adoption of the GDPR, this is also true of the many more specific interpretations of data protection law that will come in the next few years. Working in this area requires close monitoring of all developments as well as the ability to propose creative solutions. The EU cybersecurity framework is quite complex. A number of different legislative requirements may apply to the same company, particularly to providers of critical infrastructure and digital networks and services. For example, a communications operator should analyse its cybersecurity obligations both under the GDPR and the e-Privacy Directive regarding the processing of personal data and under the EECC for non-personal data. An air carrier and a financial institution would have to do the same under the GDPR and the NISD. The interplay of EU competence and member state legislative and enforcement responsibility adds an additional layer of complexity. Both the NISD and the GDPR (even though it applies directly as a Regulation) leave a significant amount of discretion to member states in how they will be applied. For example, it is up to member states to identify OES and DSPs on their territory that will be subject to the NISD. Even if companies are not required to appoint a DPO under the GDPR, they may have to do so under national law.

How is the privacy landscape changing in your jurisdiction?

The NISD, the GDPR and the EECC are significantly changing the privacy and cybersecurity landscape in Europe. With its broad territorial scope and high fines in case of infringement, this new architecture is not only raising awareness but also putting compliance with data protection and data security requirements at the core of the debate. Companies that may not have taken these issues so seriously in the past are changing the way they do business in the EU.

What types of cybersecurity incidents should companies be particularly aware of in your jurisdiction?

According to ENISA, the top five cyber threats include malware, web-based attacks, web application attacks, phishing and spam. This highlights the importance of employee training and raising internal awareness of these issues. Cybersecurity reports show that eight member states, including Belgium, were among the 10 countries most affected by mobile ransomware trojans in the first quarter of 2018. Attacks on ‘internet of things’ devices are also becoming a major concern.