At the end of March, Italy became the first country in the world to block the chatbot “ChatGPT” from the company “OpenAI” due to data protection concerns. Subsequently, the Swiss data protection authority also received numerous enquiries regarding the data protection conformity of ChatGPT, which prompted the Federal Data Protection and Information Commissioner (FDPIC) to issue a statement. In a media release of 4 April 2023, the FDPIC commented on the “use of ChatGPT and comparable AI-supported applications” in Switzerland and advised a conscious approach.

Conscious use of AI-based applications such as ChatGPT

The FDPIC recognises the opportunities that the use of AI-supported applications such as ChatGPT offers both for society and for the economy. However, the processing of data by means of such new technologies is also associated with risks for privacy and informational self-determination.

The FDPIC therefore specifically advises users:
– before entering text or uploading images, to check for what purposes these are being processed; and
– when using AI-supported applications, to ensure that the requirements of data protection law are complied with and, in particular, to inform users about which data are processed, for which purposes and in what way.

Beyond this, the FDPIC has not commented on the data protection conformity of ChatGPT itself. However, the FDPIC is in contact with the Italian data protection authority, which had temporarily banned the chatbot ChatGPT on 30 March 2023.

Temporary ban on ChatGPT in Italy

At the end of March, the population in Italy temporarily could not access the services of the chatbot ChatGPT. According to a media release dated 31 March 2023, the Italian data protection authority (Garante per la protezione dei dati personali, GPDP) in Rome had ChatGPT temporarily blocked due to data protection concerns and opened an investigation against OpenAI as the developer of ChatGPT.

In its order of 30 March 2023, the Italian GPDP cites several reasons for banning ChatGPT in Italy:
– First, no information was provided to users or to data subjects whose data was collected by OpenAI and processed through the ChatGPT chatbot. This is a violation of the duty to inform under Art. 13 and 14 of the European General Data Protection Regulation (GDPR).
– Secondly, there was no appropriate legal basis justifying the massive collection and storage of personal data for training the chatbot. While ChatGPT users could consent to the processing of their own data, individuals whose data was systematically collected for training the artificial intelligence (AI) were never asked for their consent under Art. 6 GDPR.
– Thirdly, the processing of data subjects’ personal data was inaccurate in that the information provided by ChatGPT did not always match the actual data. The GPDP apparently regarded this lack of accuracy as a violation of Art. 5(1)(d) GDPR.
– Fourthly, OpenAI stipulated a minimum age of 13 years in the terms of use for ChatGPT, but no corresponding age verification was in place. Due to this lack of age verification, a violation of Art. 8 GDPR is assumed.

Consequently, the GPDP concluded that “the processing of the personal data of users, including minors, and of those whose data are used by the service, in the circumstances described above, violates Articles 5, 6, 8, 13 and 25 of the GDPR” and that it was therefore necessary to order a provisional restriction of processing and thus a temporary ban on ChatGPT in Italy.

About a month later, ChatGPT became available in Italy again: on 28 April 2023, the Italian data protection authority announced in a media release that OpenAI had restored the service in Italy with improved transparency and rights for users. In light of these improvements, OpenAI was allowed to make ChatGPT available to Italian users again.

The data protection issues behind AI-powered applications

The privacy concerns behind AI-powered applications such as ChatGPT appear to stem primarily from the novel ways in which an AI handles and analyses data. An AI can link different sets of data together, matching the different types of information. This makes it increasingly difficult to distinguish between personal data and factual data. Furthermore, AI can combine several non-personal data elements and derive personal information from them. In other words, data that was not initially personal can be linked to an individual by the AI and thereby becomes personal data. If users do not know what is being done with their personal data, they are not in a position to control its use or to exercise the rights associated with it. In the case of ChatGPT, users received no such information, and in the case of data processed to “train the AI”, informing the data subjects was not even possible.

In the meantime, OpenAI has published a notice on its website to inform users about what personal data is processed and how it is processed, in particular for training the underlying algorithm. Furthermore, OpenAI points out that every person has the right to object to such processing. With regard to the training of the AI, objections can also be submitted via a special form that can be filled out online.

In accordance with the FDPIC recommendation mentioned at the beginning of this article, users of AI-supported applications such as ChatGPT in Switzerland should also be aware that they must comply with data protection requirements and, in particular, inform data subjects about which data are processed, for which purposes – even if only for training the algorithm underlying the AI – and in what way. HÄRTING also recommends that companies develop an AI strategy for the internal use of ChatGPT and, at the same time, issue clear written instructions to their employees on how to deal with AI-supported applications.

Sources