On April 29th, NOYB, the nonprofit organization founded by activist Max Schrems, filed a complaint with the Datenschutzbehörde, the Austrian Data Protection Authority (“DSB” or “Austrian DPA”), over alleged violations of Regulation (EU) 2016/679 (“GDPR”) arising from the ChatGPT software, developed by the American company OpenAI.
The facts of the case
An Austrian public figure asked ChatGPT, the well-known artificial intelligence chatbot that uses large language models to answer users’ prompts, for his date of birth, information easily found online. After receiving inaccurate information, the data subject filed an access and erasure request with OpenAI. The company’s reply addressed only the person’s account data and offered no explanation of the datasets used to train the algorithm. As for the inaccurate date of birth, the company stated that there was no way to prevent the software’s output from being incorrect, and that the filters used to stop ChatGPT from displaying users’ personal data cannot be designed to block only some of that data. As a result, inaccurate information is not corrected, but merely “hidden”. Furthermore, OpenAI objected that the information requested was of public interest, since it related to a public figure.
The complaint to the Austrian DPA
The data subject therefore filed a formal complaint with the Austrian Data Protection Authority, alleging that OpenAI, through its software, had violated Article 5(1)(d), Article 12(3) and Article 15 of the GDPR. With respect to the principle of data accuracy, the company failed to erase or rectify inaccurate data without delay, despite the applicant’s explicit request. With respect to the right of access, OpenAI’s answer did not address how the model processes personal data; moreover, the data subject was unable to learn which of his personal data were processed to train the software, their origin, the legal basis for the processing, or the retention period.
The potential effects
It will now be interesting to see how the Datenschutzbehörde reacts, and in particular whether it will follow the path already taken by the Italian Data Protection Authority. As is well known, in April 2023 the Italian Authority was the first to act against OpenAI and ChatGPT, after finding numerous violations of privacy legislation by the American company. In the same period, the European Data Protection Board also intervened, setting up an EU-level task force to promote cooperation and the exchange of information between the national authorities.
Effective safeguards against the errors and “hallucinations” of this technology will most likely have to wait for the entry into force of the Artificial Intelligence Act, which sets out specific obligations for providers of generative AI systems, such as keeping the model’s technical documentation up to date (including the training process and its results).
