
The EU ChatGPT Taskforce unveils a report concerning data privacy.

The European Data Protection Board (EDPB) established the ChatGPT Taskforce a year ago to evaluate whether OpenAI, the artificial intelligence firm behind ChatGPT, complies with the General Data Protection Regulation (GDPR) when handling personal data. The Taskforce recently released a preliminary report on its findings. The GDPR governs how companies may and may not use personal data and applies to the processing of data belonging to people in the European Union. The report indicates that it remains unclear whether artificial intelligence companies such as OpenAI comply with these rules.

The Taskforce launched investigations into three primary areas: the lawfulness, fairness, and accuracy of OpenAI's use of personal data. On lawfulness, OpenAI collects public data, scrubs it, and uses it to train its AI models. However, data collected through web scraping often contains personal information. Under the GDPR, processing such information can only be justified if there is a legitimate interest and the processing respects people's reasonable expectations of how their data will be used. OpenAI argues that its models comply with Article 6(1)(f) of the GDPR, asserting that its use of personal data is lawful because it is necessary for its legitimate interests or those of a third party. The Taskforce recommends that companies delete or anonymize personal data collected through web scraping before using it to train their models.
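The report does not prescribe a particular anonymization technique. As a minimal sketch of what such a pre-training step might look like, assuming a simple regex-based redaction of obvious identifiers (email addresses and phone numbers only; names and other personal data would require a dedicated PII-detection tool and review):

```python
import re

# Illustrative patterns for two common identifier types; a production
# pipeline would rely on a proper PII-detection library, not regexes alone.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens
    before the text enters a training corpus."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    scraped = "Contact the editor at jane.doe@example.com or +44 20 7946 0958."
    print(anonymize(scraped))
    # -> "Contact the editor at [EMAIL] or [PHONE]."
```

This only illustrates the idea of scrubbing scraped text before training; it does not catch names, addresses, or other identifiers the GDPR also covers.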

On fairness, the Taskforce argues that terms and conditions making users responsible for their chat inputs are unfair: companies cannot shift their GDPR compliance obligations onto users. OpenAI must be transparent with its users, clearly stating that prompt inputs may be used for training purposes.

Regarding accuracy, AI models sometimes fabricate responses when they cannot determine the correct answer. Such hallucinations can violate the GDPR's requirement that personal data be accurate. The Taskforce emphasizes that merely warning users about potential inaccuracies is not enough to satisfy the data accuracy principle. OpenAI is already facing legal action over ChatGPT's inaccuracies, notably its misstatement of a public figure's date of birth.

For now, ChatGPT and similar models may struggle to meet privacy requirements written before the advent of modern AI. However, OpenAI has established an Irish entity in Dublin, which makes Ireland's Data Protection Commission (DPC) its lead supervisory authority and shields it from separate GDPR enforcement actions by individual EU member states. It remains unclear whether the ChatGPT Taskforce will issue legally binding conclusions in its next report, or whether OpenAI will be able to comply with all of the regulations.
