OpenAI’s conversational AI, ChatGPT, has come under fire from Italy’s data protection authority, the Garante, for potential GDPR violations. The investigation, launched last year after ChatGPT was temporarily banned in Italy, centers on the handling of users’ personal data, inadequate age-verification procedures, the inadvertent exposure of users’ messages and payment details, and the legal basis for OpenAI’s data-collection practices.
The Garante was also alarmed by ChatGPT’s occasional generation of inaccurate information about individuals. These AI “hallucinations” have in some cases wrongfully implicated people in crimes, including embezzlement and sexual harassment. Libel lawsuits have been filed against AI developers over such incidents, but the legal outcomes remain uncertain.
In response, OpenAI has stressed its commitment to the GDPR and other privacy laws. The company says it works to limit the amount of personal data used in training and has designed its systems to refuse requests for private or sensitive information, maintaining that its practices comply with the GDPR.
Concurrently, the Federal Trade Commission (FTC) in the US is scrutinizing the relationships between key AI startups such as OpenAI and tech behemoths including Microsoft, Amazon, and Google. Civil rights groups and organizations such as the Mozilla Foundation have likewise called on the European Commission to probe Microsoft and OpenAI for potential breaches of antitrust rules.
As AI technology becomes integral to more sectors of society, the need for a regulatory framework that balances innovation with ethical, privacy, and market protections is clear. The outcomes of these ongoing investigations and regulatory measures will likely set significant precedents for the future governance of AI technologies and shape their trajectory in the global digital economy.