Italy's privacy regulator has temporarily banned OpenAI's ChatGPT, the smash-hit conversational A.I. that has over recent months impressed and concerned people in equal measure.

The Italian Data Protection Authority said Friday that ChatGPT was violating the European Union's strict General Data Protection Regulation (GDPR) in multiple ways, ranging from the fact that it sometimes spews out incorrect information about people, to OpenAI's failure to tell people what it's doing with their personal data.

Until it can satisfy the privacy regulator that it has brought its practices into compliance with GDPR, OpenAI now has to stop processing the personal data of people in Italy, which means the authority wants it to stop serving users there. It has 20 days to comply with the ban, or face fines that could theoretically go up to €20 million ($22 million) or 4% of global revenue, whichever is higher. OpenAI's revenues are not publicly disclosed. According to OpenAI documents seen by Fortune, the company was projected to have less than $30 million in revenues in 2022 but was forecasting revenues would grow rapidly to exceed $1 billion by 2024.

It is not yet clear whether OpenAI will also have to stop ChatGPT from referencing Italians' personal data in the answers it gives users around the world; Fortune has asked the regulator for clarification. Under European law, personal data means any data that can be connected with an identifiable individual.

ChatGPT is a conversational interface that sits on top of an A.I. system known as a large language model. These models are trained on vast amounts of text culled from the Internet and from private data sources. OpenAI has not disclosed exactly what data was used to create the latest version of ChatGPT, so it is unclear how the Italian privacy regulator can be sure that Italian citizens' personal information is contained in the training set OpenAI used.

Growing Sense of Panic

It is unusual for a European privacy regulator to institute a temporary ban at the same time as launching an investigation into the target of the ban. The urgency of the move reflects a sense of panic, which has become particularly apparent over the last couple of days, about the potential dangers of today's unprecedentedly powerful A.I. systems.

On Wednesday, a host of technologists and other experts (including Elon Musk and Apple co-founder Steve Wozniak) published an open letter calling on OpenAI and its peers to pause the development of next-generation A.I. models for at least half a year, so that industry and governments can draw up governance structures for systems like OpenAI's GPT-4 and future, more powerful ones.

Then on Thursday, civil society groups in the U.S. and Europe called on regulators to force OpenAI to address some of the problems with ChatGPT. In the U.S., the Center for AI and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission (FTC), while in Brussels the European Consumer Organisation (BEUC) called on EU-level and national regulators to quickly launch investigations into ChatGPT.

Legal experts say EU-level action is unlikely while the bloc's grand institutions continue to negotiate the wording of an A.I. Act that the European Commission proposed two years ago; lawmakers are currently scrambling to bring that proposal up to date so it can adequately address recently unveiled services like ChatGPT. However, BEUC was also directing its call at national data protection watchdogs, among others, and it seems Rome has been quick to deliver.

Incorrect Information

In a Friday statement, the Italian authority said OpenAI was breaking the GDPR by failing to give information to ChatGPT's users (or to people whose personal data has been used to train the large language model) about the processing of their data. OpenAI's failure to identify a legal basis for its processing of Italians' personal data also allegedly falls foul of the GDPR; this is a serious issue that is currently plaguing many American tech companies.

Citing a relatively obscure provision of the GDPR, the Italian watchdog also said it is concerned that the information provided by ChatGPT "does not always correspond to the real data, thus determining an inaccurate processing of personal data." This would be a novel legal hurdle for generative A.I. models, which regularly hallucinate or make up information.

The regulator also pointed out that OpenAI doesn't have any system in place to verify that its users are over the age of 13, even though its terms of use set the age limit. This, it said, exposes minors to "absolutely unsuitable answers compared to their degree of development and self-awareness."

OpenAI had not responded to a request for comment at the time of publication. Fortune has also sought comment from Microsoft, which recently integrated ChatGPT into its Azure OpenAI service.

