Italian data protection agency warns publisher against sharing data with OpenAI
OpenAI faces a regulatory hurdle in Italy, according to comments by the country’s data protection authority, the Garante per la Protezione dei Dati Personali (GPDP). The regulator issued a warning to GEDI, an Italian publisher, against sharing its data archives with OpenAI, citing a potential breach of the European Union’s (EU) General Data Protection Regulation (GDPR).
This follows a partnership between GEDI and OpenAI that would allow OpenAI to train its ChatGPT models on Italian-language content.
GEDI is a media company owned by the Agnelli family. It publishes the daily newspapers La Repubblica and La Stampa, and it announced a partnership with OpenAI in September. The partnership involves delivering Italian-language content from the publisher’s news portfolio to help train and improve OpenAI’s products.
GPDP frowns on GEDI’s partnership with OpenAI
John Elkann, Chairman of GEDI, said, “The partnership signed with OpenAI is part of GEDI’s digital transformation journey and recognizes its leadership in producing high-quality content within the Italian media landscape.”
However, the GPDP warns that this partnership could potentially lead to a breach of the EU’s GDPR.
The GDPR sets a global standard for privacy law, emphasizing user consent, transparency, and accountability. The EU was also the first to propose a dedicated AI regulatory framework, the AI Act, which it says will promote the safe and responsible use of AI.
“If GEDI, on the basis of the agreement signed with OpenAI, were to disclose the personal data contained in its archive, it could violate EU regulation, with all the consequences, including those of a sanctioning nature,” the GPDP stated.
GDPR violations can cost offending companies up to €20 million or 4% of their annual global turnover, whichever is higher.
Contrasting global perspectives on AI usage
This latest episode with GEDI further escalates the tensions between technological advancement and compliance with privacy regulations in the EU.
Clearview AI, an American company, was fined about €30 million by the Dutch Data Protection Authority, which cited privacy breaches and violations of user rights under the GDPR. In 2023, the GPDP temporarily banned ChatGPT over concerns about the unlawful collection and processing of user data.
The U.S. has taken a more relaxed, market-driven approach to AI, favoring innovation and emphasizing self-regulation within the space. The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued in October 2023, reflects this approach.
The lack of clear legislation, especially at the federal level, has left states leading the way in regulating the industry. The California Consumer Privacy Act (CCPA) is one example of state-level privacy legislation that also affects how companies handle the data behind AI systems.
China has also put a regulatory framework in place. In July 2023, the Cyberspace Administration of China issued regulations on the use of generative AI, and it plans to formulate more than 50 AI standards by 2026. These regulations apply to both domestic and international providers of AI services operating in the country.