European privacy regulators coordinate on OpenAI ChatGPT scrutiny

Spain is examining whether OpenAI’s ChatGPT complies with European privacy law, after the Spanish Data Protection Agency (AEPD) decided this week to open preliminary investigative proceedings. The move follows Italy’s regulator, which two weeks ago ordered the service blocked with immediate effect. The scrutiny centers on how the popular chatbot collects, stores, and uses personal data, and whether those practices meet GDPR standards.

Last week, as reported by El Periódico de Catalunya of the Prensa Ibérica group, the AEPD asked its European counterparts to take up the matter jointly. The goal is to share information on potential enforcement steps and to coordinate action at the upcoming plenary meeting of the European Data Protection Board, where EU regulators will discuss how to harmonize their responses and exchange information that could shape enforcement measures across member states.

European coordination

Big tech firms that operate across Europe, such as Meta Platforms and Amazon, typically route their data processing through Ireland, making the Irish regulator the lead gatekeeper for enforcing EU privacy rules. When a company’s EU operations are centralized in one member state, that state’s regulator generally leads the response, with the others following the same framework. Because OpenAI has no EU establishment, however, each member state’s authority can assert jurisdiction independently, as Italy did. That scenario risks inconsistent enforcement across countries unless regulators harmonize their approaches.

The formation of a joint working group is aimed at reducing such disparities. Regulators expect to align on how to apply GDPR provisions to real-world uses of artificial intelligence and large language models, ensuring that actions taken in one state are compatible with those in others while preserving the underlying protections for residents.

Italy’s actions and expectations

In a decision announced this week, Italy set an April 30 deadline for OpenAI to present a plan addressing demands for greater transparency around data handling and the governance of user information. The decision underscores a broader trend toward requiring clearer disclosures and stronger governance mechanisms for AI tools used by the public.

ChatGPT in the crosshairs

In late March, Italian authorities blocked access to ChatGPT on the grounds that there was no legal basis to justify the collection and bulk storage of personal data, citing GDPR requirements. This action marked one of the early, concrete steps taken by EU regulators to assess and constrain the use of AI systems that process personal information. The next phase gives OpenAI a set of compliance milestones to meet, including age verification measures designed to exclude users under 13 from the platform.

If OpenAI cannot demonstrate a compliant data governance framework or fails to implement the required safeguards, authorities could extend restrictions and block access to the service within their countries. Such an outcome would also carry implications for users, developers, and regulators in Canada and the United States, who are watching how EU actions translate into global patterns of AI governance and cross-border data handling.
