Italy faced a regulatory deadline as authorities assessed how leading AI services handle user data. The episode signalled a shift toward stricter controls, with officials weighing how artificial intelligence platforms should operate within national borders while respecting privacy law.
The national data protection authority called for an age-verification system that blocks access for users under 13 and urged that it be implemented before a specified deadline. The measure is intended to safeguard younger users as online services expand their AI capabilities.
In addition, the supervisory authority urged OpenAI to explain clearly and publicly how personal data is used to train its models and to improve the service. The request emphasizes transparency and informed participation by users across the markets where these tools are offered.
The regulator also required a public-facing notice describing how user information is stored, processed, and used. A simple, accessible page would help users understand data practices and their rights.
Italian users signing up for the service must agree to the terms before use, and existing users must consent again upon reactivation. This approach aligns with privacy principles that prioritize explicit consent and clear notice of data processing.
The enforcement action took immediate effect, with authorities asserting that the tool did not meet national data protection standards. The move signals a broader reconfiguration of how AI tools are deployed in regulated markets and how compliance is demonstrated.
OpenAI, a U.S.-based company, operates globally and has drawn scrutiny from several organizations concerned about how data is handled in AI development. The incident underscores the importance of robust data governance when large language models process user information.
Reports also noted a lapse in the protection of payment data and subscriber information, underscoring the need for clear communication with users and other stakeholders about data collection practices. Critics argued that the absence of a solid legal basis for the bulk collection and storage of personal data undermines trust in AI services.
The use of artificial intelligence continues to grow in reach and capability. UNESCO has urged nations to implement its Recommendation on the Ethics of Artificial Intelligence without delay; the global ethical framework, adopted unanimously by member states, remains a reference point for responsible AI deployment [citation: UNESCO governance of AI].