European institutions moved closer to a legal framework on artificial intelligence on Wednesday afternoon, with negotiators already aligning on rules for the foundation models that power tools like ChatGPT. Insiders familiar with the talks told EFE that the framework is taking shape, signaling a global push to regulate this technology in a consistent way.
Governing AI has been a central point of tension among EU bodies as the long-anticipated AI law nears final form. The draft regulation seeks to set clear standards for a technology that has become a routine part of daily life over the last year, aiming to establish one of the first comprehensive global baselines for accountability and safety in AI practices.
At present, observers note that substantial work remains before a final agreement is reached, with several items still unresolved. Negotiators from the EU Council, the European Parliament, and the European Commission began their talks on Wednesday at 15:00 local time (14:00 GMT) and planned to continue through 08:00 on Thursday to try to seal the deal. In practice, it soon became evident that the talks would run past the initial timetable as negotiators worked through the remaining differences.
A highly debated issue is how the regulation should address real-time biometric surveillance in public spaces. The core question is whether such capabilities can be allowed under strict safeguards, or whether they should be limited to protect fundamental rights. Advocates from European governments argue that biometric monitoring can aid crime prevention and public safety, including countering terrorism, addressing serious abuses such as sexual exploitation, and protecting critical infrastructure, provided it operates under strong limits and oversight. They maintain that prior judicial authorization is essential to minimize risk and ensure proportionate use.
Conversely, the European Parliament has voiced strong concerns about the spread of biometric surveillance, warning that wide deployment could infringe on civil liberties and privacy rights. The Parliament's rapporteur for the regulation, Brando Benifei, said on Wednesday that such surveillance might be acceptable only if strict guarantees shield individual rights and provide transparent oversight. The debate thus centers on balancing public safety needs with fundamental rights, a crossroads that will shape the final architecture of the law.
Throughout the talks, negotiators emphasize clear accountability for AI systems. They are exploring how to assign responsibility across developers, operators, and users, and how to enforce compliance in a way that is understandable for businesses of all sizes, researchers, and public authorities. The aim is to establish a coherent framework that encourages responsible innovation while maintaining a strong safety net for consumers and workers who interact with AI-enabled tools in everyday life.
Also under discussion are provisions related to transparency, risk assessment, and the handling of high-risk AI applications. The negotiators are considering how to require companies to disclose essential information about how their models function, what data they rely on, and what safeguards are in place to prevent discrimination and bias. The expected outcome would be a regulation that builds trust in AI systems by making processes auditable and decisions explainable, without slowing the pace of technological progress.
As talks intensify, observers suggest that the final text could incorporate flexibility that reflects the evolving nature of AI. The goal is to craft a regulatory approach that can adapt to future developments while preserving core protections and ensuring a level playing field across member states. Officials stress that the process remains highly technical and depends on reaching consensus among diverse political views, economic interests, and regional priorities.
In the late-stage negotiations, the parties are seeking to resolve how to handle exemptions and transitional provisions for existing technologies, the criteria for classifying applications as high-risk, and the mechanisms for oversight and enforcement. The outcome will shape not only how AI products are developed and marketed within the European Union but also how global companies design and deploy their AI solutions to meet European standards, potentially guiding practices well beyond EU borders.