European AI law inches toward full implementation across the bloc
European artificial intelligence policy is moving closer to completion. This Friday, the 27 EU member states ratified the political agreement reached last December, marking a historic milestone in the union's approach to AI governance. The regulation aims to provide a harmonized framework with global reach, applying to AI systems placed on the European market regardless of where they are developed, including in Canada and the United States, while prioritizing safety and fundamental rights.
The path toward a comprehensive regulation began in April 2021, when the European Commission proposed a framework setting risk-based requirements for AI. By late 2022, consensus appeared near, but the advent of generative tools like ChatGPT sent legislators back to the drawing board. After nearly three years of debate, the bloc has settled on a legislative text expected to guide AI policy for years to come.
Although many technical details remain to be finalized, three central facets of this ambitious project are outlined below, along with what they could mean for developers, users, and policymakers alike.
Classification by risk
The initiative, known as the Artificial Intelligence Act, categorizes AI systems according to the level of risk they pose to society. Brussels distinguishes four risk levels. It bans systems considered to pose unacceptable risk, such as those that manipulate behavior, classify individuals by socioeconomic status, or perform real-time biometric identification in public settings.
The next tier, high risk, permits deployment but under strict obligations. Systems that could endanger security or fundamental rights, such as those used in education, transportation, or border control, will undergo pre-market assessment and ongoing checks throughout their lifecycle.
The third tier covers technologies associated with manipulation risk, including certain virtual assistants and generative models. These must meet transparency requirements, such as clearly informing users that they are interacting with a machine and indicating when content is generated by AI.
Finally, there are those with low or minimal risk, which appear in many everyday digital services such as video games or spam filters. These do not require special regulatory compliance beyond standard practices.
Generative AI
Generative AI surged in 2023, driven by models trained on vast amounts of internet data and capable of producing text, images, or audio. Regulating these practices has been a central topic for European legislators. The agreed text requires providers of such programs to be transparent about the data used for training, in order to mitigate potential copyright issues.
France has pushed for lighter obligations to avoid disadvantaging European AI startups, such as Mistral AI, and to keep competitiveness with major players like OpenAI and Google. The final deal does not incorporate the reduced obligations favored by Paris and Silicon Valley.
Exceptions and safeguards
The last key issue involves biometric surveillance. In June, the European Parliament restricted the use of facial recognition and similar technologies as a privacy safeguard. The political agreement reached in December narrows real-time identification but includes a few exceptions that mainly benefit security and law enforcement. Use in public spaces by security forces remains possible in very limited cases with prior judicial authorization, such as threats to public safety or investigations into serious crimes.