This Wednesday, the European Parliament will give definitive approval to what is described as the world’s first comprehensive regulation of artificial intelligence (AI). Since the European Commission first proposed it in April 2021, the EU institutions have negotiated a law that is as ambitious as it is intricate, setting out different obligations for AI uses according to the level of risk they involve.
From Strasbourg, a media outlet talks with Ioan-Dragos Tudorache, co-rapporteur of the AI Act and chair of the European Parliament’s Special Committee on Artificial Intelligence, to unpack the details and controversies of a regulation Brussels has called “historic,” yet one that will take about two years to implement.
On December 8 an agreement was reached after more than 35 hours of negotiations. What were the main sticking points?
Going into the talks, the big questions were foundation models and the use of AI by law enforcement. There were intense discussions about which obligations to impose on foundation models, but law enforcement proved even tougher to close.
In June 2023, the European Parliament voted to ban biometric identification. What has changed since then to permit police use?
The Council of the EU had a clear mandate: the Commission’s original proposal already allowed for police exceptions, and many countries wanted them. The prohibition the Parliament insisted on raised the starting point of the negotiations and secured tougher safeguards than expected, which is why even the Greens backed this approach.
The regulation bars systems with “unacceptable” risks like social scoring, but permits other high-risk uses such as facial recognition in specific cases. How are these limits defined?
The preventive use of AI has been ruled out to stop potential abuses. The use of biometrics is tightly limited, applicable only in imminent-threat scenarios, such as an attack, and subject to judicial authorization. All police use is restricted to a precise list of crimes, requires notification, and must be preceded by a risk assessment.
A statement from the European Data Protection Board expressed concern about the absence of red lines that would categorically ban AI applications posing unacceptable risks.
That assessment will need to be read in detail. The agreement tightens restrictions compared to the initial deal. There is a ban with a narrow exception: from a public-safety perspective, if safeguards are in place, oversight exists, and abuses are prevented, certain uses can be justified despite the concerns. That said, views diverge on this point.
Amnesty International and other organizations have warned that the regulation could enable the use of AI at borders to monitor migrants.
Generally, data analysis that does not identify individuals, such as using AI to predict migration flows or criminal trends without targeting a specific person, is not automatically prohibited. But the moment the system makes a calculation about a specific person, for instance AI used to optimize asylum interviews or to detect threats among specific groups, it becomes high-risk and requires safeguards. The idea is that AI deployed in these contexts operates under strict controls and human oversight; the algorithm should never have the final say.
Even so, AI gets things wrong and carries biases.
High-risk systems must be trained on unbiased data. Developers will be required to ensure the accuracy and fairness of that data, and to be transparent about how their systems are designed, so that authorities can understand how conclusions were reached. Past missteps, like biased social scoring, show how important it is to avoid such outcomes in the future.
This creates a framework for transparency that was missing before. Administrations will continue to use AI, but the new obligations promote responsible use and allow regulators to spot problems. Citizens will be able to challenge algorithmic recommendations they deem unfair.
Europe does not have tech giants and may never become a Google-sized powerhouse
The regulation also covers foundation models like GPT-4, the engine behind ChatGPT, to bring more transparency. Will there be exemptions that allow European firms to compete with the U.S. AI giants?
Exemptions based on protectionism for European companies are not feasible. The focus is on building a robust European AI ecosystem rooted in research, startups, and a strong innovation culture. Open-source code is favored, and efforts are made to reduce compliance costs so small, emerging firms aren’t left behind. The aim is to stimulate a competitive European market within a crowded global space.
Can Europe really compete with giants accused of behaving like monopolies?
Over the past two decades, strong dependencies have emerged that make competing with the likes of Google or Meta extremely difficult. The goal is not to become a European Google; it is to thrive as a strong, independent innovator. Europe can excel in areas such as robotics and civilian applications of satellite technologies, and it hosts some of the world’s most powerful supercomputers. These capabilities are being mobilized to support European AI initiatives, giving startups and researchers alike a solid computational backbone.