European artificial intelligence (AI) law is closer than ever. This Friday, the 27 EU member states ratified the “historic” political agreement reached in December, the world’s first horizontal regulation of this technology.
In April 2021, the European Commission proposed a regulation that would establish a set of requirements for the use of artificial intelligence according to its risks. At the end of 2022, when consensus seemed within reach, the emergence of ChatGPT sent lawmakers back to the starting point. Almost three years after that initial proposal, the bloc has agreed on a legislative text that will shape the years ahead.
Pending further technical details, here are three keys to understanding this ambitious project.
Classification according to risk
The initiative, known as the Artificial Intelligence Act, classifies artificial intelligence systems according to the dangers they pose to society. Brussels establishes four tiers. It will ban systems that represent an “unacceptable risk”. This is the case with cognitive-behavioral manipulation, the classification of people according to their behavior or socioeconomic status, and real-time biometric identification programs.
One step down, “high risk” systems are permitted but subject to strict obligations. Systems that may adversely affect safety or fundamental rights, applied in areas such as education, the automotive sector or border control, will be evaluated before they are placed on the market and throughout their life cycle.
On the third step are artificial intelligence systems that pose a “limited” risk of manipulation. This refers to virtual assistants or generative systems such as ChatGPT. These must meet transparency criteria, such as informing users that they are interacting with a machine or that the content generated is synthetic.
Finally, there are those that pose no risk or a “minimal” one. These are the AI systems found in many of the digital services we use, from video games to spam filters, and they can be used without complying with any special requirements.
Generative AI
2023 was the year generative AI broke through: models fed with vast amounts of internet data that can automatically generate text, images or sound. Regulating these tools was also one of the most debated issues among European legislators. The text agreed upon in December stipulates that the companies behind these programs, such as Microsoft or Google, must be transparent and indicate which data they trained them on, in order to prevent possible copyright infringement.
France pushed hardest to soften these obligations, arguing that they would harm European AI start-ups such as France’s Mistral AI. Companies like OpenAI, the creator of ChatGPT, lobbied in the same direction. The final deal does not include the relief demanded by Paris and Silicon Valley.
Exceptions and threats
The last key is perhaps the most controversial and important: biometric surveillance. In June, the European Parliament voted to ban systems such as facial recognition, which privacy organizations regard as a threat to fundamental rights. The political agreement signed in December vetoes real-time identification but adds a few exceptions that particularly benefit the police and the military. Security forces and institutions will thus be able to use this technology in public spaces and, with prior judicial authorization, in limited cases such as terrorism, human trafficking, or the search for rape suspects.