EU AI Act: Risk-Based Rules and Timelines

The long-awaited EU law regulating artificial intelligence is now a reality. On Thursday, August 1, 2024, the world's first comprehensive AI framework came into force, restricting the marketing and use of AI technologies according to the risks they pose. The aim is clear: to promote ethical development while safeguarding the fundamental rights of European citizens.

The regulation represents more than three years of meticulous work. In April 2021, the European Commission first proposed a balanced set of rules intended to anchor the bloc's technological leadership, allowing the best of AI to flourish while curbing its downsides. By late 2022 the text appeared largely settled, but the surge of generative models driven by the popularity of tools like ChatGPT prompted lawmakers to revisit and refresh the draft to keep it from becoming obsolete. After months of intense negotiations, the European Parliament approved what Brussels called a historic measure in March 2024.

Risk classification

The pioneering regulation that takes effect today lays the groundwork for a governance framework around AI. It groups technologies into four risk categories, imposing stricter or lighter obligations depending on the level of risk. The idea is to balance innovation with the protection of basic rights, adding an extra layer of safeguards for an exceptionally powerful tool, as explained by Ibán García del Blanco, a Socialist former MEP who contributed to the drafting.

These rules apply not only to developers and providers of AI systems but also to public bodies, users, and enterprises. The impact extends beyond the EU's borders: providers that market or deploy AI within Europe, even from outside the Union, must comply. Fines can reach 35 million euros or seven percent of a company's global annual turnover, whichever is higher.
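To illustrate how that penalty ceiling works in practice, here is a minimal sketch; the two thresholds are those cited above, while the function name and the example turnover figure are purely illustrative:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious infringements:
    the higher of 35 million euros or 7% of worldwide annual turnover."""
    FIXED_CAP = 35_000_000       # 35 million euros
    TURNOVER_SHARE = 0.07        # seven percent of global annual turnover
    return max(FIXED_CAP, TURNOVER_SHARE * global_annual_turnover_eur)

# A firm with 1 billion euros of turnover faces a ceiling of 70 million euros,
# because 7% of its turnover exceeds the fixed 35 million euro amount.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```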

1. Unacceptable risk

The law categorically bans AI uses that violate basic rights. This includes predictive policing systems, systems designed to manipulate user behavior, and systems that categorize individuals in order to infer race, political opinions, or sexual orientation.

2. High risk

When AI can meaningfully threaten individual safety or rights, it is treated as high risk. Examples include facial recognition, biometric monitoring in the workplace, and AI systems used to diagnose diseases or decide access to public benefits, loans, or educational services. For these applications the law imposes strict transparency requirements so that the underlying algorithms can be audited. Developers must ensure their data are accurate and unbiased, and must be able to explain to authorities how the system arrived at a given conclusion, an obligation that leading members of the law's drafting team, including its chair, have stressed in interviews with major media outlets.

Public bodies adopting high-risk AI must conduct impact assessments before putting it into use. A centralized EU database will inform citizens about which entities rely on these technologies, empowering them to challenge algorithmic decisions they consider unjust.

3. Limited risk

This category covers AI systems that raise transparency concerns, notably general-purpose models and generative AI that produces text, images, audio, or video from large volumes of online data. The regulation seeks to compel the major players marketing these generative systems, principally Microsoft, OpenAI, and Google, to clarify how their products work and to help citizens determine whether the models behind them are trained on biased data or protected works. It also envisions watermarks on AI-generated content so users can distinguish real from synthetic material, reducing the risk of misinformation.

Transparency rules are designed to help people understand when content is machine‑generated and to ensure that training data do not undermine trust or rights. The implications extend to compliance strategies for major providers and to informed consumption by users across Europe and beyond.

4. Minimal risk

At the lowest tier, systems can operate with no obligations beyond those already imposed by existing law, provided the risk is deemed very small. AI used to optimize logistics or streamline production processes, for instance, may fall here. Even so, the regulation encourages firms developing these tools to voluntarily adhere to codes of ethics and transparency practices.
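As a rough, purely illustrative summary of the four tiers described above, the sketch below maps the article's example use cases to their category; the tier names follow the regulation, but the enum and the mapping are a simplification for illustration, not a legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict transparency, auditing, and impact-assessment duties"
    LIMITED = "transparency and labelling obligations"
    MINIMAL = "no obligations beyond existing law"

# Example use cases drawn from the article (illustrative, not exhaustive).
EXAMPLE_USE_CASES = {
    "predictive policing": RiskTier.UNACCEPTABLE,
    "inferring race or sexual orientation from personal data": RiskTier.UNACCEPTABLE,
    "facial recognition": RiskTier.HIGH,
    "deciding access to public benefits or loans": RiskTier.HIGH,
    "general-purpose generative chatbot": RiskTier.LIMITED,
    "logistics optimization": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```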

Phased implementation

Today marks the law's official entry into force, but its principal provisions will roll out over the next three years. The timeline is designed to ease the transition for both the public and private sectors. Bans on unacceptable-risk practices apply from February 2, 2025. Rules for generative and general-purpose AI come into force on August 2, 2025, with the bulk of obligations applicable by August 2, 2026. Provisions covering high-risk AI embedded in products already regulated at sector level, such as aviation or medical devices, will wait until August 2, 2027.
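For readers tracking compliance deadlines, here is a minimal sketch of that timeline as a lookup table; the dates are those cited above, while the dictionary layout and helper function are illustrative:

```python
from datetime import date

# Key application dates from the phased timeline described above.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Regulation enters into force",
    date(2025, 2, 2): "Bans on unacceptable-risk practices apply",
    date(2025, 8, 2): "Rules for generative and general-purpose AI apply",
    date(2026, 8, 2): "Bulk of obligations, including most high-risk rules, apply",
    date(2027, 8, 2): "Rules for high-risk AI in already-regulated products apply",
}

def obligations_in_effect(on: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [label for when, label in sorted(AI_ACT_MILESTONES.items()) if when <= on]

print(obligations_in_effect(date(2026, 1, 1)))
```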

