(Update as of February 2, 2025: Ban on AI systems with unacceptable risks and the implementation of AI literacy requirements.)
Artificial intelligence has significantly improved many sectors and industries—such as healthcare, manufacturing, and education. However, the use of AI also carries risks and can lead to unfair or unintended consequences for both individuals and society. These risks range from social manipulation to reinforcing societal biases and socioeconomic inequalities.
To minimize these risks, prevent harmful outcomes, and ensure the safety and transparency of AI systems, the European Union introduced the first comprehensive legislation, namely the "Artificial Intelligence Act" (EU AI Act) in the summer of 2024.
The EU AI Act was published in the Official Journal of the European Union on July 12, 2024, and came into effect on August 2, 2024. Starting in February 2025, the first provisions become mandatory for businesses. Additional key regulations will follow in the months ahead, and the entire law will be applicable from August 2, 2026, except for certain high-risk AI systems.
With the AI Act becoming binding EU law, companies that develop and deploy AI systems must understand how to comply with its requirements to avoid legal consequences and significant fines.
This post provides a comprehensive overview of the Act’s implications as well as a timeline for the introduction of the new rules.
The EU AI Act is the first comprehensive legislation on artificial intelligence in the world. The Act aims to ensure that AI systems are safe and transparent and that consumers in the EU are not exposed to undue risks. In pursuing these objectives, the EU AI Act also recognises the need to promote innovation and investment in the AI sector and seeks to strike a balance between these goals.
The legislative text therefore pursues a risk-based approach to the regulation of AI systems and classifies them into four categories:
Depending on its risk category, an AI system is either banned outright or subject to stricter or less strict requirements.
Starting today, February 2nd, Article 4 of the EU AI Act on AI literacy comes into effect. This article applies to companies that develop, distribute, or operate AI systems. They are now required to ensure that all employees and external service providers involved in the planning, implementation, or use of AI systems are trained in the safe handling of these systems and in compliance with legal and ethical standards. To meet legal and ethical requirements and minimise liability risks, companies should therefore proactively invest in enhancing their data and AI competencies. However, since the AI Act does not provide specific guidelines on the scope or content of training, compliance can be confusing.
As part of our Academy Program, we offer specialized training on the safe use of AI systems in accordance with Article 4 of the EU AI Act. Our program includes comprehensive training for end-users as well as specialized expert sessions for technical implementation. We can help you establish a solid foundation for compliant AI use while promoting successful and responsible AI adoption across your entire workforce.
The EU AI Act is intended to cover all parties involved in the development, introduction, sale, distribution and use of AI systems that are made available to consumers in the EU. Article 2 of the AI Act stipulates that providers, product manufacturers, importers, distributors and deployers of AI systems may fall under the European AI Act.
In particular, the EU AI Act also applies to organisations based outside the European Union if they supply AI systems to EU consumers.
In addition, the EU AI Act sets no turnover or user threshold for its applicability. All organisations should therefore consider the new obligations and seek advice on whether their AI systems fall under the EU AI Act.
As for exceptions, the law excludes the following from its scope of application:
As the European AI Act applies to providers, suppliers, distributors and importers of AI systems, it is important to determine whether a particular tool or service falls within the definition of an AI system.
Article 3 of the AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
This definition covers a wide range of AI systems such as biometric identification systems and chatbots, but does not include simple software programmes.
According to the EU AI Act, there are four different risk categories for AI systems, and you are subject to different obligations depending on the category.
Category 1: Unacceptable risk
Article 5 of the AI Act lists the artificial intelligence practices that are automatically prohibited. Therefore, an organisation should not deploy, provide, place on the market or use these prohibited AI systems. These include:
Category 2: High risk
High-risk AI systems are listed in Annex III of the AI Act and include AI systems used in the areas of biometrics, critical infrastructure, education, employment and law enforcement, provided certain criteria are met.
High-risk AI systems are not prohibited, but they do require compliance with strict obligations. Article 26 of the AI Act imposes the following obligations on you if you deploy or use a high-risk AI system:
Category 3: Limited risk
This category includes lower-risk AI systems such as chatbots and deepfake generators, which carry less stringent obligations than the high-risk category. If you deploy, provide or use an AI system in this category, you must inform users that they are interacting with an AI system and label AI-generated audio, video and image content as such.
Category 4: Minimal risk
AI systems in this category are not associated with any obligations and include systems such as spam filters and recommendation systems.
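The four-tier scheme above can be summarised as a simple lookup structure. The following is a hypothetical illustration for orientation only, not part of the Act; the example systems are the ones mentioned in this post, not an exhaustive legal list:

```python
# Hypothetical sketch: the AI Act's four risk tiers as a lookup table.
# Category names follow the Act; examples and statuses summarise the
# descriptions above and are not legal advice.
RISK_CATEGORIES = {
    "unacceptable": {
        "status": "prohibited (Article 5)",
        "examples": ["social scoring", "manipulative AI"],
    },
    "high": {
        "status": "allowed, subject to strict obligations (Annex III)",
        "examples": ["biometrics", "critical infrastructure", "employment"],
    },
    "limited": {
        "status": "transparency obligations",
        "examples": ["chatbots", "deepfake generators"],
    },
    "minimal": {
        "status": "no obligations",
        "examples": ["spam filters", "recommendation systems"],
    },
}

def status_for(category: str) -> str:
    """Return the regulatory status for a given risk category."""
    return RISK_CATEGORIES[category]["status"]

print(status_for("limited"))  # → transparency obligations
```

In practice, classifying a real system requires a legal assessment against the Act's criteria; a table like this can only serve as a first orientation when building an AI inventory.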
The EU AI Act becomes generally applicable on 2 August 2026, two years after its entry into force. Following its adoption by the European Parliament in March 2024 and its publication in the Official Journal of the EU, the AI Act entered into force on 2 August 2024.
However, there are exceptions to this rule:
For example, the provision banning AI systems with an unacceptable risk applies after just 6 months (2 February 2025).
In addition, the obligations for high-risk AI systems that are safety components of regulated products (Article 6(1)) will only come into force after 36 months (2 August 2027), giving companies additional time to prepare.
The penalties for non-compliance with the AI Act depend on the specific offence and the degree and type of non-compliance.
In the case of prohibited AI systems, the fines can amount to up to 35 million euros or 7 % of annual worldwide turnover, whichever is higher. Anyone who supplies false information to the authorities can be fined up to 7.5 million euros or 1 % of their annual turnover.
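For undertakings, the fine ceilings work as a "whichever is higher" rule between a fixed amount and a share of annual worldwide turnover. A minimal sketch of that arithmetic (an illustration of Article 99's logic, not an official calculation tool):

```python
# Hypothetical sketch of the AI Act's fine ceilings: for companies, the cap
# is the fixed amount or the percentage of annual worldwide turnover,
# whichever is higher.
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum possible fine for a company with the given annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited AI practices: up to EUR 35 million or 7 % of turnover.
# For a company with EUR 1 billion turnover, the percentage dominates:
print(fine_cap(1_000_000_000, 35_000_000, 0.07))  # → 70000000.0
```

For smaller companies whose 7 % share falls below 35 million euros, the fixed amount is the binding ceiling instead.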
Date | Milestone |
---|---|
21 April 2021 | EU Commission proposes the AI Act |
6 December 2022 | EU Council unanimously adopts the general approach of the law |
9 December 2023 | European Parliament negotiators and the Council Presidency agree on the final version |
2 February 2024 | EU Council of Ministers unanimously approves the draft law on the EU AI Act |
13 February 2024 | Parliamentary committees approve the draft law |
13 March 2024 | EU Parliament approves the draft law |
12 July 2024 | Publication of the law in the Official Journal of the European Union |
2 August 2024 | AI Act takes effect, start of the 24-month transition period |
2 February 2025 | Ban on AI systems with unacceptable risks and the implementation of AI literacy requirements (Chapters 1 & 2) |
2 August 2025 | Entry into force of governance rules and obligations for GPAI providers, as well as regulations on notifications to authorities and fines (Chapters 3, 5, 7, 12 & Article 78) |
2 August 2026 | End of the 24-month transition period. Obligations for high-risk AI systems come into effect (Article 6(2) & Annex III) |
2 August 2027 | Obligations for high-risk AI systems as a safety component come into effect (Article 6(1)) and the entire EU AI Act becomes applicable |
EU AI Act timetable (as of February 2025)
The European Parliament voted on and adopted the AI Act on 13 March 2024. Following its adoption by Parliament and its publication in the Official Journal of the European Union, the law entered into force on 2 August 2024.
Considering that the European AI Act is likely to cover many AI systems in use and applies to providers, importers, distributors and deploying organisations, they should familiarise themselves with the EU AI Act and its obligations in a timely manner.
For example, it is vital that organisations have a detailed and up-to-date inventory of all the AI systems they use and are fully aware of the specific obligations for each risk category.