On 13 June 2024, the AI Act was adopted as Regulation (EU) 2024/1689 (published in the Official Journal of the EU on 12 July 2024), aimed at regulating the use of artificial intelligence (AI) to ensure its safe, transparent, and ethical use. The regulation seeks to ensure that AI systems placed on the European market and used in the EU are safe, respect fundamental rights and the values of the Union, and promote investment and innovation in this sector in Europe.
Who does it apply to?
The regulation applies to:
- AI providers and users (deployers) within the European Union;
- AI providers and users outside the EU whose Artificial Intelligence system outputs are intended for use within the European Union;
- Importers and distributors of AI systems.
However, the AI Act does not apply to AI systems used exclusively for military purposes, for scientific research and development, or for purely personal, non-professional use, and it applies only in part to AI systems released under free and open-source licenses.
The main innovations introduced by the Artificial Intelligence Regulation
The regulation introduces obligations for Artificial Intelligence system developers, such as:
- Risk assessment: all AI systems must undergo a risk assessment based on their potential impact on people's health, safety, and fundamental rights.
- Introduction of four categories of AI systems, classified by risk level (see the sketch following this list):
- Unacceptable risk: prohibited;
- High risk: AI systems must meet stringent requirements, such as harmonized standards and oversight by competent authorities;
- Limited risk: transparency obligations, such as providing clear and easily accessible information to end users;
- Minimal risk: no specific obligations.
- CE marking: high-risk Artificial Intelligence systems must bear the CE marking before being placed on the market.
- Data management: strict rules for the collection, management, and storage of data used in AI system development and operation.
- Transparency: providers must offer clear and accessible information to end users, including capabilities, limitations, and potential risks.
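To make the four-tier structure concrete, here is a minimal Python sketch of how a compliance tool might model the risk categories and their headline obligations. The enum, the `OBLIGATIONS` mapping, and the `headline_obligation` helper are illustrative assumptions of this article, not terms or structures defined by the regulation itself.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers introduced by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # stringent requirements, oversight, CE marking
    LIMITED = "limited"            # transparency obligations toward end users
    MINIMAL = "minimal"            # no specific obligations

# Illustrative one-line summary of the headline obligation per tier; real
# compliance work requires reading the relevant articles of Regulation (EU) 2024/1689.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskLevel.HIGH: "Harmonized standards, competent-authority oversight, CE marking.",
    RiskLevel.LIMITED: "Provide clear, easily accessible information to end users.",
    RiskLevel.MINIMAL: "No specific obligations.",
}

def headline_obligation(level: RiskLevel) -> str:
    """Return the one-line obligation summary for a given risk tier."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    print(headline_obligation(RiskLevel.HIGH))
```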
Examples of high-risk AI system use cases include:
- Critical infrastructure (e.g., energy, transport, water, gas, and other essential networks),
- Educational and vocational training contexts,
- Essential public services (e.g., healthcare, social security, financial services),
- Biometric identification of people in public spaces, surveillance and control,
- Product safety (e.g., medical devices, autonomous vehicles),
- Systems that exploit vulnerabilities (e.g., age, disability, social or economic status),
- Social scoring of individuals based on behavior, personal characteristics, or assessments leading to discrimination (strictly speaking, the regulation places these last two practices in the prohibited, unacceptable-risk category).
What will change for companies based on the application of the AI Act?
The regulation will impact companies, public entities, and developers working with Artificial Intelligence systems as its provisions become applicable in stages. Key dates (modeled in the sketch after this list):
- 01/08/2024: Regulation enters into force (20 days after publication in the Official Journal);
- 02/02/2025: Provisions on prohibited AI practices become applicable;
- 02/08/2025: Provisions regarding designated national notifying authorities, general-purpose AI models, governance, penalties (excluding financial penalties for general-purpose AI model providers), and confidentiality apply;
- 02/08/2026: All remaining provisions of the regulation apply, except those in the next point;
- 02/08/2027: Obligations apply to high-risk AI systems that are products, or safety components of products, referred to in Article 6(1).
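Since these staged dates determine when each obligation starts to bite, a compliance team might encode the timeline and query it programmatically. The following is a minimal sketch assuming only the milestones listed above; the `MILESTONES` table and the `provisions_in_effect` function are hypothetical names, not part of the regulation.

```python
from datetime import date

# Milestones taken from the applicability timeline listed above.
MILESTONES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 2), "Prohibited AI practices provisions apply"),
    (date(2025, 8, 2), "Notifying authorities, general-purpose AI, governance, penalties"),
    (date(2026, 8, 2), "All remaining provisions apply (except Article 6(1) systems)"),
    (date(2027, 8, 2), "Obligations for high-risk systems under Article 6(1)"),
]

def provisions_in_effect(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for when, label in MILESTONES if when <= on]

if __name__ == "__main__":
    # Example: which provisions already apply on 1 January 2026?
    for label in provisions_in_effect(date(2026, 1, 1)):
        print(label)
```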
Thus, companies, entities, and developers will need to prepare technical documentation, assess risks, ensure compliance, enforce security measures, and protect data in order to avoid legal and financial risks.
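As a closing illustration, these duties could be tracked per AI system in a simple checklist structure. This is a purely hypothetical sketch; the `ComplianceRecord` fields are editorial assumptions mirroring the obligations discussed above, not regulatory terms.

```python
from dataclasses import dataclass

@dataclass
class ComplianceRecord:
    """Hypothetical per-system checklist mirroring the duties above."""
    system_name: str
    risk_level: str                    # e.g. "high", "limited", "minimal"
    technical_documentation: bool = False
    risk_assessment_done: bool = False
    security_measures: bool = False
    data_protection: bool = False
    ce_marking_affixed: bool = False   # relevant for high-risk systems only

    def outstanding(self) -> list[str]:
        """List the checklist items that are still missing."""
        checks = {
            "technical_documentation": self.technical_documentation,
            "risk_assessment_done": self.risk_assessment_done,
            "security_measures": self.security_measures,
            "data_protection": self.data_protection,
            "ce_marking_affixed": self.ce_marking_affixed,
        }
        return [name for name, done in checks.items() if not done]

if __name__ == "__main__":
    record = ComplianceRecord(system_name="demo-system", risk_level="high")
    print(record.outstanding())
```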