Authored by Pierre-Alexandre Degehet, Partner
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, and the Luxembourg draft bill No. 8476.
The European regulation on artificial intelligence (the AI Act, Regulation (EU) 2024/1689) has now been adopted. For businesses, the challenge is very concrete.

📌 Three key takeaways
1. Identify your role (provider, deployer, etc.).
2. Determine the level of risk your systems present.
3. Organise compliance accordingly.
This first introductory note, deliberately educational in nature, explains the main principles of the AI Act and its initial practical implications, with a focus on implementation in Luxembourg.
The AI Act is the common name given to Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence and amending several existing regulations and directives.
It is an EU regulation, i.e. a legal act that is directly applicable in all EU Member States, without any need for transposition into national law. The obligations it sets out therefore apply directly to the relevant stakeholders, subject to the national implementing measures provided for by the text, in particular regarding the competent authorities, supervisory and enforcement mechanisms, and the sanctions regime.
The purpose of the AI Act is to govern the development, placing on the market and use of artificial intelligence systems, based on the risks those systems may present. To that end, it does not proceed in an abstract manner: it adopts a specific legal definition of artificial intelligence, which determines the scope of its application as a whole.
Accordingly, the AI Act defines an “artificial intelligence system” as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
This definition is deliberately broad and technologically neutral. It does not target a specific technology, but rather a function: a system’s ability to produce, in an automated manner, outputs that have an impact on its environment. As a result, many software tools used in a professional context may fall within the scope of the AI Act, even where they are not presented as “artificial intelligence” in the everyday sense of the term.
The adoption of the AI Act takes place against a backdrop of profound change in digital uses. As the opening recitals of the AI Act recall, artificial intelligence is no longer an experimental or marginal technology. It is now embedded in many tools and processes that directly shape economic, social and professional life.
Artificial intelligence systems are now used to automate or support decisions across a wide range of areas, such as recruitment and human resources management, access to essential services, the assessment of individual situations, and decision support in sensitive environments. These uses can have very tangible effects on the persons concerned, whether in terms of access to a job, a service, or a right.
At the same time, the EU legislator does not deny the significant benefits associated with the development of artificial intelligence. Recital (4) of the AI Act expressly notes that artificial intelligence is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits: by improving prediction, optimising operations and resource allocation, and personalising digital solutions, its use can provide key competitive advantages and support socially and environmentally beneficial outcomes.

Pursuant to the AI Act, artificial intelligence systems can thus bring significant benefits to individuals, businesses and society as a whole, across many economic and social sectors. They can also contribute to objectives of general interest, such as better healthcare, disease prevention, enhanced security and the promotion of environmental sustainability.
However, it is important not to overlook the specific risks that certain uses of artificial intelligence may pose to fundamental rights and the Union’s values. Those risks relate in particular to the opacity of certain automated decisions, the difficulty of understanding or challenging the outputs produced by an artificial intelligence system, biases that may affect data or models, and issues of system security and reliability. In that respect, Recital (5) of the AI Act recalls that “[s]uch harm might be material or immaterial, including physical, psychological, societal or economic harm.”
Before the adoption of the AI Act, these issues were addressed through general rules under existing law, such as data protection, consumer law or product safety rules. While those frameworks remain applicable, they were not designed to respond specifically to the distinctive features of artificial intelligence. This situation created a degree of legal uncertainty, both for affected persons and for businesses, and entailed a risk of divergent approaches across Member States.
In response, the European Union has chosen to intervene through a specific, horizontal and harmonised legal framework, directly applicable across all Member States. As Recital (1) of the AI Act states, that framework aims to “improve the functioning of the internal market”, to promote the uptake of human-centric and trustworthy artificial intelligence, and to ensure a high level of protection of health, safety and fundamental rights “as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection”, while supporting innovation.
It is in this vein that the AI Act adopts a risk-based approach, distinguishing artificial intelligence systems according to the level of risk they are likely to pose and, consequently, the impact they may have on health, safety and fundamental rights. This architecture, which runs through the entire AI Act, makes it possible to regulate the most sensitive uses more strictly, while maintaining a lighter framework for low-risk applications.