AI Act: obligations, risks and opportunities in the new governance of artificial intelligence

Published on 20/04/2026

The EU’s AI Act (Regulation (EU) 2024/1689) introduces the first comprehensive framework for regulating artificial intelligence, based on a risk-tiered approach to protect fundamental rights and safety. By imposing stricter obligations on higher-risk systems—particularly around transparency, governance, and accountability—the Regulation reshapes how organizations develop and use AI. With phased implementation through 2027, it positions responsible AI management as both a compliance requirement and a driver of trust and competitiveness. This article is authored by Valeria Specchio (Senior Associate) and Nicola Sandon (Manager) of Rödl Italy.


1. The AI Act

Regulation (EU) 2024/1689 (the so-called “AI Act” or “Regulation”) on artificial intelligence is the first legislative act in the world to comprehensively regulate the development, use, placing on the market, distribution and import of artificial intelligence (“AI”) systems. The hallmark of the Regulation is its risk-based approach: obligations are calibrated to the risks that AI systems pose to fundamental rights and freedoms, according to the purposes and contexts of use. The higher the risk, the more stringent the applicable obligations. The Regulation thus aims to balance the promotion of technological innovation with the protection of fundamental rights and of the health and safety of citizens.

The Regulation applies not only to providers, users (or deployers), distributors and importers established in the EU, but also to operators from third countries, provided that the output of their AI systems is intended to be used within the European Union. Excluded from its scope are systems used exclusively for military, defense or national security purposes, as well as scientific research and development activities.

As regards timing, the AI Act was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024. To ease the transition for industry operators, the Regulation provides for gradual, phased application:

·        as of 2 February 2025, the provisions on prohibited systems and on AI literacy became applicable;

·        from 2 August 2025, the provisions on so-called General-Purpose AI models (“GPAI models”) became applicable;

·        from 2 August 2026, the remaining provisions of the AI Act will apply, with the exception of those relating to high-risk systems embedded in safety components of certain products, the applicability of which is deferred to 2 August 2027.

To ensure the uniform application of the rules, a two-tier governance structure, national and European, has been established, led at the central level by the AI Office.

2. Some basic concepts

2.1. The risks classified by the AI Act

The AI Act structures its regulatory framework around three risk levels: prohibited systems, high-risk systems and so-called “limited risk” systems.

As regards prohibited systems, the AI Act bans certain AI applications deemed inherently incompatible with the fundamental values of the Union. This category includes: systems that use subliminal or manipulative techniques or exploit people’s vulnerabilities; systems that assess or classify individuals based on social behavior or personal characteristics (so-called social scoring); and systems for individual predictive profiling of the risk of committing crimes. The prohibition also extends to systems that scrape facial images from the internet or from CCTV footage to create facial recognition databases, to those that infer emotions in workplaces or educational institutions, and to biometric categorization systems that infer sensitive characteristics. Limited exceptions are provided, mainly for investigative purposes or the prevention of terrorist attacks.

High-risk systems represent the core of the Regulation and encompass AI systems potentially capable of causing significant harm, whose use is nonetheless permitted provided they comply with a set of rigorous requirements and conformity obligations. This category includes, on the one hand, AI systems that constitute safety components of products subject to sector-specific legislation (e.g. medical devices, toys, machinery, lifts) and, on the other, AI systems deployed in particularly critical areas, such as the management of infrastructure, education, employment, migration flows, the administration of justice or access to essential services. Here too the Regulation provides for certain exceptions, for example for AI used in biometric identity verification, financial fraud detection or the management of political campaigns.

There are also limited-risk systems, where the risk is linked to the possible deception of the user: these include AI systems that interact directly with humans, that generate synthetic content (such as deepfakes) or that create and modify textual, audio and visual content. It should be noted that these systems are not regulated as an entirely autonomous risk category: the obligations dedicated to them operate in a complementary manner with respect to other applicable rules, introducing specific transparency requirements with the aim of preventing digital deception and enabling people to distinguish what is real from what is artificially generated.

Although not subject to specific regulation, the purely residual category of minimal- or no-risk AI systems is worth mentioning: a large proportion of systems currently in use fall within it, and no specific obligations or restrictions are envisaged, without prejudice to the possibility of voluntary adherence to codes of conduct.
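To make the taxonomy more tangible for compliance teams, the sketch below shows, in Python, one possible way of labelling systems in an internal AI inventory with the tier they have been assigned. It is a minimal illustration under our own assumptions: the enum values and inventory fields are our own simplification, and actual classification under the Regulation requires a detailed legal analysis of the prohibited practices and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified internal labels mirroring the AI Act's risk tiers."""
    PROHIBITED = "prohibited"   # banned practices (e.g. social scoring)
    HIGH = "high"               # Annex III areas or safety components of products
    LIMITED = "limited"         # transparency duties (chatbots, deepfakes)
    MINIMAL = "minimal"         # residual category, no specific obligations

# Hypothetical inventory entries; the fields are our own choice and serve
# only to show a tier label travelling with each system.
ai_inventory = [
    {"system": "cv-screening-tool", "tier": RiskTier.HIGH},
    {"system": "customer-chatbot", "tier": RiskTier.LIMITED},
    {"system": "spam-filter", "tier": RiskTier.MINIMAL},
]

for entry in ai_inventory:
    print(f"{entry['system']}: {entry['tier'].value}")
```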

Finally, an ad hoc regime is reserved for GPAI models: these do not constitute AI systems in the strict sense, but rather general-purpose models that enable the operation of AI systems, trained at scale and deployable for multiple applications, such as chatbots, content generation and much more. A general transparency obligation applies to them, with reinforced requirements for models presenting a systemic risk, identified through the computational threshold of 10^25 FLOPs, in relation to which providers must ensure high cybersecurity standards and in-depth analyses to mitigate possible large-scale negative impacts.
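For illustration only, the systemic-risk presumption reduces to a simple numerical comparison against cumulative training compute. The Python sketch below shows that arithmetic; how to estimate a model’s training FLOPs in the first place is an assumption outside both the Regulation’s text and this example.

```python
# Article 51 AI Act: a GPAI model is presumed to present systemic risk when
# the cumulative compute used for its training exceeds 10^25 floating-point
# operations (FLOPs).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True when the compute-based presumption is met."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical estimates: a ~3 x 10^25 FLOPs training run triggers the
# presumption, an 8 x 10^24 FLOPs run does not.
print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(8e24))  # False
```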

2.2. The actors under the AI Act

The AI Act defines a series of specific roles for the operators involved in the AI value chain, assigning differentiated responsibilities to each based on their position and level of control over the system. The actors relevant for the purposes of compliance with the obligations on AI systems and GPAI models are identified in the following categories:

·        providers: the entity (a natural or legal person or a public authority) that develops an AI system (or a GPAI model), or has one developed, in order to place it on the market or put it into service under its own name or trademark. The provider bears primary responsibility for compliance, especially for high-risk systems;

·        deployers (or users): the entity that uses an AI system under its own authority (direction and control) in the course of a professional activity;

·        importers: the entity established in the Union that places on the EU market an AI system developed in a third country;

·        distributors: entities that make AI systems and GPAI models otherwise available on the Union market;

·        authorized representatives: an entity established in the Union, appointed by written mandate by a non-EU provider and tasked with fulfilling, on the provider’s behalf, the obligations incumbent on it.

2.3. The main obligations

The AI Act introduces a system of modular obligations, graduated according to the level of risk that AI systems may generate for health, safety and fundamental rights, as well as the role played by each individual operator in the supply chain.

All providers and deployers are required to adopt measures to ensure a sufficient level of AI literacy among their staff and anyone acting on their behalf, taking into account their technical knowledge, experience and the context of use. This obligation – which became mandatory as of 2 February 2025 – applies regardless of the type or degree of risk of the systems actually used and currently represents one of the greatest cultural challenges posed by the AI Act.

Systems classified as high-risk are subject to the most stringent requirements, both before and after being placed on the market. Compliance must be ensured by providers from the design stage and entails the following requirements:

·        risk management system: establish a continuous process to identify and mitigate risks throughout the entire lifecycle of the system;

·        data governance: use high-quality training and validation datasets that are relevant, representative and free from errors to avoid bias;

·        technical documentation and logs: draw up detailed documentation to demonstrate compliance and ensure automatic event logging for traceability;

·        human oversight and accuracy: design the system so that it can be effectively supervised by natural persons and ensure adequate levels of accuracy and cybersecurity;

·        conformity assessment and CE marking: submit the system to assessment, draw up the EU declaration of conformity and affix the CE marking;

·        registration and monitoring: register the system in the EU database and implement a post-market monitoring system to report serious incidents to the authorities.

The regulatory burden does not fall on providers alone: companies that use these technologies are required to scrupulously follow the instructions for use supplied by the provider, to monitor the functioning of the systems, retaining logs for at least six months, and to ensure transparency both towards their own workers and towards third parties who may be affected by the use of such systems. Public bodies and certain private entities (e.g. banks and insurance companies) must also carry out a fundamental rights impact assessment prior to use, to which we shall return shortly.
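By way of a hedged illustration of the log-retention duty, the Python sketch below deletes automatically generated logs only once the minimum six-month window has elapsed. The directory layout, file naming and 183-day cut-off are our own assumptions; other EU or national rules may require longer retention.

```python
import time
from pathlib import Path

# Deployers must keep automatically generated logs for at least six months
# (longer periods may follow from other Union or national law).
RETENTION_DAYS = 183                    # roughly six months, a conservative cut-off
LOG_DIR = Path("/var/log/ai-system")    # hypothetical log location

def purge_expired_logs(log_dir: Path = LOG_DIR) -> None:
    """Delete log files only after the minimum retention window has elapsed."""
    cutoff = time.time() - RETENTION_DAYS * 24 * 3600
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
```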

The obligations described above will apply from 2 August 2026, with an extension to 2 August 2027 for high-risk AI systems that constitute safety components of certain products (see supra, para. 1).

For limited-risk systems, the obligations envisaged revolve around the principle of transparency and are designed to counter manipulation and digital deception: it will, for example, be mandatory to inform users when they are interacting with an AI and to clearly label deepfakes and synthetic content with machine-readable markings. These obligations will also apply from 2 August 2026.
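The Regulation does not prescribe a single technical format for such markings. As one possible, non-authoritative approach, the Python sketch below embeds an “AI-generated” flag in a PNG file’s metadata using the Pillow library; the key names ai_generated and generator are our own convention, not a standard.

```python
# One possible (non-standard) machine-readable marking: embedding an
# "AI-generated" flag in a PNG text chunk via the Pillow library.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src: str, dst: str) -> None:
    """Copy a PNG while embedding metadata that flags it as AI-generated."""
    image = Image.open(src)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # our own key, illustrative
    metadata.add_text("generator", "example-model")  # hypothetical value
    image.save(dst, pnginfo=metadata)

# Usage: label_as_synthetic("output.png", "output_labelled.png")
```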

With regard to GPAI models, the Regulation lays down specific transparency requirements for providers, including the obligation to maintain detailed records relating to the development and testing of their models, as well as the duty to provide information to the providers who will in turn use those models to develop their own systems (so-called downstream providers). These obligations have applied since 2 August 2025.

3. Some measures in particular: transparency, FRIA and procedures

3.1. Transparency

The transparency obligation represents a structural requirement and a fundamental pillar for the protection of citizens within the Regulation: it cuts across high-risk systems, limited-risk systems and GPAI models.

Looking at high-risk AI systems, the transparency obligation broadly aims to support the user in the correct management of the technology and translates primarily into the need for clear and comprehensive instructions for use, including the technical specifications, the expected level of accuracy, the system’s limitations and the envisaged human oversight measures.

By contrast, as we have seen, the information obligation connected to limited-risk systems aims to protect the end user: anyone who interacts with a chatbot or a virtual assistant, is exposed to emotion recognition systems, or receives synthetic content (deepfakes, images, AI-generated texts) must be informed of this, or the content must be labelled as artificial.

Providers of GPAI models are subject to a structural information obligation, which involves the need to: (i) draw up and keep updated detailed technical documentation (architecture, training processes, energy consumption) to be provided to the AI Office upon request, and (ii) make available to downstream providers information and documentation enabling them to understand the capabilities and limitations of the models.

In summary, the three information flows cover distinct levels of the chain: the first operates at the level of the provider–deployer relationship for high-risk systems; the second directly protects the end user by imposing transparency in human-machine interaction; the last governs the structural transparency of foundational models towards the entities that integrate them.

But when do these transparency obligations translate into a genuine duty to inform? This occurs in two different scenarios:

·        use of high-risk AI systems in the workplace: deployers acting as employers are expressly required to inform workers’ representatives and the workers concerned that they will be subject to the use of such tools;

·        use of emotion recognition or biometric categorization systems: in such cases, deployers are required to inform the natural persons exposed to the systems about how they function and the personal data processing associated with them.

3.2. FRIA

The Regulation introduces a specific accountability tool and a preventive safeguard, aimed at identifying and mitigating the risks that certain high-risk AI systems may produce in relation to human dignity and civil liberties: the obligation to carry out a Fundamental Rights Impact Assessment (“FRIA”).

This obligation does not fall on everyone indiscriminately, but is restricted to deployers operating in sectors of particular social sensitivity: public-law bodies, private entities that provide public services (such as healthcare or education) and operators that use AI for assessments that are crucial from an economic and social perspective, such as creditworthiness or the calculation of premiums for life and health insurance.

The assessment required by the AI Act must be carried out before the system is put into use and must be kept up to date whenever relevant factors change during its lifecycle. Its content must be extremely detailed, including a precise description of the business processes concerned, the identification of the categories of persons or vulnerable groups exposed to the technology, the analysis of the specific risks of harm and a clear definition of the human oversight measures and mitigation strategies to be activated in the event of incidents.
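As a purely illustrative aid, the minimum content of a FRIA described above could be captured internally in a structured record such as the Python sketch below. The field names are our own mapping of those elements, not an official template (the standardized questionnaire mentioned below will serve that purpose).

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Illustrative record mirroring the FRIA's minimum content."""
    business_processes: list[str]    # processes in which the system will be used
    affected_groups: list[str]       # categories of persons and vulnerable groups exposed
    identified_risks: list[str]      # specific risks of harm to fundamental rights
    oversight_measures: list[str]    # human oversight arrangements
    mitigation_measures: list[str]   # strategies to activate in the event of incidents

# Hypothetical example for a creditworthiness use case.
fria = FRIARecord(
    business_processes=["automated creditworthiness scoring"],
    affected_groups=["loan applicants", "persons with thin credit histories"],
    identified_risks=["indirect discrimination in credit decisions"],
    oversight_measures=["human review of every rejection"],
    mitigation_measures=["suspend the system and escalate to the compliance team"],
)
print(fria.identified_risks)
```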

In the interest of reducing administrative burdens, the legislator has provided that the FRIA may integrate and complement the Data Protection Impact Assessment (“DPIA”) already required under Regulation (EU) 2016/679 in cases of high-risk personal data processing, thereby avoiding unnecessary duplication.

Once completed, the assessment must be notified to the competent market surveillance authority. To assist companies in this delicate task, the AI Office is tasked with drawing up a standardized questionnaire template that will serve as an operational guide to ensure a uniform and rigorous approach throughout the Union.

3.3. Procedures

The AI Act mandates a profound transformation of corporate governance, elevating the adoption of internal policies and procedures from an optional choice to the cornerstone of regulatory compliance.

For providers of high-risk systems, the centerpiece of procedural compliance is the establishment of a rigorous Quality Management System (QMS), a documented framework that must include written procedures on compliance strategies, testing and validation procedures, as well as data and risk management systems active throughout the entire technology lifecycle. Moreover, these entities are required to formalize procedures for post-market monitoring and the timely reporting of serious incidents, ensuring traceability through documentary archives to be retained for at least ten years from the date of placing on the market.

In parallel, companies acting as deployers must adopt appropriate technical and organizational measures to ensure use in accordance with the provider’s instructions, including the identification of natural persons with the competence and authority to be assigned human oversight of the system’s operation.

This scenario requires companies to equip themselves with a structured AI Policy to map roles, responsibilities and internal operational flows, while also formalizing the supplier selection process and defining acceptable use rules for AI systems to prevent violations of fundamental rights or intellectual property.

Conclusions

Although it formally entered into force in 2024, the Regulation is characterized by a progressive application that is already producing concrete effects for economic operators. Some obligations – such as those concerning prohibited systems and AI literacy – are already applicable, while most provisions will become operative from 2 August 2026. The time available for structured adjustment is, therefore, significantly limited.

In this context, the AI Act cannot be interpreted as an isolated regulatory intervention but fits into a broader regulatory ecosystem. The adoption of AI systems therefore requires an integrated approach to compliance, capable of coordinating the various regulatory levels and translating them into coherent organizational, technical and procedural measures.

While non-compliance exposes organizations to a particularly stringent sanctions regime, conscious and proactive management of the obligations can be an enabling factor for business development. The ability to correctly classify systems, govern their risks and ensure transparency and reliability towards users, partners and authorities is, in fact, an increasingly relevant element in competitive and reputational terms as well.

From this perspective, the path to compliance with the AI Act does not end with the fulfilment of formal obligations but entails a broader revision of governance models and business processes, requiring cross-disciplinary expertise and a strategic vision of technological innovation. Anticipating this process allows not only for risk mitigation, but also for seizing the opportunities offered by a regulatory framework set to significantly shape the evolution of digital markets in the coming years.

This article is authored by:

Valeria Specchio, Attorney at law (Italy), Senior Associate, Rödl Italy

Nicola Sandon, Attorney at law (Italy), Manager, Rödl Italy