Corporate Compliance under the EU Artificial Intelligence Act: Legal Framework and Strategic Implications
Published on 5 May 2025

The Artificial Intelligence Act (AI Act) of the European Union is the first comprehensive legislative framework designed to govern the use of Artificial Intelligence (AI) across the EU. Its extraterritorial scope and broad definitions make it essential for companies within and outside the EU to assess and adjust their compliance frameworks. For companies operating in or entering the European market, compliance with the AI Act is not only a legal obligation but also a strategic necessity.
The AI Act adopts a risk-based approach, which shapes the obligations that providers and deployers of AI systems must comply with. Companies are expected to implement a risk management process and a data governance strategy. Furthermore, comprehensive documentation must be prepared before the AI system is placed on the market and transparency obligations must be met.
Operators are also expected to monitor the performance of AI systems after deployment and to report serious incidents and malfunctions to the competent authorities.
The AI Act encourages companies to conduct AI Impact Assessments, following specific methodologies and metrics, to evaluate potential legal, ethical, and societal risks, with a particular focus on fundamental rights. In addition, the adoption of internal regulations (or policies) on the use of generative AI (GenAI) systems is highly advisable for private companies.
The AI Act
The European Union Artificial Intelligence Act (AI Act) represents the first comprehensive legal framework aiming to regulate the use of Artificial Intelligence (AI) in the EU.
It follows the path traced by earlier strategic and policy documents. In 2018, the European Commission published the AI for Europe Communication, emphasizing the need for coordinated efforts to promote AI and address emerging risks. The year after, the Ethics Guidelines for Trustworthy AI were released by the High-Level Expert Group on AI. They introduced seven key requirements for trustworthy AI, such as human agency, transparency, and accountability, which form the backbone of the current legislative framework. Finally, in 2020 the Commission published the White Paper on AI, which proposed a risk-based approach to AI regulation and launched a public consultation.
Implementation Timeline of the EU AI Act: A Legal Perspective
The European Union's AI Act follows a phased implementation schedule, designed to allow stakeholders—particularly legal and compliance teams—to progressively adapt to its requirements. Below is a structured overview of the key milestones:
1 - August 1, 2024 – Entry into Force
The AI Act formally enters into force. While not all provisions are immediately applicable, this date starts the transitional periods and triggers compliance planning for providers and users of AI systems.
2 - February 2, 2025 – Applicability of Provisions on Prohibited AI Practices
From this date, Article 5, concerning prohibited AI practices, becomes binding. Legal entities must immediately cease or avoid deploying AI systems that fall within the scope of the prohibited use cases, such as manipulative or exploitative systems and certain real-time biometric identification applications in public spaces.
3 - May 2025 – Deadline for the Adoption of Codes of Practice
This marks the deadline for industry actors and relevant stakeholders to develop and adopt voluntary codes of practice, particularly for general-purpose AI. These instruments, while non-binding, are expected to play a critical role in shaping sector-specific compliance standards and may influence supervisory expectations.
4 - August 2025 – GPAI and Enforcement Provisions Become Applicable
Several crucial components of the regulation come into effect:
General-Purpose AI (GPAI) Models: Obligations for providers of GPAI models commence, including transparency, documentation, and risk management duties.
Notified Bodies: Designation and operation of conformity assessment entities begin.
Governance Framework: National and EU-level governance bodies assume their oversight functions.
Sanctions: Enforcement provisions, including administrative fines, become operational.
Implementation Guidelines: The European Commission is expected to publish further guidance and implementing acts to clarify compliance obligations, especially for GPAI providers.
5 - August 2026 – Full Applicability of the Regulation
The regulation becomes generally applicable across the EU, with obligations for certain high-risk AI systems embedded in products covered by existing EU harmonization legislation applying from August 2027. All actors within the AI value chain—developers, deployers, importers, and distributors—must ensure complete compliance with the relevant obligations based on the risk classification of their AI systems.
6 - Post-August 2026 – Ongoing Compliance and Regulatory Evolution
The AI Act anticipates ongoing monitoring, updates, and possible revisions. Legal professionals should expect continuing developments in delegated and implementing acts, as well as evolving jurisprudence and regulatory guidance.
AI Act Compliance
Artificial Intelligence is transforming societies, economies, and legal systems. As AI systems become increasingly integrated into products and services, regulators have recognized the need for robust governance frameworks. The EU AI Act, proposed by the European Commission in April 2021 and formally adopted in 2024, aims to balance innovation with the protection of fundamental rights and EU values.
This essay delves into the key aspects of compliance, offering a detailed analysis of the regulatory architecture and its implications for businesses. It explains the main aspects of corporate compliance under the AI Act, focusing on its risk-based approach, classification of AI systems, obligations for providers and users, conformity assessments, post-market monitoring, and enforcement mechanisms.
Corporate compliance under the AI Act is not merely a matter of avoiding fines—it is a comprehensive process involving legal analysis, technical documentation, risk management, and ethical alignment. Additionally, compliance with the AI Act should not be treated as a one-off obligation but rather as an ongoing commitment.
Objectives and Scope
The AI Act pursues several objectives:
a) Improving the internal market, by preventing market fragmentation and creating a unified legal framework for AI applicable across all EU Member States.
b) Forbidding the use of AI systems that pose unacceptable risks to human safety, health, or other fundamental rights (e.g., discriminatory or manipulative AI).
c) Facilitating innovation while mitigating risks associated with high-risk AI.
d) Fostering the development and use of trustworthy, human-centric AI.
e) Supporting innovation and competitiveness through AI regulatory sandboxes (i.e., controlled environments for experimentation under the supervision of the competent authorities).
Mirroring previous EU regulations (e.g., the GDPR), the AI Act has extraterritorial application: non-EU companies must also comply if their AI systems affect individuals in the EU.
The AI Act shall apply to:
a) Providers placing AI systems on the EU market.
b) Users of AI systems located in the EU.
c) Providers and users outside the EU where the output of the AI system is used in the EU.
Risk-based Approach
The AI Act introduces a four-tiered classification system, which influences obligations:
a) Unacceptable Risk: AI systems that are prohibited (e.g., social scoring by governments, manipulative techniques).
b) High-Risk: AI systems with significant impact on rights and safety (e.g., biometric identification, credit scoring, employment decisions).
c) Limited Risk: Systems requiring transparency obligations (e.g., chatbots).
d) Minimal Risk: Systems with no specific obligations (e.g., AI in video games).
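The tier-to-obligation mapping above can be sketched in code. This is a purely illustrative simplification, not a legal tool: the tier names come from the Act, but the obligation lists are condensed summaries of this essay's later sections, and any real classification requires case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (Articles 5, 6, 50)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified mapping of tiers to headline obligations (illustrative only;
# the Act's actual obligations are far more granular).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "technical documentation",
        "human oversight",
        "conformity assessment and CE marking",
    ],
    RiskTier.LIMITED: ["transparency obligations (e.g., disclose chatbot use)"],
    RiskTier.MINIMAL: [],  # no specific obligations under the Act
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations associated with a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is structural: obligations attach to the tier, not to the technology, which is why classification is the first compliance step.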
Compliance Obligations for High-Risk AI Systems
The compliance framework for such systems is extensive and includes the following elements.
The risk management process must be documented and updated in light of system modifications or post-market evidence. Companies must implement a continuous risk management system throughout the AI system's lifecycle. For instance, they are obliged to identify foreseeable risks to health, safety, and fundamental rights; to evaluate and test risks under normal and abnormal conditions; and to take measures to eliminate or mitigate identified risks.
Companies must also plan a data governance strategy. High-risk AI systems must be trained, validated, and tested on high-quality datasets. Companies must ensure that the data is relevant, free of errors, and complete, and that bias mitigation techniques are implemented.
Furthermore, comprehensive documentation must be prepared before the AI system is placed on the market. It must include a) a general description and intended purpose, b) system architecture and components, c) data management and preprocessing methods, d) risk management procedures, and e) human oversight measures. This documentation supports transparency, conformity assessment, and post-market monitoring.
The AI Act also mandates transparency obligations: high-risk AI systems must be accompanied by clear instructions for use, including system capabilities and limitations, human oversight instructions, and known or foreseeable risks. Regarding human oversight, companies must implement oversight mechanisms appropriate to the risks, such as manual interventions and real-time supervision.
AI systems must also achieve consistent performance and be resilient to errors and adversarial manipulation. In particular, AI providers must ensure accuracy over time, perform robustness testing, and protect against cybersecurity threats.
Conformity Assessment and CE Marking
Compliance activities involve a conformity assessment procedure that results in the CE marking of the AI system. Internal assessment is allowed for certain high-risk systems unless they involve biometric identification, while third-party assessment by notified bodies is required for more sensitive applications.
Providers of high-risk AI must also implement a quality management system covering several aspects, such as strategy and procedures for compliance, techniques for system testing and validation, corrective and preventive measures, monitoring processes for performance and updates.
Post-Market Monitoring and Incident Reporting
Under the AI Act, providers must actively monitor the performance of AI systems after deployment. These monitoring obligations include collecting user feedback, reviewing performance logs, and identifying emerging risks or unexpected behavior.
Similarly to other EU regulations (e.g., the GDPR), serious incidents and malfunctions, including safety incidents and breaches of fundamental rights, must be reported to the relevant market surveillance authority within 15 days of the provider becoming aware of them.
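The deadline rule is simple enough to express as a sketch. This assumes the general 15-day window cited above, counted in calendar days from awareness; the Act provides shorter windows for certain incident types, so this is an illustration of the mechanism, not a compliance calculator.

```python
from datetime import date, timedelta

# General reporting window cited in the text; some incident categories
# have shorter deadlines under the Act.
REPORTING_WINDOW_DAYS = 15

def reporting_deadline(awareness_date: date) -> date:
    """Latest date to notify the market surveillance authority,
    counted from the day the provider became aware of the incident."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

def is_overdue(awareness_date: date, today: date) -> bool:
    """True if the notification window has already closed."""
    return today > reporting_deadline(awareness_date)
```

In practice this means the monitoring process must timestamp the moment of awareness, since the clock runs from knowledge of the incident, not from its occurrence.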
Responsibilities of Other Actors
In addition to providers, the AI Act outlines obligations for other stakeholders.
Deployers are entities that use an AI system in the course of their professional activities, by applying AI systems in a real-world context (e.g. a bank using an AI system to assess creditworthiness).
They must follow the provider’s instructions for intended use and safety (similar to how machines or medical devices come with operational requirements) and must monitor the system’s performance during operation and report serious incidents or malfunctions.
In some cases, deployers must inform individuals that they are interacting with an AI system and must ensure compliance with the data protection regulations (e.g. conducting data protection impact assessments where profiling or automated decision-making is involved).
The AI Act also regulates the role of other actors.
Importers and distributors must verify that AI systems carry CE markings and meet documentation requirements. They must also cooperate with authorities during inspections or investigations. Non-EU providers must appoint an authorized representative in the EU responsible for regulatory communications and compliance support.
Penalties
The AI Act introduces substantial fines for non-compliance:
a) up to €35 million or 7% of global turnover for prohibited practices;
b) up to €15 million or 3% of turnover for high-risk system non-compliance;
c) up to €7.5 million or 1% of turnover for incorrect or incomplete documentation.
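The "€X or Y% of turnover" formulation means the cap scales with company size: for most undertakings the applicable ceiling is the higher of the two figures. A minimal sketch, assuming the higher-of-the-two rule:

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine: the higher of the fixed
    cap and the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice tier for a company with EUR 1 bn turnover:
# max(EUR 35m, 7% of EUR 1 bn) = EUR 70 million cap.
```

For a small company the fixed amount dominates; for a large multinational the turnover percentage does, which is what gives the fines their deterrent effect at every scale.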
AI Impact Assessments
Analogous to the GDPR’s Data Protection Impact Assessment (DPIA), companies are encouraged to conduct AI Impact Assessments to evaluate potential legal, ethical, and societal risks.
The FRIA (Fundamental Rights Impact Assessment) is a structured assessment that should follow specific methods and metrics. It is mandatory for deployers of certain high-risk AI systems, where the systems are likely to significantly impact individuals' rights and freedoms, and must be completed before the deployer puts the system into use for the first time. Through the FRIA, deployers should describe the intended use of the AI system, identify potential impacts on fundamental rights, and outline mitigation and control measures.
AI Policy
It is highly advisable — and increasingly necessary — for private companies to adopt internal regulations (or policies) on the use of generative AI (GenAI) systems, even if it is not yet explicitly mandatory under the law in all cases.
Internal regulations are strongly recommended to prevent potential misuses of GenAI systems, such as unauthorized disclosure of confidential data to AI tools, liability for AI-generated content (e.g., copyright violations or misinformation), and violations of data protection, IP rights, and non-discrimination rules.
The principal purpose of an AI policy is to define roles and responsibilities internally and to create a traceable, transparent process for GenAI use. The internal policy should clarify what GenAI tools may be used for (e.g., internal drafts, data analysis) and what they must not be used for (e.g., client communication without review, personal data input).
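An allow/deny structure of this kind can even be encoded for tooling that gates GenAI access internally. The use-case labels below are hypothetical examples drawn from the text; the key design choice is default-deny, so unlisted uses require human review rather than being silently permitted.

```python
# Hypothetical internal GenAI usage policy, encoded as explicit
# allow and deny lists (labels are illustrative, not prescriptive).
ALLOWED_USES = {"internal drafting", "data analysis"}
PROHIBITED_USES = {"client communication without review", "personal data input"}

def is_permitted(use_case: str) -> bool:
    """Default-deny check: only explicitly allowed uses pass;
    prohibited or unlisted uses are rejected (and should be escalated
    for review rather than treated as permitted)."""
    if use_case in PROHIBITED_USES:
        return False
    return use_case in ALLOWED_USES
```

Default-deny mirrors the traceability goal stated above: every new use case must be reviewed and added to the policy before it is available, leaving an auditable record.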
Conclusion
The EU Artificial Intelligence Act ushers in a new era of AI governance with far-reaching implications for companies. Its risk-based, lifecycle approach requires businesses to adopt comprehensive compliance frameworks that address legal, technical, and ethical dimensions.
Effective compliance demands strategic investment in governance structures, documentation, testing, and monitoring. Although the AI Act presents challenges, it also offers opportunities for companies to build trust, differentiate themselves, and contribute to responsible innovation.
As enforcement begins and regulatory guidance evolves, companies that proactively embrace compliance will not only reduce legal risk but also enhance their reputational standing and market competitiveness.
