When the EU Artificial Intelligence Act entered into force in August 2024, it became the world's first comprehensive, horizontal regulation governing AI systems. For organizations operating globally, the implications are immediate and far-reaching. The Act applies extraterritorially to any organization that places AI systems on the EU market or whose AI outputs affect individuals within the EU, regardless of where that organization is headquartered. The penalty structure underscores the seriousness of the regulation: fines can reach the higher of 35 million EUR or 7% of global annual turnover for the most serious infringements.
This guide is designed for Compliance Officers and CTOs who need a practical, end-to-end view of how to classify AI systems, understand the obligations attached to each classification tier, and build a compliance roadmap through August 2027.
1. Scope and Applicability
1.1 Who is in scope?
The Act draws a wide perimeter around the organizations it governs. Any entity whose AI impacts people in the EU should assume it falls within scope, but four roles carry the most direct obligations.
Providers are organizations that develop an AI system or general-purpose AI (GPAI) model and place it on the market or put it into service under their own name or trademark. Deployers are organizations that use an AI system in the course of their professional activities, even if they did not build the system themselves. Importers and distributors face compliance obligations when they bring AI systems into the EU market. Product manufacturers who integrate AI systems into products already covered by EU product safety legislation, such as machinery or medical devices, must also comply.
The extraterritorial reach of the Act is particularly significant for non-EU companies. If your organization offers AI-enabled services to EU users and those systems fall under the Act's definitions, compliance is not optional simply because your headquarters sit outside the EU.
1.2 What is an AI system under the Act?
The Act employs a broad, technology-neutral definition of AI. It covers machine learning approaches, logic- and knowledge-based systems, and statistical methods that generate outputs such as predictions, recommendations, or decisions influencing real-world environments.
In practical terms, any model or software that infers patterns from data and produces outputs influencing decisions or environments is likely an AI system under the regulation. The breadth of this definition is deliberate: it is designed to remain relevant as the technology evolves.
2. Risk-Based Classification Framework
The entire regulatory architecture of the EU AI Act rests on a risk-based approach. The first and most consequential compliance step for any organization is to classify each of its AI systems into one of four tiers. That classification determines the nature and intensity of the obligations that follow.
2.1 Unacceptable Risk (Prohibited Practices)
At the top of the risk hierarchy sit AI practices that the EU has determined pose such fundamental threats to rights and safety that they are banned outright. These prohibitions took effect in February 2025, six months after the Act entered into force.
Social scoring of individuals, whether by public or private actors, based on social behavior or personal characteristics that leads to detrimental or disproportionate treatment in unrelated contexts is prohibited. The same applies to AI systems that exploit the vulnerabilities of specific groups, whether defined by age, disability, or socioeconomic situation, in ways that materially distort behavior and cause or are likely to cause harm. Subliminal manipulation techniques that materially distort a person's behavior and are likely to cause significant harm are likewise banned. Finally, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited, subject only to narrow, strictly regulated exceptions.
If any system within your portfolio falls into these categories, it must be phased out or redesigned immediately. The prohibitions received the shortest transition period of any provision in the Act, and that deadline has already passed.
2.2 High-Risk AI Systems
High-risk systems face the heaviest compliance burden under the Act, and the range of use cases captured by this tier is broad. The regulation identifies eight domains of particular concern.
Biometrics covers systems used for biometric identification and categorization of natural persons. Critical infrastructure encompasses AI systems that manage or operate infrastructure in sectors like energy and transport, where failure could endanger life or health. Education and vocational training includes systems that determine access, admissions, or assessment outcomes. Employment and worker management captures AI used in recruitment, candidate screening, promotion decisions, task allocation, performance evaluation, and termination. Access to essential services covers credit scoring, access to social benefits, healthcare triage, and emergency services dispatch. Law enforcement includes risk assessments, evidence evaluation, and predictive policing under strict conditions. Migration, asylum, and border control encompasses risk assessments, security checks, and credibility assessments. Administration of justice and democratic processes captures tools assisting judicial decision-making or influencing democratic processes.
For a system to be classified as high-risk, two conditions generally must be met. First, the system must fall into one of the use cases enumerated in Annex III of the Act. Second, it must significantly influence decisions that produce legal or similarly significant effects on individuals. Meeting both thresholds triggers the full suite of high-risk obligations detailed in Section 3 below.
2.3 Limited-Risk AI Systems
Limited-risk systems do not face the full weight of high-risk requirements, but they do trigger transparency obligations that organizations must not overlook. This tier captures chatbots and conversational agents where users may reasonably believe they are interacting with a human, emotion recognition systems, biometric categorization systems that sort individuals by characteristics such as age group, and deepfakes or synthetic media designed to resemble real people, objects, places, or events.
The core obligation is straightforward: you must ensure users are informed they are interacting with AI or viewing AI-generated content, unless the artificial nature of the content is obvious from context.
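As a concrete illustration, a deployer of a customer-facing chatbot might enforce that disclosure at the response layer. The sketch below is a minimal, hypothetical example of such a wrapper; the wording and the context check are assumptions made for illustration, not language prescribed by the Act.

```python
def with_ai_disclosure(reply: str, obvious_from_context: bool = False) -> str:
    """Prefix chatbot replies with an AI disclosure unless the artificial
    nature of the interaction is already obvious from context."""
    if obvious_from_context:
        return reply
    return "[You are chatting with an AI assistant] " + reply

print(with_ai_disclosure("Your order ships tomorrow."))
```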
2.4 Minimal-Risk AI Systems
The majority of AI systems deployed today, including spam filters, product recommendation engines, many internal analytics tools, and most general-purpose AI applications that do not trigger a specific high-risk use case, fall into the minimal-risk tier. These systems face no specific obligations under the AI Act itself, though they remain subject to other applicable legislation such as the GDPR and consumer protection laws.
Even at this tier, however, regulators have signaled that they expect organizations to adopt voluntary best practices in governance, documentation, and human oversight. Treating minimal-risk as a license for unchecked deployment would be a strategic miscalculation.
3. High-Risk AI: Core Compliance Requirements
Organizations operating high-risk AI systems must satisfy a comprehensive set of obligations before placing those systems on the market or putting them into service. The requirements form an integrated compliance framework, and each element reinforces the others.
3.1 Risk Management System (Article 9)
Article 9 of the Act requires a documented, continuous risk management process spanning the entire lifecycle of the AI system. The process begins with hazard identification, where the organization must identify reasonably foreseeable risks to health, safety, fundamental rights, and the potential for discrimination. Risk analysis and evaluation follow, requiring the organization to assess severity and likelihood and to prioritize high-impact risks accordingly. Risk control measures, both technical and organizational, must then be designed and implemented to reduce risks to acceptable levels. Testing and verification ensure those controls work as intended under realistic conditions. Finally, the process must be iterative: risk assessments require updating whenever models, data, or operational context change, or when incidents occur.
In practice, this means maintaining a risk register for each AI system, aligning with frameworks such as ISO 31000 and relevant sectoral standards where possible, and integrating risk reviews into existing MLOps or software development lifecycle gates.
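One way to operationalize that register is a lightweight schema kept alongside each system. The sketch below is illustrative only: the field names, the four-level scale, and the severity-times-likelihood scoring are assumptions about one reasonable methodology, not requirements of Article 9.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskEntry:
    """One hazard in the per-system risk register (illustrative schema)."""
    risk_id: str
    description: str            # e.g. "biased outcomes for older applicants"
    affected_rights: list[str]  # health, safety, fundamental rights, ...
    severity: RiskLevel
    likelihood: RiskLevel
    controls: list[str] = field(default_factory=list)  # mitigations in place
    last_reviewed: date = field(default_factory=date.today)

    def priority(self) -> int:
        # Simple severity x likelihood score; replace with your methodology.
        return self.severity.value * self.likelihood.value


register = [
    RiskEntry(
        risk_id="CV-001",
        description="Screening model underrates candidates with CV gaps",
        affected_rights=["non-discrimination", "access to employment"],
        severity=RiskLevel.HIGH,
        likelihood=RiskLevel.MEDIUM,
        controls=["subgroup performance audit", "human review of rejections"],
    ),
]

# Review the highest-priority open risks first at each lifecycle gate.
for entry in sorted(register, key=lambda e: e.priority(), reverse=True):
    print(entry.risk_id, entry.priority(), entry.controls)
```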
3.2 Data Governance and Data Management (Article 10)
The quality and integrity of training, validation, and testing data sit at the heart of the Act's approach to high-risk AI. Data must be relevant to the intended purpose, representative of the population and context in which the system will operate, free of errors as far as possible, and complete enough to capture edge cases. Critically, data must be assessed and mitigated for bias that could lead to discriminatory outcomes.
Operationalizing these requirements means documenting data sources, collection methods, and preprocessing steps with full traceability. It means performing data quality and bias assessments that include subgroup performance analysis. And it means implementing data governance policies that address access control, lineage tracking, retention schedules, and update procedures.
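A minimal sketch of the subgroup performance analysis mentioned above, assuming a pandas evaluation frame with one row per scored individual; the column names and the 80% disparate-impact heuristic are illustrative assumptions, not thresholds set by Article 10.

```python
import pandas as pd

# Illustrative evaluation frame: one row per scored individual.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 1, 1, 0, 0],
})

def subgroup_report(frame: pd.DataFrame) -> pd.DataFrame:
    """Per-group accuracy and selection rate for disparity review."""
    frame = frame.assign(correct=frame.y_true == frame.y_pred)
    out = frame.groupby("group").agg(
        n=("correct", "size"),
        accuracy=("correct", "mean"),
        selection_rate=("y_pred", "mean"),
    )
    # Ratio vs. the best-treated group (four-fifths rule heuristic).
    out["impact_ratio"] = out.selection_rate / out.selection_rate.max()
    return out

report = subgroup_report(df)
print(report)
print(report[report.impact_ratio < 0.8])  # groups to investigate for bias
```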
3.3 Technical Documentation (Article 11, Annex IV)
Before market placement, providers must prepare comprehensive technical documentation that enables regulatory authorities to assess compliance. This documentation typically covers the system's description, purpose, and intended users; its architecture and components, including any third-party models and services; training, validation, and testing procedures; performance metrics spanning accuracy, robustness, and cybersecurity posture; the risk management process and its results; the design of human oversight mechanisms; and the post-market monitoring plan.
This is not a one-time exercise. The documentation must be kept up to date throughout the system's lifecycle and made available to market surveillance authorities upon request.
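Keeping that documentation current is easier when freshness is tracked mechanically. Below is a minimal sketch of such a staleness check; the section names paraphrase Annex IV headings, and the 180-day review cadence is an internal assumption, not a legal deadline.

```python
from datetime import date, timedelta

# Last-updated dates per documentation section (illustrative values).
REQUIRED_SECTIONS = {
    "system_description": date(2026, 1, 10),
    "architecture_and_components": date(2026, 1, 10),
    "training_validation_testing": date(2025, 11, 2),
    "performance_metrics": date(2025, 6, 30),
    "risk_management_results": date(2026, 1, 10),
    "human_oversight_design": date(2026, 1, 10),
    "post_market_monitoring_plan": date(2026, 1, 10),
}

MAX_AGE = timedelta(days=180)  # internal review cadence, not a legal deadline

stale = {s: d for s, d in REQUIRED_SECTIONS.items()
         if date.today() - d > MAX_AGE}
for section, last_updated in stale.items():
    print(f"REVIEW NEEDED: {section} last updated {last_updated}")
```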
3.4 Logging and Traceability (Article 12)
High-risk systems must automatically log events at a level of detail sufficient to support traceability of system behavior, investigation of incidents and malfunctions, and auditability of key decisions and model outputs.
Implementation requires logging inputs, outputs, model version, configuration state, and key decision points. Logs must be tamper-resistant, time-stamped, and retained for defined periods. At the same time, the logging regime must align with privacy and security requirements, particularly GDPR data minimization principles, creating a tension that organizations must carefully navigate.
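One common pattern for tamper-resistant logs is hash chaining, where each record commits to a digest of its predecessor so that retroactive edits become detectable. The sketch below illustrates the idea in memory; a production system would persist records to append-only storage and, per the GDPR tension noted above, minimize or pseudonymize the logged inputs.

```python
import hashlib
import json
from datetime import datetime, timezone


class ChainedLogger:
    """Append-only event log where each record carries a hash of its
    predecessor, making after-the-fact tampering detectable."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value

    def log(self, model_version: str, inputs: dict, output: str) -> None:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,  # minimize/pseudonymize per GDPR
            "output": output,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._prev_hash = digest
        self._records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks verification."""
        prev = "0" * 64
        for rec in self._records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True


logger = ChainedLogger()
logger.log("credit-scorer-2.3.1", {"applicant_id": "p-104"}, "decline")
assert logger.verify()
```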
3.5 Transparency and Instructions for Use (Article 13)
Providers must supply deployers with clear, accessible instructions for use that cover the system's intended purpose and limitations, its performance metrics and known failure modes, required data quality and input conditions, human oversight requirements and recommended operating procedures, and cybersecurity and maintenance guidance. The information must be tailored to non-expert professional users, enabling them to understand the system's capabilities and boundaries without requiring deep technical expertise.
3.6 Human Oversight (Article 14)
The Act requires that high-risk systems be designed for effective human supervision. Humans interacting with these systems must be able to understand system outputs at an appropriate level, detect anomalies, errors, or biases in those outputs, override or disregard AI recommendations when professional judgment warrants it, and intervene or halt the system entirely when risk materializes.
Achieving this in practice demands that providers build in explanations or interpretable signals relevant to the specific use case, implement safeguards such as approval workflows, decision thresholds, or dual-control mechanisms for critical decisions, and ensure that human overseers receive adequate training. Clear accountability must be defined in organizational policies so that oversight is not merely a design feature but a lived operational practice.
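A simple way to make oversight operational rather than aspirational is to route each AI recommendation to a level of human review based on model confidence and decision impact. The thresholds in the sketch below are illustrative assumptions; in practice they should be derived from the Article 9 risk assessment.

```python
from enum import Enum


class Route(Enum):
    AUTO_APPROVE = "auto"
    HUMAN_REVIEW = "single_review"
    DUAL_CONTROL = "dual_review"


def route_decision(score: float, impact: str) -> Route:
    """Decide how much human oversight an AI recommendation receives.

    score  - model confidence in [0, 1]
    impact - 'low' | 'significant' (legal or similarly significant effect)
    """
    if impact == "significant":
        # Decisions with legal effects never go out fully automated.
        return Route.DUAL_CONTROL if score < 0.7 else Route.HUMAN_REVIEW
    return Route.AUTO_APPROVE if score >= 0.9 else Route.HUMAN_REVIEW


assert route_decision(0.95, "low") is Route.AUTO_APPROVE
assert route_decision(0.95, "significant") is Route.HUMAN_REVIEW
assert route_decision(0.55, "significant") is Route.DUAL_CONTROL
```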
3.7 Accuracy, Robustness, and Cybersecurity (Article 15)
High-risk systems must achieve and maintain appropriate levels of accuracy, robustness, and cybersecurity throughout their operational lifecycle. Organizations must define target performance levels and acceptable error rates, test robustness against distribution shifts, adversarial inputs, and data quality degradation, implement cybersecurity controls that protect models, data, and pipelines from tampering, and monitor production performance continuously, triggering retraining or recalibration when defined thresholds are breached.
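As a sketch of that continuous monitoring obligation, the snippet below tracks rolling accuracy against a defined floor and raises a retraining flag when the floor is breached; the floor and window size are illustrative assumptions, not values prescribed by Article 15.

```python
from collections import deque
from statistics import mean


class PerformanceMonitor:
    """Rolling accuracy monitor that raises a retraining flag when the
    metric breaches a defined floor (illustrative thresholds)."""

    def __init__(self, floor: float = 0.92, window: int = 500):
        self.floor = floor
        self.window = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.window.append(1.0 if correct else 0.0)

    def breached(self) -> bool:
        # Only judge once the window holds enough evidence.
        full = len(self.window) == self.window.maxlen
        return full and mean(self.window) < self.floor


monitor = PerformanceMonitor(floor=0.92, window=500)
# In production, feed labeled outcomes as they arrive:
for outcome in [True] * 440 + [False] * 60:  # 88% observed accuracy
    monitor.record(outcome)
if monitor.breached():
    print("Accuracy below floor: trigger recalibration/retraining review")
```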
3.8 Quality Management System (Article 17)
Providers of high-risk AI must operate a Quality Management System (QMS) that addresses organizational structure and responsibilities, procedures for design, development, testing, and validation, supplier and third-party management including relationships with GPAI providers, documentation and record-keeping standards, and corrective and preventive action processes.
Aligning with ISO 9001 and AI-specific standards such as ISO/IEC 42001 for AI management systems can provide a structured path toward demonstrating compliance, particularly where organizations already maintain certified management systems.
3.9 Conformity Assessment and CE Marking (Article 43)
Before placing a high-risk AI system on the market or putting it into service, the provider must complete a conformity assessment and affix the CE marking. For most high-risk systems that rely on harmonized standards, a self-assessment pathway is available. However, a third-party assessment by a notified body is mandatory for certain categories, including remote biometric identification systems and systems where no relevant harmonized standards exist or where existing standards prove insufficient.
The process produces two formal outputs: an EU declaration of conformity and the CE marking affixed to the system or its accompanying documentation.
3.10 Post-Market Monitoring and Incident Reporting (Articles 72 and 73)
Compliance does not end at market placement. Providers must establish a post-market monitoring system designed to collect and analyze performance and incident data, detect serious incidents or emerging malfunction trends, and feed insights back into risk management and model update cycles.
Serious incidents and malfunctions that constitute a breach of obligations or pose risks to health, safety, or fundamental rights must be reported to competent authorities within prescribed timelines. A formal post-market monitoring plan must be maintained as an integral component of the technical documentation.
4. General-Purpose AI (GPAI) Models
The Act introduces a distinct set of obligations for general-purpose AI models, recognizing that foundation models used across multiple downstream applications present unique regulatory challenges that do not map neatly onto the system-level classification framework.
4.1 Obligations for All GPAI Providers (Effective August 2025)
Organizations that provide a GPAI model, such as a large language model, capable of integration into many downstream applications must meet four core obligations. They must prepare technical documentation describing the model's architecture, training approach, data sources at a high level, capabilities, limitations, and known risks. They must provide information and documentation to downstream providers sufficient to enable those providers to comply with their own obligations, particularly when building high-risk systems on top of the GPAI model. They must ensure copyright compliance, including respecting EU copyright law and enabling rights holders to opt out of text and data mining where applicable. And they must publish a sufficiently detailed summary of training data, focusing on categories and sources rather than disclosing raw datasets.
4.2 Systemic Risk GPAI Models (>10^25 FLOPs)
GPAI models trained with computational resources exceeding 10^25 FLOPs are presumed to pose systemic risk under the Act. This threshold triggers additional obligations that reflect the outsized potential impact of very large models with broad capabilities.
These organizations must conduct model evaluation and testing specifically targeting systemic risks such as potential for misuse, disinformation generation, and cyber-offense capabilities. They must establish incident response mechanisms and report serious incidents to regulators. They must implement enhanced cybersecurity and resilience measures commensurate with the model's capabilities. And they must provide energy consumption and efficiency reporting covering training runs and major model updates.
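For planning purposes, providers can sanity-check where a training run sits relative to the threshold. The sketch below uses the common approximation of roughly 6 × parameters × training tokens for dense transformer training compute; this is a back-of-envelope heuristic only, and the Act's presumption turns on actual cumulative training compute.

```python
# The Act presumes systemic risk above 10^25 FLOPs of training compute.
SYSTEMIC_RISK_THRESHOLD = 1e25


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


runs = {
    "400B params / 15T tokens": estimated_training_flops(400e9, 15e12),
    "70B params / 15T tokens": estimated_training_flops(70e9, 15e12),
}
for name, flops in runs.items():
    flag = ("PRESUMED SYSTEMIC RISK" if flops > SYSTEMIC_RISK_THRESHOLD
            else "below threshold")
    print(f"{name}: {flops:.2e} FLOPs -> {flag}")
```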
5. Implementation Timeline and Milestones
The Act's phased implementation schedule is the critical input for any compliance planning exercise.
The Act entered into force in August 2024, starting the compliance clock. The first substantive milestone has already passed: prohibited practices were banned effective February 2025, alongside the Act's AI literacy obligations, while EU-level governance structures such as the European AI Office accelerated the development of guidance documents and harmonized standards. August 2025 marks the date when obligations for GPAI providers take effect. August 2026 is the most consequential deadline for most organizations, as core obligations for high-risk AI systems apply from that date, including conformity assessment and CE marking requirements. Finally, August 2027 brings the last major deadlines: high-risk AI embedded in products regulated under Annex I, such as medical devices and machinery, must comply by then, as must GPAI models that were already on the market before August 2025.
For most organizations, the August 2026 deadline for high-risk systems should be treated as the primary planning horizon, with the additional year to August 2027 relevant mainly to AI embedded in regulated products and to GPAI models already on the market.
6. Practical Compliance Roadmap (2024-2027)
6.1 Step 1: Build an AI Inventory and Classification
The foundation of any compliance program under the EU AI Act is a central inventory of all AI systems and models in use or under development. For each system, organizations should record the purpose and use case, the affected users and jurisdictions, the decision impact (specifically whether the system produces legal or similarly significant effects), and any dependencies on GPAI models or third-party services. Each system should then be classified into the appropriate risk tier: unacceptable, high-risk, limited-risk, or minimal-risk.
This inventory exercise often reveals AI deployments that business units have adopted without central visibility, making the discovery process itself a valuable governance outcome.
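A minimal sketch of such an inventory record and a first-pass triage mirroring the decision logic described above; the field names and the classification shortcut are illustrative assumptions, and legal review should confirm every high-risk or prohibited result.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


@dataclass
class AISystemRecord:
    name: str
    purpose: str
    jurisdictions: list[str]        # where affected users sit
    annex_iii_category: str | None  # e.g. "employment", or None
    legal_effect: bool              # legal or similarly significant effects
    gpai_dependencies: list[str] = field(default_factory=list)
    prohibited_practice: bool = False
    transparency_trigger: bool = False  # chatbot, deepfake, emotion rec.


def classify(rec: AISystemRecord) -> RiskTier:
    """First-pass triage only; legal review should confirm the result."""
    if rec.prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if rec.annex_iii_category and rec.legal_effect:
        return RiskTier.HIGH
    if rec.transparency_trigger:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


cv_screener = AISystemRecord(
    name="cv-screener", purpose="rank job applicants",
    jurisdictions=["EU"], annex_iii_category="employment",
    legal_effect=True, gpai_dependencies=["hosted-llm-api"],
)
print(classify(cv_screener))  # RiskTier.HIGH
```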
6.2 Step 2: Governance and Operating Model
With the inventory complete, organizations should establish an AI governance committee drawing representation from compliance, legal, security, data science, product, and HR functions. This committee should define policies and standards governing AI development, procurement, and deployment across the enterprise. Rather than creating parallel processes, organizations should integrate AI risk reviews into existing risk, compliance, and change-management frameworks, ensuring that AI governance becomes a natural extension of the organization's operating model rather than an isolated compliance exercise.
6.3 Step 3: High-Risk System Remediation and Design
For each system classified as high-risk, the organization should conduct a gap assessment against the requirements set out in Articles 9 through 15, as well as Articles 17, 43, and 72. Remediation priorities should be set based on a combination of risk severity, business criticality, and the regulatory timeline. The remediation work itself spans risk management and documentation, data governance and bias mitigation, logging and monitoring infrastructure, human oversight mechanisms, and QMS integration and conformity assessment planning.
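Even a simple tracker that maps each Article to a remediation status can keep the gap assessment visible across teams. The sketch below is an illustrative example; the statuses and labels are chosen for demonstration.

```python
# Per-system gap assessment against the Act's high-risk requirements.
REQUIREMENTS = {
    "Art. 9 risk management": "in_progress",
    "Art. 10 data governance": "gap",
    "Art. 11 technical documentation": "gap",
    "Art. 12 logging": "done",
    "Art. 13 transparency/instructions": "in_progress",
    "Art. 14 human oversight": "gap",
    "Art. 15 accuracy/robustness/security": "in_progress",
    "Art. 17 quality management system": "gap",
    "Art. 43 conformity assessment": "blocked",
    "Art. 72 post-market monitoring": "gap",
}

open_items = {k: v for k, v in REQUIREMENTS.items() if v != "done"}
print(f"{len(open_items)}/{len(REQUIREMENTS)} requirements still open")
for req, status in sorted(open_items.items()):
    print(f"  {req}: {status}")
```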
6.4 Step 4: GPAI Strategy and Contracts
Organizations must identify where they provide GPAI models and where they consume them, as the obligations differ materially between these two roles. Providers need to build processes for technical documentation, training data summaries, and downstream information sharing, as well as evaluation and incident response capabilities for models that cross the systemic risk threshold. Organizations that consume GPAI models should update vendor due diligence and contracting practices to require AI Act-aligned documentation and ongoing compliance support from their providers.
6.5 Step 5: Training and Culture
Regulatory compliance is ultimately only as strong as the people responsible for implementing it. Organizations should train developers, data scientists, product managers, and compliance teams on AI Act requirements at a level appropriate to their roles. Targeted, deeper training should be provided to human overseers of high-risk systems, given the specific responsibilities the Act places on individuals in that role. More broadly, organizations should embed ethics and fundamental rights considerations into design reviews, shifting these concerns from afterthought to integral component of the development process.
6.6 Step 6: Ongoing Monitoring and Continuous Improvement
Compliance under the EU AI Act is not a point-in-time achievement but an ongoing operational commitment. Organizations must implement post-market monitoring for all high-risk systems, regularly reviewing performance data, incident reports, and user feedback. Models, documentation, and risk assessments must be updated as operational context shifts, as the regulatory environment evolves, and as the organization's own understanding of its AI systems matures.
7. Relationship with GDPR and Other Laws
The EU AI Act does not replace the GDPR or sector-specific regulations. It layers additional requirements on top of the existing regulatory framework.
The GDPR continues to govern personal data processing, lawful basis, data subject rights, Data Protection Impact Assessments (DPIAs), and international data transfers. EU product safety laws remain applicable where AI is integrated into regulated products such as medical devices or machinery. Consumer protection and anti-discrimination laws continue to apply to AI-enabled services without modification.
The intersections between the AI Act and GDPR deserve particular attention. The AI Act's data governance requirements are complementary to GDPR's data quality and fairness principles, and many high-risk AI deployments will trigger the requirement to conduct a DPIA under GDPR as well. The logging and monitoring obligations under the AI Act must also be reconciled with GDPR's data minimization and purpose limitation principles, a balancing act that will require careful architectural and policy decisions.
8. Modifications, Roles, and Liability
8.1 Substantial Modifications
The Act introduces a consequential concept: if an organization substantially modifies an AI system, whether by changing its intended purpose, retraining it with new data that materially alters performance, or making significant architectural changes, that organization may become the provider of what the regulation treats as a new system. This reclassification carries with it the full provider obligations, including re-assessing risk classification, re-conducting conformity assessment where required, and updating technical documentation and CE marking.
8.2 Shared Responsibilities
The Act distributes compliance responsibilities across the AI value chain. Providers bear primary responsibility for design-time compliance, documentation, and conformity assessment. Deployers must use systems in accordance with provider instructions, implement human oversight, and monitor for incidents during operation. Importers and distributors must verify that the systems they place on the EU market comply with applicable requirements and that documentation and CE marking are in order.
Given this distributed model, contracts across the value chain should clearly allocate responsibilities and information-sharing obligations, ensuring that no compliance gap emerges at the handoff points between providers, deployers, and intermediaries.
9. Key Takeaways for Compliance Officers and CTOs
The EU AI Act represents a structural shift in how AI systems must be developed, deployed, and governed. Its global reach means that any organization whose AI systems affect individuals in the EU is subject to its requirements, regardless of corporate domicile.
The risk-based classification framework is the linchpin of the entire regulatory structure. Getting classification right determines everything that follows, from the intensity of obligations to the timeline for compliance. High-risk systems face the most demanding requirements: risk management, data governance, technical documentation, logging, transparency, human oversight, quality management, conformity assessment, and post-market monitoring form an integrated compliance framework that must be operational before these systems reach the market.
GPAI providers face their own distinct obligations from August 2025, with additional duties imposed on models that cross the 10^25 FLOPs systemic risk threshold. The phased implementation schedule provides some breathing room: prohibited practices have been banned since February 2025, GPAI obligations take effect in 2025, high-risk system requirements arrive in 2026, and the final deadlines for AI embedded in regulated products and for pre-existing GPAI models fall in 2027.
The financial consequences of non-compliance, with fines reaching up to 7% of global annual turnover, make this a board-level risk management issue, not merely a compliance exercise. Organizations that begin now with AI inventories, classification, governance structures, and remediation plans will be best positioned to meet the deadlines ahead. Those that delay face the prospect of compressed timelines, higher remediation costs, and regulatory exposure that grows with each passing quarter.
Common Questions
How do I know whether my AI system is high-risk?
Your system is likely high-risk if it falls under Annex III categories such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice, and it significantly influences decisions that produce legal or similarly significant effects on individuals. In such cases, the full set of high-risk obligations applies.
Can I self-certify my high-risk AI system, or is a notified body required?
Most high-risk AI systems can be self-certified through an internal conformity assessment when relevant harmonized standards are applied. However, systems like remote biometric identification or those without applicable standards require a third-party conformity assessment by a notified body before being placed on the EU market.
How does the EU AI Act relate to the GDPR?
The EU AI Act and GDPR apply simultaneously. The AI Act governs AI system design, risk management, transparency, and safety, while GDPR regulates personal data processing, lawful basis, and individual rights. AI Act data governance requirements complement GDPR's principles of data quality, fairness, and accountability, and many high-risk AI deployments will also require GDPR Data Protection Impact Assessments.
What happens if I substantially modify an AI system?
If you substantially modify an AI system, for example by changing its intended purpose, significantly altering its architecture, or retraining it in a way that materially affects performance or risk, you become the provider of a new system. You must then re-classify it, update technical documentation, and, where applicable, perform a new conformity assessment and ensure CE marking.
Penalties for Non-Compliance
The EU AI Act allows fines up to the higher of 35 million EUR or 7% of global annual turnover for the most serious infringements, such as deploying prohibited AI practices or failing to comply with systemic-risk GPAI obligations. Compliance planning should be treated as a board-level risk priority.
7%: the maximum share of global annual turnover that can be fined for the most serious EU AI Act violations. Source: Regulation (EU) 2024/1689 (Artificial Intelligence Act).
"For most organizations, the critical path to EU AI Act compliance is building an accurate AI inventory, classifying systems by risk, and remediating high-risk use cases well before the August 2026 deadline."
— EU AI Act Compliance Guide
References
- European Commission (2024). EU AI Act: Regulatory Framework for Artificial Intelligence.
- European Commission (2016). General Data Protection Regulation (GDPR): Official Text.
- National Institute of Standards and Technology (2023). AI Risk Management Framework (AI RMF 1.0).
- International Organization for Standardization (2023). ISO/IEC 42001:2023, Artificial Intelligence Management System.
- OECD (2019). OECD Principles on Artificial Intelligence.
- PDPC and IMDA Singapore (2020). Model AI Governance Framework (Second Edition).
- ASEAN Secretariat (2024). ASEAN Guide on AI Governance and Ethics.

