The EU Artificial Intelligence Act (EU AI Act) entered into force in August 2024, creating the first comprehensive, horizontal AI regulation in the world. It applies extraterritorially to any organization that places AI systems on the EU market or whose AI outputs affect individuals in the EU, regardless of where the provider is established. Penalties can reach the higher of EUR 35 million or 7% of global annual turnover for the most serious infringements.
This guide is designed for Compliance Officers and CTOs who need a practical, end‑to‑end view of how to classify AI systems, understand obligations, and build a compliance roadmap through August 2027.
1. Scope and Applicability
1.1 Who is in scope?
You are likely in scope if you are any of the following and your AI impacts people in the EU:
- Provider: Develops an AI system or general‑purpose AI (GPAI) model and places it on the market or puts it into service under your name or trademark.
- Deployer: Uses an AI system in the course of your professional activities.
- Importer/Distributor: Imports or distributes AI systems in the EU.
- Product Manufacturer: Integrates AI systems into products covered by EU product safety legislation (e.g., machinery, medical devices).
Extraterritorial reach means non‑EU companies offering AI‑enabled services to EU users must comply if their systems fall under the Act.
1.2 What is an AI system under the Act?
The Act defines an AI system broadly and in a technology‑neutral way: a machine‑based system that operates with varying levels of autonomy, may adapt after deployment, and infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
If you use models or software that:
- infer patterns from data, and
- produce outputs that influence decisions or environments,
they likely qualify as AI systems under the Act.
2. Risk-Based Classification Framework
The EU AI Act is built on a risk‑based approach. Your first compliance step is to classify each AI system.
2.1 Unacceptable Risk (Prohibited Practices)
These AI practices (among others listed in Article 5) are banned in the EU, with the prohibitions applying from February 2025:
- Social scoring of individuals or groups based on social behaviour or personal characteristics, leading to detrimental or disproportionate treatment in unrelated contexts.
- Exploitation of vulnerabilities of specific groups (e.g., age, disability, socio‑economic situation) in a way that materially distorts behaviour and causes or is likely to cause harm.
- Subliminal manipulation that materially distorts a person’s behaviour and causes or is likely to cause significant harm.
- Real‑time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow, strictly regulated exceptions.
If any system falls into these categories, it must be phased out or redesigned immediately.
2.2 High-Risk AI Systems
High‑risk systems face the heaviest obligations. They include:
- Biometrics: Biometric identification and categorization of natural persons.
- Critical infrastructure: Systems that manage or operate critical infrastructure (e.g., energy, transport) where failure could endanger life or health.
- Education and vocational training: Systems determining access, admissions, or assessment.
- Employment and worker management: Recruitment, candidate screening, promotion, task allocation, performance evaluation, termination.
- Access to essential services: Credit scoring, access to social benefits, healthcare triage, emergency services.
- Law enforcement: Risk assessments, evidence evaluation, predictive policing (subject to strict conditions).
- Migration, asylum, and border control: Risk assessments, security checks, credibility assessments.
- Administration of justice and democratic processes: Tools that assist judicial decision‑making or influence democratic processes.
Two conditions generally need to be met:
- The system falls into an Annex III use case (e.g., employment, credit, law enforcement), and
- It materially influences the outcome of decisions affecting individuals, rather than performing only a narrow procedural or preparatory task.
2.3 Limited-Risk AI Systems
Limited‑risk systems trigger transparency obligations, not full high‑risk requirements. Examples include:
- Chatbots and conversational agents where users may reasonably believe they are interacting with a human.
- Emotion recognition systems.
- Biometric categorization systems (e.g., categorizing by age group).
- Deepfakes and synthetic media that resemble real people, objects, places, or events.
You must ensure users are informed they are interacting with AI or viewing AI‑generated content, unless obvious from context.
2.4 Minimal-Risk AI Systems
Minimal‑risk systems face no specific obligations under the Act but remain subject to other laws (e.g., GDPR, consumer protection). Examples:
- Spam filters.
- Product or content recommendation systems.
- Many internal analytics tools.
- Most general‑purpose AI use where no specific high‑risk use case is triggered.
Even for minimal‑risk systems, the Act encourages voluntary codes of conduct covering governance, documentation, and human oversight.
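To make the four tiers operational in an intake workflow, many teams encode the questions above in a lightweight screening tool. The sketch below is a minimal Python illustration under that assumption; the function and field names (triage_risk_tier, annex_iii_use_case, and so on) are hypothetical, and the output is a starting point for legal review, not a determination of the system's legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def triage_risk_tier(uses_prohibited_practice: bool,
                     annex_iii_use_case: bool,
                     materially_influences_decisions: bool,
                     interacts_with_people_or_generates_content: bool) -> RiskTier:
    """First-pass triage mirroring Section 2; not a legal determination."""
    if uses_prohibited_practice:                                  # Section 2.1: banned practices
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case and materially_influences_decisions:    # Section 2.2: high-risk
        return RiskTier.HIGH
    if interacts_with_people_or_generates_content:                # Section 2.3: transparency cases
        return RiskTier.LIMITED
    return RiskTier.MINIMAL                                       # Section 2.4
```

Recording the screening answers alongside the inventory entry described in Section 6.1 keeps the rationale for each classification auditable.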
3. High-Risk AI: Core Compliance Requirements
If a system is high‑risk, you must meet a comprehensive set of obligations before placing it on the market or putting it into service.
3.1 Risk Management System (Article 9)
You must implement a documented, continuous risk management process covering the entire lifecycle:
- Hazard identification: Identify reasonably foreseeable risks to health, safety, fundamental rights, and discrimination.
- Risk analysis and evaluation: Assess severity and likelihood; prioritize high‑impact risks.
- Risk control: Design and implement technical and organizational measures to reduce risks to acceptable levels.
- Testing and verification: Validate that controls work as intended under realistic conditions.
- Iterative updates: Update risk assessments when models, data, or context change, or when incidents occur.
Practical steps:
- Maintain a risk register per AI system.
- Align with ISO 31000 and relevant sectoral standards where possible.
- Integrate risk reviews into your MLOps or SDLC gates.
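As one way to implement the risk register mentioned in the practical steps above, the sketch below models a single register entry in Python. The field names and the severity‑times‑likelihood scoring convention are assumptions drawn from common risk‑management practice (e.g., ISO 31000‑style matrices), not requirements of Article 9.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    # Hypothetical per-system risk-register record mirroring the Article 9 steps above.
    risk_id: str
    description: str                # e.g. "ranking model disadvantages under-represented applicant groups"
    affected_interests: list[str]   # health, safety, fundamental rights, non-discrimination
    severity: int                   # 1 (negligible) .. 5 (critical)
    likelihood: int                 # 1 (rare) .. 5 (almost certain)
    mitigations: list[str] = field(default_factory=list)
    residual_severity: int | None = None
    residual_likelihood: int | None = None
    last_reviewed: date | None = None

    @property
    def inherent_score(self) -> int:
        # Simple severity x likelihood prioritization; choose and justify your own scheme.
        return self.severity * self.likelihood

    @property
    def residual_score(self) -> int | None:
        if self.residual_severity is None or self.residual_likelihood is None:
            return None
        return self.residual_severity * self.residual_likelihood
```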
3.2 Data Governance and Data Management (Article 10)
Training, validation, and testing data must be:
- Relevant to the intended purpose.
- Representative of the population and context.
- Free of errors as far as possible.
- Complete and sufficiently broad to capture edge cases.
- Assessed and mitigated for bias that could lead to discriminatory outcomes.
Required measures:
- Document data sources, collection methods, and preprocessing steps.
- Perform data quality and bias assessments, including subgroup performance analysis.
- Implement data governance policies covering access control, lineage, retention, and updates.
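A minimal sketch of the subgroup performance analysis mentioned above, assuming binary predictions and a single protected attribute per record. The 0.8 ratio used to flag disparities is an illustrative convention from fairness practice, not a threshold defined in Article 10.

```python
from collections import defaultdict

def subgroup_selection_rates(records):
    """records: iterable of (group, y_pred) pairs with y_pred in {0, 1}.
    Returns the positive-outcome rate per subgroup, one basic input to a bias review."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, y_pred in records:
        totals[group] += 1
        positives[group] += int(y_pred)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(rates, min_ratio=0.8):
    """Flag subgroups whose rate falls below min_ratio of the best-served subgroup.
    The 0.8 value is a common rule of thumb, not an AI Act requirement."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < min_ratio}

# Example with hypothetical evaluation data:
rates = subgroup_selection_rates([("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)])
print(rates, flag_disparities(rates))
```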
3.3 Technical Documentation (Article 11, Annex IV)
Before market placement, you must prepare comprehensive technical documentation enabling authorities to assess compliance. It typically includes:
- System description, purpose, and intended users.
- Architecture and components (including third‑party models and services).
- Training, validation, and testing procedures.
- Performance metrics (accuracy, robustness, cybersecurity posture).
- Risk management process and results.
- Human oversight design.
- Post‑market monitoring plan.
This documentation must be kept up to date and made available to market surveillance authorities upon request.
3.4 Logging and Traceability (Article 12)
High‑risk systems must automatically log events to support:
- Traceability of system behaviour.
- Investigation of incidents and malfunctions.
- Auditability of key decisions and model outputs.
Implementation considerations:
- Log inputs, outputs, model version, configuration, and key decision points.
- Ensure logs are tamper‑resistant, time‑stamped, and retained for defined periods.
- Align with privacy and security requirements (e.g., GDPR data minimization).
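The sketch below shows one possible shape for an Article 12 log record, written as an append‑only JSON line. Function and field names are illustrative; hashing raw inputs rather than storing them is one way to reconcile traceability with data minimization, though whether a hash is sufficient evidence depends on your use case.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(log_path: str, model_version: str, config_id: str,
                  inputs: dict, output: dict, decision_point: str) -> None:
    """Append one time-stamped record per inference to an append-only log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "config_id": config_id,
        # Store a digest of the inputs rather than the raw (possibly personal) data.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "decision_point": decision_point,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```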
3.5 Transparency and Instructions for Use (Article 13)
You must provide clear, accessible instructions for use to deployers, including:
- Intended purpose and limitations.
- Performance metrics and known failure modes.
- Required data quality and input conditions.
- Human oversight requirements and recommended operating procedures.
- Cybersecurity and maintenance guidance.
Information must be tailored to non‑expert professional users, enabling them to understand capabilities and limitations.
3.6 Human Oversight (Article 14)
High‑risk systems must be designed for effective human supervision so that humans can:
- Understand system outputs at an appropriate level.
- Detect anomalies, errors, or biases.
- Override or disregard AI recommendations when necessary.
- Intervene or stop the system in case of risk.
Design measures:
- Provide explanations or interpretable signals relevant to the use case.
- Implement safeguards such as approval workflows, thresholds, or dual‑control for critical decisions.
- Train human overseers and define clear accountability in policies.
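As an illustration of the safeguards listed above, the following sketch routes individual outputs to different oversight paths. The confidence floor and the dual‑control rule for adverse outcomes are organizational policy choices, not values prescribed by Article 14, and the names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str   # e.g. "reject_application"
    confidence: float     # model-reported score in [0, 1]
    is_adverse: bool      # does the recommendation negatively affect the person?

def route_for_oversight(decision: Decision,
                        confidence_floor: float = 0.90,
                        dual_control_for_adverse: bool = True) -> str:
    """Illustrative routing policy; thresholds are set by your own governance rules."""
    if decision.is_adverse and dual_control_for_adverse:
        return "require_two_reviewers"      # dual control for critical or adverse outcomes
    if decision.confidence < confidence_floor:
        return "require_one_reviewer"       # low confidence -> human approval workflow
    return "auto_proceed_with_audit_log"    # still logged, still overridable
```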
3.7 Accuracy, Robustness, and Cybersecurity (Article 15)
High‑risk systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.
You must:
- Define target performance levels and acceptable error rates.
- Test robustness against distribution shifts, adversarial inputs, and data quality issues.
- Implement cybersecurity controls to protect models, data, and pipelines from tampering.
- Monitor performance in production and trigger retraining or recalibration when thresholds are breached.
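A minimal sketch of the production monitoring step above: a rolling accuracy check against a declared target that raises a review flag when breached. The window size and target accuracy are values you would define and justify in your technical documentation; nothing here is prescribed by Article 15.

```python
from collections import deque

class PerformanceMonitor:
    """Track rolling accuracy against a declared target and flag breaches."""

    def __init__(self, target_accuracy: float, window: int = 500):
        self.target = target_accuracy
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, ground_truth) -> bool:
        """Log one labelled outcome; returns True if the target is currently breached."""
        self.outcomes.append(prediction == ground_truth)
        return self.breached()

    def breached(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False   # wait for a full window before judging performance
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.target

# A breach would typically open a ticket to retrain, recalibrate, or roll back.
monitor = PerformanceMonitor(target_accuracy=0.92)
if monitor.record(prediction="approve", ground_truth="reject"):
    print("accuracy below target: trigger review")
```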
3.8 Quality Management System (Article 17)
Providers of high‑risk AI must operate a Quality Management System (QMS) covering:
- Organizational structure and responsibilities.
- Procedures for design, development, testing, and validation.
- Supplier and third‑party management (including GPAI providers).
- Documentation and record‑keeping.
- Corrective and preventive actions.
Aligning with ISO 9001 and AI‑specific standards (e.g., ISO/IEC 42001 for AI management systems) can help demonstrate compliance.
3.9 Conformity Assessment and CE Marking (Article 43)
Before placing a high‑risk AI system on the market or putting it into service, you must complete a conformity assessment and affix the CE marking.
- Self‑assessment is allowed for most high‑risk systems that rely on harmonized standards.
- Third‑party assessment by a notified body is required for certain categories, including:
  - Remote biometric identification systems.
  - Systems where no relevant harmonized standards exist or where they are insufficient.
Outputs:
- EU declaration of conformity.
- CE marking on the system or its documentation.
3.10 Post-Market Monitoring and Incident Reporting (Article 72)
Providers must establish a post‑market monitoring system to:
- Collect and analyze performance and incident data.
- Detect serious incidents or malfunction trends.
- Feed insights back into risk management and model updates.
You must:
- Report serious incidents, including malfunctions that breach obligations protecting fundamental rights or endanger health and safety, to the competent authorities within the prescribed timelines.
- Maintain a post‑market monitoring plan as part of your technical documentation.
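One way to keep incident reporting on schedule is to track deadlines per incident, as in the hypothetical record below. The Act sets different reporting timelines for different kinds of serious incidents, so the number of days is deliberately an input supplied from your legal team's reading of the reporting rules rather than a constant.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SeriousIncident:
    # Hypothetical post-market incident record; field names are illustrative.
    incident_id: str
    system_id: str
    description: str
    became_aware_at: datetime
    reporting_deadline_days: int   # supplied per your legal assessment of the applicable timeline

    def report_due_by(self) -> datetime:
        return self.became_aware_at + timedelta(days=self.reporting_deadline_days)

    def is_overdue(self, now: datetime) -> bool:
        return now > self.report_due_by()
```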
4. General-Purpose AI (GPAI) Models
The Act introduces specific obligations for general‑purpose AI models, including foundation models used across multiple downstream applications.
4.1 Obligations for All GPAI Providers (Effective August 2025)
If you provide a GPAI model (e.g., a large language model) that can be integrated into many applications, you must:
- Prepare technical documentation describing:
  - Model architecture and training approach.
  - Training data sources at a high level.
  - Capabilities, limitations, and known risks.
- Provide information and documentation to downstream providers so they can comply with their own obligations (especially if they build high‑risk systems on top of your model).
- Ensure copyright compliance, including respecting EU copyright law and enabling rights holders to opt out of text and data mining where applicable.
- Publish a sufficiently detailed summary of training data used, focusing on categories and sources rather than raw datasets.
4.2 Systemic Risk GPAI Models (>10^25 FLOPs)
GPAI models trained with computational resources above 10^25 FLOPs are presumed to pose systemic risk and face additional obligations, including:
- Model evaluation and testing for systemic risks (e.g., misuse, disinformation, cyber‑offense capabilities).
- Incident response mechanisms and reporting for serious incidents.
- Enhanced cybersecurity and resilience measures.
- Energy consumption and efficiency reporting related to training and major updates.
These obligations are designed to ensure that very large models with broad capabilities are developed and deployed responsibly.
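For a rough sense of where a model sits relative to the 10^25 FLOP presumption, teams sometimes use the approximate 6 × parameters × training tokens rule of thumb for dense transformer training. The sketch below applies it to a hypothetical model; the heuristic comes from the scaling‑laws literature, not from the Act, and actual measured training compute is what matters for any regulatory assessment.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25   # presumption threshold stated in the Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Order-of-magnitude estimate using the common ~6 * N * D heuristic
    for dense transformer training; use measured compute for compliance."""
    return 6.0 * parameters * training_tokens

# Hypothetical example: 100B parameters trained on 10T tokens -> ~6e24 FLOPs.
flops = estimated_training_flops(parameters=100e9, training_tokens=10e12)
status = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
print(f"{flops:.1e} FLOPs is {status} the 1e25 presumption threshold")
```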
5. Implementation Timeline and Milestones
Understanding the phased implementation is critical for planning.
- August 2024: The Act enters into force.
- February 2025: Prohibitions on unacceptable‑risk practices apply, along with AI literacy obligations.
- August 2025: Obligations for GPAI providers take effect, together with the Act's governance and penalties provisions; guidance and harmonized standards development continues.
- August 2026: Core obligations for high‑risk AI systems listed in Annex III apply, including conformity assessment and CE marking.
- August 2027: Obligations extend to high‑risk AI embedded in products covered by Annex I legislation, and GPAI models placed on the market before August 2025 must be brought into compliance.
For most organizations, August 2026 is the critical deadline for Annex III high‑risk systems, with a further year for high‑risk AI embedded in regulated products and for GPAI models that were already on the market.
6. Practical Compliance Roadmap (2024–2027)
6.1 Step 1: Build an AI Inventory and Classification
- Create a central inventory of all AI systems and models in use or development.
- For each system, record:
  - Purpose and use case.
  - Affected users and jurisdictions.
  - Decision impact (legal or similarly significant effects?).
  - Dependencies on GPAI models or third‑party services.
- Classify each system into unacceptable, high‑risk, limited‑risk, or minimal‑risk.
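A minimal sketch of an inventory record carrying the fields listed above, plus the resulting risk tier, so remediation work can be sliced by classification. Names and example values are illustrative, and the schema should be extended to whatever your governance process needs.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    # Fields mirror the bullet list above; names are illustrative, not mandated.
    system_name: str
    purpose: str
    affected_jurisdictions: list[str]
    has_legal_or_similar_effect: bool
    gpai_dependencies: list[str]
    risk_tier: str   # "unacceptable" | "high" | "limited" | "minimal"

inventory = [
    InventoryEntry("cv-screener", "rank job applicants", ["EU"], True, ["vendor-llm"], "high"),
    InventoryEntry("support-chatbot", "answer product questions", ["EU", "US"], False, ["vendor-llm"], "limited"),
]

# First remediation slice: high-risk systems affecting people in the EU.
eu_high_risk = [e for e in inventory
                if e.risk_tier == "high" and "EU" in e.affected_jurisdictions]
```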
6.2 Step 2: Governance and Operating Model
- Establish an AI governance committee (compliance, legal, security, data science, product, HR, etc.).
- Define policies and standards for AI development, procurement, and deployment.
- Integrate AI risk reviews into existing risk, compliance, and change‑management processes.
6.3 Step 3: High-Risk System Remediation and Design
For each high‑risk system:
- Conduct a gap assessment against Articles 9–15, 17, 43, and 72.
- Prioritize remediation based on risk, business criticality, and timeline.
- Implement:
  - Risk management and documentation.
  - Data governance and bias mitigation.
  - Logging and monitoring.
  - Human oversight mechanisms.
  - QMS integration and conformity assessment planning.
6.4 Step 4: GPAI Strategy and Contracts
- Identify where you provide GPAI models and where you consume them.
- For providers:
  - Build processes for technical documentation, training data summaries, and downstream information sharing.
  - Design evaluation and incident response for systemic risk if applicable.
- For consumers:
  - Update vendor due diligence and contracts to require AI Act‑aligned documentation and support.
6.5 Step 5: Training and Culture
- Train developers, data scientists, product managers, and compliance teams on AI Act requirements.
- Provide targeted training for human overseers of high‑risk systems.
- Embed ethics and fundamental rights considerations into design reviews.
6.6 Step 6: Ongoing Monitoring and Continuous Improvement
- Implement post‑market monitoring for high‑risk systems.
- Regularly review performance, incidents, and user feedback.
- Update models, documentation, and risk assessments as context and regulations evolve.
7. Relationship with GDPR and Other Laws
The EU AI Act does not replace GDPR or sector‑specific regulations. Instead, it layers on top:
- GDPR: Governs personal data processing, lawful basis, data subject rights, DPIAs, and international transfers.
- EU product safety laws: Apply where AI is integrated into regulated products (e.g., medical devices, machinery).
- Consumer protection and anti‑discrimination laws: Continue to apply to AI‑enabled services.
Key intersections with GDPR:
- AI Act’s data governance requirements complement GDPR’s data quality and fairness principles.
- Many high‑risk AI deployments will require Data Protection Impact Assessments (DPIAs) under GDPR.
- Logging and monitoring must respect data minimization and purpose limitation.
8. Modifications, Roles, and Liability
8.1 Substantial Modifications
If you substantially modify an AI system (e.g., change intended purpose, retrain with new data that alters performance, or materially change architecture), you may become the provider of a new system and must:
- Re‑assess risk classification.
- Re‑do conformity assessment where required.
- Update technical documentation and CE marking.
8.2 Shared Responsibilities
- Providers: Primarily responsible for design‑time compliance, documentation, and conformity assessment.
- Deployers: Must use systems in line with instructions, implement human oversight, and monitor for incidents.
- Importers/Distributors: Ensure that systems they place on the EU market comply and that documentation and CE marking are in place.
Contracts should clearly allocate responsibilities and information‑sharing obligations across the value chain.
9. Key Takeaways for Compliance Officers and CTOs
- The EU AI Act applies extraterritorially to AI systems placed on the EU market or whose outputs affect individuals in the EU, regardless of where the provider is established.
- A risk‑based framework classifies AI into unacceptable, high‑risk, limited‑risk, and minimal‑risk categories.
- High‑risk systems face extensive obligations: risk management, data governance, documentation, logging, transparency, human oversight, QMS, conformity assessment, and post‑market monitoring.
- GPAI providers face specific obligations from August 2025, with additional duties for systemic‑risk models.
- Deadlines are phased: entry into force (August 2024), prohibited practices (February 2025), GPAI obligations (August 2025), Annex III high‑risk systems (August 2026), and product‑embedded high‑risk AI plus pre‑existing GPAI models (August 2027).
- Non‑compliance can lead to fines up to 7% of global annual turnover for the most serious infringements.
- Organizations should start now with AI inventories, classification, governance structures, and remediation plans.
Frequently Asked Questions
Is my AI system high-risk?
Check whether your system falls under Annex III categories such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice. If it does and it significantly influences decisions with legal or similarly significant effects on individuals, it is likely high‑risk and subject to the full set of obligations.
Can I self-certify or do I need a third-party assessment?
Most high‑risk systems can undergo self‑assessment, provided they follow relevant harmonized standards. However, certain systems—such as remote biometric identification and those without applicable standards—require a third‑party conformity assessment by a notified body before market placement.
How does the AI Act relate to GDPR?
Both apply in parallel. The AI Act focuses on system‑level safety, transparency, and risk management, while GDPR governs personal data processing, lawful basis, and data subject rights. Many concepts align: AI Act data governance supports GDPR’s data quality and fairness principles, and high‑risk AI deployments often require GDPR DPIAs.
What if I modify an existing AI system?
If your changes amount to a substantial modification—such as altering the intended purpose, significantly changing model architecture, or retraining in a way that materially affects performance or risk—you are treated as the provider of a new system and must ensure full compliance, including updated documentation and, where applicable, a new conformity assessment.
Penalties for Non-Compliance
The EU AI Act allows fines up to the higher of 35 million EUR or 7% of global annual turnover for the most serious infringements, namely engaging in prohibited AI practices; lower maximum fines apply to other violations, including breaches of GPAI obligations. Compliance planning should be treated as a board-level risk priority.
References
- Regulation (EU) 2024/1689 Artificial Intelligence Act. European Parliament and Council of the European Union (2024)
- AI Act Compliance Guidance. European Commission (2024)
- High-Risk AI Systems Classification Guidance. European AI Office (2025)
