AI Governance & Risk Management · Checklist

EU AI Act Compliance Checklist

July 9, 2025 · 11 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: CISO · Data Science/ML

Step-by-step checklist for achieving EU AI Act compliance by August 2027.


Key Takeaways

  1. Begin system inventory and risk classification immediately, as all other obligations depend on it.
  2. Expect 6–12 months of remediation work for high-risk systems to meet Articles 9–15 and 17 and Annex IV requirements.
  3. Plan 2–4 months for conformity assessment, CE marking, and registration in the EU database for high-risk AI.
  4. Maintain robust, up-to-date technical documentation as your primary evidence of compliance during inspections.
  5. Implement post-market monitoring and incident reporting as ongoing obligations, not one-off tasks.
  6. GPAI and systemic-risk GPAI providers face earlier obligations starting August 2025.
  7. Limited-risk systems must meet transparency obligations such as AI interaction disclosure and synthetic content labelling.

Organizations operating within or selling into the European Union face a defining regulatory moment. The EU AI Act establishes the world's first comprehensive legal framework for artificial intelligence, and its staggered enforcement timeline demands that leadership teams treat compliance not as a single deadline but as an ongoing, multiphase program. This checklist translates the regulation's requirements into a structured sequence of workstreams, organized by phase, function, and system category, so that executive sponsors and compliance leads can track progress against each obligation with precision.

Phase 1: Classification (Now through Q1 2025)

Every compliance program begins with an accurate inventory. Organizations should catalog every AI system they develop, deploy, or distribute, then assess whether each falls within the AI Act's statutory definition of an artificial intelligence system. Each system must be classified by risk level against the criteria set out in Annex III of the regulation. Any system that constitutes a prohibited practice under Article 5 must be identified and flagged for immediate cessation. Alongside classification, organizations must assign the correct regulatory role to each entity in the value chain, distinguishing between provider, deployer, distributor, and importer as defined by the Act. All classification decisions, together with the rationale supporting them, should be documented in a format that can withstand regulatory scrutiny.
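The inventory-and-classification step described above can be sketched as a simple data model. The following Python illustration is hypothetical: the keyword triggers and category labels are placeholders for a real legal analysis against Article 5 and Annex III, not the Act's actual tests.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices: flag for cessation
    HIGH = "high"               # Annex III categories
    LIMITED = "limited"         # transparency obligations only
    MINIMAL = "minimal"         # no specific obligations

# Illustrative triggers only; real classification requires legal review.
PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition"}
ANNEX_III_USES = {"recruitment_screening", "credit_scoring", "biometric_identification"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

@dataclass
class AISystem:
    name: str
    use_case: str
    role: str  # provider | deployer | distributor | importer

def classify(system: AISystem) -> RiskTier:
    """Assign a risk tier from the system's primary use case."""
    if system.use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if system.use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if system.use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Recording the `role` alongside each classification decision supports the documentation requirement: the rationale travels with the inventory entry.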

Phase 2: Gap Analysis (Q1 through Q2 2025)

For High-Risk Systems

With classification complete, the focus shifts to measuring the distance between current practices and the Act's substantive requirements. For each high-risk system, organizations should compare existing processes against the obligations set forth in Articles 9 through 15, evaluate current documentation against the specifications prescribed in Annex IV, and assess data quality and governance practices against the standards established under Article 10. Quality management systems should be reviewed against the framework described in Article 17. User-facing information and instructions for use warrant particular attention, as the regulation imposes specific transparency and intelligibility standards. The output of this phase should be a prioritized gap register that maps each deficiency to the relevant article or annex provision.
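A gap register of the kind just described can be kept in a spreadsheet, but the structure is worth making explicit. A minimal sketch in Python, with an illustrative severity scale (the field names are assumptions, not regulatory terms):

```python
from dataclasses import dataclass, field

@dataclass
class GapEntry:
    system: str
    provision: str   # cited provision, e.g. "Article 10" or "Annex IV"
    deficiency: str
    severity: int    # 1 = blocker .. 3 = minor (illustrative scale)

@dataclass
class GapRegister:
    entries: list = field(default_factory=list)

    def add(self, system: str, provision: str, deficiency: str, severity: int) -> None:
        self.entries.append(GapEntry(system, provision, deficiency, severity))

    def prioritized(self) -> list:
        # Blockers first, then grouped by cited provision,
        # so remediation workstreams map cleanly to articles.
        return sorted(self.entries, key=lambda e: (e.severity, e.provision))
```

Sorting by severity and provision keeps each deficiency mapped to the article or annex it violates, which is the property the phase output needs.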

Phase 3: Remediation (Q2 2025 through Q2 2026)

Implement Core Requirements

Remediation represents the most resource-intensive phase of the compliance program and typically requires six to twelve months for high-risk systems. Organizations must establish a risk management system (Article 9) that operates as a continuous, iterative process throughout the AI system lifecycle. Data governance practices (Article 10) must be implemented to ensure the quality, relevance, and representativeness of training, validation, and testing datasets. Technical documentation conforming to Annex IV must be prepared for each high-risk system. Event logging capabilities (Article 12) must be deployed to enable automatic recording of system operations. Human oversight mechanisms (Article 14) must be designed to enable authorized personnel to understand, monitor, and where necessary override system outputs. A quality management system (Article 17) must be established, and post-market monitoring (Article 72) processes must be put in place to ensure ongoing surveillance once systems are operational.
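The Article 12 logging obligation above can be illustrated with a minimal append-only recorder. This is a sketch only: a production implementation would need tamper-evident, durable storage with retention controls, and the record fields shown here are assumptions.

```python
import json
import time

class EventLogger:
    """Minimal append-only event log sketch for Article 12-style
    automatic recording of system operations."""

    def __init__(self):
        self.records = []

    def log(self, system_id: str, event: str, detail: dict) -> dict:
        record = {
            "ts": time.time(),      # timestamp of the operation
            "system_id": system_id,
            "event": event,         # e.g. "inference", "human_override"
            "detail": detail,
        }
        self.records.append(record)
        return record

    def export(self) -> str:
        # Serialized form, e.g. for handover to a surveillance authority
        return json.dumps(self.records)
```

Logging human overrides as first-class events also produces evidence that the Article 14 oversight mechanisms are actually exercised.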

Phase 4: Conformity Assessment (Q3 through Q4 2026)

Before placing a high-risk system on the market or putting it into service, organizations must complete a formal conformity assessment. The first decision is whether to pursue an internal control assessment or to engage a notified body for third-party evaluation. The conformity assessment must then be conducted in accordance with the selected procedure. Organizations must prepare an EU declaration of conformity, affix the CE marking on the product or its accompanying documentation, and register the system in the EU database for high-risk AI systems. All conformity assessment documentation must be maintained and kept current for the duration of the system's market presence.

Phase 5: Ongoing Compliance (August 2026 Onward)

Compliance does not end at market entry. Organizations must operate their post-market monitoring systems on a continuous basis. Serious incidents must be reported per Article 73 within the timeframes specified by the regulation. Technical documentation must be kept current and reflective of the system as deployed. Any substantial modification to a high-risk system triggers a reassessment obligation. Organizations must also be prepared to respond to requests from market surveillance authorities and to update documentation whenever system changes occur.

GPAI Model Providers (All)

Obligations Effective August 2025

Providers of general-purpose AI (GPAI) models face a distinct set of obligations that activate ahead of the high-risk system timeline. By August 2, 2025, all GPAI providers must prepare technical documentation describing model capabilities and limitations, provide information to downstream providers integrating the model into their own systems, implement a copyright compliance policy, and publish a sufficiently detailed summary of the content used for model training.

Systemic Risk GPAI Additional Requirements (Models Exceeding 10^25 FLOPs)

Models classified as posing systemic risk, defined by a training compute threshold exceeding 10^25 FLOPs, face additional obligations. These providers must conduct model evaluations including adversarial testing, track and document serious incidents, implement cybersecurity protections commensurate with the risk profile, and report energy consumption associated with model training.
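The compute threshold itself is a single comparison. A sketch of the presumption test, using the 10^25 FLOP figure stated above:

```python
# Cumulative training compute above which systemic risk is presumed
SYSTEMIC_RISK_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True when cumulative training compute exceeds the threshold,
    triggering the additional obligations described above."""
    return training_flops > SYSTEMIC_RISK_FLOPS
```

Estimating cumulative training compute accurately across runs and fine-tuning stages is the hard part in practice; the comparison is trivial once that figure exists.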

Limited-Risk Systems

Obligations Effective August 2026

Systems classified as limited risk carry targeted transparency obligations. Deployers of chatbots must disclose AI interaction to users, ensuring individuals know they are communicating with a machine. Synthetic content including deepfakes must be marked as AI-generated. Where emotion recognition systems are in use, individuals must be informed. Biometric categorization must likewise be communicated clearly to affected persons.

Documentation Checklist

Requirements for All High-Risk Systems

Documentation is not ancillary to compliance; it is the primary evidence that compliance exists. For every high-risk system, organizations must maintain technical documentation (Annex IV), risk assessment and management records, data governance documentation, testing and validation reports, quality management system records, conformity assessment documentation, an EU declaration of conformity, post-market monitoring logs, incident reports and corrective action records, and change logs capturing all system updates. These records must be maintained throughout the system lifecycle and retained for ten years after the system is placed on the market (Article 18).
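As a bookkeeping aid, the artefact list above lends itself to a per-system completeness check. The keys below are illustrative labels for the checklist items, not regulatory terms:

```python
# Required artefacts for a high-risk system, per the checklist above
REQUIRED_DOCS = {
    "technical_documentation",       # Annex IV
    "risk_management_records",       # Article 9
    "data_governance_records",       # Article 10
    "testing_validation_reports",
    "qms_records",
    "conformity_assessment",
    "eu_declaration_of_conformity",
    "post_market_monitoring_logs",
    "incident_reports",
    "change_logs",
}

def missing_documents(on_file: set) -> set:
    """Return the required artefacts not yet on file for a system."""
    return REQUIRED_DOCS - on_file
```

Running such a check per system, per release, turns the documentation obligation into a gate rather than an audit-time scramble.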

Key Takeaways

Classification is the foundation upon which every subsequent compliance activity rests, and organizations that have not yet begun should start immediately. The remediation phase alone typically requires six to twelve months for high-risk systems, making early action essential rather than optional. Conformity assessment procedures can take an additional two to four months, depending on the assessment pathway selected and the responsiveness of notified bodies. Documentation serves as the organization's evidence of compliance during inspections and enforcement actions; it is not a formality but a legal safeguard. Post-market monitoring is an ongoing obligation that persists for as long as a system remains in service. The August 2026 deadline for new high-risk systems is approaching faster than most compliance timelines can accommodate without dedicated resourcing.

Citations

  1. Regulation (EU) 2024/1689, Artificial Intelligence Act, European Parliament and Council, 2024.
  2. AI Act Implementation Roadmap, European Commission, 2024.

Implementation Timeline Milestones Organizations Must Track

The European Union Artificial Intelligence Act entered into force on August 1, 2024, but its obligations activate through a staggered enforcement timeline that creates distinct compliance deadlines for different system categories.

February 2, 2025: Prohibited Practices Deadline. Organizations must cease operating systems classified as unacceptable risk, including social scoring mechanisms, real-time biometric identification in publicly accessible spaces (with limited law enforcement exceptions), emotion recognition systems in workplace and educational contexts, and cognitive behavioral manipulation techniques targeting vulnerable populations.

August 2, 2025: General-Purpose Model Obligations. Providers of general-purpose artificial intelligence models, including OpenAI, Anthropic, Google DeepMind, Meta, and Mistral, must comply with transparency requirements including technical documentation, training data summaries addressing copyright compliance, and downstream provider notification obligations. Models classified as posing systemic risk face additional requirements including adversarial testing, incident monitoring, and cybersecurity protection measures.

August 2, 2026: High-Risk System Requirements. The most extensive compliance obligations activate for systems deployed in categories enumerated in Annex III: biometric identification, critical infrastructure management, educational access and vocational training assessment, employment and worker management, essential private and public services including credit scoring and insurance pricing, law enforcement, migration and border control, and justice administration.

August 2, 2027: Product Safety Integration. High-risk systems embedded in products already regulated under Union harmonization legislation, including medical devices, machinery, toys, civil aviation, motor vehicles, and marine equipment, must achieve full compliance including conformity assessment procedures conducted by notified bodies designated by Member State authorities.

Detailed Compliance Checklist Organized by Organizational Function

The legal function carries primary responsibility for establishing the regulatory architecture of the compliance program. This begins with completing a system inventory that classifies all deployed artificial intelligence applications against the risk categories defined in Articles 5 and 6 and Annex III. For each high-risk system, the legal team must establish legal basis documentation addressing Article 9 risk management requirements. Data governance practices must be reviewed and updated to satisfy Article 10 requirements covering training, validation, and testing dataset quality. Technical documentation packages conforming to Annex IV specifications must be prepared for each high-risk system. The conformity assessment pathway, whether self-assessment under Annex VI or third-party assessment under Annex VII, must be verified based on the system's classification. Finally, all applicable systems must be registered in the European Union database established under Article 71.

Technology and Engineering Department

Engineering teams bear responsibility for translating regulatory requirements into technical controls. Logging capabilities must be implemented to satisfy Article 12 automatic recording requirements for high-risk systems. Human oversight mechanisms must be established per Article 14, enabling authorized personnel to understand, monitor, and override system outputs when necessary. Accuracy, robustness, and cybersecurity standards must be verified against Article 15, including resilience against adversarial attacks. Model training methodologies, hyperparameter selections, and evaluation benchmark results must be documented with sufficient detail to support regulatory review. Monitoring systems must be configured to enable post-market surveillance obligations under Article 72. Version control and model lifecycle management should be implemented using established platforms such as MLflow, Weights and Biases, Neptune.ai, or Comet ML.
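One way to make the Article 14 oversight requirement concrete in code is a routing gate that holds low-confidence outputs for human review instead of acting on them automatically. A sketch under assumed names; the 0.8 threshold is arbitrary and would be set per system through the risk management process:

```python
def route_output(prediction: str, confidence: float,
                 threshold: float = 0.8) -> dict:
    """Oversight gate sketch: outputs below the confidence threshold
    are held for a human reviewer rather than released automatically."""
    needs_review = confidence < threshold
    return {
        "prediction": prediction,
        "action": "hold_for_human_review" if needs_review else "auto_release",
    }
```

Pairing a gate like this with the Article 12 event log gives reviewers both the authority to override and a record that they did.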

Human Resources and Workforce Management

Employment-related AI applications receive heightened scrutiny under the regulation. HR departments must identify all applications meeting high-risk classification, including recruitment screening, performance evaluation, promotion recommendation, and termination decision support systems. Transparency obligations under Article 13 must be verified to ensure affected employees receive meaningful information about system logic and decision criteria. In Member States with strong worker representation traditions, including Germany, France, the Netherlands, Belgium, and the Nordic countries, coordination with works councils or employee representative bodies is required per national transposition requirements. Human review procedures for automated employment decisions must comply with both the AI Act and the automated decision-making restrictions established under Article 22 of the General Data Protection Regulation.

Procurement and Vendor Management

The AI Act's role-based framework extends compliance obligations across the supply chain. Procurement teams must audit existing vendor contracts for artificial intelligence components that require reclassification under the Act's provider, deployer, importer, and distributor role definitions established in Article 3. Contract terms must be renegotiated to address compliance responsibility allocation, indemnification provisions, and documentation access requirements. Vendor conformity declarations and technical documentation must be verified before any system is deployed. Ongoing vendor monitoring protocols must be established to track compliance status, incident notifications, and the system modification disclosures required of importers and distributors under Articles 23 and 24.

Penalty Framework and Enforcement Mechanisms

National market surveillance authorities designated by each Member State enforce compliance under a penalty framework scaled by violation severity. The regulation establishes three tiers of maximum penalties. Violations involving prohibited practices carry fines of up to 35 million euros or 7% of global annual turnover, whichever is greater. Breaches of high-risk system requirements carry fines of up to 15 million euros or 3% of global annual turnover. Providing incorrect information to authorities carries fines of up to 7.5 million euros or 1% of global annual turnover. Small and medium enterprises and startups face proportionally reduced maximum penalties under provisions negotiated during the trilogue negotiations concluded in December 2023.
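The "whichever is greater" ceiling above is a simple maximum, but it is worth seeing the arithmetic for a large firm. An illustrative calculation for the prohibited-practice tier:

```python
def max_fine_eur(cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
    """Administrative fine ceiling: the fixed cap or the share of
    global annual turnover, whichever is greater."""
    return max(cap_eur, turnover_pct * turnover_eur)

# Prohibited-practice tier for a firm with EUR 2 bn global turnover:
# max(EUR 35 m, 7% of EUR 2 bn) = EUR 140 m
```

For firms with global turnover above roughly 500 million euros, the percentage term dominates the fixed cap in the top tier, which is why the framework bites hardest on large multinationals.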

The breadth of high-risk system classifications under Annex III spans eight enumerated domains: biometric identification, critical infrastructure management, educational access determination, employment and workforce management, essential services eligibility, law enforcement analytics, migration administration, and justice system decision support. Conformity assessment procedures distinguish between self-assessment pathways available for most Annex III categories and mandatory third-party notified body audits required under Article 43 for biometric categorization and remote identification systems. Designated conformity assessment bodies operating under European Commission notification procedures include BSI (British Standards Institution), TUV Rheinland, Bureau Veritas, and DNV.

Technical documentation requirements under Article 11 mandate maintenance of algorithmic system descriptions, design specifications, development methodologies, validation and testing procedures, and risk management system documentation throughout the system lifecycle and for ten years after the system is placed on the market. Quality management system obligations under Article 17 reference ISO 9001 process-based approaches adapted for algorithmic contexts, incorporating resource management, design control, data governance, post-market monitoring, and serious incident reporting, with registration recorded in the EU database maintained by the European AI Office.

Prohibited practice provisions under Article 5 enumerate cognitive behavioral manipulation, social scoring, predictive policing targeting individuals, untargeted facial image scraping, emotion recognition in workplace and educational institutions, and biometric categorization systems inferring sensitive characteristics including race, political opinions, trade union membership, religious beliefs, and sexual orientation. Transparency obligations under Article 50 require deployers of chatbots, deepfake generators, emotion recognition systems, and biometric categorization systems to implement disclosure mechanisms ensuring natural persons receive clear, distinguishable notification of algorithmic interaction within the user interface. Timeline milestones proceed through phased enforcement commencing February 2025 for prohibited practices, August 2025 for general-purpose AI model obligations, August 2026 for high-risk system requirements, and August 2027 for embedded product integration provisions.

Practical Next Steps

Translating regulatory obligations into organizational action requires more than awareness of the requirements. It demands governance infrastructure capable of sustaining compliance over time. The first priority is establishing a cross-functional governance committee with clear decision-making authority and regular review cadences that bring together legal, engineering, HR, and procurement leadership. Current governance processes should be documented thoroughly and measured against regulatory requirements in each operating market, producing a gap analysis that is specific enough to drive workstream assignments.

Standardized templates for governance reviews, approval workflows, and compliance documentation reduce friction and create consistency across business units, particularly in organizations operating multiple high-risk systems across different Annex III categories. Quarterly governance assessments should be built into the operating rhythm to ensure the framework evolves alongside regulatory guidance, enforcement precedent, and organizational changes. Internal capability building through targeted training programs for stakeholders across business functions ensures that compliance knowledge is distributed rather than concentrated in a single team.

Effective governance structures require deliberate investment in organizational alignment, executive accountability, and transparent reporting mechanisms. Without these foundational elements, governance frameworks remain theoretical documents rather than living operational systems.

Common Questions

When should we start preparing?

Begin now. You will need time for system classification, gap analysis, 6–12 months of remediation for high-risk systems, and 2–4 months for conformity assessment before the key 2026–2027 deadlines.

Can we reuse existing documentation?

Yes, you can reuse existing documentation if it covers Annex IV requirements. Identify gaps against Annex IV and supplement what is missing rather than rebuilding everything from scratch.

What happens if we miss a deadline?

If you miss the deadline, you cannot legally place or operate the non-compliant AI system on the EU market and you may face penalties and intervention from market surveillance authorities.

How do we demonstrate compliance during an inspection?

You demonstrate compliance by maintaining complete technical documentation, risk management records, testing and validation reports, quality management system evidence, and conformity assessment documentation that can be produced on request.

High-Risk Systems Face the Earliest Hard Deadlines

New high-risk AI systems must comply with the EU AI Act by August 2026, with additional obligations for GPAI providers starting August 2025. Back-plan from these dates, allowing at least 6–12 months for remediation and 2–4 months for conformity assessment.
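Back-planning from a deadline is simple date arithmetic. A sketch that approximates a month as 30 days and takes the worst-case durations above as defaults:

```python
from datetime import date, timedelta

def latest_start(deadline: date, remediation_months: int = 12,
                 assessment_months: int = 4) -> date:
    """Latest date a compliance program can start and still fit the
    remediation and conformity-assessment phases before the deadline.
    Approximates one month as 30 days."""
    total_days = 30 * (remediation_months + assessment_months)
    return deadline - timedelta(days=total_days)

# Worst case against the August 2, 2026 high-risk deadline:
# a program must already be under way by spring 2025.
```

The calculation excludes the classification and gap-analysis phases that precede remediation, so the true latest start is earlier still.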

6–12 months

Typical remediation timeline for high-risk AI systems

Source: AI Act Implementation Roadmap, European Commission, 2024

"Classification is the single most important early decision in EU AI Act compliance—every subsequent obligation flows from how you scope and categorize your systems."

EU AI Act Compliance Guidance

References

  1. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
  2. General Data Protection Regulation (GDPR) — Official Text. European Commission (2016).
  3. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023).
  4. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).
  5. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
  6. OECD Principles on Artificial Intelligence. OECD (2019).
  7. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs
