Use this checklist to work systematically toward EU AI Act compliance ahead of the staggered deadlines, which begin in February 2025 and run through August 2027.
Phase 1: Classification (Now - Q1 2025)
- Inventory all AI systems developed or deployed
- Assess scope: does each fall under AI Act definition?
- Classify risk level per Annex III criteria
- Identify any prohibited practices (Article 5)
- Assign roles: provider, deployer, distributor, importer
- Document classification decisions and rationale
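The Phase 1 outputs above can be captured in a simple inventory record. A minimal sketch follows; the field names, enum values, and example entry are illustrative assumptions, not terms the Act prescribes:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH = "high"               # Annex III categories
    LIMITED = "limited"         # transparency obligations only
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    DISTRIBUTOR = "distributor"
    IMPORTER = "importer"

@dataclass
class AISystemRecord:
    """One inventory entry per AI system; the rationale field supports audits."""
    name: str
    in_scope: bool              # falls under the Act's AI-system definition?
    risk_level: RiskLevel
    roles: list[Role]           # an organization can hold several roles
    rationale: str              # why this classification was chosen

# Hypothetical example entry for an employment-screening tool.
inventory = [
    AISystemRecord(
        name="resume-screener",
        in_scope=True,
        risk_level=RiskLevel.HIGH,
        roles=[Role.DEPLOYER],
        rationale="Employment screening falls under Annex III (employment).",
    ),
]
print(inventory[0].risk_level.value)  # high
```

Keeping the rationale alongside the classification makes the Phase 1 documentation step automatic rather than an afterthought.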
Phase 2: Gap Analysis (Q1-Q2 2025)
For High-Risk Systems:
- Compare current practices to Articles 9-15 requirements
- Evaluate existing documentation against Annex IV
- Assess data quality and governance vs Article 10
- Review quality management system against Article 17
- Check user information and instructions for use
- Document compliance gaps with priorities
Phase 3: Remediation (Q2 2025 - Q2 2026)
Implement Core Requirements:
- Establish risk management system (Article 9)
- Implement data governance practices (Article 10)
- Prepare technical documentation (Annex IV)
- Deploy event logging capabilities (Article 12)
- Design human oversight mechanisms (Article 14)
- Set up quality management system (Article 17)
- Establish post-market monitoring (Article 72)
Phase 4: Conformity Assessment (Q1-Q2 2026, ahead of the August 2026 high-risk deadline)
- Select assessment route: internal control or notified body
- Conduct conformity assessment per selected procedure
- Prepare EU declaration of conformity
- Affix CE marking on product or documentation
- Register system in EU database for high-risk AI
- Maintain conformity assessment documentation
Phase 5: Ongoing Compliance (August 2026+)
- Operate post-market monitoring systems
- Report serious incidents per Article 73
- Keep technical documentation current
- Re-assess when substantially modified
- Respond to market surveillance authority requests
- Update documentation for system changes
GPAI Model Providers (All)
Effective August 2025:
- Prepare technical documentation
- Provide information to downstream providers
- Implement copyright compliance policy
- Publish training content summary
Systemic Risk GPAI Additional (>10^25 FLOPs):
- Conduct model evaluation and adversarial testing
- Track and document serious incidents
- Implement cybersecurity protections
- Report energy consumption for training
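The 10^25 FLOP threshold above can be estimated from model size and training data using the common ~6·N·D rule of thumb. This approximation is an assumption for illustration, not a calculation method the Act prescribes:

```python
# Estimate training compute and compare against the AI Act's 10^25 FLOP
# systemic-risk presumption for GPAI models. The 6*N*D rule of thumb
# (FLOPs ~= 6 x parameters x training tokens) is a community heuristic,
# not a formula from the Regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D approximation."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
# gives 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```

Providers near the threshold should track actual measured training compute rather than rely on this approximation.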
Limited-Risk Systems
Effective August 2026:
- Disclose AI interaction to users (chatbots)
- Mark synthetic content as AI-generated (deepfakes)
- Inform individuals of emotion recognition use
- Clarify biometric categorization to affected persons
Documentation Checklist
Maintain for All High-Risk Systems:
- Technical documentation (Annex IV)
- Risk assessment and management records
- Data governance documentation
- Testing and validation reports
- Quality management system records
- Conformity assessment documentation
- EU declaration of conformity
- Post-market monitoring logs
- Incident reports and corrective actions
- Change logs for system updates
Key Takeaways
- Start classification now—foundation of all compliance work
- Remediation phase takes 6-12 months for high-risk systems
- Conformity assessment can take 2-4 months
- Documentation is evidence of compliance during inspections
- Post-market monitoring is ongoing obligation post-launch
- August 2026 deadline for new high-risk systems approaching
Citations
- Regulation (EU) 2024/1689 — Artificial Intelligence Act. European Parliament and Council (2024)
- AI Act Implementation Roadmap. European Commission (2024)
Implementation Timeline Milestones Organizations Must Track
The European Union Artificial Intelligence Act entered into force on August 1, 2024, but its obligations activate through a staggered enforcement timeline that creates distinct compliance deadlines for different system categories:
February 2, 2025 — Prohibited Practices Deadline. Organizations must cease operating systems classified as unacceptable risk, including social scoring mechanisms, real-time biometric identification in publicly accessible spaces (with limited law enforcement exceptions), emotion recognition systems in workplace and educational contexts, and cognitive behavioral manipulation techniques targeting vulnerable populations.
August 2, 2025 — General-Purpose Model Obligations. Providers of general-purpose artificial intelligence models including OpenAI, Anthropic, Google DeepMind, Meta, and Mistral must comply with transparency requirements including technical documentation, training data summaries addressing copyright compliance, and downstream provider notification obligations. Models classified as posing systemic risk face additional requirements including adversarial testing, incident monitoring, and cybersecurity protection measures.
August 2, 2026 — High-Risk System Requirements. The most extensive compliance obligations activate for systems deployed in categories enumerated in Annex III: biometric identification, critical infrastructure management, educational access and vocational training assessment, employment and worker management, essential private and public services including credit scoring and insurance pricing, law enforcement, migration and border control, and justice administration.
August 2, 2027 — Product Safety Integration. High-risk systems embedded in products already regulated under Union harmonization legislation — including medical devices, machinery, toys, civil aviation, motor vehicles, and marine equipment — must achieve full compliance including conformity assessment procedures conducted by notified bodies designated by Member State authorities.
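The four milestones above can be encoded as a simple lookup that reports which obligation categories are in force on a given date. A minimal sketch, with milestone names chosen for illustration:

```python
from datetime import date

# AI Act enforcement milestones from the timeline above
# (Regulation (EU) 2024/1689's staggered application schedule).
MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_requirements": date(2026, 8, 2),
    "embedded_product_rules": date(2027, 8, 2),
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone categories whose deadlines have passed."""
    return [name for name, deadline in MILESTONES.items() if today >= deadline]

print(obligations_in_force(date(2026, 1, 1)))
# ['prohibited_practices', 'gpai_obligations']
```

A compliance calendar built this way can also drive reminders by subtracting today's date from each upcoming deadline.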
Detailed Compliance Checklist Organized by Organizational Function
Legal and Compliance Department
- Complete system inventory classifying all deployed artificial intelligence applications against risk categories defined in Articles 5, 6, and Annex III
- Establish legal basis documentation for each high-risk system addressing Article 9 risk management requirements
- Review and update data governance practices per Article 10 covering training, validation, and testing dataset quality requirements
- Prepare technical documentation packages conforming to Annex IV specifications for each high-risk system
- Verify conformity assessment pathway — self-assessment under Annex VI or third-party assessment under Annex VII — based on system classification
- Register applicable systems in the European Union database established under Article 71
Technology and Engineering Department
- Implement logging capabilities satisfying Article 12 automatic recording requirements for high-risk systems
- Establish human oversight mechanisms per Article 14 enabling authorized personnel to understand, monitor, and override system outputs
- Verify accuracy, robustness, and cybersecurity standards per Article 15 including resilience against adversarial attacks
- Document model training methodologies, hyperparameter selections, and evaluation benchmark results
- Configure monitoring systems enabling post-market surveillance obligations under Article 72
- Implement version control and model lifecycle management using platforms like MLflow, Weights and Biases, Neptune.ai, or Comet ML
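The Article 12 logging item above calls for automatic, tamper-evident records of system operation. A minimal sketch of such an event logger; the record fields and JSON Lines format are illustrative assumptions, not requirements spelled out in the Act:

```python
import hashlib
import io
import json
from datetime import datetime, timezone

def log_event(log_file, system_id: str, input_ref: str, output: str,
              human_override: bool = False) -> dict:
    """Append one timestamped, hash-protected record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # reference to input data, not the data itself
        "output": output,
        "human_override": human_override,
    }
    # Hash the serialized record so later tampering is detectable.
    serialized = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(serialized.encode()).hexdigest()
    log_file.write(json.dumps(record) + "\n")
    return record

# Usage: in production this would be an append-only JSON Lines file;
# an in-memory buffer stands in here.
buf = io.StringIO()
log_event(buf, "credit-scoring-v2", "application/48812", "declined")
```

Storing a reference to the input rather than the input itself keeps the log compatible with GDPR data-minimization expectations.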
Human Resources and Workforce Management
- Identify all employment-related applications meeting high-risk classification including recruitment screening, performance evaluation, promotion recommendation, and termination decision support systems
- Verify transparency obligations per Article 13 ensuring affected employees receive meaningful information about system logic and decision criteria
- Coordinate with works councils or employee representative bodies per national transposition requirements in Germany, France, Netherlands, Belgium, and Nordic Member States
- Establish human review procedures for automated employment decisions complying with both Artificial Intelligence Act requirements and General Data Protection Regulation Article 22 automated decision-making restrictions
Procurement and Vendor Management
- Audit existing vendor contracts for artificial intelligence components requiring reclassification under the Act's provider, deployer, importer, and distributor role definitions established in Article 3
- Negotiate updated terms addressing compliance responsibility allocation, indemnification provisions, and documentation access requirements
- Verify vendor conformity declarations and technical documentation availability before system deployment
- Establish vendor monitoring protocols tracking ongoing compliance status, incident notifications, and system modification disclosures required under Articles 22-24 (authorized representatives, importers, and distributors)
Penalty Framework and Enforcement Mechanisms
National market surveillance authorities designated by each Member State enforce compliance, with penalty ceilings scaled by violation severity: up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practice violations; up to €15 million or 3% for high-risk system requirement breaches; and up to €7.5 million or 1.5% for providing incorrect information to authorities. Small and medium enterprises and startups face proportionally reduced maximum penalties under provisions agreed during the trilogue negotiations concluded in December 2023.
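The penalty structure above follows a "fixed amount or percentage of turnover, whichever is higher" rule (for SMEs, the lower of the two applies). That arithmetic can be made explicit; the tier names below are illustrative:

```python
# Fine ceilings per tier: (fixed cap in EUR, share of worldwide annual
# turnover). Matches the ranges described in the paragraph above.
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_breach": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, annual_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Maximum fine: higher of the two caps, or lower of the two for SMEs."""
    fixed_cap, pct = PENALTY_TIERS[violation]
    turnover_cap = pct * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 2bn turnover: 7% is EUR 140m, well above the EUR 35m floor.
print(max_fine("prohibited_practice", 2_000_000_000))
```

For large providers the turnover-based cap dominates quickly, which is why the percentages, not the fixed amounts, drive enterprise risk assessments.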
A comprehensive checklist must address all eight high-risk domains enumerated in Annex III: biometric identification, critical infrastructure management, educational access determination, employment and workforce management, essential services eligibility, law enforcement analytics, migration administration, and justice system decision support.
Conformity assessment procedures distinguish between the self-assessment pathway available for most Annex III categories and the mandatory third-party notified body audits required under Article 43 for biometric categorization and remote identification systems. Designated conformity assessment bodies such as BSI (British Standards Institution), TÜV Rheinland, Bureau Veritas, and DNV operate under European Commission notification procedures.
Technical documentation requirements under Article 11 mandate maintenance of algorithmic system descriptions, design specifications, development methodologies, validation and testing procedures, and risk management system documentation throughout the system lifecycle and for ten years after the system is placed on the market.
Quality management system obligations under Article 17 follow ISO 9001-style process-based approaches adapted for algorithmic contexts, incorporating resource management, design control, data governance, post-market monitoring, and serious incident reporting, with registration in the EU AI Database maintained by the European AI Office.
Prohibited practice provisions under Article 5 enumerate cognitive behavioral manipulation, social scoring, predictive policing targeting individuals, untargeted facial image scraping, emotion recognition in workplace and educational institutions, and biometric categorization systems that infer sensitive characteristics including race, political opinions, trade union membership, religious beliefs, and sexual orientation.
Transparency obligations under Article 50 require deployers of chatbots, deepfake generators, emotion recognition systems, and biometric categorization systems to notify natural persons of algorithmic interaction through clear, distinguishable indicators in the user interface. Enforcement proceeds in phases: February 2025 for prohibited practices, August 2025 for general-purpose AI model obligations, August 2026 for high-risk system requirements, and August 2027 for embedded product provisions.
Practical Next Steps
To put these insights into practice for EU AI Act compliance, consider the following action items:
- Establish a cross-functional governance committee with clear decision-making authority and regular review cadences.
- Document your current governance processes and identify gaps against regulatory requirements in your operating markets.
- Create standardized templates for governance reviews, approval workflows, and compliance documentation.
- Schedule quarterly governance assessments to ensure your framework evolves alongside regulatory and organizational changes.
- Build internal governance capabilities through targeted training programs for stakeholders across different business functions.
Effective governance structures require deliberate investment in organizational alignment, executive accountability, and transparent reporting mechanisms. Without these foundational elements, governance frameworks remain theoretical documents rather than living operational systems.
Common Questions
When should we start preparing?
Begin now. You will need time for system classification, gap analysis, 6–12 months of remediation for high-risk systems, and 2–4 months for conformity assessment before the key 2026–2027 deadlines.
Can we reuse existing documentation?
Yes, if it covers Annex IV requirements. Identify gaps against Annex IV and supplement what is missing rather than rebuilding everything from scratch.
What happens if we miss a deadline?
You cannot legally place or operate a non-compliant AI system on the EU market, and you may face penalties and intervention from market surveillance authorities.
How do we demonstrate compliance?
Maintain complete technical documentation, risk management records, testing and validation reports, quality management system evidence, and conformity assessment documentation that can be produced on request.
High-Risk Systems Face the Earliest Hard Deadlines
New high-risk AI systems must comply with the EU AI Act by August 2026, with additional obligations for GPAI providers starting August 2025. Back-plan from these dates, allowing at least 6–12 months for remediation and 2–4 months for conformity assessment.
Typical remediation timeline for high-risk AI systems
Source: AI Act Implementation Roadmap - European Commission - 2024
"Classification is the single most important early decision in EU AI Act compliance—every subsequent obligation flows from how you scope and categorize your systems."
— EU AI Act Compliance Guidance
References
- EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024)
- General Data Protection Regulation (GDPR) — Official Text. European Commission (2016)
- AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (2023)
- ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023)
- Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020)
- OECD Principles on Artificial Intelligence. OECD (2019)
- ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024)
