
EU AI Act: Complete Business Compliance Guide for 2026

February 12, 2026 · 18 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance · CTO/CIO · CHRO · IT Manager · CISO · Board Member · Product Manager · Data Science/ML

The EU AI Act is the world's first comprehensive AI regulation. With high-risk AI compliance requirements taking effect in August 2026, businesses need to start preparing now. Here is your complete guide.


Key Takeaways

  1. Applies to any company developing or deploying AI that affects people in the EU, regardless of where the company is based
  2. Risk-based framework: prohibited, high-risk, limited risk, and minimal risk categories
  3. Prohibited AI practices (social scoring, certain biometric uses) have been banned since February 2025
  4. High-risk AI compliance, including impact assessments and documentation, is required by August 2026
  5. Penalties reach €35 million or 7% of global annual turnover, whichever is higher
  6. AI literacy training for staff has been required since February 2025
  7. GPAI model obligations (documentation, copyright compliance) have been in effect since August 2025

What Is the EU AI Act?

The European Union Artificial Intelligence Act represents the world's first comprehensive legal framework governing the development, deployment, and use of artificial intelligence. Formally adopted in March 2024 and entering into force on August 1, 2024, the regulation is being phased in through 2027, giving organizations a defined but narrowing window to achieve compliance.

At its core, the Act employs a risk-based approach: the greater the threat an AI system poses to individuals' health, safety, or fundamental rights, the more stringent the regulatory requirements become. Critically, the law's reach extends well beyond European borders. Any organization that develops or deploys AI systems affecting people within the EU market falls under its jurisdiction, regardless of where that organization is headquartered.

Why This Matters for Your Business

The commercial implications of this regulation are difficult to overstate. If your company develops AI systems, sells AI-powered products, or deploys AI tools that affect individuals in the European Union, you are subject to the EU AI Act. That scope captures a wide range of enterprises: SaaS companies with EU customers whose products incorporate AI, multinational corporations deploying AI tools for EU-based employees, AI vendors whose products are integrated into EU organizations' workflows, and companies in virtually any sector that use AI-driven processes to make decisions affecting EU residents.

The enforcement provisions underscore the seriousness of the regulation. Penalties for non-compliance reach up to 35 million EUR or 7% of global annual turnover, whichever figure is higher. For context, these caps exceed even the GDPR's penalty framework, which tops out at 4% of global turnover.

Risk Classification Framework

The EU AI Act organizes AI systems into four distinct risk tiers, each carrying different regulatory obligations. Understanding where your systems fall within this hierarchy is the essential first step toward compliance.

Prohibited AI Practices (Effective February 2, 2025)

The Act's most immediate impact arrived on February 2, 2025, when a set of AI applications became entirely banned across the EU. These prohibitions target uses the European Parliament identified as fundamentally incompatible with EU values and fundamental rights.

Government-administered social scoring systems that evaluate individuals' trustworthiness based on social behavior are now unlawful, as is real-time remote biometric identification in public spaces by law enforcement (with only narrow exceptions for specific serious crimes). The Act also prohibits AI systems designed to exploit the vulnerabilities of specific groups, including children and people with disabilities, along with subliminal manipulation techniques that distort behavior and cause harm. Emotion recognition technology is banned in workplaces and educational institutions, though exceptions exist for safety and medical purposes. Untargeted scraping of facial images from the internet or CCTV footage for facial recognition databases is prohibited, as is biometric categorization based on sensitive characteristics such as race, political opinions, or religious beliefs.

High-Risk AI Systems (Effective August 2, 2026)

The most operationally consequential tier for most businesses is the high-risk category, which takes full effect on August 2, 2026. These are AI systems used in domains that significantly affect people's rights and safety, and they fall into two broad groups.

The first group encompasses safety components of products already subject to EU regulation. This includes AI embedded in medical devices, aviation systems, automotive systems, machinery, elevators, toys, and recreational craft. These systems will need to comply with AI Act requirements in addition to existing product safety regulations, with a final deadline of August 2, 2027.

The second group covers standalone high-risk AI applications across several sensitive domains. In employment, this means any AI used for recruiting, screening, hiring, performance evaluation, promotions, or termination decisions. In education, it covers AI systems that determine access to educational opportunities, evaluate students, or assign individuals to institutions. Critical infrastructure management, including AI systems overseeing water, gas, electricity, heating, and digital infrastructure, falls within scope. Financial services applications such as creditworthiness assessment, credit scoring, and insurance risk evaluation are classified as high-risk. Law enforcement use cases, including risk assessment of individuals and evaluation of evidence reliability, are similarly classified. The category also captures AI used in migration and border control for visa and asylum application assessment, as well as AI deployed by judicial authorities to research or interpret facts and law.

Limited Risk AI (Transparency Obligations Only)

A third tier applies lighter-touch requirements centered on transparency. Chatbots must disclose to users that they are interacting with an AI system. AI-generated content, including deepfakes, must be clearly labeled. And users must be informed when they are subject to emotion recognition or biometric categorization systems.

Minimal Risk AI (No Specific Requirements)

The vast majority of AI applications in use today, including AI-enabled video games, spam filters, most business software with AI features, and inventory management systems, fall into the minimal risk category and face no specific obligations under the Act.

Implementation Timeline

The phased implementation schedule creates a series of discrete compliance deadlines that organizations must track carefully.

August 1, 2024: EU AI Act enters into force
February 2, 2025: Prohibited AI practices banned; AI literacy obligations begin
August 2, 2025: General-Purpose AI (GPAI) model obligations apply
August 2, 2026: High-risk AI system requirements take full effect
August 2, 2027: High-risk AI in regulated products (medical devices, automotive, etc.)

Requirements for High-Risk AI Systems

The obligations imposed on high-risk AI systems are the most extensive in the regulation. They differ depending on whether your organization is a provider (developer) or a deployer (user) of the system.

For AI Developers (Providers)

Providers of high-risk AI systems must establish and maintain a continuous risk management process that spans the system's entire lifecycle. Training, validation, and testing datasets must meet strict data governance standards, meaning they must be relevant, representative, free of material errors, and sufficiently complete for the system's intended purpose. Before placing a system on the market, providers must produce detailed technical documentation demonstrating compliance. The systems themselves must be engineered to automatically record operational events through logging capabilities that enable post-incident analysis.

Transparency obligations require providers to furnish deployers with clear instructions covering the system's intended purpose, expected level of accuracy, and known limitations. Systems must be designed to allow effective human oversight, including the ability for human operators to override or interrupt automated processes. Appropriate levels of accuracy, robustness, and cybersecurity resilience must be demonstrated, and providers must undergo conformity assessment before market placement and register their systems in the EU's public database.

For AI Deployers (Users of High-Risk Systems)

Organizations deploying high-risk AI systems bear their own set of obligations. They must operate systems strictly according to providers' instructions and assign competent individuals with genuine authority to override the system's outputs. Input data quality is the deployer's responsibility, and they must monitor system operation on an ongoing basis, reporting incidents to the provider. Before deployment, a fundamental rights impact assessment is required. System-generated logs must be retained for a minimum of six months, and natural persons subject to the system's decisions must be notified of that fact.
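The six-month log retention obligation lends itself to a simple automated check before any log-deletion job runs. The sketch below is illustrative, not from the Act: the function name, the use of timestamped log records, and the 183-day approximation of "six months" are all assumptions for demonstration.

```python
from datetime import datetime, timedelta, timezone

# Minimum retention period for high-risk AI system logs: the Act requires
# deployers to keep logs for at least six months, approximated here as
# 183 days for illustration.
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created_at: datetime, now: datetime) -> bool:
    """Return True only once a log has been retained past the minimum period."""
    return now - log_created_at >= MIN_RETENTION

now = datetime(2026, 9, 1, tzinfo=timezone.utc)
old_log = datetime(2026, 1, 1, tzinfo=timezone.utc)     # ~8 months old
recent_log = datetime(2026, 7, 1, tzinfo=timezone.utc)  # ~2 months old

print(may_delete(old_log, now))     # True
print(may_delete(recent_log, now))  # False
```

In practice the retention floor would sit alongside, not replace, any longer retention periods imposed by sector rules or internal policy.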

General-Purpose AI (GPAI) Model Obligations

Effective August 2, 2025, providers of general-purpose AI models, including large language models, face a distinct set of requirements. They must maintain and make available technical documentation, provide sufficient information to downstream AI system providers who build on their models, establish clear policies for complying with EU copyright law, and publish a sufficiently detailed summary of the content used to train the model.

Models presenting systemic risk, defined in the Act as those trained with computational resources exceeding 10^25 floating point operations (FLOPs), face additional requirements. These include performing model evaluations with adversarial testing, assessing and mitigating systemic risks, tracking and reporting serious incidents, and ensuring adequate cybersecurity protections.
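A rough way to see whether a model is anywhere near the 10^25 FLOPs systemic-risk threshold is the common scaling-law rule of thumb that training compute is approximately 6 × parameters × training tokens. That heuristic is from the research literature, not the Act, and the example model size below is hypothetical.

```python
# Systemic-risk compute threshold for GPAI models under the EU AI Act.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the ~6 * N * D heuristic.

    An approximation only: the Act's threshold refers to the cumulative
    compute actually used to train the model.
    """
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 2T tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.1e}")               # 8.4e+23
print(flops > SYSTEMIC_RISK_FLOPS)  # False: well below the 10^25 threshold
```

By this estimate, a model of that scale sits more than an order of magnitude below the threshold; providers near the line should base the determination on actual training compute rather than a heuristic.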

Penalties

The penalty structure is tiered to match the severity of the violation, reinforcing the Act's risk-based architecture.

Prohibited AI practices: 35M EUR or 7% of global turnover
High-risk AI obligations: 15M EUR or 3% of global turnover
Incorrect information to authorities: 7.5M EUR or 1.5% of global turnover
SMEs and startups: proportionally reduced caps

For SMEs and startups, the Act provides proportionally reduced penalty caps, reflecting the European Commission's stated objective of avoiding a chilling effect on innovation while maintaining meaningful enforcement.
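The "whichever is higher" rule in each tier is simple arithmetic, and a short sketch makes clear why large companies should anchor on the turnover percentage rather than the fixed cap. The function name and scenarios are illustrative.

```python
def penalty_cap(fixed_cap_eur: float, turnover_pct: float,
                global_turnover_eur: float) -> float:
    """Maximum fine for a tier: the fixed cap or the percentage of global
    annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice tier (35M EUR or 7%) for a company with 1B EUR turnover:
print(penalty_cap(35e6, 0.07, 1e9))    # -> about 70M EUR: the 7% figure applies
# Same tier for a company with 100M EUR turnover:
print(penalty_cap(35e6, 0.07, 100e6))  # -> 35M EUR: the fixed cap applies
```

The crossover for the top tier sits at 500M EUR turnover; above that, the percentage governs.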

How to Comply: Practical Steps

Compliance with the EU AI Act is not a single event but a structured program of work. The following six-step framework provides a practical roadmap for organizations beginning their compliance journey.

Step 1: AI System Inventory

The foundation of any compliance program is a comprehensive inventory of every AI system your organization develops, deploys, or uses. For each system, the inventory should document what the system does and how it functions, what data it processes, which individuals or groups it affects, and what decisions it influences or automates. Many organizations discover through this exercise that AI is more deeply embedded in their operations than leadership previously recognized.
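An inventory record can be as simple as a structured entry capturing the questions listed above. The following sketch uses a Python dataclass; every field name and the example system are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (fields mirror the questions
    in the step above; names are illustrative)."""
    name: str
    purpose: str            # what the system does and how it functions
    data_processed: list    # categories of data it processes
    affected_groups: list   # individuals or groups it affects
    decisions: list         # decisions it influences or automates
    owner: str = "unassigned"

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Ranks incoming job applications",
        data_processed=["CVs", "application form data"],
        affected_groups=["job applicants in the EU"],
        decisions=["shortlisting for interviews"],
        owner="HR / Talent Acquisition",
    ),
]
print(len(inventory), inventory[0].name)
```

Even a spreadsheet with these columns is a workable starting point; the structure matters more than the tooling.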

Step 2: Risk Classification

With the inventory complete, each system must be mapped to the appropriate risk category under the Act. Particular scrutiny should be directed at systems used in employment decisions, educational access, financial services, and healthcare, as these are the domains most likely to trigger high-risk classification. This classification exercise often benefits from cross-functional input, drawing on legal, technical, and business perspectives.
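A first-pass triage can flag candidate classifications automatically before legal review. The mapping below is a rough sketch of the tiers described earlier in this guide; the domain labels are illustrative, and real classification always needs cross-functional and legal sign-off.

```python
# Illustrative first-pass mapping from usage domain to the Act's risk tiers.
# This only flags candidates; it is not a legal determination.
HIGH_RISK_DOMAINS = {
    "employment", "education", "critical_infrastructure",
    "financial_services", "law_enforcement", "migration", "justice",
}
LIMITED_RISK_DOMAINS = {"chatbot", "content_generation", "emotion_recognition"}

def candidate_risk_tier(domain: str) -> str:
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk (needs full legal review)"
    if domain in LIMITED_RISK_DOMAINS:
        return "limited risk (transparency obligations)"
    return "minimal risk (no specific obligations)"

print(candidate_risk_tier("employment"))
print(candidate_risk_tier("chatbot"))
print(candidate_risk_tier("spam_filter"))
```

Context can still escalate a tier: a chatbot used for employment screening, for example, would be assessed as high-risk despite its default "limited risk" label.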

Step 3: Gap Analysis

A structured comparison of your current practices against the requirements for each applicable risk level will reveal where your organization falls short. Common gap areas include technical documentation, data governance procedures, human oversight mechanisms, monitoring and logging infrastructure, and transparency disclosures to affected individuals. The gap analysis should be specific enough to translate directly into a prioritized work plan.

Step 4: Compliance Roadmap

Building a time-bound plan to close identified gaps before the relevant deadlines is the critical bridge between analysis and action. The immediate priority is confirming that no prohibited practices are in use, given that the February 2025 deadline has already passed. GPAI model compliance must be achieved by August 2025, and full high-risk AI system compliance by August 2026. Each workstream should have a named owner, defined milestones, and clear success criteria.

Step 5: AI Literacy Training

The Act includes a specific requirement that organizations ensure staff involved with AI possess "sufficient AI literacy," as stated in Article 4 of the regulation. This obligation, which took effect in February 2025, applies broadly to technical teams developing or deploying AI, business teams making decisions based on AI outputs, and compliance and legal teams responsible for AI governance. Training programs should be tailored to each audience's role and level of AI interaction.

Step 6: Establish Governance Framework

Sustainable compliance requires an organizational governance framework purpose-built for AI. This framework should define clear roles and responsibilities for AI oversight, establish incident reporting procedures aligned with the Act's requirements, institute regular compliance review cycles, and create disciplined documentation management practices. For many organizations, this will mean either creating a new AI governance function or significantly expanding the mandate of existing risk and compliance teams.

Related Regulatory Frameworks

The EU AI Act does not exist in isolation. It intersects with and builds upon several other regulatory frameworks that organizations should consider as part of a holistic AI governance strategy. GDPR Article 22 already establishes a right not to be subject to solely automated decision-making, a principle that predates and now complements the AI Act. In the United States, NYC Local Law 144 imposes similar requirements on AI-powered hiring tools used in New York City, while the Colorado AI Act introduces comparable high-risk AI obligations at the state level. The United Kingdom's principles-based AI framework, though currently non-binding, may evolve into formal legislation by 2027, potentially creating yet another compliance obligation for organizations operating across multiple jurisdictions.

Common Questions

Does the EU AI Act apply to companies based outside the EU?

Yes. The EU AI Act has extraterritorial reach. It applies to any provider or deployer of AI systems, regardless of where they are established, if the AI system is placed on the EU market or its output is used in the EU. This is similar to how GDPR applies to non-EU companies processing EU residents' data.

When do the high-risk AI requirements take effect?

The high-risk AI system requirements take full effect on August 2, 2026, for standalone high-risk applications (employment, education, financial services, etc.). Requirements for high-risk AI in regulated products like medical devices take effect on August 2, 2027.

What is the maximum penalty for non-compliance?

The maximum penalty is 35 million EUR or 7% of global annual turnover, whichever is higher. This applies to violations involving prohibited AI practices. Other violations carry penalties of up to 15 million EUR or 3% of global turnover. SMEs and startups receive proportionally reduced caps.

Are chatbots and customer service AI considered high-risk?

Most chatbots and customer service AI systems are classified as "limited risk" rather than high-risk. They must meet transparency obligations — users must be informed they are interacting with AI — but they are not subject to the full high-risk compliance requirements unless they are used in a high-risk context like employment screening or financial services.

What is GPAI, and do the GPAI obligations apply to my company?

GPAI stands for General-Purpose AI — models like large language models that can be used for a wide range of tasks. If your company provides or develops GPAI models, you must comply with transparency and documentation requirements starting August 2, 2025. If you only use commercially available LLMs (like GPT-4 or Claude), the GPAI obligations fall on the model provider, not your company.

What is the AI literacy requirement?

Effective February 2, 2025, organizations must ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This means providing appropriate training and awareness programs based on the technical knowledge, experience, education, and context of the AI systems being used.

How does the EU AI Act relate to GDPR?

The EU AI Act complements GDPR rather than replacing it. If your AI system processes personal data, you must comply with both. GDPR Article 22 already gives individuals the right not to be subject to solely automated decision-making with legal effects. The AI Act adds additional requirements around risk management, documentation, transparency, and human oversight.

References

  1. Regulation (EU) 2024/1689 — Artificial Intelligence Act. European Parliament and Council (2024).
  2. EU AI Act Implementation Timeline. EU Artificial Intelligence Act (Community Resource) (2024).
  3. Regulatory Framework for AI. European Commission (2024).
  4. Latest Wave of EU AI Act Obligations Take Effect. DLA Piper (2025).
  5. EU AI Act Published: Which Provisions Apply When?. Mayer Brown (2024).
  6. A Comprehensive EU AI Act Summary (January 2026 Update). Software Improvement Group (2026).
  7. EU Regulation on AI. Baker McKenzie (2024).
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Delivered Training for Big Four, MBB, and Fortune 500 Clients · 100+ Angel Investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs
