AI Governance & Risk Management · Guide · Practitioner

EU AI Act Risk Classification Guide

May 3, 2025 · 10 min read · Pertama Partners

Understand the four EU AI Act risk tiers and determine where your AI systems fall, from prohibited to minimal-risk uses.


Key Takeaways

  1. The EU AI Act defines four risk tiers: unacceptable, high-risk, limited-risk, and minimal-risk.
  2. Classification determines which obligations, timelines, and penalties apply to your AI system.
  3. High-risk status generally requires both an Annex III use case and significant decision-making impact.
  4. Limited-risk systems trigger transparency and labeling duties for AI interactions and synthetic content.
  5. General-purpose AI models face separate GPAI obligations regardless of downstream risk tier.
  6. Misclassification can lead to substantial fines, market withdrawal, and reputational harm.

The EU AI Act uses four risk tiers with obligations that scale by risk level. Correct classification is the foundation of compliance and drives all downstream obligations, timelines, and penalties.

Four Risk Tiers

Unacceptable Risk: Prohibited

Banned practices (Article 5)

Among the AI practices Article 5 outright prohibits in the EU are:

  • Social scoring by public authorities that leads to detrimental or unfair treatment
  • Exploitation of vulnerabilities of specific groups (e.g., age, disability) in a way that is likely to cause harm
  • Subliminal or manipulative techniques that materially distort behavior and cause or are likely to cause harm
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement, except in narrowly defined and strictly regulated circumstances

Penalties

  • Administrative fines up to 35M EUR or 7% of global annual turnover (whichever is higher)
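
To make the "whichever is higher" cap concrete, the sketch below is a hypothetical illustration (the turnover figure is invented; only the 35M EUR and 7% parameters come from the Act):

```python
# Illustrative only: the cap is the higher of the fixed amount and the turnover percentage.
def prohibited_practice_fine_cap(turnover_eur: float) -> float:
    fixed_cap_eur = 35_000_000               # 35M EUR
    turnover_cap_eur = 0.07 * turnover_eur   # 7% of global annual turnover
    return max(fixed_cap_eur, turnover_cap_eur)

# Hypothetical group with 800M EUR global turnover: 7% = 56M EUR, so 56M EUR is the cap.
print(prohibited_practice_fine_cap(800_000_000))  # 56000000.0
```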

Effective date

  • Prohibitions apply from 2 February 2025 and are already in force.

If your system falls into this category, it must not be placed on the EU market or put into service.


High Risk: Heavy Obligations

High-risk systems are permitted but subject to strict requirements. They are primarily defined by Annex III and certain product safety legislation.

Annex III high-risk categories (non-exhaustive headline list):

  • Biometric identification and categorization
    Remote biometric identification or biometric categorization of natural persons (systems used solely to verify a person's identity are generally excluded).

  • Management and operation of critical infrastructure
    AI that impacts the safety or continuity of critical infrastructure (e.g., energy, transport).

  • Education and vocational training
    Systems used for admissions, allocation of learning opportunities, or assessment of learners.

  • Employment, workers management, and access to self-employment
    Recruitment, candidate screening, promotion, performance evaluation, or dismissal.

  • Access to and enjoyment of essential private and public services
    Creditworthiness and credit scoring, triage for emergency services, social benefits eligibility.

  • Law enforcement
    Systems supporting evidence evaluation, individual risk assessment, or predictive policing.

  • Migration, asylum, and border control management
    Assessment of applications, risk profiling, or verification at borders.

  • Administration of justice and democratic processes
    Tools that assist courts or administrative bodies in researching case law or interpreting facts and law.

Product safety components
AI that is a safety component of regulated products (e.g., machinery, medical devices, toys, aviation) is also high-risk when covered by EU product safety harmonization legislation.

Core obligations for high-risk systems

Providers of high-risk AI must implement and document at least:

  • Risk management system: Continuous, documented risk identification, analysis, and mitigation.
  • Data governance and data quality: Relevant, representative, and free of known errors as far as possible; bias management.
  • Technical documentation: Comprehensive documentation enabling assessment of compliance.
  • Logging and traceability: Automatic recording of events to support audit and incident investigation.
  • Transparency and information to users: Clear instructions for use, capabilities, and limitations.
  • Human oversight: Designed so that humans can effectively oversee, intervene, and override.
  • Accuracy, robustness, and cybersecurity: Performance thresholds and resilience against attacks.
  • Conformity assessment and CE marking: Before placing on the market or putting into service.

Penalties

  • Fines up to 15M EUR or 3% of global annual turnover (whichever is higher).

Effective date

  • Obligations for Annex III high-risk systems apply from 2 August 2026; high-risk AI that is a safety component of products regulated under Annex I follows from 2 August 2027.

Limited Risk: Transparency Obligations

Limited-risk systems are allowed but must meet specific transparency requirements.

Systems requiring disclosure

  • Chatbots and conversational AI
    Users must be informed that they are interacting with an AI system, unless obvious from the context.

  • Emotion recognition systems
    Users must be informed when their emotions or intentions are being inferred.

  • Biometric categorization
    Users must be informed when biometric categorization is used outside the high-risk contexts.

  • Deepfakes and synthetic content
    AI-generated or manipulated image, audio, or video content must be clearly disclosed as such, subject to narrow exceptions (e.g., law enforcement, artistic expression with safeguards).

Obligations

  • Provide clear, timely disclosure that users are interacting with AI.
  • Label or otherwise disclose AI-generated or manipulated content.

Penalties

  • Fines up to 15M EUR or 3% of global annual turnover (whichever is higher).

Effective date

  • Transparency obligations apply from 2 August 2026.

Minimal Risk: No Mandatory Requirements

Most current AI systems fall into this category. They can be used without specific obligations under the EU AI Act, although general EU law (e.g., GDPR, consumer protection) still applies.

Examples

  • Spam filters and anti-fraud pattern detection that do not make legally significant decisions
  • General recommendation systems for content or products that do not determine access to essential services
  • Inventory management and logistics optimization tools
  • AI in video games or entertainment that does not create legal or similarly significant effects

Obligations

  • No mandatory AI Act requirements.
  • Voluntary codes of conduct are encouraged to promote trustworthy AI practices.

Classification Decision Tree

Use this high-level decision tree to classify your system:

Step 1 – Prohibited practices check
Q1: Does the system perform any practice listed in Article 5 (e.g., social scoring by public authorities, exploitative manipulation, unlawful real-time biometric ID)?

  • If yes: Unacceptable risk → Do not deploy in the EU.
  • If no: Continue.

Step 2 – Annex III scope check
Q2: Is the system used in any Annex III high-risk area (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, justice)?

  • If yes: Go to Step 3.
  • If no: Go to Step 4.

Step 3 – Significant decision-making test
Q3: Does the system make or meaningfully influence decisions that produce legal effects or similarly significant impacts on individuals (e.g., access to jobs, credit, education, benefits, liberty)?

  • If yes: Classify as High-risk.
  • If no: It may fall outside high-risk; assess context carefully and document rationale.

Step 4 – Transparency-triggering uses
Q4: Does the system interact directly with users or generate content that users could mistake for human-generated (e.g., chatbots, avatars, deepfakes, emotion recognition)?

  • If yes: Classify as Limited-risk and apply transparency obligations.
  • If no: Go to Step 5.

Step 5 – Default category
If none of the above apply, classify the system as Minimal-risk under the EU AI Act.

Document each step and your answers; this record will be important for audits and regulatory inquiries.
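
The same five steps can be expressed as a small helper for internal triage. This is a simplified sketch of the logic above, not a legal determination; the boolean inputs and tier labels are assumptions supplied by the assessor.

```python
# Simplified sketch of the five-step classification flow described above.
def classify_ai_system(
    prohibited_practice: bool,    # Step 1: any Article 5 practice?
    annex_iii_use: bool,          # Step 2: used in an Annex III high-risk area?
    significant_decisions: bool,  # Step 3: legal or similarly significant effects on individuals?
    transparency_trigger: bool,   # Step 4: chatbot, deepfake, emotion recognition, etc.?
) -> str:
    if prohibited_practice:
        return "unacceptable"     # Do not deploy in the EU
    if annex_iii_use and significant_decisions:
        return "high-risk"
    # An Annex III use without significant impact may fall outside high-risk (Step 3);
    # document the rationale and continue with the transparency check.
    if transparency_trigger:
        return "limited-risk"
    return "minimal-risk"         # Step 5: default category

# Example: a recruitment screening tool that shortlists candidates (Annex III: employment)
print(classify_ai_system(False, True, True, False))  # high-risk
```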


Edge Cases and Clarifications

Systems that assist but do not decide

Some AI tools support human decision-making without issuing binding outputs.

  • If humans routinely rubber-stamp AI recommendations, or the AI output is de facto decisive, regulators may still treat the system as high-risk when used in Annex III contexts.
  • If there is genuine, documented human judgment, with the ability and practice of overriding AI outputs, the system may fall outside high-risk classification, even if used in sensitive domains.

In borderline cases, assess:

  • How often humans override AI outputs
  • Training and guidance given to human decision-makers
  • Whether the AI output is presented as advisory or authoritative

Product safety components

AI that functions as a safety component of regulated products (e.g., medical devices, machinery, aviation systems, toys) is automatically high-risk when the product is covered by EU harmonization legislation. In these cases:

  • The AI is assessed as part of the overall product conformity assessment.
  • Sector-specific rules and standards will interact with AI Act obligations.

General-purpose AI (GPAI)

General-purpose models (e.g., large language models, foundation models) are regulated separately:

  • GPAI providers face obligations under Articles 51–56, including documentation, usage policies, and in some cases model evaluation and incident reporting.
  • These obligations apply regardless of whether downstream applications are high-risk.
  • If a GPAI model is integrated into a high-risk use case, both GPAI and high-risk obligations can apply in parallel (to different actors in the value chain).

Multiple uses and mixed-risk systems

A single system can support multiple use cases with different risk profiles.

  • If one use is high-risk and uses are not clearly separable, treat the entire system as high-risk (see the sketch after this list).
  • If uses are functionally and organizationally separable (e.g., distinct modules, access controls, and documentation), you may classify each use separately, but you must be able to demonstrate this separation.
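
A minimal sketch of that "most restrictive classification wins" rule for non-separable uses; the tier names and ordering are illustrative assumptions, not terms from the Act:

```python
# Tiers ordered from least to most restrictive, so max() picks the strictest.
TIER_ORDER = ["minimal-risk", "limited-risk", "high-risk", "unacceptable"]

def combined_tier(use_case_tiers: list[str]) -> str:
    """Classification for a system whose use cases cannot be clearly separated."""
    return max(use_case_tiers, key=TIER_ORDER.index)

# Example: a support chatbot (limited-risk) bundled with credit scoring (high-risk)
print(combined_tier(["limited-risk", "high-risk"]))  # high-risk
```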

Reclassification triggers

You should reassess classification:

  • At initial design and deployment
  • After any substantial modification (new features, new domains, or new data sources that change risk profile)
  • When Annex III is updated (the European Commission can add new high-risk categories)

Maintain a versioned record of classifications and the reasoning behind them.
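
One lightweight way to keep such a record, sketched below with hypothetical field names, is a versioned entry per assessment:

```python
# Hypothetical structure for a versioned classification record; all field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    system_name: str
    version: str                     # bump on any substantial modification
    assessed_on: date
    risk_tier: str                   # "unacceptable" | "high-risk" | "limited-risk" | "minimal-risk"
    annex_iii_categories: list[str] = field(default_factory=list)
    rationale: str = ""              # answers to Steps 1-5 and supporting evidence
    reviewer: str = ""
    reassessment_trigger: str = ""   # e.g. "new data source", "Annex III update"

record = ClassificationRecord(
    system_name="candidate-screening-assistant",
    version="2.1",
    assessed_on=date(2025, 5, 3),
    risk_tier="high-risk",
    annex_iii_categories=["employment"],
    rationale="Shortlists applicants and materially influences hiring decisions (Step 3: yes).",
    reviewer="AI governance lead",
    reassessment_trigger="new CV-parsing data source",
)
```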


Key Takeaways

  1. The EU AI Act defines four tiers: unacceptable (banned), high-risk (heavily regulated), limited-risk (transparency), and minimal-risk (no AI Act-specific duties).
  2. Classification is the gateway to compliance: it determines which obligations, timelines, and penalties apply to your system.
  3. High-risk status generally requires both an Annex III use case and significant decision-making impact on individuals.
  4. Many conversational and content-generating systems will be limited-risk, triggering transparency and labeling duties.
  5. Edge cases (assistive tools, GPAI, safety components, mixed-use systems) require careful documentation and often legal input.
  6. Misclassification can create compliance gaps and expose organizations to substantial financial and reputational penalties.

Frequently Asked Questions

What if my AI assists decisions rather than making them?

It depends on how decisions are made in practice. If human reviewers typically accept AI outputs without meaningful scrutiny, regulators may treat the system as high-risk when used in Annex III contexts. If humans apply independent judgment, regularly override AI outputs where appropriate, and this is supported by training and procedures, the system may fall outside high-risk classification.

Can a system be in multiple risk levels at once?

Yes. A single AI system can support several use cases with different risk profiles. You must apply the most restrictive classification to any non-separable part of the system. If one use is high-risk and you cannot clearly separate it from other uses (technically and organizationally), treat the entire system as high-risk.

How often should we reassess our AI system’s classification?

Reassess at initial deployment, after any substantial modification (e.g., new features, domains, or data sources that change impact), and whenever Annex III is updated by the European Commission. Document each reassessment and the rationale.

Are general-purpose models automatically high-risk?

No. General-purpose AI models are subject to a separate set of obligations (Articles 51–56) that apply regardless of risk tier. They are not automatically high-risk, but when integrated into an Annex III use case that significantly affects individuals, the downstream application may be high-risk.

Do minimal-risk systems have no compliance obligations at all?

Minimal-risk systems have no AI Act-specific obligations, but they remain subject to other EU laws such as GDPR, consumer protection, and product safety rules. Voluntary adherence to trustworthy AI principles and internal governance is still recommended.


Misclassification Risk

Under-classifying a high-risk system as limited- or minimal-risk can expose your organization to significant fines, forced withdrawal from the EU market, and reputational damage. Always document your reasoning and, for edge cases, seek legal review.

35M EUR or 7%

Maximum administrative fine for prohibited AI practices under the EU AI Act

Source: Regulation (EU) 2024/1689

"In the EU AI Act, classification is not a paperwork exercise—it determines whether your AI system can be deployed at all, and under what conditions."

EU AI Act Risk Management Guidance

References

  1. Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act). European Parliament and Council of the European Union (2024)
  2. High-Risk AI Systems Interpretive Guidelines. European Commission (2025)

