What Is the EU AI Act?
The European Union Artificial Intelligence Act is the world's first comprehensive legal framework for regulating artificial intelligence. It was adopted by the European Parliament in March 2024, entered into force on August 1, 2024, and is being implemented in phases through 2027.
The EU AI Act takes a risk-based approach — the higher the risk an AI system poses to individuals' health, safety, or fundamental rights, the stricter the requirements. It applies to any organization that develops or deploys AI systems in the EU market, regardless of where the organization is based.
Why This Matters for Your Business
If your company develops AI systems, sells AI-powered products, or deploys AI tools that affect people in the EU, you are subject to this law. This includes:
- SaaS companies with EU customers whose products use AI
- Multinational corporations deploying AI tools for EU-based employees
- AI vendors whose products are used by EU organizations
- Companies in any sector that use AI for decisions affecting EU residents
The penalties for non-compliance are severe — up to 35 million EUR or 7% of global annual turnover, whichever is higher.
Risk Classification Framework
The EU AI Act categorizes AI systems into four risk levels:
Prohibited AI Practices (Effective February 2, 2025)
These AI applications are banned entirely:
- Social scoring that evaluates or classifies people based on social behavior or personal characteristics, leading to detrimental or unfavorable treatment (the ban covers both public and private actors)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- Exploitation of vulnerabilities of specific groups (children, people with disabilities)
- Subliminal manipulation techniques that distort behavior and cause harm
- Emotion recognition in workplaces and educational institutions (with exceptions for safety/medical purposes)
- Untargeted scraping of facial images from the internet or CCTV footage for facial recognition databases
- Biometric categorization that uses biometric data to infer sensitive characteristics (race, political opinions, religious beliefs)
High-Risk AI Systems (Effective August 2, 2026; August 2, 2027 for AI embedded in regulated products)
These are AI systems used in areas that significantly affect people's rights and safety:
Category 1 — Safety components of regulated products:
- Medical devices
- Aviation systems
- Automotive systems
- Machinery and elevators
- Toys and recreational craft
Category 2 — Standalone high-risk applications:
- Employment: AI used for recruiting, screening, hiring, performance evaluation, promotions, termination
- Education: AI determining access to education, evaluating students, assigning people to educational institutions
- Critical infrastructure: AI managing water, gas, electricity, heating, and digital infrastructure
- Financial services: AI for creditworthiness assessment, credit scoring, insurance risk assessment and pricing
- Law enforcement: AI for risk assessment of individuals, polygraphs, evaluation of evidence reliability
- Migration and border control: AI for visa/asylum application assessment, security risk screening
- Justice and democratic processes: AI used by judicial authorities to research/interpret facts and law, and AI intended to influence the outcome of elections or referenda
Limited Risk AI (Transparency Obligations Only)
- Chatbots: Must disclose to users that they are interacting with AI (a minimal disclosure sketch follows this list)
- Deepfakes: AI-generated content must be labeled
- Emotion recognition systems: Users must be informed
- Biometric categorization systems: Users must be informed
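For the chatbot disclosure obligation above, the sketch below shows one way to satisfy it: surface the notice before any other interaction. The function names and wording are illustrative assumptions, not language prescribed by the Act.

```python
# Hypothetical sketch: satisfy the limited-risk disclosure obligation by
# showing an AI notice before the first exchange. Names and wording are
# illustrative, not prescribed by the Act.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human agent. "
    "Ask to be transferred if you would like to speak with a person."
)

def start_chat_session(send_message) -> None:
    """Open a chat session by disclosing the AI nature of the agent first."""
    send_message(AI_DISCLOSURE)  # disclosure precedes any other interaction

start_chat_session(print)
```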
Minimal Risk AI (No Specific Requirements)
- AI-enabled video games
- Spam filters
- Most business software with AI features
- Inventory management systems
Implementation Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | EU AI Act enters into force |
| February 2, 2025 | Prohibited AI practices banned; AI literacy obligations begin |
| August 2, 2025 | General-Purpose AI (GPAI) model obligations apply |
| August 2, 2026 | High-risk AI system requirements take full effect |
| August 2, 2027 | High-risk AI in regulated products (medical devices, automotive, etc.) |
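The milestones above can be encoded directly; this short sketch (dates taken from the table) reports which deadlines are still ahead as of today.

```python
# Encode the Act's milestones (dates from the table above) and report
# which deadlines are still ahead as of today.
from datetime import date

MILESTONES = {
    date(2024, 8, 1): "EU AI Act enters into force",
    date(2025, 2, 2): "Prohibited practices banned; AI literacy obligations begin",
    date(2025, 8, 2): "GPAI model obligations apply",
    date(2026, 8, 2): "High-risk AI system requirements take full effect",
    date(2027, 8, 2): "High-risk AI in regulated products",
}

today = date.today()
for deadline, milestone in sorted(MILESTONES.items()):
    status = "upcoming" if deadline > today else "in effect"
    print(f"{deadline.isoformat()}  [{status}]  {milestone}")
```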
Requirements for High-Risk AI Systems
Organizations deploying high-risk AI must meet these obligations:
For AI Developers (Providers)
- Risk management system: Establish and maintain a continuous risk management process throughout the AI system's lifecycle
- Data governance: Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Technical documentation: Detailed documentation demonstrating compliance before the system is placed on the market
- Record-keeping: AI systems must automatically record events (logs) during operation (see the logging sketch after this list)
- Transparency: Provide clear instructions for deployers, including the system's intended purpose, level of accuracy, and known limitations
- Human oversight: Design systems to allow effective human oversight, including the ability to override or interrupt the system
- Accuracy, robustness, and cybersecurity: Systems must achieve appropriate levels of accuracy and resilience to errors and attacks
- Conformity assessment: Undergo assessment before market placement
- EU database registration: Register in the EU public database before deployment
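The record-keeping obligation requires systems to log events automatically, but the Act does not mandate a particular format. A minimal append-only sketch, assuming a JSON-lines audit file and an illustrative event schema:

```python
# Minimal automatic event logging for a high-risk AI system. The JSON-lines
# format and field names are assumptions; the Act requires event logging
# but does not prescribe a schema.
import json
from datetime import datetime, timezone

def log_event(log_path: str, event_type: str, details: dict) -> None:
    """Append one timestamped event to an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "override", "error"
        "details": details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: record a model decision together with a reference to its input.
log_event("audit.jsonl", "inference", {"input_id": "req-123", "score": 0.87})
```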
For AI Deployers (Users of High-Risk Systems)
- Use as intended: Deploy systems according to providers' instructions
- Human oversight: Assign competent individuals with authority to override the system
- Data quality: Ensure input data is relevant and representative
- Monitoring: Monitor operation and report incidents to the provider
- Impact assessment: Conduct a fundamental rights impact assessment before deployment (required for public bodies and certain private deployers, such as those providing public services or using AI for credit scoring and insurance pricing)
- Record-keeping: Keep logs generated by the system for at least 6 months, unless other applicable law requires longer (see the retention sketch after this list)
- Inform individuals: Notify natural persons that they are subject to a high-risk AI system
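For the log-retention duty, a small retention check is sketched below. The six-month floor comes from the Act; the 183-day approximation and the deletion-eligibility helper are assumptions for illustration.

```python
# Retention check for deployer-held logs. The >= 6 month floor is the Act's;
# approximating six months as 183 days is an assumption, and other law may
# require keeping logs longer.
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=183)  # roughly six months

def eligible_for_deletion(log_created_at: datetime) -> bool:
    """A log may be considered for deletion only after the retention floor."""
    return datetime.now(timezone.utc) - log_created_at >= RETENTION_FLOOR

print(eligible_for_deletion(datetime(2025, 1, 1, tzinfo=timezone.utc)))
```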
General-Purpose AI (GPAI) Model Obligations
Effective August 2, 2025, providers of GPAI models (like large language models) must:
- Maintain and make available technical documentation
- Provide information and documentation to downstream AI system providers
- Establish a policy for complying with EU copyright law
- Publish a sufficiently detailed summary of training data content
Additional requirements apply to GPAI models with systemic risk, meaning models whose cumulative training compute exceeds 10^25 FLOPs (a rough estimation sketch follows this list):
- Perform model evaluations including adversarial testing
- Assess and mitigate systemic risks
- Track and report serious incidents
- Ensure adequate cybersecurity protections
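The systemic-risk threshold is set on cumulative training compute. One common back-of-the-envelope heuristic for dense transformer training, roughly 6 x parameters x training tokens, is not part of the Act but gives a quick first-pass check:

```python
# Rough check against the 10^25 FLOPs systemic-risk threshold. The
# 6 * params * tokens estimate is a widely used heuristic for dense
# transformer training compute, not a formula from the Act.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Example: a hypothetical 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs; systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
```

In this example the estimate comes out near 6.3 x 10^24 FLOPs, just under the threshold; a modestly larger training run would cross it.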
Penalties
| Violation Type | Maximum Penalty |
|---|---|
| Prohibited AI practices | 35M EUR or 7% global turnover |
| High-risk AI obligations | 15M EUR or 3% global turnover |
| Incorrect information to authorities | 7.5M EUR or 1.5% global turnover |
| SMEs and startups | Proportionally reduced caps |
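Because each cap is "whichever is higher", the binding number depends on turnover. A worked sketch, using the figures from the table and a made-up turnover:

```python
# Worked example of the "whichever is higher" penalty cap. The 35M EUR / 7%
# figures are from the table above; the turnover is a made-up example.
def max_penalty(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Return the statutory cap: the higher of the fixed amount or % of turnover."""
    return max(fixed_eur, pct * global_turnover_eur)

cap = max_penalty(35_000_000, 0.07, 2_000_000_000)
print(f"Maximum fine: {cap:,.0f} EUR")  # 140,000,000 EUR, since 7% exceeds 35M
```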
How to Comply: Practical Steps
Step 1: AI System Inventory
Create a comprehensive inventory of every AI system your organization develops, deploys, or uses; a minimal record structure is sketched after the list below. For each system, document:
- What it does and how it works
- What data it processes
- Who it affects
- What decisions it influences
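One way to keep this register machine-readable is a simple record per system. The sketch below mirrors the fields in the list above; the field names are illustrative assumptions, not a prescribed schema.

```python
# Illustrative inventory record for Step 1. Field names mirror the list above
# and are an assumption about how you might structure the register.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str  # what it does and how it works
    data_processed: list[str] = field(default_factory=list)
    affected_groups: list[str] = field(default_factory=list)
    decisions_influenced: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Ranks inbound job applications",
        data_processed=["CVs", "application forms"],
        affected_groups=["job applicants in the EU"],
        decisions_influenced=["interview shortlisting"],
    ),
]
```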
Step 2: Risk Classification
Map each AI system to the appropriate risk category. Pay special attention to systems used in employment, education, financial services, and healthcare — these are most likely to be classified as high-risk.
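A first-pass triage can be automated from the inventory, though the result is only a starting point: real classification requires checking the Act's annexes and legal review. A simplified sketch, with area labels paraphrasing the categories above:

```python
# Simplified risk triage for Step 2. Area labels paraphrase the categories
# described earlier; treat the output as a first pass, not a legal conclusion.
HIGH_RISK_AREAS = {
    "employment", "education", "critical_infrastructure",
    "financial_services", "law_enforcement", "migration", "justice",
}
LIMITED_RISK_AREAS = {"chatbot", "deepfake", "emotion_recognition"}

def triage_risk(use_area: str) -> str:
    if use_area in HIGH_RISK_AREAS:
        return "high"
    if use_area in LIMITED_RISK_AREAS:
        return "limited (transparency obligations)"
    return "minimal (verify no prohibited practice applies)"

print(triage_risk("employment"))  # high
```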
Step 3: Gap Analysis
Compare your current practices against the requirements for each risk level; a simple set-difference sketch follows this list. Identify gaps in:
- Documentation
- Data governance
- Human oversight mechanisms
- Monitoring and logging
- Transparency to affected individuals
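Treated as sets, the gap analysis is a difference between required and implemented controls. A sketch using the gap areas above as labels (the labels are shorthand, not legal terms):

```python
# Gap analysis for Step 3: diff implemented controls against the required set.
# Control labels are shorthand for the gap areas listed above.
REQUIRED_HIGH_RISK_CONTROLS = {
    "documentation", "data_governance", "human_oversight",
    "monitoring_and_logging", "transparency_to_individuals",
}

def gap_analysis(implemented: set[str]) -> set[str]:
    """Return the controls still missing for a high-risk system."""
    return REQUIRED_HIGH_RISK_CONTROLS - implemented

print(gap_analysis({"documentation", "human_oversight"}))
# -> {'data_governance', 'monitoring_and_logging', 'transparency_to_individuals'}
```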
Step 4: Compliance Roadmap
Build a prioritized plan to close gaps before the relevant deadlines. Focus on:
- Now: Ensure no prohibited practices are in use
- By August 2025: GPAI model compliance
- By August 2026: Full high-risk AI system compliance
Step 5: AI Literacy Training
The Act requires organizations to ensure staff involved with AI have "sufficient AI literacy." Implement training programs for:
- Technical teams developing or deploying AI
- Business teams making decisions based on AI outputs
- Compliance and legal teams monitoring AI governance
Step 6: Establish Governance Framework
Create or update your AI governance framework to include:
- Clear roles and responsibilities
- Incident reporting procedures
- Regular compliance reviews
- Documentation management
Related Regulations
- GDPR Article 22: Right not to be subject to solely automated decision-making (pre-dates EU AI Act)
- NYC Local Law 144: Similar requirements for AI hiring tools in New York City
- Colorado AI Act: US state law with comparable high-risk AI requirements
- UK AI Framework: Principles-based, regulator-led approach; the UK government has signaled that formal AI legislation may follow, but none has been enacted
Frequently Asked Questions
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act has extraterritorial reach. It applies to any provider or deployer of AI systems, regardless of where they are established, if the AI system is placed on the EU market or its output is used in the EU. This is similar to how GDPR applies to non-EU companies processing EU residents' data.
When do the high-risk AI requirements take effect?
The high-risk AI system requirements take full effect on August 2, 2026, for standalone high-risk applications (employment, education, financial services, etc.). Requirements for high-risk AI in regulated products like medical devices take effect on August 2, 2027.
What is the maximum penalty for non-compliance?
The maximum penalty is 35 million EUR or 7% of global annual turnover, whichever is higher. This applies to violations involving prohibited AI practices. Other violations carry penalties of up to 15 million EUR or 3% of global turnover. SMEs and startups receive proportionally reduced caps.
Are chatbots and customer service AI considered high-risk?
Most chatbots and customer service AI systems are classified as "limited risk" rather than high-risk. They must meet transparency obligations (users must be informed they are interacting with AI) but they are not subject to the full high-risk compliance requirements unless they are used in a high-risk context like employment screening or financial services.
What is GPAI, and do the GPAI obligations apply to my company?
GPAI stands for General-Purpose AI — models like large language models that can be used for a wide range of tasks. If your company provides or develops GPAI models, you must comply with transparency and documentation requirements starting August 2, 2025. If you only use commercially available LLMs (like GPT-4 or Claude), the GPAI obligations fall on the model provider, not your company.
What are the AI literacy requirements?
Effective February 2, 2025, organizations must ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This means providing appropriate training and awareness programs based on the technical knowledge, experience, education, and context of the AI systems being used.
Does the EU AI Act replace GDPR?
The EU AI Act complements GDPR rather than replacing it. If your AI system processes personal data, you must comply with both. GDPR Article 22 already gives individuals the right not to be subject to solely automated decision-making with legal effects. The AI Act adds additional requirements around risk management, documentation, transparency, and human oversight.
References
- Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Parliament and Council, 2024.
- EU AI Act Implementation Timeline. European AI Office, 2024.
- High-Level Summary of the AI Act. European Commission, 2024.
