What Are AI Act Prohibited Practices?
AI applications banned under Article 5 of the EU AI Act, including subliminal manipulation, exploitation of vulnerabilities, social scoring by authorities, real-time remote biometric identification in public spaces (with narrow exceptions), and emotion recognition in workplaces and education. Violations carry the Act's highest penalty tier: fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
The EU AI Act's prohibited practices provisions carry the heaviest penalties in global AI regulation, with fines reaching EUR 35 million or 7% of worldwide annual turnover, whichever is higher. Companies selling AI products into European markets must conduct thorough compliance audits, since practices considered routine in parts of Asia, such as real-time facial recognition in public spaces, are explicitly banned. For mid-market companies, the compliance burden represents both a challenge and an opportunity: competitors who fail to adapt lose EU market access entirely, and proactive compliance builds customer trust that can translate into materially higher enterprise contract win rates when competing against vendors lacking EU readiness.
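The penalty ceiling described above is the higher of a fixed cap and a turnover share, which can be sketched in a few lines. This is an illustrative calculation, not legal advice; the figures reflect the commonly reported maximums for prohibited-practice violations.

```python
# Sketch: the prohibited-practices penalty ceiling is the HIGHER of a
# fixed cap (EUR 35M) and 7% of worldwide annual turnover.

FIXED_CAP_EUR = 35_000_000   # EUR 35 million
TURNOVER_SHARE = 0.07        # 7% of global annual turnover

def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Maximum possible fine for a prohibited-practice violation (illustrative)."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70M) exceeds the fixed cap:
print(max_prohibited_practice_fine(1_000_000_000))  # 70000000.0
```

Note that for any company with turnover above EUR 500 million, the 7% share, not the fixed cap, sets the exposure.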
- Absolute ban on social credit systems by public authorities
- Restrictions on emotion recognition systems in sensitive contexts
- Limited exceptions for law enforcement biometric ID with judicial approval
- Prohibition on predictive policing based solely on profiling
- Ban on AI systems exploiting age or disability vulnerabilities
- Audit current AI deployments against all eight prohibited practice categories within 90 days, since violations carry fines up to EUR 35M or 7% of global annual turnover.
- Remove any social scoring mechanisms from customer loyalty or employee performance systems, as behavioral scoring by private entities falls under prohibited classifications.
- Document legitimate use cases for emotion recognition technology in workplace settings, since generalized emotional inference is banned except for medical or safety purposes.
- Establish a quarterly compliance review process with legal counsel specializing in EU AI Act to catch prohibited practice violations before enforcement begins.
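The audit step above (checking every AI deployment against the prohibited categories) can be sketched as a simple inventory check. The category names paraphrase Article 5, and the system inventory below is purely illustrative.

```python
# Minimal sketch of an internal audit checklist against the eight
# Article 5 categories (names paraphrased; inventory is hypothetical).

PROHIBITED_CATEGORIES = {
    "subliminal manipulation",
    "exploitation of vulnerabilities",
    "social scoring",
    "predictive policing based solely on profiling",
    "untargeted facial image scraping",
    "emotion recognition in workplace/education",
    "biometric categorisation of sensitive attributes",
    "real-time remote biometric identification in public spaces",
}

def flag_deployments(inventory: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return systems whose declared capabilities overlap a prohibited category."""
    return {
        name: caps & PROHIBITED_CATEGORIES
        for name, caps in inventory.items()
        if caps & PROHIBITED_CATEGORIES
    }

# Hypothetical deployment inventory, tagged by declared capabilities:
inventory = {
    "loyalty-engine": {"recommendation", "social scoring"},
    "hr-screening": {"emotion recognition in workplace/education"},
    "chatbot": {"customer support"},
}
print(flag_deployments(inventory))
```

A real audit would of course rest on legal analysis of each system's purpose and context, but maintaining a machine-readable inventory like this makes the quarterly review repeatable.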
Common Questions
How does this regulation apply to our AI deployment?
Application depends on your AI system's risk classification, deployment location, and data processing activities. Consult with legal experts for specific guidance.
What are the compliance deadlines and penalties?
Deadlines vary by jurisdiction and AI system type. Under the EU AI Act, the prohibited-practice provisions began applying on 2 February 2025, with other obligations phasing in through 2026-2027. Non-compliance can result in significant fines (up to EUR 35 million or 7% of global annual turnover for prohibited practices), operational restrictions, or system bans.
More Questions
How can we maintain ongoing compliance?
Implement robust governance frameworks, regular audits, and documentation practices, and stay updated on regulatory changes through expert advisory.
Related Terms
- AI Regulation: The laws, rules, standards, and government policies that govern the development, deployment, and use of artificial intelligence systems. It encompasses mandatory legal requirements, voluntary guidelines, industry standards, and regulatory frameworks designed to manage AI risks while enabling innovation and economic benefit.
- High-Risk AI Systems: AI systems listed in Annex III of the EU AI Act requiring strict compliance, including biometric identification, critical infrastructure, education/employment systems, law enforcement, migration/border control, and justice administration. Must meet requirements for data governance, documentation, transparency, human oversight, and accuracy before market placement.
- EU AI Office: Dedicated enforcement body within the European Commission responsible for supervising general-purpose AI models, coordinating national AI authorities, maintaining the AI Pact, and ensuring consistent AI Act implementation across member states. Established in 2024 with powers to conduct investigations and impose penalties.
- General-Purpose AI (GPAI) Obligations: Specific EU AI Act requirements for foundation models and general-purpose AI systems, including technical documentation, copyright compliance, detailed training content summaries, and additional obligations for systemic-risk models (>10^25 FLOPs). Providers must publish model cards and cooperate with evaluations.
- Conformity Assessment: Mandatory pre-market evaluation procedure for high-risk AI systems under the EU AI Act, involving technical documentation review, quality management verification, and compliance testing against harmonized standards. Conducted by notified bodies or through internal controls depending on AI system type and intended use.
Need help implementing AI Act Prohibited Practices?
Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how AI Act prohibited practices fit into your AI roadmap.