What is New York City AI Employment Law (Local Law 144)?
The first US law regulating AI in employment decisions, Local Law 144 requires annual bias audits of automated employment decision tools (AEDTs), notification of candidates about AI use, and publication of audit results. It applies to AI that screens resumes, ranks candidates, or makes hiring or promotion recommendations for positions in New York City. Enforcement began July 5, 2023, with civil penalties for violations.
NYC Local Law 144 established the regulatory template that other US and international jurisdictions are now replicating for governing AI in hiring decisions. Companies that proactively build compliant AI recruitment processes avoid penalties of $500 for a first violation and up to $1,500 for each subsequent one (each day a non-compliant tool is used counts as a separate violation), while future-proofing against expanding regulatory coverage. The audit and transparency requirements also improve hiring model quality by surfacing discriminatory patterns that damage employer brand and invite litigation.
- Independent bias audit required within one year before use
- Notice to candidates at least 10 business days before AI screening
- Publication of audit methodology, results, and data on employer website
- Alternative selection process must be available upon request
- Specific testing for bias across race, ethnicity, sex categories
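To illustrate the timeline planning the notice requirement implies, the sketch below computes the latest date a candidate notice can be sent before a scheduled screening. The function name is invented for illustration, and the calendar logic is a simplification: it skips weekends only and ignores public holidays.

```python
from datetime import date, timedelta

def latest_notice_date(screening_date: date, business_days: int = 10) -> date:
    """Walk backwards from the screening date, counting only weekdays,
    to find the latest date a candidate notice can be sent."""
    d = screening_date
    counted = 0
    while counted < business_days:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4; skip weekends
            counted += 1
    return d

# Example: screening scheduled for Friday 2024-06-14
print(latest_notice_date(date(2024, 6, 14)))  # -> 2024-05-31
```

In practice, recruitment workflows would schedule the notice well before this latest date to leave room for delivery delays and candidate requests for an alternative selection process.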
- Annual bias audits by independent third parties cost $15,000-50,000 depending on tool complexity, and must examine impact ratios across race, ethnicity, and gender categories.
- Candidate notification requirements mandate disclosure 10 business days before automated screening, affecting recruitment timeline planning and process sequencing.
- Even companies outside New York City face compliance obligations when screening candidates residing in NYC, extending jurisdictional reach beyond headquarter location.
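The impact ratio metric the audits examine can be sketched as a simple calculation: each category's selection rate divided by the highest category's selection rate. The snippet below is a minimal illustration with invented category names and data; a real audit must follow the DCWP rules, including intersectional categories and handling of unknown demographic data.

```python
from collections import Counter

def impact_ratios(candidates):
    """candidates: list of (category, selected_bool) pairs.
    Returns {category: impact_ratio}, where each category's selection
    rate is divided by the highest category's selection rate."""
    totals, selected = Counter(), Counter()
    for category, was_selected in candidates:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    rates = {c: selected[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: rates[c] / top for c in rates}

# Example with made-up screening outcomes:
# group_a selected at 40%, group_b at 25%
data = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 25 + [("group_b", False)] * 75
)
print(impact_ratios(data))  # group_b ratio: 0.25 / 0.40 = 0.625
```

An impact ratio well below 1.0 for a category is the kind of discriminatory pattern an audit is designed to surface.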
Common Questions
How does this regulation apply to our AI deployment?
Local Law 144 applies when a computational tool substantially assists or replaces discretionary decision-making in hiring or promotion for positions located in New York City. Consult legal counsel for guidance on whether a specific tool qualifies as an AEDT.
What are the compliance deadlines and penalties?
The law took effect January 1, 2023, with enforcement beginning July 5, 2023. Civil penalties run from $500 for a first violation up to $1,500 for each subsequent violation, and each day a non-compliant AEDT is used counts as a separate violation.
More Questions
How should we prepare for compliance?
Implement robust governance frameworks, regular audits, and documentation practices, and stay current on regulatory changes through expert advisory.
Related Terms
- AI Regulation: The laws, rules, standards, and government policies that govern the development, deployment, and use of artificial intelligence systems. It encompasses mandatory legal requirements, voluntary guidelines, industry standards, and regulatory frameworks designed to manage AI risks while enabling innovation and economic benefit.
- High-Risk AI Systems: AI systems listed in Annex III of the EU AI Act requiring strict compliance, including biometric identification, critical infrastructure, education/employment systems, law enforcement, migration/border control, and justice administration. Must meet requirements for data governance, documentation, transparency, human oversight, and accuracy before market placement.
- Prohibited AI Practices: AI applications banned under EU AI Act Article 5, including subliminal manipulation, exploitation of vulnerabilities, social scoring by authorities, real-time remote biometric identification in public spaces (with narrow exceptions), and emotion recognition in workplace/education. Violations are subject to maximum penalties.
- EU AI Office: Dedicated enforcement body within the European Commission responsible for supervising general-purpose AI models, coordinating national AI authorities, maintaining the AI Pact, and ensuring consistent AI Act implementation across member states. Established 2024 with powers to conduct investigations and impose penalties.
- GPAI Obligations: Specific EU AI Act requirements for foundation models and general-purpose AI systems, including technical documentation, copyright compliance, detailed training content summaries, and additional obligations for systemic risk models (>10^25 FLOPs). Providers must publish model cards and cooperate with evaluations.
Need help implementing New York City AI Employment Law (Local Law 144)?
Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how New York City AI Employment Law (Local Law 144) fits into your AI roadmap.