Executive Summary: AI-powered hiring tools—from résumé screening algorithms to video interview analysis platforms—have become ubiquitous in modern talent acquisition. However, this efficiency comes with significant regulatory obligations. NYC Local Law 144 requires annual bias audits and candidate notification for automated employment decision tools (AEDTs). EEOC guidance holds employers liable for discriminatory outcomes even when AI vendors claim algorithmic neutrality. Illinois mandates disclosure and consent for AI video interview analysis. Maryland requires applicant consent before facial recognition is used in job interviews. The EU AI Act classifies recruitment AI as high-risk, triggering comprehensive compliance requirements. This guide provides a practical framework for HR and legal teams to navigate hiring AI regulations while maintaining fair, efficient recruitment processes.
Why Hiring AI Is Heavily Regulated
High-Stakes Decisions
Employment decisions have profound impacts:
- Economic: Access to income, benefits, career advancement
- Social: Professional identity, social status, community integration
- Legal: Employment is a protected right under civil rights statutes (Title VII, ADEA, ADA)
- Historical: Long history of employment discrimination creates heightened scrutiny of hiring practices
Documented Discrimination Risk
Research shows AI hiring tools can perpetuate bias:
- Amazon's recruiting algorithm (disclosed 2018): Downgraded résumés containing "women's" (e.g., women's chess club), trained on historical male-dominated applicant patterns
- HireVue controversy (2019-2021): Video analysis tools criticized for pseudoscientific claims, facial analysis bias, and lack of evidence for predicting job performance
- LinkedIn recruiter tools (2021 study): Algorithmic recommendations for "software developer" roles disproportionately suggested male candidates
- Résumé parsing errors: OCR and NLP systems systematically misparsing names, education, and experience for non-Western formats
Vendor Liability Doesn't Eliminate Employer Liability: Even if you purchase AI hiring tools from third-party vendors, you (the employer) remain liable for discriminatory outcomes under Title VII, ADEA, ADA, and state civil rights laws. "The vendor said it was unbiased" is not a defense.
NYC Local Law 144: Automated Employment Decision Tools
What Qualifies as an AEDT?
Definition:
- Computational process: Derived from machine learning, statistical modeling, data analytics, or AI
- Substantially assists or replaces: The tool is used to make employment decisions, not just facilitate them
- Decisional use: Screening candidates for employment OR evaluating employees for promotion
Examples of AEDTs:
- Résumé screening algorithms that rank or filter candidates
- Video interview platforms that analyze facial expressions, word choice, or voice tone
- Chatbots that pre-screen candidates based on application responses
- Personality or skills assessments scored algorithmically
- Game-based assessments analyzed by ML models
Not AEDTs (excluded from law):
- Applicant tracking systems (ATS) that store and organize applications without automated scoring
- Background check databases (unless algorithmically scored)
- Scheduling tools for interviews
- Job description generators
- Tools that only assist with sourcing (finding candidates) but don't evaluate them
Bias Audit Requirements
Annual Independent Audit:
- Must be conducted no more than one year before use
- Performed by independent auditor (not affiliated with employer or vendor)
- Published on employer's public website
Required Metrics:
- Selection Rate: % of candidates selected, broken down by race/ethnicity and sex
- Impact Ratio: Selection rate for each category ÷ selection rate of the category with the highest rate
- 80% Threshold: An impact ratio below 0.80 signals possible disparate impact under the EEOC's four-fifths rule; Local Law 144 requires publishing the ratios themselves, so treat 0.80 as a red flag rather than a pass/fail line (see the sketch after the category list below)
Demographic Categories (mirroring EEO-1 reporting):
- Sex: Male, Female
- Race/Ethnicity: Hispanic or Latino, White, Black or African American, Native Hawaiian or Other Pacific Islander, Asian, American Indian or Alaska Native, Two or More Races
- DCWP's final rules also require intersectional ratios (e.g., selection rates for Black or African American women)
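To make the arithmetic concrete, here is a minimal Python sketch of the selection-rate and impact-ratio calculation. The applicant counts are invented for illustration; a real audit uses your historical hiring data and must be performed by an independent auditor.

```python
# A minimal sketch of the selection-rate / impact-ratio math, assuming
# hypothetical applicant counts. A real audit uses historical data and
# must be performed by an independent auditor.

counts = {
    # category: (applicants, selected)
    "Male":   (400, 120),
    "Female": (380,  76),
}

selection_rates = {g: sel / apps for g, (apps, sel) in counts.items()}
top_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / top_rate
    flag = "REVIEW (below 0.80)" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

With these numbers, the Female selection rate (20%) divided by the Male rate (30%) yields an impact ratio of 0.67, which falls below the 0.80 flag and would warrant investigation.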
Candidate Notice Requirements
When: At least 10 business days before AEDT is used
Where: Posted on careers page, job postings, or sent to candidates
What to Disclose:
- Statement that an AEDT will be used in hiring or promotion decisions
- Job qualifications and characteristics the AEDT will assess (e.g., "communication skills," "leadership potential," "cultural fit")
- Data sources analyzed by the AEDT (résumé, application, public social media, video interview)
Alternative Process:
- The notice must include instructions for requesting an alternative selection process or a reasonable accommodation
- DCWP has clarified that Local Law 144 itself does not compel granting an alternative, but the ADA independently requires reasonable accommodations for disability (e.g., human review, a different assessment format)
Data Retention and Access
Employers must retain (for 3 years):
- Bias audit results
- AEDT vendor documentation
- Data on candidates and employees evaluated by AEDT (aggregated by demographic category)
Candidates can request:
- Information about their personal data collected and analyzed
- Results of bias audit (employers must provide summary)
Penalties
Civil fines:
- First violation: Up to $500 per violation
- Subsequent violations: Up to $1,500 per violation
- Each day of non-compliance = separate violation
Enhanced penalties:
- Failure to provide reasonable accommodation when requested
- Retaliation against candidates who request accommodations
Compliance Gap: As of early 2024, fewer than 15% of NYC employers using AEDTs had published bias audit results, despite enforcement beginning July 5, 2023. Many employers are unaware the law applies to them.
EEOC Guidance on Hiring AI
Legal Framework: Title VII, ADEA, ADA
Discrimination Theories:
- Disparate Treatment: Intentional discrimination (rarely applicable to AI)
- Disparate Impact: Neutral practice that disproportionately excludes protected groups and isn't job-related/business necessity
- Failure to Accommodate: ADA requires reasonable accommodations for disabled applicants, including alternatives to AI assessments
Employer Liability:
- Employers are liable for discriminatory outcomes of AI tools even if they don't understand how the algorithm works
- "Algorithmic black box" is not a defense
- Vendor claims of "bias-free AI" don't shield employers from liability
May 2023 EEOC Guidance
Key Principles:
1. Employers Responsible for Third-Party Tools:
- You're liable for discrimination even if AI is purchased from vendors
- Vendor indemnification clauses may protect financially but don't eliminate EEOC jurisdiction
2. Adverse Impact Analysis Required:
- Regularly test hiring outcomes by race, sex, age, disability status
- If selection rates for protected groups are substantially lower (< 80% of highest group), investigate and justify
3. Job-Relatedness Required:
- AI criteria must be related to job performance
- "Personality traits" or "cultural fit" often lack validation
- Employers must show business necessity if disparate impact exists
4. ADA Accommodations:
- Disabled applicants can request alternative assessments if AI tool creates barrier
- Screen reader compatibility, extended time, alternative formats required
5. Algorithmic Redlining:
- Using ZIP code, school attended, or other proxies for race/ethnicity can violate Title VII under a disparate impact theory
- Correlational proxies create liability even if race isn't explicitly used (a proxy-screening sketch follows)
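A rough way to operationalize proxy screening is to measure the statistical association between each input feature and a protected attribute. The sketch below uses Cramér's V; the file name, column names, and the 0.3 cutoff are illustrative assumptions, not regulatory standards, and a real proxy analysis should involve counsel and validation experts.

```python
# A rough proxy-variable screen: Cramér's V association between each input
# feature and a protected attribute. File, column names, and the 0.3
# cutoff are illustrative assumptions, not regulatory standards.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association between two categorical variables, scaled 0 (none) to 1."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * min_dim)))

df = pd.read_csv("applicants.csv")  # hypothetical audit extract
for feature in ["zip_code", "college", "employment_gap"]:
    v = cramers_v(df[feature], df["race_ethnicity"])
    if v > 0.3:  # arbitrary screening threshold
        print(f"{feature}: Cramér's V = {v:.2f} -> potential proxy, review before use")
```

A high association does not prove discrimination, but it tells you which features deserve scrutiny in the adverse impact analysis described below.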
Practical EEOC Compliance Steps
Before Deployment:
- Validation Study: Demonstrate AI tool predicts job performance (criterion validity)
- Adverse Impact Testing: Test on historical data to identify potential disparate impact
- Alternatives Analysis: Consider less discriminatory selection methods achieving similar outcomes
During Use:
- Ongoing Monitoring: Track selection rates by protected categories quarterly
- Human Review: Ensure AI assists rather than replaces human decision-making
- Accommodation Process: Establish clear procedure for candidates to request alternatives
Documentation:
- Validation Evidence: Maintain records of job analysis, criterion-related validity studies
- Adverse Impact Data: Quarterly or annual reports on selection rates by demographic group
- Vendor Due Diligence: Document vendor claims, request validation studies, review audit results
Algorithmic Redlining: Using AI features like ZIP code, college attended, or gaps in employment history can create disparate impact even when race/sex aren't explicitly used. These "proxy variables" correlate with protected characteristics and trigger EEOC scrutiny.
State-Specific Hiring AI Requirements
Illinois: Artificial Intelligence Video Interview Act (820 ILCS 42, 2020)
Applicability: Employers using AI to analyze video interviews
Requirements:
- Notice: Inform applicants that AI will analyze their video interview and explain how AI evaluates them
- Consent: Obtain explicit consent before conducting AI-analyzed video interview
- Limit Sharing: May not share video with third parties except AI vendor providing analysis
- Destruction: Must delete videos within 30 days of applicant request
- Alternative: Must provide alternative assessment method upon request
2024 Amendment to the Illinois Human Rights Act (HB 3773, signed 2024, effective January 1, 2026):
- Prohibits employers from using AI that has the effect of discriminating against applicants or employees on the basis of protected classes
- Prohibits using ZIP code as a proxy for protected classes
- Requires notice when AI is used for recruitment, hiring, promotion, discipline, discharge, or other terms of employment
- Broader than the Video Interview Act: it covers all employment AI, not just video analysis
Maryland: Facial Recognition in Job Interviews (HB 1202, 2020)
Prohibition:
- Employers may not use a facial recognition service to create a facial template of an applicant during an interview unless the applicant consents
Required Consent Waiver (signed by the applicant), stating:
- The applicant's name and the date of the interview
- That facial recognition will be used during the interview
- That the applicant consents to its use
Scope Note:
- The statute targets interviews specifically; broader workplace use of facial recognition remains subject to general anti-discrimination and biometric privacy principles
California: Proposed AB 331 (Automated Decision Systems)
Status: Not enacted (introduced 2023, held in committee); its framework still previews the likely direction of California regulation
Scope: Employment, housing, credit, education, healthcare automated decision systems
Requirements:
- Impact assessment before deployment
- Annual updates to assessment
- Public disclosure of assessment summary
- Risk mitigation plan for identified biases
Impact Assessments Must Include:
- Purpose and intended benefits
- Training data characteristics
- Foreseeable risks (bias, discrimination, privacy)
- Steps taken to mitigate risks
- Post-deployment monitoring plan
Other State Developments
New Jersey:
- Proposed legislation requiring notice when AI is used in hiring
- Candidate right to human review of AI decisions
Washington:
- Task force studying AI in employment (2024)
- Likely to propose legislation in 2025-2026
Colorado:
- Colorado AI Act (SB 24-205, 2024; effective February 2026) regulates developers and deployers of high-risk AI systems, explicitly including employment decisions
- Deployers must use reasonable care to prevent algorithmic discrimination, conduct impact assessments, and notify individuals subject to consequential AI-driven decisions
- Note: the Colorado Privacy Act largely exempts employment data, so the AI Act, not the privacy law, is the operative statute for hiring
EU AI Act: High-Risk Recruitment Systems
Classification as High-Risk
Annex III, Category 4: AI systems used for:
- Recruitment and selection of natural persons
- Making decisions affecting terms of employment
- Allocation of tasks based on individual behavior or personal traits
- Monitoring and evaluating performance
- Promoting or terminating employment relationships
Compliance Obligations
Article 10: Data Governance:
- Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Must examine training data for bias
- Data collection must comply with GDPR (lawful basis, minimization, purpose limitation)
Article 13: Transparency:
- Clear information about AI system's capabilities and limitations
- Instructions for use by HR professionals
- Known inaccuracies and failure modes
Article 14: Human Oversight:
- Humans must be able to override AI decisions
- System must alert to high-risk situations
- Training for HR staff using AI tools
Article 15: Accuracy and Robustness:
- Testing for bias across demographic groups
- Regular updates to maintain accuracy
- Resilience to adversarial attacks (résumé optimization tactics)
Post-Market Monitoring:
- Ongoing performance tracking
- Incident reporting to authorities
- Corrective action when bias detected
GDPR Intersections
Article 22: Automated Decision-Making:
- Individuals have right not to be subject to solely automated decisions with legal/significant effects
- Must provide human review upon request
- Must explain logic of automated decision
Special Category Data (Article 9):
- Cannot process sensitive data (race, ethnicity, health) unless explicit consent or legal basis
- Inferred characteristics (e.g., inferring disability from résumé gaps) also restricted
Practical Compliance Framework
Step 1: Inventory Your AI Hiring Tools
What to document:
- Tool name and vendor
- Stage of hiring process (screening, interviewing, assessment)
- Data inputs (résumé, video, application responses, social media)
- Decision output (pass/fail, rank score, recommendation)
- Deployment date and geographic scope
Classify by Regulation:
- NYC Local Law 144: Used in NYC for screening or promotion?
- EEOC jurisdiction: Used anywhere in US?
- State-specific: Illinois (video AI), Maryland (facial recognition), California (if enacted)
- EU AI Act: Used in the EU or for EU-based candidates? (A minimal inventory sketch follows this list.)
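A lightweight way to keep this inventory auditable is a structured record per tool. The sketch below is a minimal Python schema with a rough jurisdiction check; the field names and classification logic are assumptions to adapt to your environment, and it is a bookkeeping aid, not legal advice.

```python
# A minimal sketch of a per-tool inventory record with a rough jurisdiction
# check. Field names and classification logic are assumptions to adapt.
from dataclasses import dataclass

@dataclass
class HiringAITool:
    name: str
    vendor: str
    stage: str               # "screening" | "interviewing" | "assessment"
    data_inputs: list[str]   # e.g., ["resume", "video"]
    output: str              # e.g., "rank score", "pass/fail"
    used_in_nyc: bool = False
    used_in_us: bool = False
    used_in_il: bool = False
    used_in_eu: bool = False

    def likely_regulations(self) -> list[str]:
        hits = []
        if self.used_in_nyc:
            hits.append("NYC Local Law 144 (bias audit + notice)")
        if self.used_in_us:
            hits.append("EEOC: Title VII / ADEA / ADA")
        if self.used_in_il and "video" in self.data_inputs:
            hits.append("IL AI Video Interview Act (notice + consent)")
        if self.used_in_eu:
            hits.append("EU AI Act high-risk + GDPR Art. 22")
        return hits

tool = HiringAITool("ResumeRanker", "ExampleVendor Inc.", "screening",
                    ["resume"], "rank score", used_in_nyc=True, used_in_us=True)
print(tool.likely_regulations())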
Step 2: Conduct Bias Audit
Data Collection:
- Historical data: 1+ years of applications and hiring outcomes
- Demographic data: Race/ethnicity, sex (via voluntary self-identification or probabilistic inference)
- System outputs: Scores, rankings, pass/fail decisions
Analysis:
- Calculate selection rates by demographic category
- Compute impact ratios (category rate / highest rate)
- Flag categories with impact ratio < 0.80 (80% threshold)
- Statistical significance testing (chi-square or Fisher's exact test; a sketch follows this list)
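The sketch below pairs the impact ratio with significance tests on a hypothetical 2×2 selection table (the same invented counts as the earlier example). Fisher's exact test is preferred for small samples; chi-square works for larger ones.

```python
# A minimal sketch pairing the impact ratio with significance tests on a
# hypothetical 2x2 selection table (same invented counts as earlier).
from scipy.stats import chi2_contingency, fisher_exact

#        selected  not selected
table = [[120, 280],   # Male  (highest selection rate)
         [ 76, 304]]   # Female

odds_ratio, p_fisher = fisher_exact(table)             # exact test, small samples
chi2, p_chi2, dof, expected = chi2_contingency(table)  # large-sample test

rate_m, rate_f = 120 / 400, 76 / 380
print(f"impact ratio: {rate_f / rate_m:.2f}")          # 0.67 -> below the 0.80 flag
print(f"Fisher exact p = {p_fisher:.4f}, chi-square p = {p_chi2:.4f}")
```

Report both the practical measure (impact ratio) and the statistical one (p-value): a disparity can be statistically significant yet practically small, or vice versa, and auditors will want to see both.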
Engage Independent Auditor:
- Not affiliated with employer or vendor
- Technical expertise in ML fairness
- Legal expertise in employment discrimination
Step 3: Candidate Notice and Transparency
Notice Content (based on NYC requirements, applicable broadly):
- "We use AI-powered tools to evaluate candidates for [position]"
- "The AI assesses the following qualities: [communication skills, problem-solving, leadership]"
- "The AI analyzes data from: [your résumé, video interview responses, skills assessment]"
- "You have the right to request an alternative evaluation method or accommodation"
Posting:
- Careers page (persistent notice for all roles using AI)
- Job-specific postings (for specific tools used in particular roles)
- Application confirmation email (direct notice to each candidate)
Accommodation Process:
- Clear instructions for requesting alternative (email, phone, form)
- Designated contact person (HR representative)
- Response timeline (e.g., within 5 business days)
- Alternative options (human-only review, different assessment format, extended time)
Step 4: Vendor Due Diligence
Questions for AI Hiring Vendors:
- Validation: "What evidence do you have that your tool predicts job performance?" (Request validation studies)
- Bias Testing: "Have you conducted bias audits? Can we see the results?" (Request demographic impact data)
- Explainability: "Can you explain how your algorithm makes decisions?" (GDPR Article 22 requirement)
- Training Data: "What data was your model trained on? Are there known biases?" (EU AI Act Article 10)
- Updates: "How often is the model retrained? Will we be notified of changes?" (Maintain compliance continuity)
- Liability: "Will you indemnify us if your tool creates discriminatory outcomes?" (Contract negotiation point)
Red Flags:
- Vendor refuses to provide validation evidence
- Claims algorithm is "100% bias-free" (impossible)
- Cannot explain how decisions are made
- No demographic impact data available
- Exaggerated claims about predicting performance or "culture fit"
Step 5: Human-in-the-Loop Safeguards
AI-Assisted (Not Automated) Decisions:
- AI provides scores/rankings, but humans make final decisions
- Hiring managers trained on AI limitations and bias risks
- Authority to override AI recommendations with documented justification
Review Checkpoints:
- Pre-Screen Review: Human reviews AI-rejected candidates (e.g., all borderline scores plus a 10% random sample of clear rejections; a sampling sketch follows this list)
- Final Review: Hiring manager assesses all AI-recommended candidates
- Audit Review: Annual review of AI decisions by HR compliance team
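One way to implement the pre-screen checkpoint is to queue every borderline score plus a random sample of clear rejections for human review. In the sketch below, the file, column names, and the 0.45 to 0.55 borderline band are illustrative assumptions.

```python
# One way to build the pre-screen review queue: all borderline scores plus
# a 10% random sample of clear AI rejections. File, column names, and the
# 0.45-0.55 borderline band are illustrative assumptions.
import pandas as pd

df = pd.read_csv("ai_screening_results.csv")        # hypothetical extract
rejected = df[df["ai_decision"] == "reject"]

borderline = rejected[rejected["ai_score"].between(0.45, 0.55)]
clear = rejected.drop(borderline.index)
sampled = clear.sample(frac=0.10, random_state=42)  # fixed seed: reproducible audit

review_queue = pd.concat([borderline, sampled])
review_queue.to_csv("human_review_queue.csv", index=False)
print(f"{len(review_queue)} of {len(rejected)} AI rejections queued for human review")
```

The fixed random seed matters for compliance: it lets an auditor reproduce exactly which rejections were selected for human review in any given period.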
Training for HR Staff:
- How the AI works (inputs, outputs, limitations)
- How to interpret AI scores (confidence intervals, borderline cases)
- When to override AI (red flags, inconsistencies, accommodations)
- Legal obligations (EEOC, NYC, state laws)
Step 6: Ongoing Monitoring
Quarterly Metrics:
- Selection rates by race/ethnicity, sex, age (if data available)
- AI score distributions by demographic group
- Override rate (how often humans reject AI recommendations)
- Candidate complaints and accommodation requests
Annual Review:
- Update bias audit (NYC requirement)
- Validation study refresh (EEOC best practice)
- Vendor contract renewal (opportunity to renegotiate based on performance)
- Regulatory landscape check (new laws, EEOC guidance, case law)
Incident Response:
- If bias detected: immediately pause AI tool, investigate root cause, implement remediation
- If complaint filed: preserve all records, engage legal counsel, cooperate with EEOC/DOL investigation
- If new regulation enacted: assess applicability, implement compliance measures within grace period
False Confidence: High-accuracy AI (e.g., 95% accuracy) can still produce discriminatory outcomes if accuracy differs by demographic group (e.g., 98% for white candidates, 90% for Black candidates). Always disaggregate metrics by protected categories.
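To make the callout above concrete, here is a minimal sketch of disaggregated evaluation with invented labels and predictions: the overall accuracy looks acceptable while one group's error rate is far worse.

```python
# A minimal sketch of disaggregated evaluation with invented labels and
# predictions: overall accuracy looks fine while one group fares far worse.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A"] * 6 + ["B"] * 6,
    "y_true": [1, 1, 0, 0, 1, 0,  1, 1, 0, 0, 1, 0],
    "y_pred": [1, 1, 0, 0, 1, 0,  1, 0, 0, 1, 0, 0],
})

df["correct"] = df["y_true"] == df["y_pred"]
print(f"overall accuracy: {df['correct'].mean():.0%}")   # 75%
print(df.groupby("group")["correct"].mean())             # A: 100%, B: 50%
```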
Frequently Asked Questions
Do I need bias audits if I use AI only for sourcing (finding candidates), not evaluation?
No, under NYC Local Law 144. The law applies only to AEDTs used to "substantially assist or replace discretionary decision-making" in screening or promotion. Sourcing tools (Boolean search, recommendation engines that surface candidates) don't make evaluative decisions, so they're excluded.
However, EEOC principles still apply: if your sourcing tool systematically under-represents protected groups (e.g., recommends mostly male candidates for engineering roles), you may face disparate impact liability. Monitor sourcing tool outputs by demographic diversity.
Can I require candidates to complete AI assessments as a condition of consideration?
Legally, yes, but with important caveats:
- ADA Accommodations: Must provide alternative if candidate requests accommodation due to disability
- NYC Law: Notices must explain how to request an alternative process or accommodation (not limited to disability); DCWP has stated the law does not itself compel granting one, but blanket refusals invite scrutiny
- Risk: Mandating AI assessments increases legal risk if assessments produce disparate impact—courts more critical of mandatory barriers
Best Practice: Make AI assessments optional or offer alternatives proactively to reduce friction and legal risk.
What if my AI vendor won't provide bias audit data or validation studies?
This is a significant red flag. Options:
Short-term:
- Commission your own audit using historical data from your use of the tool (NYC permits employer-commissioned audits, provided the auditor is independent)
- Request contractual indemnification for discrimination claims
- Limit AI tool to "advisory" role (humans make final decisions)
Long-term:
- Switch to vendor with transparent validation and bias testing
- Build internal tools with full control over fairness testing
- Use AI only for narrow, low-risk tasks (e.g., scheduling, not screening)
Why Vendors Resist:
- Validation studies may show tool doesn't predict performance
- Bias audits may reveal disparate impact
- Proprietary concerns (trade secret claims)
None of these reasons excuse your compliance obligations—you're liable regardless of vendor cooperation.
How do I collect demographic data for bias audits without violating privacy laws?
US (EEOC approach):
- Voluntary self-identification (separate from application, clearly marked optional)
- Use for compliance purposes only (not hiring decisions)
- Store separately from application data
- Include "prefer not to answer" option
EU (GDPR approach):
- Special category data (race, ethnicity) requires explicit consent or legal basis (Article 9 exemption for compliance with legal obligations, Article 89 for statistical purposes)
- Data minimization: collect only categories required by law
- Anonymization: aggregate data for bias testing (no individual identification)
- Alternative: probabilistic inference such as Bayesian Improved Surname Geocoding (BISG) for race/ethnicity where consent can't be obtained (a toy sketch follows this list)
Both:
- Clear notice explaining purpose (bias testing, not hiring decisions)
- Retention limits (3-7 years depending on jurisdiction)
- Access controls (only compliance team, not hiring managers)
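For illustration only, here is a toy version of the BISG posterior, combining a surname prior with a geography prior under a conditional-independence assumption. All probabilities are invented; production BISG uses U.S. Census surname tables and tract-level demographics, and inferred race/ethnicity belongs in aggregate bias testing only, never in individual hiring decisions.

```python
# A toy BISG (Bayesian Improved Surname Geocoding) posterior. All numbers
# are invented; production BISG uses Census surname tables and tract-level
# demographics. Use inferred race only for aggregate bias testing.
p_race_given_surname = {"white": 0.60, "black": 0.25, "hispanic": 0.10, "asian": 0.05}
p_race_given_geo     = {"white": 0.30, "black": 0.55, "hispanic": 0.10, "asian": 0.05}
p_race_population    = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.06}

# Assuming surname and geography are independent given race:
#   p(race | surname, geo)  is proportional to
#   p(race | surname) * p(race | geo) / p(race)
unnormalized = {r: p_race_given_surname[r] * p_race_given_geo[r] / p_race_population[r]
                for r in p_race_population}
total = sum(unnormalized.values())
posterior = {r: round(v / total, 3) for r, v in unnormalized.items()}
print(posterior)  # e.g., {'white': 0.207, 'black': 0.729, ...}
```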
Do AI hiring tools violate GDPR Article 22 (automated decision-making rights)?
Potentially, yes. Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects.
When Article 22 Applies:
- Decision is "solely automated" (no meaningful human involvement)
- Decision has "legal or similarly significant effects" (employment qualifies)
How to Comply:
- Human involvement: Ensure humans review AI recommendations and make final decisions (not rubber-stamping)
- Right to explanation: Provide meaningful information about logic of automated decision
- Right to contest: Allow candidates to challenge AI decisions and request human review
- Legal basis: If using automated decisions, establish lawful basis (e.g., contract necessity, explicit consent)
Practical Implementation:
- Hiring managers must substantively review AI outputs
- Candidates can request explanation of AI scoring
- Appeals process for candidates who believe they were unfairly rejected
What happens if our bias audit reveals disparate impact?
You have legal and ethical obligations to address it:
Immediate Steps:
- Pause or limit tool use: Stop using AI for final decisions; use only in advisory capacity
- Root cause analysis: Investigate why disparate impact exists (data bias, feature selection, threshold settings)
- Legal counsel: Consult employment attorney on exposure and remediation strategies
Remediation Options:
- Technical fixes: Retrain the model with fairness constraints or remove biased features; note that adjusting scores or thresholds by protected group can itself violate Title VII's ban on race-norming (42 U.S.C. § 2000e-2(l)), so involve counsel before any group-specific fix
- Procedural fixes: Human review of all AI rejections for underrepresented groups, appeals process
- Alternative tools: Switch to less discriminatory selection methods
Disclosure Considerations:
- NYC requires publishing audit results (including disparate impact findings)
- EEOC doesn't require proactive disclosure, but results discoverable in litigation
- Proactive disclosure with remediation plan may demonstrate good faith
What NOT to Do:
- Continue using tool without changes (ongoing liability)
- Hide audit results (obstruction, sanctions risk)
- Retaliate against candidates who complained (separate violation)
Can we use AI to assess "culture fit" or "soft skills"?
Legally complex and high-risk:
Problems with "Culture Fit":
- Vague, subjective criterion hard to validate
- Often proxy for homogeneity ("people like us")
- Disproportionately screens out candidates from underrepresented backgrounds
- Difficult to show job-relatedness (EEOC requirement)
"Soft Skills" Challenges:
- Communication, leadership, teamwork are legitimate job requirements
- But AI measurement is unreliable (e.g., video analysis of "enthusiasm" or "confidence")
- Risk of bias: AI trained on narrow sample may penalize non-dominant communication styles
Best Practices:
- Define soft skills concretely (e.g., "presents data clearly to non-technical stakeholders")
- Validate that AI measurement correlates with job performance
- Use structured interviews (humans assessing soft skills with rubrics) instead of AI
- If using AI, conduct extra scrutiny in bias audits
Key Takeaways
- Multiple Overlapping Laws: NYC Local Law 144, EEOC guidance, state-specific laws (IL, MD, CA), EU AI Act, and GDPR all regulate hiring AI. Compliance requires understanding which laws apply to your organization based on location and tool type.
- Employer Liability for Vendor Tools: You're legally responsible for discriminatory outcomes even if AI is purchased from third-party vendors. Vendor indemnification doesn't eliminate EEOC jurisdiction. Conduct your own due diligence and bias testing.
- Bias Audits Are Mandatory (NYC): If you use AEDTs in NYC, you must conduct annual independent bias audits, publish results publicly, and notify candidates at least 10 business days before use. Non-compliance can bring civil fines of up to $1,500 per violation, with each day of non-compliance counted separately.
- Candidate Rights Are Expanding: Candidates can request alternative selection processes (NYC), must give consent before AI video analysis (IL), and have a right to human review of solely automated decisions (GDPR Article 22). Establish clear accommodation procedures.
- 80% Threshold Signals Risk: If any demographic group's selection rate is less than 80% of the highest group's rate, you have potential disparate impact requiring investigation and justification under EEOC's four-fifths rule.
- Human-in-the-Loop Is Essential: AI should assist, not replace, human decision-making. Train hiring managers on AI limitations, ensure meaningful review of AI outputs, and empower humans to override AI recommendations.
- Ongoing Monitoring Required: Annual bias audits (NYC) and continuous monitoring (EEOC best practice, EU AI Act requirement) are necessary. Quarterly metrics on selection rates by demographic group help detect issues early and demonstrate diligence.
Citations
- NYC Department of Consumer and Worker Protection. (2023). Automated Employment Decision Tools (Local Law 144) Implementation.
- U.S. Equal Employment Opportunity Commission. (2023). The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.
- Illinois General Assembly. (2020). Artificial Intelligence Video Interview Act (820 ILCS 42).
- European Commission. (2024). EU AI Act: High-Risk AI Systems (Regulation 2024/1689, Annex III).
- Raghavan, M., et al. (2020). Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. FAT* '20.
Need help ensuring your hiring AI complies with employment regulations? Our team provides bias audits, vendor due diligence, policy development, and ongoing compliance monitoring to keep your recruitment technology fair and lawful.
