AI Governance & Risk Management · Guide · Advanced

US AI Regulations: State-by-State Guide

January 5, 2025 · 13 min read · Pertama Partners
For: CTO/CIO

Navigate fragmented US AI regulation across 15+ state laws governing automated decisions, bias audits, and transparency.


Key Takeaways

  1. The US lacks a comprehensive federal AI law, leaving organizations to navigate a patchwork of state, local, and sector-specific rules.
  2. California, Colorado, and Virginia privacy laws introduce automated decision-making rights, profiling opt-outs, and impact assessment obligations.
  3. NYC Local Law 144 effectively sets a national benchmark for bias audits and transparency in employment AI.
  4. Federal regulators like the FTC, EEOC, and CFPB are applying existing laws to AI, focusing on discrimination, transparency, and data practices.
  5. The NIST AI Risk Management Framework, while voluntary, is emerging as a key reference for demonstrating reasonable AI governance.
  6. A practical strategy is to implement the strictest requirements (impact assessments, opt-outs, bias audits, biometric consent) as a global baseline and then tailor for specific jurisdictions.
  7. Robust internal AI governance and documentation are essential to manage ongoing state-level fragmentation and regulatory evolution.

The US currently regulates AI through a fragmented mix of state privacy laws, city-level ordinances, sector-specific federal statutes, and voluntary frameworks. For compliance leaders, the practical challenge is to design a unified AI governance program that can withstand scrutiny across jurisdictions without building 50 different regimes.

This guide focuses on the most influential state and local laws, how they intersect with federal requirements, and how to build a multi-state compliance strategy that is realistic for enterprise deployment.


Why US AI Compliance Is Fragmented

  • No comprehensive federal AI statute exists.
  • States are extending privacy and consumer protection laws to automated decision-making and profiling.
  • Cities (notably New York City) are targeting high-risk use cases like hiring.
  • Federal regulators are stretching existing laws (credit, housing, employment, healthcare) to cover AI.

For a Compliance Officer or CTO, the key is to identify cross-cutting obligations (notice, consent, opt-outs, impact assessments, bias testing, documentation) and implement them as a global baseline.


Key State and Local Laws Shaping AI Compliance

California CPRA (Effective January 2023)

Scope: Personal information of California residents, including employees and contractors (the employee and B2B data exemptions expired on January 1, 2023).

AI-Relevant Provisions:

  • Automated decision-making rights (pending detailed regulations):
    • Right to opt-out of certain automated decision-making, including profiling.
    • Potential right to access meaningful information about logic involved and likely outcomes.
  • Profiling disclosures:
    • Privacy notices must describe categories of personal information used, purposes (including profiling/automated decisions), and sharing/sale.
  • Risk assessments (in forthcoming regulations):
    • For high-risk processing, including certain automated decision-making and profiling.

Penalties:

  • Up to $2,500 per violation, or up to $7,500 for intentional violations or violations involving minors.
  • Enforced by the California Privacy Protection Agency (CPPA) and Attorney General.

Practical implications:

  • Treat any AI system that materially affects consumers (pricing, eligibility, employment, benefits) as high-risk.
  • Build mechanisms to honor opt-outs from profiling and automated decision-making where feasible.
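
As an illustration only, the sketch below shows one way an opt-out flag could gate automated decision-making by routing affected consumers to human review. The preference store and field names are hypothetical; CPRA's detailed regulations will govern what an actual opt-out mechanism must do.

```python
# Minimal sketch of an opt-out gate for automated decision-making.
# The preference fields and routing logic are illustrative assumptions,
# not requirements taken from the CPRA or its regulations.

from dataclasses import dataclass
from typing import Literal

Decision = Literal["automated", "human_review"]

@dataclass
class ConsumerPreferences:
    opted_out_of_profiling: bool = False
    opted_out_of_automated_decisions: bool = False

def route_decision(prefs: ConsumerPreferences) -> Decision:
    """Route to human review when the consumer has exercised an opt-out right."""
    if prefs.opted_out_of_profiling or prefs.opted_out_of_automated_decisions:
        return "human_review"
    return "automated"

# Example: a consumer who opted out of profiling is routed to manual review.
print(route_decision(ConsumerPreferences(opted_out_of_profiling=True)))  # human_review
```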

Colorado Privacy Act (CPA) (Effective July 2023)

Scope: Personal data of Colorado residents; applies to controllers meeting certain thresholds.

AI-Relevant Provisions:

  • Profiling opt-out right:
    • Consumers can opt out of profiling in furtherance of decisions that produce legal or similarly significant effects.
  • Automated decision transparency:
    • Controllers must provide clear information about profiling and automated decisions, including logic and consequences, upon request.
  • Data protection assessments (DPAs):
    • Required for profiling that presents a reasonably foreseeable risk of unfair or deceptive treatment, financial or physical injury, or other significant harms.
    • Functionally similar to algorithm impact assessments for consequential decisions.

Penalties:

  • Up to $20,000 per violation under Colorado’s Consumer Protection Act (subject to statutory caps and aggregation rules).

Practical implications:

  • Treat Colorado DPAs as your gold standard impact assessment template.
  • Document purpose, data sources, model design, testing, and mitigation for any consequential AI system.
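
To make that documentation repeatable, some teams capture each assessment as a structured record. The sketch below is one illustrative template loosely modeled on the elements listed above; the field names and example values are assumptions, not statutory language from the CPA or VCDPA.

```python
# Illustrative data protection / algorithm impact assessment record.
# Field names and example values are assumptions for the sketch.

from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    data_sources: list[str]
    model_design: str              # model type and key features
    consequential_decision: bool   # produces legal or similarly significant effects?
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    residual_risk_accepted_by: str = ""
    last_reviewed: str = ""        # ISO date of last review

assessment = ImpactAssessment(
    system_name="resume-screening-score",
    purpose="Rank applications for recruiter review",
    data_sources=["application form", "resume parser output"],
    model_design="Gradient-boosted trees over 40 applicant features",
    consequential_decision=True,
    identified_risks=["disparate impact by sex or race/ethnicity", "stale training data"],
    mitigations=["annual bias audit", "quarterly data refresh"],
    residual_risk_accepted_by="Chief Compliance Officer",
    last_reviewed="2024-11-01",
)
```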

Virginia Consumer Data Protection Act (VCDPA) (Effective January 2023)

Scope: Personal data of Virginia residents; applies to controllers meeting certain thresholds.

AI-Relevant Provisions:

  • Opt-out of profiling:
    • Consumers can opt out of profiling in furtherance of decisions that produce legal or similarly significant effects.
  • Purpose limitation and data minimization:
    • Personal data must be limited to what is adequate, relevant, and reasonably necessary for disclosed purposes.
  • Data protection assessments:
    • Required for profiling that presents a reasonably foreseeable risk of harm.

Penalties:

  • Civil penalties up to $7,500 per violation enforced by the Virginia Attorney General.

Practical implications:

  • Align Virginia DPAs with Colorado’s, so one assessment satisfies both.
  • Use purpose limitation and minimization as guardrails for feature selection and data retention in AI systems.

New York City Local Law 144 (Automated Employment Decision Tools) (Effective July 2023)

Scope: Use of Automated Employment Decision Tools (AEDTs) in NYC for hiring and promotion decisions.

AI-Relevant Provisions:

  • Annual bias audit:
    • Conducted by an independent auditor.
    • Must evaluate disparate impact across sex, race/ethnicity, and other specified categories.
  • Public disclosure:
    • Summary of the most recent bias audit and distribution date must be publicly available.
  • Notice to candidates and employees:
    • At least 10 business days before use of an AEDT.
    • Notice must describe the job qualifications and characteristics the tool uses.

Penalties:

  • Civil penalties per violation: up to $500 for a first violation and $500 to $1,500 for each subsequent or continuing violation.

Practical implications:

  • Treat NYC LL 144 as the baseline for employment AI nationwide.
  • Build repeatable annual audit processes and vendor requirements for any hiring or promotion AI.
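
The bias audit methodology in the implementing rules centers on impact ratios: a category's selection rate divided by the selection rate of the most selected category. A minimal sketch, assuming simple pass/fail selection data per category; the 0.8 flag shown reflects the EEOC's four-fifths rule of thumb, not a threshold in LL 144 itself, and a real audit must follow the DCWP rules and an independent auditor's methodology.

```python
# Minimal impact-ratio sketch for an automated employment decision tool.
# Assumes simple pass/fail outcomes per demographic category.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (selected, total_applicants)."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate divided by the highest selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

outcomes = {"Male": (120, 400), "Female": (90, 380)}
for category, ratio in impact_ratios(outcomes).items():
    # 0.8 is the EEOC four-fifths rule of thumb, not a statutory LL 144 threshold.
    flag = " <- review for adverse impact" if ratio < 0.8 else ""
    print(f"{category}: impact ratio {ratio:.2f}{flag}")
```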

Illinois AI Video Interview Act (Effective January 2020)

Scope: Employers using AI to analyze video interviews of job applicants for positions based in Illinois.

AI-Relevant Provisions:

  • Notice:
    • Inform applicants that AI may be used to analyze video interviews.
  • Explanation:
    • Provide information about how the AI works and what characteristics it evaluates.
  • Consent:
    • Obtain consent before using AI to evaluate the video.
  • Data handling:
    • Limit sharing of videos and delete upon request or within specified timeframes.

Penalties:

  • Civil liability and injunctive relief; enforcement through private actions and state authorities.

Practical implications:

  • Standardize AI interview notices and consent flows across all jurisdictions.
  • Implement strict retention and deletion controls for video data.

Illinois Biometric Information Privacy Act (BIPA)

Scope: Collection, use, and storage of biometric identifiers and biometric information of Illinois residents (e.g., facial geometry, fingerprints, iris scans).

AI-Relevant Provisions:

  • Informed written consent:
    • Before collecting or disclosing biometric identifiers.
  • Retention and destruction policy:
    • Publicly available schedule for retention and permanent destruction.
  • Private right of action:
    • Individuals can sue for statutory damages.

Penalties:

  • $1,000 per negligent violation and $5,000 per intentional or reckless violation, plus attorneys’ fees.
  • Significant class action exposure for AI systems using facial recognition or other biometrics.

Practical implications:

  • Treat BIPA as the strictest standard for biometric AI nationwide.
  • Avoid biometric features unless absolutely necessary; if used, implement robust consent, minimization, and deletion.

Federal Sector Regulations Impacting AI

While there is no general AI statute, several federal laws already apply to AI-driven decisions.

Fair Credit Reporting Act (FCRA)

Scope: Credit reports, background checks, and other consumer reports used for credit, employment, housing, and insurance decisions.

AI Implications:

  • Adverse action notices:
    • Required when decisions are based in whole or in part on a consumer report, including AI-scored reports.
  • Right to dispute accuracy:
    • Consumers can challenge incorrect information; systems must support correction workflows.
  • AI-based credit scoring:
    • If AI models rely on consumer report data, FCRA obligations apply.

Equal Credit Opportunity Act (ECOA)

Scope: Credit decisions for consumers and businesses.

AI Implications:

  • Anti-discrimination:
    • Prohibits discrimination on protected characteristics (e.g., race, sex, age, marital status).
  • Adverse action explanations:
    • Lenders must provide specific reasons for credit denials or less favorable terms, even when using complex AI models.
  • Model explainability:
    • Requires mechanisms to translate model outputs into understandable reasons.

Fair Housing Act (FHA)

Scope: Housing-related decisions, including rentals, sales, and advertising.

AI Implications:

  • Prohibition on discriminatory housing practices:
    • Applies to tenant screening, pricing, and targeted advertising using AI.
  • Disparate impact standard:
    • Liability can arise even without intent if AI systems produce discriminatory outcomes.

Title VII of the Civil Rights Act

Scope: Employment decisions (hiring, promotion, termination, compensation).

AI Implications:

  • Anti-discrimination:
    • Applies to algorithmic hiring, promotion, and performance evaluation tools.
  • Disparate impact:
    • Employers must validate that AI tools do not disproportionately disadvantage protected groups.
  • EEOC enforcement:
    • EEOC has signaled active scrutiny of AI in employment.

Health Insurance Portability and Accountability Act (HIPAA)

Scope: Protected health information (PHI) handled by covered entities and business associates.

AI Implications:

  • Privacy and security rules:
    • Apply to AI systems processing PHI (e.g., diagnostic models, clinical decision support).
  • Business associate agreements (BAAs):
    • Required with AI vendors handling PHI.
  • Breach notification:
    • AI-related security incidents involving PHI trigger notification duties.

Federal Agencies and AI Enforcement

FTC Act Section 5: Unfair or Deceptive Practices

The Federal Trade Commission uses Section 5 to police AI-related harms.

Key AI enforcement themes:

  • Deceptive AI claims:
    • Misrepresenting AI capabilities, accuracy, or independence.
  • Unfair data practices:
    • Collecting or using data in ways that cause substantial injury not reasonably avoidable by consumers.
  • Algorithmic bias as unfairness:
    • Failing to test for and mitigate discriminatory outcomes.
  • Security failures:
    • Inadequate security for training data, models, and outputs.

Equal Employment Opportunity Commission (EEOC)

Focus: Application of anti-discrimination laws to AI in employment.

Key expectations:

  • Title VII compliance for algorithmic hiring and promotion.
  • ADA accommodations:
    • AI tools must not screen out individuals with disabilities unfairly and must support reasonable accommodations.
  • Disparate impact testing:
    • Employers should regularly test AI tools for adverse impact and adjust or discontinue tools that create unlawful disparities.

Consumer Financial Protection Bureau (CFPB)

Focus: AI in consumer finance, especially credit underwriting and servicing.

Key expectations:

  • Adverse action explanations under ECOA and FCRA.
  • Fairness testing:
    • Lenders should monitor AI models for discriminatory patterns.
  • Algorithmic transparency:
    • CFPB has indicated that complexity is not a defense for failing to provide clear reasons for decisions.

NIST AI Risk Management Framework (AI RMF)

Status: Voluntary guidance published January 2023.

Purpose: Provide a structured approach to managing AI risks and building trustworthy systems.

Core functions:

  • Govern: Establish organizational policies, roles, and accountability for AI.
  • Map: Understand the context, intended use, and potential impacts of AI systems.
  • Measure: Assess risks, performance, robustness, and fairness.
  • Manage: Prioritize and implement risk treatments (mitigate, transfer, accept, avoid).

Why it matters for compliance:

  • Regulators and courts may treat adherence as evidence of reasonable AI governance.
  • Provides a common language for Compliance, Risk, and Engineering to collaborate.
  • Can be used as the backbone of internal AI policies and control frameworks.
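
As a starting point, some organizations map each RMF function to a handful of concrete internal controls. The sketch below illustrates that exercise; the function names come from NIST AI RMF 1.0, but the controls listed under each are examples of what an organization might adopt, not NIST text.

```python
# Illustrative mapping of the AI RMF's four functions to internal controls.
# The controls are examples an organization might choose, not framework requirements.

AI_RMF_CONTROL_MAP = {
    "Govern": ["AI policy approved by risk committee", "Named owner for every AI system"],
    "Map": ["Use-case intake form", "Impact assessment for high-risk systems"],
    "Measure": ["Pre-deployment bias and robustness testing", "Ongoing performance monitoring"],
    "Manage": ["Risk acceptance sign-off", "Incident response runbook for AI failures"],
}

for function, controls in AI_RMF_CONTROL_MAP.items():
    print(f"{function}:")
    for control in controls:
        print(f"  - {control}")
```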

Executive Order 14110 (October 2023)

Scope: Federal agency AI use and broader federal policy direction.

Key directives:

  • Safety and security testing for powerful foundation models.
  • Privacy-preserving techniques (e.g., differential privacy, federated learning) for federal AI use.
  • Algorithmic discrimination prevention in federal programs.
  • Federal procurement standards for AI systems.
  • International AI governance cooperation and standards alignment.

Impact on private organizations:

  • Federal contractors will see AI-specific requirements in contracts and RFPs.
  • EO 14110 signals the types of controls (testing, documentation, safeguards) regulators may later require more broadly.

Building a Multi-State AI Compliance Strategy

1. Baseline Approach: Implement the Strictest Requirements

Instead of managing 50 separate compliance tracks, design a single, stringent baseline that satisfies the most demanding jurisdictions.

Step 1: Identify strictest requirements

  • Impact/algorithm assessments:
    • Colorado CPA and Virginia VCDPA DPAs for high-risk profiling.
  • Opt-out rights:
    • California CPRA and Colorado CPA for automated decision-making and profiling.
  • Bias audits:
    • NYC LL 144 for employment AI.
  • Biometric consent:
    • Illinois BIPA for any biometric-based AI.

Step 2: Apply strictest requirements globally

  • Treat all high-impact AI systems as if they are subject to:
    • Impact assessments before deployment and on major changes.
    • Opt-out or human review options where technically and operationally feasible.
    • Regular bias testing and documentation.
    • Explicit consent and strict controls for biometric data.

Step 3: Layer state-specific additions

  • Maintain a jurisdictional addendum to your AI policy that captures:
    • Variations in notice language and required content.
    • State-specific opt-out mechanisms and response timelines.
    • Local posting or disclosure requirements (e.g., NYC audit summaries).
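
One way to make the "baseline plus addendum" structure concrete is to express it as configuration that deployment tooling can check. A minimal sketch follows; the jurisdiction keys and control names are invented for illustration, and the actual control set must come from your legal analysis.

```python
# Illustrative "global baseline plus jurisdictional addendum" configuration.
# Control names and jurisdiction keys are placeholders for the sketch.

BASELINE_CONTROLS = {
    "impact_assessment_before_deployment",
    "profiling_opt_out_or_human_review",
    "annual_bias_testing",
    "biometric_consent_and_deletion",
}

JURISDICTION_ADDENDUM = {
    "NYC": {"public_bias_audit_summary", "candidate_notice_10_business_days"},
    "IL": {"video_interview_consent", "biometric_retention_schedule"},
    "CA": {"privacy_notice_profiling_disclosure"},
}

def required_controls(jurisdictions: list[str]) -> set[str]:
    """Global baseline plus any jurisdiction-specific additions."""
    controls = set(BASELINE_CONTROLS)
    for j in jurisdictions:
        controls |= JURISDICTION_ADDENDUM.get(j, set())
    return controls

# Example: a hiring tool used for roles in NYC and Illinois.
print(sorted(required_controls(["NYC", "IL"])))
```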

2. Documentation Requirements Across Jurisdictions

For each material AI system, maintain a compliance dossier including:

  • Algorithm/impact assessments:
    • Purpose, scope, data sources, model type, and intended users.
    • Risk identification (bias, privacy, security, safety, explainability, robustness).
    • Mitigation measures and residual risk acceptance.
  • Bias and performance testing:
    • Metrics across protected groups where legally and ethically permissible.
    • Test design, datasets, and limitations.
    • Remediation steps taken when disparities are identified.
  • Privacy impact assessments (PIAs):
    • Mapping of personal data flows, legal bases, retention, and sharing.
    • Alignment with state privacy laws and HIPAA where applicable.
  • Vendor due diligence:
    • Security, privacy, and fairness controls of third-party AI providers.
    • Contractual obligations (audit rights, data use limits, incident reporting).
  • Incident logs and mitigation:
    • Record of AI-related incidents (e.g., erroneous denials, discriminatory outcomes, security events).
    • Root cause analysis and corrective actions.
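
To make the dossier actionable, some teams wire a simple completeness check into the deployment pipeline. A minimal sketch under that assumption; the artifact names mirror the list above, and the gate itself is a process choice, not a requirement drawn from any statute.

```python
# Minimal pre-deployment gate that checks a system's compliance dossier for the
# artifact types listed above. Artifact names and the gating step are process assumptions.

REQUIRED_ARTIFACTS = {
    "impact_assessment",
    "bias_and_performance_testing",
    "privacy_impact_assessment",
    "vendor_due_diligence",
    "incident_log",
}

def dossier_gaps(dossier: dict[str, str]) -> set[str]:
    """Return required artifact types missing from the dossier (keys are artifact types)."""
    return REQUIRED_ARTIFACTS - dossier.keys()

dossier = {
    "impact_assessment": "assessments/resume-screener-v3.pdf",
    "bias_and_performance_testing": "audits/2024-q4-impact-ratios.xlsx",
    "privacy_impact_assessment": "pia/resume-screener.pdf",
}

missing = dossier_gaps(dossier)
if missing:
    print("Blocked: missing", sorted(missing))  # vendor_due_diligence, incident_log
else:
    print("Cleared for deployment review")
```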

3. Sector-Specific Overlays

Employment AI

  • NYC LL 144:
    • Annual independent bias audits for AEDTs used in NYC.
    • Public posting of audit summaries.
  • EEOC guidance:
    • Validate tools for job-relatedness and business necessity.
    • Monitor for disparate impact and adjust or discontinue problematic tools.
  • State hiring laws:
    • Additional notice, consent, or record-keeping requirements in certain states.

Implementation tips:

  • Maintain a central registry of all employment-related AI tools.
  • Standardize vendor contracts to require audit support and data access.

Credit and Lending AI

  • ECOA and FCRA:
    • Ensure adverse action notices are specific and understandable.
    • Maintain documentation of model features and rationale.
  • CFPB expectations:
    • Regular fairness testing and monitoring.
    • Governance over third-party models and data sources.

Implementation tips:

  • Build a reason code library aligned with model features (a minimal sketch follows below).
  • Implement continuous monitoring for drift and disparate impact.
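
For the reason code library mentioned above, the sketch below shows one way to translate per-applicant feature contributions (for example, from SHAP or a similar attribution method) into adverse action reasons. The codes, descriptions, and sign convention are illustrative assumptions, not an official reason-code set.

```python
# Illustrative mapping from model features to adverse-action reason codes.
# Assumes per-applicant feature contributions are available (e.g., from SHAP),
# with positive values pushing toward an adverse outcome. Codes are placeholders.

REASON_CODES = {
    "debt_to_income": ("R01", "Debt obligations are high relative to income"),
    "delinquency_count": ("R02", "Recent delinquencies on credit obligations"),
    "credit_history_length": ("R03", "Limited length of credit history"),
    "utilization_rate": ("R04", "High utilization of available revolving credit"),
}

def top_adverse_reasons(contributions: dict[str, float], n: int = 2) -> list[tuple[str, str]]:
    """Return reason codes for the n features pushing hardest toward denial."""
    adverse = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_CODES[feat] for feat, value in adverse[:n] if value > 0 and feat in REASON_CODES]

contributions = {"debt_to_income": 0.42, "utilization_rate": 0.18, "credit_history_length": -0.05}
print(top_adverse_reasons(contributions))
# [('R01', 'Debt obligations are high relative to income'),
#  ('R04', 'High utilization of available revolving credit')]
```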

Healthcare AI

  • HIPAA:
    • Apply privacy and security rules to AI handling PHI.
    • Execute BAAs with AI vendors.
  • FDA rules (for diagnostic/therapeutic tools):
    • Some AI may be regulated as medical devices.
  • State telehealth and practice laws:
    • Govern how AI can be used in virtual care and clinical decision support.

Implementation tips:

  • Classify AI systems by clinical risk and regulatory status.
  • Integrate AI validation into clinical governance and quality assurance.

Housing AI

  • Fair Housing Act and HUD guidance:
    • Prohibit discriminatory tenant screening, pricing, and advertising.
  • State tenant protections:
    • Additional rules on screening criteria and adverse action notices.

Implementation tips:

  • Test screening and pricing models for disparate impact.
  • Provide clear explanations and appeal mechanisms for adverse decisions.

Operationalizing AI Governance

To make this sustainable, embed AI compliance into existing governance structures.

Core components:

  • AI policy and standards:
    • Define high-risk AI, approval thresholds, and mandatory controls.
  • AI review board:
    • Cross-functional group (Compliance, Legal, Risk, Security, Engineering, Product) reviewing high-risk use cases.
  • Model lifecycle controls:
    • Requirements at design, development, testing, deployment, and retirement.
  • Training and accountability:
    • Role-specific training for developers, product owners, and business users.
    • Clear ownership for each AI system.

Key Takeaways

  1. The US has no comprehensive federal AI law; organizations must navigate a growing patchwork of state, local, and sector-specific rules.
  2. State privacy laws (CA, CO, VA) introduce automated decision-making rights, profiling opt-outs, and impact assessment obligations.
  3. NYC Local Law 144 sets a de facto national standard for employment AI bias audits and transparency.
  4. Federal regulators (FTC, EEOC, CFPB, HUD, HHS) are applying existing laws to AI, focusing on discrimination, transparency, and data practices.
  5. The NIST AI RMF, while voluntary, is emerging as a reference framework for demonstrating reasonable AI governance.
  6. A practical compliance strategy is to implement the strictest requirements (CO/VA impact assessments, CA opt-outs, NYC audits, Illinois biometric consent) as a global baseline.
  7. State-level fragmentation is likely to continue for several years, making robust internal AI governance more important than chasing individual statutes.

Frequently Asked Questions

Is there a federal US AI law?

No. There is no comprehensive federal AI statute. Executive Order 14110 directs federal agencies and sets policy priorities, but binding obligations for most private organizations still come from state laws and sector-specific federal rules.

Which state has the strictest AI requirements?

Different states lead in different areas: Colorado and Virginia for impact assessments on high-risk profiling, California for automated decision-making and opt-out rights, Illinois for biometric consent and liability, and New York City for mandatory bias audits in employment.

Do state AI and privacy laws apply to out-of-state companies?

Yes. These laws generally apply based on the residence of the consumer or worker, not the company’s location. If you process personal data of residents of CA, CO, VA, IL, or use AEDTs for NYC roles, you may be subject to those laws.

Is the NIST AI Risk Management Framework mandatory?

No. The NIST AI RMF is voluntary. However, regulators and courts may view adherence as evidence of reasonable AI governance, and federal agencies are encouraged to align with it, which can influence expectations for contractors and regulated entities.

How should we prioritize AI compliance efforts?

Start by inventorying high-impact AI systems, then implement a baseline of impact assessments, bias testing, privacy controls, and documentation aligned with CO/VA, CA, NYC LL 144, and BIPA. From there, add sector-specific overlays (credit, employment, healthcare, housing) and refine notices and opt-outs by jurisdiction.


Design Once, Comply Many Times

For most organizations, the most efficient path is to design a single, stringent AI governance baseline that satisfies the strictest state and sector requirements, then layer minor jurisdictional variations on top rather than building separate programs for each state.

15+ US states and localities have AI-relevant privacy or automated decision-making laws (source: synthesis of state privacy and AI-related legislation through 2024).

"In the absence of a federal AI statute, regulators are using existing privacy, consumer protection, and anti-discrimination laws as powerful tools to police AI systems."

US AI regulatory landscape analysis

References

  1. California Consumer Privacy Act and California Privacy Rights Act. California Attorney General / California Privacy Protection Agency (2020)
  2. Colorado Privacy Act. Colorado General Assembly (2021)
  3. New York City Local Law 144 of 2021. New York City Council (2021)
  4. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023)
  5. Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House (2023)


Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.
