The United States regulates artificial intelligence through a fragmented architecture of state privacy laws, city-level ordinances, sector-specific federal statutes, and voluntary frameworks. For compliance leaders, the practical challenge is not simply understanding each rule in isolation. It is designing a unified AI governance program that can withstand scrutiny across jurisdictions without building 50 different regimes.
This guide examines the most influential state and local laws, how they intersect with federal requirements, and how to build a multi-state compliance strategy that is realistic for enterprise deployment.
Why US AI Compliance Is Fragmented
The absence of a comprehensive federal AI statute has created a regulatory vacuum that states, cities, and federal agencies are each filling on their own terms. States are extending privacy and consumer protection laws to cover automated decision-making and profiling. Cities, most notably New York, are targeting high-risk use cases like hiring. Federal regulators, meanwhile, are stretching existing laws governing credit, housing, employment, and healthcare to reach AI-driven decisions.
For a Compliance Officer or CTO, the key is to identify cross-cutting obligations (notice, consent, opt-outs, impact assessments, bias testing, and documentation) and implement them as a global baseline. The alternative, managing each jurisdiction's requirements independently, quickly becomes unsustainable at scale.
Key State and Local Laws Shaping AI Compliance
California CPRA (Effective January 2023)
The California Privacy Rights Act (CPRA), which amended the California Consumer Privacy Act, applies to the personal information of California residents, including employees and contractors. Its AI-relevant provisions center on automated decision-making rights that are still taking shape through detailed regulations. Once finalized, these rules will grant consumers the right to opt out of certain automated decision-making and profiling, and potentially the right to access meaningful information about the logic involved and likely outcomes of such systems.
Privacy notices under the CPRA must already describe the categories of personal information used, the purposes of processing (including profiling and automated decisions), and any sharing or sale of data. Forthcoming regulations will also require risk assessments for high-risk processing, including certain forms of automated decision-making and profiling.
Penalties are significant: up to $2,500 per violation, or $7,500 per intentional violation or violations involving minors. Enforcement falls to the California Privacy Protection Agency (CPPA) and the Attorney General. In practice, organizations should treat any AI system that materially affects consumers (through pricing, eligibility, employment, or benefits) as high-risk, and build mechanisms to honor opt-outs from profiling and automated decision-making where feasible.
Colorado Privacy Act (CPA) (Effective July 2023)
The Colorado Privacy Act applies to personal data of Colorado residents and introduces some of the most structured AI compliance requirements in the country. Consumers can opt out of profiling that furthers decisions producing legal or similarly significant effects. Controllers must provide clear information about profiling and automated decisions, including the logic involved and potential consequences, upon request.
The law's most consequential provision for AI teams is its requirement for data protection assessments (DPAs). These are mandatory for any profiling that presents a reasonably foreseeable risk of unfair or deceptive treatment, financial or physical injury, or other significant harms. In practice, these DPAs function as algorithm impact assessments for consequential decisions. Violations can result in penalties of up to $20,000 per violation under Colorado's Consumer Protection Act.
Colorado's DPA requirements represent a strong model for AI impact assessment more broadly. Organizations that treat the Colorado DPA as their gold standard impact assessment template, documenting purpose, data sources, model design, testing methodology, and mitigation measures for any consequential AI system, will find themselves well-positioned across multiple jurisdictions.
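To make such assessments repeatable, many teams capture them as structured records rather than free-form documents. The sketch below is a minimal, hypothetical Python representation of an assessment record; the field names simply mirror the documentation elements listed above and are not drawn from the Colorado statute or its rules.

```python
# Minimal sketch of an impact-assessment record, assuming an internal Python
# tooling stack. Field names are illustrative, not statutory language.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                    # what the system decides and why it exists
    data_sources: list[str]         # categories of personal data consumed
    model_design: str               # model family, key features, training approach
    testing_methodology: str        # bias/performance tests and metrics used
    identified_risks: list[str]     # e.g., unfair treatment, financial injury
    mitigation_measures: list[str]  # controls that reduce each identified risk
    residual_risk_owner: str        # who formally accepted the remaining risk
    assessment_date: date = field(default_factory=date.today)

# Example: a consequential decision system that would plausibly trigger a
# Colorado-style data protection assessment.
tenant_screening_dpa = ImpactAssessment(
    system_name="tenant-screening-score-v2",
    purpose="Rank rental applicants by predicted payment reliability",
    data_sources=["credit header data", "eviction records", "application form"],
    model_design="Gradient-boosted trees over 40 engineered features",
    testing_methodology="Quarterly disparate-impact testing across protected classes",
    identified_risks=["disparate impact on protected groups", "inaccurate records"],
    mitigation_measures=["feature review", "adverse action with dispute workflow"],
    residual_risk_owner="Chief Compliance Officer",
)
```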
Virginia Consumer Data Protection Act (VCDPA) (Effective January 2023)
Virginia's law mirrors many of Colorado's AI-relevant provisions. Consumers can opt out of profiling that furthers decisions producing legal or similarly significant effects. The VCDPA also imposes purpose limitation and data minimization requirements: personal data must be limited to what is adequate, relevant, and reasonably necessary for disclosed purposes. Data protection assessments are required for profiling that presents a reasonably foreseeable risk of harm.
Civil penalties reach up to $7,500 per violation, enforced by the Virginia Attorney General. The practical approach is to align Virginia DPAs with Colorado's framework so that a single assessment satisfies both jurisdictions. Purpose limitation and data minimization requirements also serve as useful guardrails for feature selection and data retention in AI systems.
New York City Local Law 144 (Automated Employment Decision Tools) (Effective July 2023)
New York City's Local Law 144 targets the use of Automated Employment Decision Tools (AEDTs) in hiring and promotion decisions within the city. The law requires an annual bias audit conducted by an independent auditor, evaluating selection rates and impact ratios across sex, race/ethnicity, and intersectional categories. A summary of the most recent bias audit, along with the tool's distribution date, must be publicly posted. Candidates and employees must receive notice at least 10 business days before an AEDT is used, including a description of the job qualifications and characteristics the tool evaluates.
Civil penalties apply per violation, with escalating fines for repeated non-compliance. Although the law applies only within New York City, it has effectively set the baseline for employment AI nationwide. Organizations deploying hiring or promotion AI in any jurisdiction should build repeatable annual audit processes and vendor requirements modeled on LL 144's standards.
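The city's implementing rules frame the audit around selection rates and impact ratios, where each category's selection rate is compared against the most-selected category. The sketch below illustrates that arithmetic on hypothetical data; it is not a substitute for an independent auditor's methodology.

```python
# Illustrative computation of selection rates and impact ratios in the style
# of an AEDT bias audit. Data is hypothetical; a real audit must be performed
# by an independent auditor on actual historical or test data.
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (selected_count, total_applicants)."""
    selection_rates = {
        cat: selected / total for cat, (selected, total) in outcomes.items() if total
    }
    top_rate = max(selection_rates.values())
    # Impact ratio: each category's selection rate relative to the highest rate.
    return {cat: rate / top_rate for cat, rate in selection_rates.items()}

hypothetical = {
    "Female": (120, 400),  # 30% selection rate
    "Male": (180, 450),    # 40% selection rate
}
print(impact_ratios(hypothetical))  # {'Female': 0.75, 'Male': 1.0}
```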
Illinois AI Video Interview Act (Effective January 2020)
Illinois was among the first states to regulate AI in employment, requiring employers that use AI to analyze video interviews for positions based in Illinois to meet specific transparency and consent requirements. Employers must inform applicants that AI may be used, explain how it works and what characteristics it evaluates, and obtain consent before the AI analyzes the video. The law also imposes strict data handling requirements: sharing of videos must be limited, and deletion must occur upon request or within specified timeframes.
Enforcement can occur through civil liability and injunctive relief, whether via private actions or state authorities. The practical response is to standardize AI interview notices and consent flows across all jurisdictions and implement strict retention and deletion controls for video data.
Illinois Biometric Information Privacy Act (BIPA)
BIPA governs the collection, use, and storage of biometric identifiers and biometric information of Illinois residents, covering facial geometry, fingerprints, iris scans, and similar data. The law requires informed written consent before collecting or disclosing biometric identifiers and mandates a publicly available schedule for retention and permanent destruction of biometric data.
What makes BIPA uniquely consequential is its private right of action. Individuals can sue for statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation, plus attorneys' fees. This creates significant class action exposure for any AI system using facial recognition or other biometrics. BIPA represents the strictest standard for biometric AI nationwide, and organizations should avoid biometric features unless absolutely necessary. Where biometric AI is deployed, robust consent mechanisms, data minimization practices, and deletion protocols are essential.
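Where biometric data is retained at all, deletion deadlines need to be enforced mechanically rather than tracked by hand. The snippet below is a small illustrative check built around BIPA's outer bound of destroying data when the collection purpose is satisfied or within three years of the individual's last interaction, whichever comes first; it is a sketch, not legal advice.

```python
# Sketch of a biometric retention check in the spirit of a published retention
# and destruction schedule. The three-year outer bound reflects BIPA's
# "purpose satisfied or three years after last interaction, whichever is first"
# standard; confirm specifics with counsel.
from datetime import date, timedelta

def must_destroy(purpose_satisfied: bool, last_interaction: date, today: date) -> bool:
    """True when the destruction trigger has been reached."""
    return purpose_satisfied or (today - last_interaction) > timedelta(days=3 * 365)

print(must_destroy(False, last_interaction=date(2021, 1, 15), today=date(2024, 6, 1)))  # True
```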
Federal Sector Regulations Impacting AI
While no general federal AI statute exists, several established federal laws already apply to AI-driven decisions across specific sectors.
Fair Credit Reporting Act (FCRA)
The FCRA governs credit reports, background checks, and other consumer reports used for credit, employment, housing, and insurance decisions. When decisions are based in whole or in part on a consumer report, including AI-scored reports, adverse action notices are required. Consumers retain the right to dispute inaccurate information, which means AI systems must support correction workflows. Any AI model that relies on consumer report data triggers full FCRA obligations.
Equal Credit Opportunity Act (ECOA)
ECOA prohibits discrimination on protected characteristics (race, sex, age, marital status, and others) in credit decisions for consumers and businesses. Lenders must provide specific reasons for credit denials or less favorable terms, even when using complex AI models. This creates a direct model explainability requirement: organizations need mechanisms to translate opaque model outputs into understandable, specific reasons that satisfy regulatory expectations.
Fair Housing Act (FHA)
The Fair Housing Act applies to housing-related decisions including rentals, sales, and advertising. Its prohibition on discriminatory housing practices extends to tenant screening, pricing, and targeted advertising that uses AI. Critically, liability can arise even without discriminatory intent. The disparate impact standard means that AI systems producing discriminatory outcomes in housing contexts create legal exposure regardless of the developer's intentions.
Title VII of the Civil Rights Act
Title VII's anti-discrimination requirements apply directly to algorithmic hiring, promotion, and performance evaluation tools. Employers must validate that AI tools do not disproportionately disadvantage protected groups under the disparate impact standard. The Equal Employment Opportunity Commission (EEOC) has signaled active scrutiny of AI in employment, making this an area of heightened enforcement risk.
Health Insurance Portability and Accountability Act (HIPAA)
HIPAA's privacy and security rules apply to AI systems processing protected health information (PHI), including diagnostic models and clinical decision support tools. Business associate agreements (BAAs) are required with AI vendors handling PHI, and AI-related security incidents involving PHI trigger breach notification duties.
Federal Agencies and AI Enforcement
FTC Act Section 5: Unfair or Deceptive Practices
The Federal Trade Commission uses Section 5 to police AI-related harms across several dimensions. Deceptive AI claims, such as misrepresenting capabilities, accuracy, or independence, fall squarely within the FTC's enforcement scope. The agency also targets unfair data practices that cause substantial injury not reasonably avoidable by consumers, algorithmic bias that results from inadequate testing and mitigation, and security failures related to training data, models, and outputs.
Equal Employment Opportunity Commission (EEOC)
The EEOC focuses on applying anti-discrimination laws to AI in employment. The commission expects Title VII compliance for algorithmic hiring and promotion tools, ADA-compliant accommodations (ensuring AI tools do not unfairly screen out individuals with disabilities), and regular disparate impact testing. Employers should test AI tools for adverse impact on an ongoing basis and be prepared to adjust or discontinue tools that create unlawful disparities.
Consumer Financial Protection Bureau (CFPB)
The CFPB oversees AI in consumer finance, with particular attention to credit underwriting and servicing. The bureau expects adverse action explanations that meet ECOA and FCRA standards, regular fairness testing and monitoring of AI models for discriminatory patterns, and meaningful algorithmic transparency. The CFPB has made clear that model complexity is not a defense for failing to provide clear reasons for decisions.
NIST AI Risk Management Framework (AI RMF)
The National Institute of Standards and Technology published its AI Risk Management Framework as voluntary guidance in January 2023. The framework provides a structured approach to managing AI risks and building trustworthy systems through four core functions: Govern (establishing organizational policies, roles, and accountability), Map (understanding context, intended use, and potential impacts), Measure (assessing risks, performance, robustness, and fairness), and Manage (prioritizing and implementing risk treatments).
Although voluntary, the NIST AI RMF carries significant practical weight. Regulators and courts may treat adherence as evidence of reasonable AI governance. The framework provides a common language for Compliance, Risk, and Engineering teams to collaborate, and it can serve as the backbone of internal AI policies and control frameworks.
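In practice, adopting the framework usually means mapping each core function to the organization's own controls. The mapping below is a hypothetical illustration of that exercise; the function names come from the AI RMF, but the listed controls are examples an organization might choose, not NIST requirements.

```python
# Hypothetical mapping of the AI RMF core functions to internal controls,
# useful as a shared artifact between Compliance, Risk, and Engineering.
AI_RMF_CONTROL_MAP = {
    "Govern": ["board-approved AI policy", "AI review board charter", "role-based accountability"],
    "Map": ["use-case intake form", "intended-use statement", "affected-population analysis"],
    "Measure": ["bias testing protocol", "robustness and drift metrics", "red-team findings log"],
    "Manage": ["risk treatment plan", "go/no-go deployment gate", "incident response runbook"],
}

def unmapped_functions(implemented_controls: set[str]) -> list[str]:
    """Return core functions with no implemented control, for gap reporting."""
    return [fn for fn, controls in AI_RMF_CONTROL_MAP.items()
            if not implemented_controls.intersection(controls)]
```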
Executive Order 14110 (October 2023)
Executive Order 14110 addresses federal agency AI use and sets a broader federal policy direction. Its key directives include safety and security testing for powerful foundation models, privacy-preserving techniques (such as differential privacy and federated learning) for federal AI use, algorithmic discrimination prevention in federal programs, federal procurement standards for AI systems, and international AI governance cooperation and standards alignment.
For private organizations, the most immediate impact falls on federal contractors, who will encounter AI-specific requirements in contracts and RFPs. More broadly, EO 14110 signals the types of controls (testing, documentation, and safeguards) that regulators may eventually require of all organizations deploying AI systems.
Building a Multi-State AI Compliance Strategy
1. Baseline Approach: Implement the Strictest Requirements
Rather than managing 50 separate compliance tracks, the most effective strategy is to design a single, stringent baseline that satisfies the most demanding jurisdictions.
The first step is identifying the strictest requirements across key compliance dimensions. For impact and algorithm assessments, Colorado's CPA and Virginia's VCDPA data protection assessments set the standard for high-risk profiling. For opt-out rights, California's CPRA and Colorado's CPA define the most comprehensive requirements around automated decision-making and profiling. For bias audits, NYC Local Law 144 establishes the benchmark for employment AI. For biometric consent, Illinois BIPA imposes the most stringent requirements for any biometric-based AI.
The second step is applying these strictest requirements globally. All high-impact AI systems should be treated as if they require impact assessments before deployment and upon major changes, opt-out or human review options where technically and operationally feasible, regular bias testing and documentation, and explicit consent with strict controls for biometric data.
The third step is layering state-specific additions through a jurisdictional addendum to your AI policy. This addendum should capture variations in notice language and required content, state-specific opt-out mechanisms and response timelines, and local posting or disclosure requirements such as NYC's audit summary publication mandate.
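One way to keep the addendum maintainable is to encode jurisdiction-specific deltas as data layered over the global baseline, so product and engineering teams consume a single merged view per jurisdiction. The configuration below is a hedged sketch; keys and values are illustrative and should be confirmed against current statutes and rules before being relied on.

```python
# Sketch of a jurisdictional addendum layered over a single global baseline.
# Values are illustrative placeholders, not verified statutory parameters.
GLOBAL_BASELINE = {
    "impact_assessment_required": True,
    "bias_testing_cadence": "annual",
    "profiling_opt_out": True,
    "biometric_consent": "explicit written consent",
}

JURISDICTION_ADDENDUM = {
    "NYC": {"aedt_notice_business_days": 10, "publish_audit_summary": True},
    "CA": {"notice_must_describe": ["profiling", "automated decisions"]},
    "IL": {"biometric_retention_schedule_public": True},
}

def requirements_for(jurisdiction: str) -> dict:
    """Merge the global baseline with any jurisdiction-specific additions."""
    return {**GLOBAL_BASELINE, **JURISDICTION_ADDENDUM.get(jurisdiction, {})}

print(requirements_for("NYC"))
```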
2. Documentation Requirements Across Jurisdictions
For each material AI system, organizations should maintain a compliance dossier covering five essential areas:
- Algorithm and impact assessments documenting purpose, scope, data sources, model type, intended users, identified risks (including bias, privacy, security, safety, explainability, and robustness), mitigation measures, and residual risk acceptance.
- Bias and performance testing records capturing metrics across protected groups where legally and ethically permissible, along with test design, datasets, limitations, and remediation steps taken when disparities are identified.
- Privacy impact assessments mapping personal data flows, legal bases, retention policies, and sharing arrangements, aligned with applicable state privacy laws and HIPAA where relevant.
- Vendor due diligence documentation addressing the security, privacy, and fairness controls of third-party AI providers, along with contractual obligations covering audit rights, data use limits, and incident reporting.
- Incident logs recording AI-related incidents (erroneous denials, discriminatory outcomes, security events), root cause analyses, and corrective actions.
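A lightweight index of these dossier sections makes completeness checks automatic during audits and pre-deployment reviews. The sketch below assumes an internal Python tooling stack; section names and file paths are illustrative.

```python
# Hypothetical per-system compliance dossier index with a completeness check.
# The five sections mirror the documentation areas described above; paths are
# placeholders for wherever the organization actually stores evidence.
REQUIRED_DOSSIER_SECTIONS = [
    "impact_assessment",
    "bias_and_performance_testing",
    "privacy_impact_assessment",
    "vendor_due_diligence",
    "incident_log",
]

def missing_sections(dossier: dict[str, str]) -> list[str]:
    """Return dossier sections with no evidence attached."""
    return [s for s in REQUIRED_DOSSIER_SECTIONS if not dossier.get(s)]

resume_screener_dossier = {
    "impact_assessment": "dossiers/resume-screener/ia-2024Q4.pdf",
    "bias_and_performance_testing": "dossiers/resume-screener/bias-audit-2024.pdf",
    "privacy_impact_assessment": "",  # gap flagged for the privacy team
}
print(missing_sections(resume_screener_dossier))
```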
3. Sector-Specific Overlays
Employment AI
Employment AI faces the most layered regulatory scrutiny of any sector. NYC Local Law 144 requires annual independent bias audits for AEDTs and public posting of audit summaries. EEOC guidance demands validation of tools for job-relatedness and business necessity, ongoing monitoring for disparate impact, and willingness to adjust or discontinue problematic tools. Various states impose additional notice, consent, or record-keeping requirements.
Organizations should maintain a central registry of all employment-related AI tools and standardize vendor contracts to require audit support and data access.
Credit and Lending AI
ECOA and FCRA require that adverse action notices be specific and understandable, supported by documentation of model features and rationale. The CFPB expects regular fairness testing and monitoring, along with governance over third-party models and data sources.
A practical implementation approach includes building a reason code library aligned with model features and deploying continuous monitoring for model drift and disparate impact.
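A reason code library can be as simple as a mapping from model features to consumer-facing explanations, with the features that contributed most to an adverse outcome translated into specific reasons. The sketch below assumes per-decision feature attributions are available (for example, from SHAP values or model coefficients); the feature names and wording are hypothetical, not a certified adverse action template.

```python
# Sketch of a reason-code library: translate the features that pushed a score
# most toward denial into specific, plain-language reasons. Feature names and
# attribution values are hypothetical.
REASON_CODES = {
    "utilization_ratio": "Proportion of balances to credit limits is too high",
    "recent_delinquency": "Recent delinquency on one or more accounts",
    "credit_history_length": "Length of credit history is insufficient",
    "recent_inquiries": "Number of recent credit inquiries",
}

def adverse_action_reasons(attributions: dict[str, float], top_n: int = 4) -> list[str]:
    """Positive attribution here means the feature contributed to denial."""
    adverse = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_CODES[f] for f, v in adverse[:top_n] if v > 0 and f in REASON_CODES]

print(adverse_action_reasons({"utilization_ratio": 0.42,
                              "recent_inquiries": 0.10,
                              "credit_history_length": -0.05}))
```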
Healthcare AI
HIPAA's privacy and security rules apply to AI handling protected health information, requiring business associate agreements with AI vendors. Some AI tools may be regulated as medical devices under FDA rules, and state telehealth and practice laws govern how AI can be used in virtual care and clinical decision support.
Organizations should classify AI systems by clinical risk and regulatory status and integrate AI validation into clinical governance and quality assurance processes.
Housing AI
The Fair Housing Act and HUD guidance prohibit discriminatory tenant screening, pricing, and advertising. State-level tenant protections add further rules on screening criteria and adverse action notices.
Housing-sector organizations should test screening and pricing models for disparate impact and provide clear explanations and appeal mechanisms for adverse decisions.
Operationalizing AI Governance
Sustainable AI compliance requires embedding governance into existing organizational structures rather than treating it as a standalone initiative. The core components include an AI policy and standards framework that defines high-risk AI, approval thresholds, and mandatory controls. An AI review board, composed of representatives from Compliance, Legal, Risk, Security, Engineering, and Product, should evaluate high-risk use cases before deployment. Model lifecycle controls should impose requirements at every stage: design, development, testing, deployment, and retirement. Role-specific training for developers, product owners, and business users ensures accountability, with clear ownership assigned for each AI system.
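A deployment gate can tie these components together by deriving required controls from the policy's risk tiers. The sketch below is an illustrative example of such a gate; the tier names, domains, and control lists are assumptions an AI review board would tailor to its own policy.

```python
# Minimal sketch of a deployment gate driven by hypothetical AI-policy risk
# tiers. Thresholds, domains, and control lists are illustrative only.
HIGH_RISK_DOMAINS = {"employment", "credit", "housing", "healthcare", "biometrics"}

def risk_tier(domain: str, affects_individuals: bool) -> str:
    if domain in HIGH_RISK_DOMAINS and affects_individuals:
        return "high"
    return "standard" if affects_individuals else "low"

def required_controls(tier: str) -> list[str]:
    controls = ["model documentation", "named system owner"]
    if tier in {"standard", "high"}:
        controls += ["privacy impact assessment", "pre-deployment testing"]
    if tier == "high":
        controls += ["impact assessment", "bias audit", "AI review board approval"]
    return controls

print(required_controls(risk_tier("employment", affects_individuals=True)))
```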
Key Takeaways
The United States has no comprehensive federal AI law. Organizations must navigate a growing patchwork of state, local, and sector-specific rules that will only become more complex over time. State privacy laws in California, Colorado, and Virginia introduce automated decision-making rights, profiling opt-outs, and impact assessment obligations that affect any organization deploying consequential AI systems. NYC Local Law 144 has established a de facto national standard for employment AI bias audits and transparency, even though its jurisdiction is limited to a single city.
Federal regulators, including the FTC, EEOC, CFPB, HUD, and HHS, are actively applying existing laws to AI with a focus on discrimination, transparency, and data practices. The NIST AI Risk Management Framework, while voluntary, is emerging as the reference framework for demonstrating reasonable AI governance.
The most practical compliance strategy is to implement the strictest requirements as a global baseline: Colorado and Virginia impact assessments, California opt-outs, NYC bias audits, and Illinois biometric consent. State-level fragmentation is likely to persist for several years, making robust internal AI governance more important than chasing individual statutes.
Common Questions
Is there a comprehensive federal AI law in the United States?
No. There is no comprehensive federal AI statute. Executive Order 14110 provides directives for federal agencies, but private organizations are mainly governed by state laws and sector-specific federal regulations.
Which jurisdictions impose the strictest AI-related requirements?
Colorado and Virginia are strict on impact assessments for high-risk profiling, California leads on automated decision-making and opt-out rights, Illinois is strictest on biometric consent and liability, and New York City imposes mandatory bias audits for employment AI.
Do these laws apply to companies located outside the state?
Yes. These laws typically apply based on where the consumer or worker resides, not where the company is located. Serving residents of a state can trigger that state’s obligations.
Is the NIST AI Risk Management Framework mandatory?
No. The NIST AI Risk Management Framework is voluntary, but regulators and courts may treat adherence as evidence of reasonable AI governance, and federal agencies are encouraged to align with it.
Where should an organization start with multi-state AI compliance?
Inventory high-impact AI systems, implement the strictest requirements (impact assessments, opt-outs, bias audits, biometric consent) as a global baseline, then add sector-specific overlays and jurisdiction-specific notice and opt-out details.
Design Once, Comply Many Times
For most organizations, the most efficient path is to design a single, stringent AI governance baseline that satisfies the strictest state and sector requirements, then layer minor jurisdictional variations on top rather than building separate programs for each state.
[Figure: US states and localities with AI-relevant privacy or automated decision-making laws. Source: synthesis of state privacy and AI-related legislation through 2024.]
"In the absence of a federal AI statute, regulators are using existing privacy, consumer protection, and anti-discrimination laws as powerful tools to police AI systems."
— US AI regulatory landscape analysis
References
- AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
- EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission, 2024.
- OECD Principles on Artificial Intelligence. OECD, 2019.
- ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization, 2023.
- Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
- ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat, 2024.
- Cybersecurity Framework (CSF) 2.0. National Institute of Standards and Technology (NIST), 2024.

