What Is the Colorado AI Act?
When Colorado Governor Jared Polis signed Senate Bill 24-205 on May 17, 2024, the state became the first in the nation to enact comprehensive AI legislation. The Colorado AI Act establishes binding requirements for both the developers who build high-risk AI systems and the deployers who put them to use, with a singular objective: preventing algorithmic discrimination before it harms consumers.
The law was originally scheduled to take effect on February 1, 2026. Subsequent legislation pushed that deadline to June 30, 2026, granting businesses a narrow but meaningful window to prepare.
Why Colorado's Law Matters
New York City's Local Law 144, which took effect in July 2023, broke ground as the first enforceable AI regulation in the United States, but its scope is limited to automated employment decision tools. Colorado's approach is fundamentally different in ambition. The Colorado AI Act covers any AI system that makes or substantially contributes to a consequential decision, and it defines "consequential" broadly enough to touch nearly every regulated industry in the state.
In employment, the law reaches hiring, termination, compensation, and promotion decisions. In education, it extends to enrollment, academic discipline, financial aid, and accreditation determinations. Financial services firms face coverage across lending, credit scoring, insurance pricing, and account access. Healthcare organizations must account for AI that influences cost calculations, coverage decisions, diagnoses, and treatment recommendations. Housing providers are subject to the law when AI informs renting, purchasing, or financing outcomes. And legal services firms must consider its reach when AI plays a role in representation or adjudication.
The cumulative effect of this breadth is significant. Organizations that assumed AI regulation was limited to hiring or financial services will find that Colorado's framework demands a far more comprehensive compliance posture.
Who Must Comply
The law draws a clear distinction between two categories of obligated entities, and many organizations will find they fall into both.
Developers
A developer, under the statute, is any company that develops, or intentionally and substantially modifies, an AI system. This classification applies regardless of whether the system is used internally or sold to third parties. Companies building AI tools or platforms for commercial distribution carry developer obligations even when the end-use decisions are made by their customers.
Deployers
A deployer is any company that uses a high-risk AI system to make or inform consequential decisions. Organizations that purchase AI tools from vendors and apply them to decisions about employees, customers, patients, or applicants fall squarely within this definition.
The geographic scope of the law deserves particular attention from general counsel and compliance teams. The Colorado AI Act applies to any entity doing business in Colorado or deploying AI systems that affect Colorado residents. Consistent with the extraterritorial reach of many state consumer protection statutes, a company need not be physically located in Colorado to face obligations under SB 24-205.
What Makes an AI System "High-Risk"?
The threshold question for compliance is whether an AI system qualifies as high-risk. The statute defines a high-risk system as one that makes, or serves as a substantial factor in making, a consequential decision, meaning a decision that produces a material legal or similarly significant effect on a consumer in any of the covered domains: employment, education, financial services, healthcare, housing, or legal services.
Examples of High-Risk AI Systems
In practice, the high-risk classification captures AI systems that screen resumes and recommend interview candidates, determine insurance premiums or loan approvals, recommend medical diagnoses or treatment plans, evaluate tenant applications or set rental prices, and assess student eligibility for financial aid. Each of these use cases involves a decision with direct, material consequences for the affected individual.
Likely Not High-Risk
Conversely, AI tools that perform functions without material consumer impact generally fall outside the high-risk designation. Spell-check and grammar tools, general-purpose search engines in most deployment contexts, content recommendation systems that do not produce legal or similarly significant effects, and internal process optimization tools without direct consumer-facing consequences are unlikely to trigger compliance obligations.
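The threshold test above can be sketched as a simple triage helper. This is an illustrative Python sketch, not a legal test: the field names, domain labels, and examples are assumptions layered on the statute's definition, and any real classification decision needs legal review.

```python
from dataclasses import dataclass

# Covered domains from the statute's definition of "consequential decision".
CONSEQUENTIAL_DOMAINS = {
    "employment", "education", "financial_services",
    "healthcare", "housing", "legal_services",
}

@dataclass
class AISystem:
    name: str
    domain: str              # business domain the system operates in (illustrative label)
    informs_decision: bool   # does it make or substantially inform a decision?
    material_effect: bool    # does the decision carry a legal or similarly significant effect?

def is_high_risk(system: AISystem) -> bool:
    """Rough triage: high-risk if the system substantially informs a
    consequential decision in a covered domain."""
    return (
        system.domain in CONSEQUENTIAL_DOMAINS
        and system.informs_decision
        and system.material_effect
    )

# Examples mirroring the article's lists:
resume_screener = AISystem("resume-screener", "employment", True, True)
spell_checker = AISystem("spell-check", "internal_tools", False, False)

print(is_high_risk(resume_screener))  # True
print(is_high_risk(spell_checker))    # False
```

The point of the sketch is that the inquiry is decision-centric: the same model could be high-risk in one deployment and out of scope in another, depending on what the output is used for.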
Core Requirements
For Developers
The Colorado AI Act imposes a duty of reasonable care on developers to protect consumers from known or foreseeable risks of algorithmic discrimination. This obligation is operationalized through several concrete requirements.
Developers must provide deployers with comprehensive documentation covering the system's reasonably foreseeable uses and any known harmful applications, the type of data used to train the system alongside its known limitations and intended purpose, the evaluation methodology used to assess performance and mitigate algorithmic discrimination prior to deployment, and guidance on proper use, monitoring, and updating of the system.
Beyond documentation, developers must affirmatively disclose any known or reasonably foreseeable risks of algorithmic discrimination to both their deployer customers and the Colorado Attorney General. They must also publish a public statement summarizing the types of high-risk AI systems they develop and the measures they employ to manage discrimination risks.
For Deployers
Deployer obligations are more operationally intensive and will require sustained organizational commitment.
First, deployers must implement a risk management policy and program for every high-risk AI system in use. That program must identify and map all high-risk AI systems across the organization, describe each system's purpose, intended benefits, and intended uses, analyze the potential for algorithmic discrimination, articulate specific risk mitigation strategies, and detail post-deployment monitoring processes.
Second, deployers must complete an annual impact assessment for each high-risk AI system. The assessment must address the system's purpose, intended use, and deployment context, analyze whether the system poses risks of algorithmic discrimination, document the categories of data processed and outputs produced, specify the metrics used to evaluate performance and fairness, describe transparency measures provided to consumers, and outline post-deployment monitoring processes.
Third, the law establishes substantive consumer disclosure obligations. When a high-risk AI system produces an adverse decision affecting a consumer, the deployer must inform the consumer that an AI system was a factor in the decision, provide a plain-language explanation of the reasoning behind the decision, offer the consumer an opportunity to correct any incorrect data, and make available an appeal process that includes human review.
Fourth, deployers must review and update their risk management policies and impact assessments at least annually, or whenever significant changes are made to the underlying AI system.
The Affirmative Defense
Among the most strategically significant provisions in the Colorado AI Act is the affirmative defense available to both developers and deployers. If an entity faces an enforcement action, it can establish a defense by demonstrating two conditions: that it discovered and cured the violation before any complaint was filed, and that it complied in good faith with a nationally or internationally recognized AI risk management framework.
The statute recognizes several frameworks for this purpose. The NIST AI Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology in January 2023, provides a voluntary, rights-preserving framework for managing AI risks. ISO/IEC 42001, the international standard for AI management systems, also qualifies. The Colorado Attorney General retains authority to recognize additional frameworks over time.
The practical implication is clear. Organizations that proactively adopt an established AI governance framework are building a defensible compliance position. The affirmative defense creates a powerful incentive to move beyond minimum compliance and embed recognized risk management practices into AI operations before the law takes effect.
Penalties
Enforcement of the Colorado AI Act rests exclusively with the Colorado Attorney General. The law does not create a private right of action, meaning individual consumers cannot bring lawsuits directly under the statute.
Violations are classified as deceptive trade practices under the Colorado Consumer Protection Act, carrying penalties of up to $20,000 per violation. The Attorney General can also seek injunctive relief, compelling organizations by court order to cease the offending conduct. In determining enforcement actions, the AG is directed to consider the company's size, complexity, and the nature of the AI system in question, a provision that may provide some proportionality for smaller organizations making good-faith compliance efforts.
How to Comply: A Preparation Timeline
Now (February 2026)
The immediate priority is establishing visibility into the organization's AI footprint. Compliance teams should conduct a thorough inventory of all AI systems currently in use across business units, then classify each system by determining whether it makes or informs consequential decisions as defined by the statute. In parallel, leadership should evaluate and select an AI risk management framework, with the NIST AI RMF being the most widely recommended option given its alignment with the affirmative defense provision.
March Through April 2026
With the inventory complete, the focus shifts to policy development and assessment. Organizations should draft and implement a formal risk management policy that meets the statute's requirements, then begin conducting impact assessments for each high-risk system. This period should also include proactive vendor engagement, specifically requesting from AI developers and vendors the documentation that the law requires them to provide.
May 2026
The final month before the effective date should be dedicated to operationalizing consumer-facing obligations and institutional readiness. This means implementing notice and appeal processes for adverse AI-driven decisions, training staff on new obligations and procedures, and finalizing all required documentation including any public statements.
June 30, 2026: Effective Date
All requirements take effect. Organizations should be prepared to begin ongoing monitoring and initiate their annual review cycle from day one.
Common Questions About Scope
Does this apply to AI tools we buy from vendors?
Yes. Purchasing rather than building an AI system does not eliminate deployer obligations. Any organization that uses a high-risk AI system to make or inform consequential decisions bears compliance responsibilities, regardless of whether it developed the system in-house. That said, deployers can and should rely on documentation provided by the developer to satisfy certain requirements, making vendor engagement a critical early step in the compliance process.
What if we use AI tools for internal purposes only?
Internal use does not create a safe harbor. If an AI system makes or substantially contributes to consequential decisions about employees, including hiring, termination, compensation, and promotion, it qualifies as high-risk under the statute. Employment decisions are expressly enumerated in the law's definition of consequential decisions.
What about general-purpose AI like ChatGPT?
The statute is technology-neutral and decision-focused. If employees use general-purpose AI tools to make or inform consequential decisions, such as evaluating job applications or assessing loan eligibility, those specific use cases would likely fall within the law's scope. The relevant inquiry is the nature of the decision being made, not the architecture of the underlying model.
Related Regulations
The Colorado AI Act does not exist in isolation. Organizations building compliance programs should consider how it intersects with the broader regulatory landscape.
NYC Local Law 144 took effect in 2023 with a narrower focus on AI in hiring, making it a useful but limited precedent. The EU AI Act adopts a similar risk-based classification approach with overlapping high-risk categories, meaning multinational organizations may find efficiencies in harmonizing their compliance efforts. The Illinois Biometric Information Privacy Act (BIPA) creates additional obligations where AI systems process biometric data. The Texas Responsible AI Governance Act (TRAIGA) introduces broader governance requirements for AI deployed in Texas. And the NIST AI Risk Management Framework remains the recommended foundation for establishing the affirmative defense under Colorado's law and for building a durable AI governance program that can adapt as additional state and federal regulations emerge.
Common Questions
When does the Colorado AI Act take effect?
The Colorado AI Act takes effect on June 30, 2026. It was originally scheduled for February 1, 2026, but the effective date was delayed through subsequent legislation. Companies should begin preparing now to ensure compliance by the deadline.
Can consumers sue under the Colorado AI Act?
No. The Colorado AI Act does not include a private right of action. Only the Colorado Attorney General can bring enforcement actions under the law. However, the AG has the authority to impose penalties of up to $20,000 per violation.
What is the affirmative defense?
The affirmative defense allows companies to avoid liability by demonstrating they discovered and cured the violation before a complaint was filed, and that they complied in good faith with a recognized AI risk management framework such as the NIST AI RMF or ISO/IEC 42001. This is a strong incentive to adopt these frameworks proactively.
Does the law apply only to companies located in Colorado?
No. The law applies to any entity that develops or deploys high-risk AI systems that affect Colorado residents. If your AI tool makes consequential decisions about people in Colorado — regardless of where your company is headquartered — the law applies to you.
What is algorithmic discrimination?
Algorithmic discrimination occurs when an AI system results in unlawful differential treatment or disparate impact on individuals based on protected characteristics, including race, color, national origin, sex, religion, age, disability, or other protected classes under Colorado or federal law.
Do all AI systems require impact assessments?
Only high-risk AI systems — those that make or substantially contribute to consequential decisions in employment, education, financial services, healthcare, housing, or legal services — require them. AI tools used for non-consequential purposes (like spell-check or content recommendations) do not require impact assessments.
References
- SB24-205: Consumer Protections for Artificial Intelligence. Colorado General Assembly (2024).
- A Deep Dive into Colorado's Artificial Intelligence Act. National Association of Attorneys General (2024).
- Colorado Governor Signs Broad AI Bill Regulating Employment Decisions. Seyfarth Shaw (2024).
- Colorado's AI Law Delayed Until June 2026. Clark Hill (2025).
- Colorado Postpones Implementation of AI Act SB 24-205. Akin Gump (2025).
- AI Risk Management Framework (AI RMF 1.0). NIST (2023).
- SB24-205 Signed Bill Text (PDF). Colorado General Assembly (2024).

