What Is the Colorado AI Act?
The Colorado AI Act (Senate Bill 24-205) is the first comprehensive state-level AI law in the United States. Signed into law on May 17, 2024, it establishes requirements for both developers and deployers of high-risk AI systems to prevent algorithmic discrimination.
The law was originally set to take effect on February 1, 2026, but the effective date was delayed to June 30, 2026 through subsequent legislation, giving businesses additional time to prepare.
Why Colorado's Law Matters
While NYC Local Law 144 focuses specifically on AI in hiring, Colorado's law is much broader. It covers any AI system that makes or substantially contributes to consequential decisions across multiple sectors:
- Employment: Hiring, termination, compensation, promotion
- Education: Enrollment, academic discipline, financial aid, accreditation
- Financial services: Lending, credit, insurance rates, account access
- Healthcare: Cost, coverage, diagnosis, treatment
- Housing: Renting, buying, financing
- Legal services: Legal representation and legal decisions
This breadth means that companies across nearly every industry could be affected.
Who Must Comply
The law creates obligations for two types of entities:
Developers
Companies that design, code, or substantially modify an AI system. If you build AI tools or platforms — even if you sell them to other companies — you are a developer under this law.
Deployers
Companies that use a high-risk AI system to make or inform consequential decisions. If your company uses AI tools purchased from a vendor to make decisions about employees, customers, patients, or applicants, you are a deployer.
Geographic scope: The law applies to any entity doing business in Colorado or deploying AI systems that affect Colorado residents. Like many state laws, its reach extends beyond companies physically located in Colorado.
What Makes an AI System "High-Risk"?
An AI system is high-risk under the Colorado AI Act if it makes, or is a substantial factor in making, a consequential decision. A consequential decision is one that has a material legal or similarly significant effect on a consumer in the areas listed above (employment, education, financial services, healthcare, housing, legal services).
Examples of High-Risk AI Systems
- AI that screens resumes and recommends which candidates to interview
- AI that determines insurance premiums or loan approvals
- AI that recommends medical diagnoses or treatment plans
- AI that evaluates tenant applications or sets rental prices
- AI that determines student financial aid eligibility
Likely Not High-Risk
- AI spell-check or grammar tools
- AI-powered search engines (in most contexts)
- AI content recommendations that do not have legal or similarly significant effects
- AI tools used for internal process optimization without direct consumer impact
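Putting the definition and examples above together, classification comes down to two questions: does the system touch a covered decision area, and is it a substantial factor in the decision? The following is a minimal Python sketch of that first-pass check; the `AISystem` record and `is_high_risk` helper are hypothetical illustrations for an internal inventory, not terms from the statute.

```python
from dataclasses import dataclass
from typing import Optional

# Consequential-decision areas enumerated by the Colorado AI Act.
CONSEQUENTIAL_AREAS = {
    "employment", "education", "financial_services",
    "healthcare", "housing", "legal_services",
}

@dataclass
class AISystem:
    name: str
    decision_area: Optional[str]  # e.g. "employment"; None if no covered area
    substantial_factor: bool      # makes or substantially influences the decision?

def is_high_risk(system: AISystem) -> bool:
    """A system is high-risk if it makes, or is a substantial factor in
    making, a consequential decision in a covered area."""
    return system.decision_area in CONSEQUENTIAL_AREAS and system.substantial_factor

# A resume screener is high-risk; a spell-checker is not.
screener = AISystem("resume_screener", "employment", substantial_factor=True)
spellcheck = AISystem("spell_check", None, substantial_factor=False)
assert is_high_risk(screener)
assert not is_high_risk(spellcheck)
```

In practice the "substantial factor" judgment requires legal review; a check like this only triages the inventory so counsel can focus on borderline systems.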
Core Requirements
For Developers
- Reasonable care: Use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination
- Documentation: Provide deployers with (a structured sketch follows this list):
  - A general description of the system's reasonably foreseeable uses and known harmful uses
  - A description of the type of data used to train the system, its known limitations, and its purpose
  - How the system was evaluated for performance and for mitigation of algorithmic discrimination before deployment
  - How the system should be used, monitored, and updated
- Known risks disclosure: Disclose any known or reasonably foreseeable risks of algorithmic discrimination to deployers and the Colorado Attorney General
- Public statement: Make available a statement summarizing the types of high-risk AI systems developed and how algorithmic discrimination risks are managed
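One way to operationalize the documentation duty above is to treat the deployer-facing package as a structured record and check it for gaps before release. The sketch below is hypothetical: the `DeveloperDisclosure` class and its field names paraphrase the statute's categories and are not official terms.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDisclosure:
    """Illustrative container for the deployer-facing documentation package."""
    system_name: str
    purpose: str
    foreseeable_uses: list[str]     # reasonably foreseeable uses
    known_harmful_uses: list[str]   # known harmful or misuse cases
    training_data_types: list[str]  # types of data used to train the system
    known_limitations: list[str]
    evaluation_summary: str         # performance and discrimination testing
    usage_monitoring_guidance: str  # how to use, monitor, and update

    def missing_fields(self) -> list[str]:
        # Flag empty fields so nothing required is omitted from the package.
        return [name for name, value in vars(self).items() if not value]
```

A release checklist can then refuse to ship documentation while `missing_fields()` is non-empty.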
For Deployers
- Risk management policy: Implement a risk management policy and program for high-risk AI systems, which must:
  - Identify and map all high-risk AI systems in use
  - Describe the purpose, intended benefits, and intended uses of each system
  - Analyze the potential risks of algorithmic discrimination
  - Describe how risks will be mitigated
  - Describe how systems are monitored post-deployment
- Annual impact assessment: Complete an impact assessment for each high-risk AI system (a sketch of such a record follows this list), including:
  - The purpose, intended use, and deployment context
  - An analysis of whether the system poses risks of algorithmic discrimination
  - The categories of data processed and the outputs produced
  - Metrics used to evaluate the system's performance and fairness
  - A description of any transparency measures provided to consumers
  - A description of post-deployment monitoring processes
- Consumer disclosure: When a high-risk AI system makes an adverse decision affecting a consumer, the deployer must:
  - Inform the consumer that the AI system was a factor in the decision
  - Provide a plain-language explanation of the reason for the decision
  - Provide an opportunity for the consumer to correct any incorrect data
  - Provide an opportunity to appeal the decision with human review
- Annual review: Update risk management policies and impact assessments at least annually, or whenever significant changes are made to the AI system
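As a minimal sketch of how these deployer obligations might be tracked, the record below bundles the impact assessment elements with an annual review date, and the helper assembles the consumer-facing adverse-decision notice. All class names, fields, and URL parameters here are illustrative assumptions, not statutory requirements.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Illustrative record mirroring the impact assessment elements above."""
    system_name: str
    purpose_and_context: str           # purpose, intended use, deployment context
    discrimination_risk_analysis: str
    data_categories: list[str]         # categories of data processed
    output_categories: list[str]       # outputs produced
    fairness_metrics: list[str]        # performance and fairness metrics
    transparency_measures: str         # disclosures provided to consumers
    monitoring_process: str            # post-deployment monitoring
    completed_on: date

    def review_due(self) -> date:
        # Refresh at least annually, or sooner if the system changes significantly.
        return self.completed_on + timedelta(days=365)

def adverse_decision_notice(system_name: str, reason: str,
                            correction_url: str, appeal_url: str) -> str:
    """Assemble the consumer disclosure: AI involvement, a plain-language
    reason, and data-correction and appeal paths with human review."""
    return (
        f"An AI system ({system_name}) was a factor in this decision.\n"
        f"Reason: {reason}\n"
        f"Correct your information: {correction_url}\n"
        f"Appeal with human review: {appeal_url}\n"
    )
```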
The Affirmative Defense
One of the most important provisions of the Colorado AI Act is the affirmative defense. If a deployer or developer faces an enforcement action, they can establish an affirmative defense by demonstrating that they:
- Discovered and cured the violation before any complaint was filed
- Complied in good faith with a nationally or internationally recognized AI risk management framework
Recognized frameworks include:
- NIST AI Risk Management Framework (AI RMF 1.0)
- ISO/IEC 42001 (AI Management System standard)
- Other frameworks recognized by the Colorado Attorney General
This is a significant incentive for companies to adopt established AI governance frameworks proactively.
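For teams leaning on the NIST AI RMF, one way to make good-faith compliance auditable is to map each internal control to one of the framework's four functions (GOVERN, MAP, MEASURE, MANAGE) and flag gaps. The mapping below is a hypothetical sketch; the activity names are examples, not NIST requirements.

```python
# Map internal compliance activities to the four NIST AI RMF 1.0 functions.
RMF_EVIDENCE = {
    "GOVERN":  ["risk management policy", "assigned accountability"],
    "MAP":     ["AI system inventory", "high-risk classification"],
    "MEASURE": ["fairness and performance metrics", "impact assessments"],
    "MANAGE":  ["mitigation plans", "post-deployment monitoring"],
}

def unmapped_functions(evidence: dict[str, list[str]]) -> list[str]:
    """Return AI RMF functions with no documented activity; such gaps
    would weaken an affirmative-defense claim."""
    return [fn for fn, activities in evidence.items() if not activities]

assert unmapped_functions(RMF_EVIDENCE) == []
```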
Penalties
The Colorado AI Act is enforced exclusively by the Colorado Attorney General. There is no private right of action — individual consumers cannot sue directly.
- Violations are treated as deceptive trade practices under the Colorado Consumer Protection Act
- Penalties of up to $20,000 per violation
- The AG can also seek injunctive relief (court orders to stop the violating behavior)
- The AG must consider the size and complexity of the company and the nature of the AI system when determining enforcement
How to Comply: A Preparation Timeline
Now (February 2026)
- Inventory: Identify all AI systems in use across your organization
- Classify: Determine which systems make or inform consequential decisions
- Assess frameworks: Choose an AI risk management framework (NIST AI RMF recommended)
March-April 2026
- Risk management policy: Draft and implement your risk management policy
- Impact assessments: Begin conducting impact assessments for each high-risk system
- Vendor engagement: Request documentation from AI vendors/developers about their systems
May 2026
- Consumer disclosure: Implement notice and appeal processes for adverse decisions
- Training: Train staff on new obligations and procedures
- Documentation: Finalize all documentation and make public statements available
June 30, 2026 — Effective Date
- All requirements take effect
- Begin ongoing monitoring and annual review cycle
Common Questions About Scope
Does this apply to AI tools we buy from vendors? Yes. If you deploy (use) a high-risk AI system, you have obligations as a deployer — even if you did not build the system. However, you can rely on documentation from the developer to meet some requirements.
What if we use AI tools for internal purposes only? If the AI system makes or substantially contributes to consequential decisions about employees (hiring, firing, compensation, promotion), it is still high-risk. Employment decisions are specifically covered by the law.
What about general-purpose AI like ChatGPT? If employees use general-purpose AI tools to make or inform consequential decisions (e.g., using ChatGPT to evaluate job applications), those use cases would likely be covered. The focus is on the decision being made, not the underlying technology.
Related Regulations
- NYC Local Law 144: Narrower in scope, focused on AI in hiring and promotion decisions, and enforced since July 2023
- EU AI Act: Similar risk-based approach with overlapping high-risk categories
- Illinois BIPA: Overlaps if your AI system processes biometric data
- Texas TRAIGA: Broader governance requirements for AI used in Texas
- NIST AI RMF: The recommended framework for establishing the affirmative defense
Frequently Asked Questions
When does the Colorado AI Act take effect?
The Colorado AI Act takes effect on June 30, 2026. It was originally scheduled for February 1, 2026, but the effective date was delayed through subsequent legislation. Companies should begin preparing now to ensure compliance by the deadline.
Can consumers sue under the Colorado AI Act?
No. The Colorado AI Act does not include a private right of action. Only the Colorado Attorney General can bring enforcement actions under the law. However, the AG has the authority to impose penalties of up to $20,000 per violation.
What is the affirmative defense?
The affirmative defense allows companies to avoid liability by demonstrating they discovered and cured the violation before a complaint was filed, and that they complied in good faith with a recognized AI risk management framework such as the NIST AI RMF or ISO/IEC 42001. This is a strong incentive to adopt these frameworks proactively.
Does the law apply only to companies based in Colorado?
No. The law applies to any entity that develops or deploys high-risk AI systems that affect Colorado residents. If your AI tool makes consequential decisions about people in Colorado — regardless of where your company is headquartered — the law applies to you.
What is algorithmic discrimination?
Algorithmic discrimination occurs when an AI system results in unlawful differential treatment or disparate impact on individuals based on protected characteristics, including race, color, national origin, sex, religion, age, disability, or other protected classes under Colorado or federal law.
Do all AI systems require impact assessments?
Only high-risk AI systems — those that make or substantially contribute to consequential decisions in employment, education, financial services, healthcare, housing, or legal services. AI tools used for non-consequential purposes (like spell-check or content recommendations) do not require impact assessments.
References
- Senate Bill 24-205: Concerning Consumer Protections for Artificial Intelligence. Colorado General Assembly (2024).
- Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (2023).
- Colorado AI Act SB24-205 Compliance Guide. TrustArc (2024).
