What Is the Colorado AI Act?
The Colorado AI Act (SB 24-205) is the first comprehensive US state law regulating the use of high-risk artificial intelligence systems, imposing a duty of care on both developers and deployers to protect consumers from algorithmic discrimination. Signed into law in May 2024, it took effect on February 1, 2026, with the Colorado Attorney General's enforcement authority activating on June 30, 2026.
The Act applies to any entity that develops or deploys a "high-risk AI system," defined as any AI system that makes, or is a substantial factor in making, a "consequential decision" affecting consumers in Colorado. Consequential decisions span education, employment, financial services, government services, healthcare, housing, insurance, and legal services.
Unlike voluntary frameworks such as the NIST AI RMF, the Colorado AI Act creates enforceable legal obligations backed by state attorney general enforcement authority. Violations can result in penalties of up to $20,000 per violation under the Colorado Consumer Protection Act, with each affected consumer potentially constituting a separate violation.
For enterprises operating nationally, the Colorado AI Act is a harbinger of similar laws expected in other states. Building compliance now - using Areebi's governance platform - creates a foundation that will transfer to future state requirements.
Who Must Comply with the Colorado AI Act?
Any organization that develops or deploys a high-risk AI system that makes or substantially contributes to consequential decisions affecting Colorado consumers must comply, regardless of where the organization is headquartered.
The Act defines two categories of covered entities:
- Developers: Entities that create, code, or substantially modify AI systems. Developers must provide deployers with documentation including intended uses, known limitations, data requirements, risk mitigation guidance, and information needed for impact assessments.
- Deployers: Entities that use high-risk AI systems to make, or as a substantial factor in making, consequential decisions. Deployers bear the primary compliance burden, including impact assessments, consumer disclosures, and risk management programs.
The "consequential decision" trigger covers a broad range of enterprise AI applications:
- Employment: AI-powered resume screening, candidate ranking, performance evaluation, promotion decisions, or termination recommendations
- Financial services: Credit scoring, lending decisions, insurance underwriting, fraud detection that affects access to services
- Healthcare: Clinical decision support, treatment recommendations, insurance coverage determinations
- Housing: Tenant screening, rental pricing algorithms, mortgage application evaluation
- Education: Admissions decisions, grading, disciplinary recommendations
If your organization uses AI in any of these domains and serves Colorado consumers, you are likely subject to the Act. Take Areebi's compliance assessment to determine your specific obligations.
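The scope test above reduces to three questions: is the decision in a covered domain, does the AI system make or substantially contribute to it, and does it affect Colorado consumers? A minimal sketch, assuming our own domain labels rather than statutory language:

```python
# Illustrative scope check for the Colorado AI Act. The domain names and
# the helper function are our own shorthand, not text from the statute.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def is_high_risk(domain: str, substantial_factor: bool, serves_colorado: bool) -> bool:
    """True when a system likely falls in scope: it makes, or is a
    substantial factor in making, a consequential decision in a covered
    domain affecting Colorado consumers."""
    return domain in CONSEQUENTIAL_DOMAINS and substantial_factor and serves_colorado

print(is_high_risk("employment", True, True))   # resume screening for CO applicants
print(is_high_risk("marketing", True, True))    # ad targeting: not a covered domain
```

In practice the "substantial factor" question is the hard one and usually needs legal review; a check like this only triages which systems to review first.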
The Duty of Care Obligation
The Colorado AI Act imposes a general duty of care requiring deployers to use "reasonable care" to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. This is the Act's most consequential provision because it creates a broad, principles-based obligation that extends beyond specific checklist requirements.
Algorithmic discrimination is defined as any condition in which the use of an AI system results in unlawful differential treatment or impact on consumers based on protected characteristics, including age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, and veteran status.
The duty of care requires deployers to:
- Implement a risk management policy and program governing AI deployment
- Complete impact assessments for each high-risk AI system
- Review and update impact assessments on a reasonable cadence and when significant changes occur
- Notify consumers when AI is used to make consequential decisions
- Provide a mechanism for consumers to contest adverse AI decisions
- Notify the Attorney General within 90 days of discovering algorithmic discrimination
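The obligations above lend themselves to per-system tracking. A hypothetical sketch, with obligation names invented for illustration:

```python
# Hypothetical checklist for the deployer duty-of-care obligations listed
# above, tracked per high-risk system. Item names are our own shorthand.
from dataclasses import dataclass, field

DUTY_OF_CARE_ITEMS = (
    "risk_management_program",
    "impact_assessment_completed",
    "impact_assessment_reviewed",       # reviewed on a reasonable cadence
    "consumer_notice_deployed",
    "contest_mechanism_available",
    "ag_discrimination_notice_process",  # 90-day AG notification process
)

@dataclass
class ComplianceChecklist:
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, item: str) -> None:
        if item not in DUTY_OF_CARE_ITEMS:
            raise ValueError(f"unknown obligation: {item}")
        self.completed.add(item)

    def outstanding(self) -> list:
        return [i for i in DUTY_OF_CARE_ITEMS if i not in self.completed]

checklist = ComplianceChecklist("resume-screener-v2")
checklist.mark_done("impact_assessment_completed")
print(checklist.outstanding())
```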
The "reasonable care" standard will be interpreted by courts over time, creating uncertainty for early compliance efforts. However, organizations that implement structured AI governance programs aligned with NIST AI RMF and document their compliance efforts will be well-positioned to demonstrate reasonable care.
Impact Assessment Requirements
The Colorado AI Act requires deployers to complete an impact assessment for each high-risk AI system before deployment and to update it on a regular cadence or whenever material changes occur.
Each impact assessment must include:
- A description of the AI system's purpose, intended uses, and expected benefits
- An analysis of whether the AI system poses known or reasonably foreseeable risks of algorithmic discrimination
- A description of the data the system processes, including personal data categories and sources
- A description of the outputs the system produces and how they are used in consequential decisions
- An evaluation of the data used to train or customize the system, including known biases
- A description of metrics used to evaluate system performance and fairness
- A description of transparency measures, including consumer disclosures
- A description of post-deployment monitoring processes
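The required contents above can be modeled as a structured record so that missing sections are flagged before sign-off. A minimal sketch; the field names are our own shorthand, not statutory language:

```python
# Illustrative only: the impact-assessment contents listed above as a record,
# so empty sections can be caught before deployment.
from dataclasses import dataclass, fields

@dataclass
class ImpactAssessment:
    purpose_and_benefits: str          # purpose, intended uses, expected benefits
    discrimination_risk_analysis: str  # known/foreseeable discrimination risks
    data_processed: str                # personal data categories and sources
    outputs_and_use: str               # outputs and their role in decisions
    training_data_evaluation: str      # training/customization data, known biases
    performance_fairness_metrics: str  # performance and fairness metrics
    transparency_measures: str         # consumer disclosures
    post_deployment_monitoring: str    # ongoing monitoring processes

def missing_sections(ia: ImpactAssessment) -> list:
    """Flag any empty section before the assessment is signed off."""
    return [f.name for f in fields(ia) if not getattr(ia, f.name).strip()]
```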
Impact assessments must be maintained as ongoing documents and made available to the Attorney General upon request. They do not need to be published publicly, but they must be thorough enough to demonstrate the deployer's reasonable care.
Areebi's policy engine includes impact assessment templates aligned with Colorado AI Act requirements, automating much of the documentation process. The platform also provides continuous monitoring that feeds real-time performance data into your assessments.
Consumer Disclosure and Contestability Requirements
Deployers must notify consumers before or at the time a high-risk AI system is used to make a consequential decision about them, and must provide a mechanism for contesting adverse decisions.
Required consumer disclosures include:
- That an AI system is being used to make or substantially contribute to a consequential decision
- A description of the AI system's purpose in plain language
- Contact information for the deployer
- A description of the consumer's right to contest the decision and how to exercise it
When a consumer contests a decision, the deployer must provide a meaningful explanation of the AI system's role in the decision, allow the consumer to correct inaccurate personal data the system processed, and offer an opportunity to appeal, with human review where technically feasible. The Act does not mandate a specific appeal mechanism, but the process must be genuine and accessible.
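The four required disclosure elements can be assembled from a simple template. A minimal sketch; the wording below is illustrative and not legally reviewed:

```python
# Hypothetical disclosure builder covering the four required elements:
# AI use, plain-language purpose, deployer contact, and the right to contest.
def build_disclosure(system_purpose: str, deployer_contact: str, contest_url: str) -> str:
    return (
        "An automated system is being used to make, or substantially "
        "contribute to, a decision about you.\n"
        f"Purpose of the system: {system_purpose}\n"
        f"Questions: contact {deployer_contact}\n"
        "You have the right to contest an adverse decision. "
        f"To do so, visit {contest_url}."
    )

notice = build_disclosure(
    system_purpose="screening rental applications for eligibility",
    deployer_contact="compliance@example.com",
    contest_url="https://example.com/contest",
)
print(notice)
```

A real deployment would version these templates per decision type and log when each consumer received the notice, since timing (before or at the time of the decision) is part of the requirement.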
For enterprises processing high volumes of AI-assisted decisions, implementing scalable disclosure and contestation workflows is essential. Areebi's platform can be configured to generate automated disclosures and manage contestation workflows.
Enforcement Delay and Penalties
The Colorado Attorney General has exclusive enforcement authority under the Act, with enforcement beginning June 30, 2026 - providing a five-month grace period after the February 1, 2026 effective date for organizations to finalize compliance programs.
Key enforcement provisions:
- Exclusive AG enforcement: There is no private right of action. Only the Colorado Attorney General can bring enforcement actions under the Act.
- Cure period: Before bringing an enforcement action, the AG must provide written notice and a 60-day cure period for first-time violations (this provision sunsets after two years).
- Penalties: Violations are treated as deceptive trade practices under the Colorado Consumer Protection Act, carrying penalties of up to $20,000 per violation. Each affected consumer may constitute a separate violation.
- Safe harbor consideration: The AG must consider compliance efforts, including adherence to nationally or internationally recognized AI risk management frameworks (explicitly mentioning the NIST AI RMF), as mitigating factors.
The safe harbor provision is critically important. Organizations that implement the NIST AI RMF or achieve ISO 42001 certification have a concrete defense if the AG investigates their AI practices. This makes framework implementation a legal risk mitigation strategy, not just a governance exercise.
With June 30 approaching, enterprises should finalize their compliance programs now. Start with Areebi's free assessment to identify gaps and prioritize remediation before the enforcement deadline.
Your Colorado AI Act Compliance Action Plan
With enforcement beginning June 30, 2026, organizations should follow this accelerated action plan to achieve compliance before the deadline.
- Inventory all AI systems (Weeks 1-2): Identify every AI tool, model, and service that contributes to decisions affecting Colorado consumers. Include vendor AI features embedded in existing software.
- Classify high-risk systems (Weeks 2-3): Determine which AI systems make or substantially factor into consequential decisions in covered domains. These are your in-scope systems requiring full compliance.
- Establish risk management program (Weeks 3-4): Document your AI risk management policy, governance structure, and responsibilities. Align with NIST AI RMF to leverage the safe harbor provision.
- Complete impact assessments (Weeks 4-8): Prepare and document impact assessments for each high-risk AI system. Assess algorithmic discrimination risks across all protected characteristics.
- Implement consumer disclosures (Weeks 6-10): Design and deploy consumer notification mechanisms for all consequential AI-assisted decisions. Establish contestation workflows.
- Deploy monitoring (Weeks 8-12): Implement ongoing monitoring for fairness metrics, performance degradation, and emerging discrimination risks.
- Test and document (Weeks 10-12): Conduct end-to-end testing of compliance processes. Compile documentation demonstrating reasonable care.
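For the monitoring step, one common fairness signal is the selection-rate ratio across groups (the "four-fifths rule" heuristic from US employment practice). The Act does not mandate any particular metric; this is a sketch of one reasonable check a monitoring program might run:

```python
# Hedged sketch: disparate-impact ratio as a monthly monitoring signal.
# Group labels and the 0.80 threshold are illustrative conventions, not
# requirements of the Colorado AI Act.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (favorable_count, total_count)."""
    return {g: fav / tot for g, (fav, tot) in outcomes.items() if tot}

def disparate_impact_ratio(outcomes: dict) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

monthly = {"group_a": (40, 100), "group_b": (28, 100)}
ratio = disparate_impact_ratio(monthly)
print(f"{ratio:.2f}")  # prints 0.70: below the 0.80 heuristic, worth investigating
```

A ratio below the threshold is a trigger for investigation, not proof of unlawful discrimination; the finding and the follow-up analysis are exactly the kind of documentation that supports a reasonable-care defense.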
Areebi's platform accelerates every step of this plan with automated AI discovery, policy templates, impact assessment workflows, and continuous monitoring. See pricing for enterprise deployment options.
Frequently Asked Questions
When does the Colorado AI Act take effect?
The Colorado AI Act took effect on February 1, 2026, but the Attorney General's enforcement authority does not activate until June 30, 2026. This provides a grace period for organizations to finalize compliance programs. The cure period provision gives first-time violators an additional 60 days to remediate after receiving AG notice.
Does the Colorado AI Act apply to companies outside Colorado?
Yes. The Act applies to any developer or deployer whose high-risk AI systems make consequential decisions affecting Colorado consumers, regardless of where the company is headquartered. If your AI system impacts Colorado residents in covered decision categories, you are subject to the Act.
What is a 'consequential decision' under the Colorado AI Act?
A consequential decision is any decision that has a material legal or similarly significant effect on a consumer's access to or terms of education, employment, financial services, government services, healthcare, housing, insurance, or legal services. AI systems that make or substantially contribute to these decisions are classified as high-risk.
Is there a safe harbor for NIST AI RMF compliance?
Yes. The Act explicitly states that the Attorney General must consider adherence to nationally or internationally recognized AI risk management frameworks - specifically mentioning the NIST AI RMF - as a mitigating factor when deciding whether to pursue enforcement. This makes NIST implementation a concrete legal defense strategy.
What are the penalties for violating the Colorado AI Act?
Violations are treated as deceptive trade practices under the Colorado Consumer Protection Act, carrying penalties of up to $20,000 per violation. Each affected consumer may constitute a separate violation, so penalties can accumulate rapidly for AI systems processing high volumes of decisions.
About the Author
VP of Compliance & Trust, Areebi
Former compliance director at a Big Four consulting firm, with deep expertise in HIPAA, SOC 2, GDPR, and the EU AI Act.
Ready to govern your AI?
See how Areebi can help your organization adopt AI securely and compliantly.