Colorado’s AI Act: From Leader to Laggard?

Colorado passed the Colorado Artificial Intelligence Act (SB 24-205) (“CAIA”) in May of 2024, becoming the first U.S. state to enact a comprehensive AI framework. Despite that early lead, however, Colorado is slowing its roll on implementation as lawmakers and Governor Jared Polis grapple with how best to balance innovation and accountability.

The Act

CAIA is consumer-focused, aiming to prevent “reasonably foreseeable algorithmic discrimination” by “high-risk AI systems” used in consequential domains such as hiring, lending, housing, healthcare, public benefits, insurance, and education. To that end, developers and deployers must adopt risk management programs, perform impact assessments, and disclose to affected individuals when AI makes or influences decisions in these sensitive areas.

If that jargon seems confusing, then welcome to the conversation! That confusion is exactly what worries many Coloradans.

Effectively, CAIA prohibits businesses from using AI systems that deny consumers access to services based on demographics alone.

As an example, consider hiring software that compares applicant resumes and demographics (age, sex, ethnicity) against the resumes and demographics of individuals who have been highly successful at a law firm over the past 30 years.

To frame the point, start with the generalization that, historically, attrition rates for women and minorities in law have been higher than those for men.

If the model measures “success” based purely on longevity with the firm and promotions to partner, the model would favor white male applicants. Therefore, when presented with two similar resumes from individuals with different demographics, this model would discriminate against non-white, non-male candidates, even if credentials are evenly matched. This is the type of “algorithmic discrimination” that CAIA aims to mitigate. 

The example model could be fine-tuned to omit weights tied to demographic information, but proxy features (such as career gaps or tenure patterns) can quietly reintroduce the same bias, and many iterations of this type of AI tool carry such inadvertent pitfalls. This reality has left Colorado business owners and tech companies in a pickle.
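To make the mechanism concrete, here is a minimal, purely hypothetical Python sketch of how proxy discrimination can arise even when demographic fields are omitted from the model. All feature names and weights are invented for illustration; no real hiring tool works exactly this way.

```python
# Hypothetical sketch of the biased hiring model described above.
# The model never sees demographics directly, but weights "learned" from
# 30 years of firm history reward continuous tenure and penalize career
# gaps -- proxies that, given historically higher attrition among women
# and minorities, reproduce the demographic skew of past "successful" hires.

WEIGHTS = {
    "bar_exam_score": 1.0,           # a facially neutral credential signal
    "continuous_tenure_years": 0.8,  # proxy: rewards the historical retention pattern
    "career_gap_years": -0.9,        # proxy: penalizes attrition-pattern gaps
}

def score(resume: dict) -> float:
    """Weighted sum over resume features; higher means 'more likely to succeed'."""
    return sum(weight * resume.get(feature, 0) for feature, weight in WEIGHTS.items())

# Two candidates with identical credentials (same bar exam score) diverge
# solely because of tenure-pattern proxies, not merit.
candidate_a = {"bar_exam_score": 90, "continuous_tenure_years": 8, "career_gap_years": 0}
candidate_b = {"bar_exam_score": 90, "continuous_tenure_years": 5, "career_gap_years": 3}
```

Here, candidate_a outscores candidate_b despite identical credentials, which is why simply deleting demographic columns is not enough: the impact assessments CAIA requires are aimed at surfacing exactly these proxy effects.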

CAIA Criticisms by Businesses and the Tech Sector

CAIA is under significant scrutiny from small businesses and tech companies alike. First, many of CAIA’s terms are ambiguous or broad, including “high-risk AI system,” “deployer,” “developer,” “algorithmic discrimination,” and “consequential decision.” The lack of clear definitions has left businesses uncertain whether liability may attach to specific AI use cases that may or may not constitute “consequential decision making,” and whether the Act reaches businesses that use potentially discriminatory third-party or open-source AI models they did not train themselves. Businesses are pushing for a “no control, no liability” approach, which may not serve CAIA’s aims and could discourage human-in-the-loop oversight for sensitive decisions.

Opponents of the bill also argue that it places a disproportionate burden on small companies, which must document, monitor, and assess AI systems in the same manner as larger companies. Both developers and deployers deem CAIA an “innovation tax,” with compliance roadblocks impeding businesses’ ability to quickly deploy or adopt AI programs. Given these tensions, some businesses (particularly those that cannot afford compliance efforts) may shy away from using AI for any decision making, potentially leaving them in their competitors’ dust.

In emerging tech, startups fear a loss of available seed funding as investors become more tentative about Colorado companies. Higher compliance costs, model testing, and reporting requirements may reduce profitability to the point that these companies are dead on arrival, to say nothing of the increased potential liability under CAIA. Liability is a hot-button issue for startups and can come from every angle. Startups employing AI technology largely rely on open-source models that they modify for specific use cases, so they fear liability arising from an open-source model they did not know to be discriminatory, or from modifications that, for one reason or another, produced discriminatory results they might have detected with a larger budget. Startups also worry about liability shifting onto them if consumers use their product in an unintended manner that ends up causing a discriminatory outcome.

Though many of these concerns can be mitigated by contract, the fears are real. As explained below, lawmakers are working to address them.

Current State of Things

Colorado is divided. After extensive public pushback and lobbying, on August 25, 2025, Governor Polis signed an amendment to the Act that delayed its effective date from February 1, 2026 until July 1, 2026. In the meantime, there is massive confusion about what enforcement of the Act will look like, and when compliance will actually become mandatory. 

On October 15, 2025, Governor Polis convened a second working group to explore ways to simplify and clarify the law before the effective date.

What Can Businesses Do Now?

This is a period of flux for Coloradans, and it’s anyone’s guess how this will all shake out. In the interim, businesses can do the following now:

  • Monitor Updates: It is likely that Governor Polis’s working group will publish comments in early 2026, giving the legislature time to further amend the Act and businesses time to comply before July 1, 2026.

  • Map AI Use: Consider which AI systems may fall within the law in its current form, casting a broad net to include even those AI systems that arguably fall outside of the law’s purview, and get a head start on drafting (or outlining) risk management documentation. 

  • Start Documenting: While some of the Act’s obligations may change before July 1, 2026, businesses using AI for clearly “high-risk” use cases (such as hiring, loan applications, or benefits determinations) are unlikely to escape compliance obligations entirely. Setting up robust compliance architecture for “high-risk” systems now will prevent a compliance bottleneck down the line. If your organization uses AI systems for different types of decisions and full compliance seems too cumbersome, consider ranking decision points as “high-risk” or “lower-risk” (for which a looser governance approach may be appropriate).

  • Review Contracts (Both Developers and Deployers): Developers and deployers will have different, and sometimes competing, needs when it comes to contract terms, and it is important to discuss those needs with an attorney. Generally speaking: whether your business develops or deploys software, the Act requires sharing of model training documentation (data provenance, model limitations) to stay in compliance, so both sides will want language on information sharing and on cooperation with audits and other regulatory inquiries. Deployer-side contracts should shift liability upstream to developers, who have more (or total) control over the model; deployers will also want a firm understanding of the fairness testing a model has undergone before entering into a vendor contract, and should obtain representations and warranties about that testing. Developer-side contracts should include indemnification provisions covering AI tools used outside the scope stated in the vendor agreement, or modified in a manner not specified in the parties’ agreement.

  • Assess Impacts of Compliance on Budget: Once your AI systems have been mapped and compliance documentation considered, your organization can get an idea of what resources will be needed to maintain compliance (e.g., whether to hire, insource, or outsource support; audit log maintenance; legal counsel). The cost to achieve and maintain compliance should be fully assessed, and additional funds set aside.

  • Seize the Opportunity to Act Now: Failure to prepare early may mean an inability to comply with the Act if it takes effect on July 1, 2026. That could create a barrier to entry for unprepared, noncompliant businesses, but an opportunity for well-prepared businesses to face less competition.

  • Seek to be Compliant to Avoid Liability: Non-compliance with the Act may be treated as a deceptive trade practice under the Colorado Consumer Protection Act, subjecting companies to civil liability.

Conclusion

The Colorado Artificial Intelligence Act clearly signals Coloradans’ desire for transparency, fairness, and consumer protection in AI use. Colorado companies should monitor changes in the law and the timing of enforcement, but in the interim they should take steps to comply with the law as written, as implementing the compliance architecture will take time.

Separately, since CAIA was signed into law in 2024, California has passed the Transparency in Frontier Artificial Intelligence Act (TFAIA). TFAIA is much less specific, focusing on large frontier model developers (companies grossing over $500 million in annual revenues) and largely shifting the burden away from startups and smaller businesses. TFAIA was not California’s first attempt at an AI framework: an earlier draft of California’s AI law included specific benchmarks that lawmakers believed would stifle innovation.

California is a leader in the tech industry, home to 32 of the 50 largest AI companies as of March 2025. Accordingly, while it seems unlikely at this late juncture, Colorado may yet examine California’s new law and consider whether a more hands-off approach would be better.

Whether or not CAIA remains law in its present form, transparency in AI systems has time and again built public trust and confidence. Compliance, therefore, is not a waste.
