Concern from tech leaders prompts delay of Colorado AI Act
Amid concerns raised by tech companies and business leaders, the Colorado legislature convened a special session to revisit certain aspects of the Colorado AI Act (CAIA), a comprehensive law governing the development and deployment of artificial intelligence (AI) systems. The CAIA was signed into law on May 17, 2024, and was set to take effect in February 2026.
Governor Jared Polis’s signing statement, excerpted below, highlights the breadth of concern about the CAIA:
This law creates a complex compliance regime for all developers and deployers of AI doing business in Colorado, with narrow exceptions for small deployers. There are also significant, affirmative reporting requirements between developer and deployer, to the attorney general, and to consumers.
I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike. Government regulation that is applied at the state level in a patchwork across the country can have the effect to tamper innovation and deter competition in an open market.
The Colorado legislature announced on August 26 that negotiations to amend the law had failed; implementation of the law in its original form is now delayed until June 2026.
Mitigating algorithmic discrimination caused by the use of AI
The CAIA is sometimes referred to as one of the most sweeping artificial intelligence laws in the United States. Amid fears of AI discrimination, the CAIA aims to regulate developers and deployers of “high-risk artificial intelligence systems” — systems that play a substantial role in making consequential decisions with material, legal, or similarly significant effects in education, employment, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services. At its core, the CAIA’s goal is to mitigate the risk of algorithmic discrimination caused by the use of AI resulting in “unlawful differential treatment” of individuals or groups based on a protected class such as age, race, or religion.
To achieve this goal, the CAIA imposes on developers and deployers of AI a duty of care to protect consumers from “any known or reasonably foreseeable risk of algorithmic discrimination” arising from the use of their systems. The act also imposes specific requirements on developers and deployers of AI systems, and the attorney general is granted the authority to request documentation related to those requirements.
Obligations and requirements for developers & deployers under the CAIA
Developers have additional requirements under the act for disclosing information about high-risk AI systems, including:
- Issuing a general statement to deployers that describes the “reasonably foreseeable uses and known harmful or inappropriate uses” of the AI.
- Maintaining public disclosures about the types of high-risk AI systems they manufacture and how algorithmic discrimination is managed.
- Notifying the attorney general and all known deployers within 90 days of discovering that an AI system has caused or is reasonably likely to cause algorithmic discrimination.
Deployers’ obligations include more governance- and consumer-protection-focused measures, such as:
- Implementing a risk management policy program governing high-risk AI deployment.
- Conducting impact assessments annually and within 90 days of modifications to a high-risk AI system.
- Making public disclosures and periodically updating a statement summarizing the AI systems deployed and how algorithmic discrimination will be mitigated.
- Reviewing deployment of the AI system annually to ensure there is no algorithmic discrimination. If algorithmic discrimination is discovered, the deployer must notify the attorney general without unreasonable delay, and in any event within 90 days.
- Notifying individuals if a high-risk AI system is used to make a consequential decision no later than at the time of deployment. If the decision is adverse to the individual, the deployer is required to provide additional documentation disclosing the reasons for the decision and an opportunity to correct inaccurate data or appeal the decision.
Impact of CAIA on tech companies operating in Colorado
The CAIA has drawn criticism as overly burdensome because, under its disclosure requirements, deployers such as hospitals and schools must disclose all possible biases and their plans to mitigate the impact even though they did not develop the AI themselves. This has led tech companies to threaten to leave Colorado to avoid increased liability under the CAIA. The special session attempted to renegotiate the requirements under the CAIA to avoid losing tech innovation and development in Colorado. Although the negotiations were unsuccessful, the delayed implementation gives lawmakers an opportunity to attempt to amend the law again in the 2026 session.
How businesses developing or deploying AI can prepare for the CAIA
Businesses developing or deploying high-risk AI should begin preparing for the law’s implementation in 2026. If the CAIA remains in its original form, businesses should implement risk management plans modeled on the NIST AI Risk Management Framework or another nationally recognized framework. The risk management plan should include maintaining public disclosures and conducting impact assessments to comply with the CAIA’s requirements. Businesses should also create internal reporting processes for notifying the attorney general of any AI discrimination discovered, so that they can meet the 90-day notification deadline. Creating a risk management plan now will help businesses remain in compliance when the law takes effect next summer.
For more insight and guidance on risk management planning and reporting, reach out to Annslee Perego or another member of McDonald Hopkins’ AI practice.