On May 17, 2024, the first comprehensive U.S. state law governing artificial intelligence (AI) was signed into law. The Colorado Artificial Intelligence Act goes into effect on February 1, 2026. While the Act affects many areas, it specifically regulates the use of high-risk AI in employment and employment opportunities. A high-risk AI is defined as a system that, when deployed, makes, or is a substantial factor in making, a consequential decision.
A consequential decision is defined as one that has a material legal or similarly significant effect on the provision or denial to any consumer of the following:
- Educational enrollment or an educational opportunity
- Employment or an employment opportunity
- A financial or lending service
- An essential government service
- Health care services
- Housing
- Insurance
- A legal service
For our purposes, the most significant one is a high-risk AI having a material effect on an employment decision. This includes hiring, recruiting, termination, or any other decision that affects employment or employment opportunity. While the Act exempts employers with 50 or fewer employees from some requirements, it otherwise requires organizations to take broad measures to protect current and potential employees from algorithmic discrimination.
The Colorado attorney general’s office is charged with rulemaking and enforcing the Act. The Act does not provide a private right of action.
The Act’s Requirements
The Act imposes requirements on both “developers” and “deployers” of high-risk AI systems. Developers are those who create high-risk AI systems, and deployers are employers, presumably with more than 50 employees in Colorado, who adopt high-risk AI within their organization. The Act requires both developers and deployers to use reasonable care to protect Colorado residents from known or reasonably foreseeable algorithmic discrimination risks by:
- Developing a risk management policy that specifies and incorporates principles, processes, and personnel that serve to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination
- Creating a detailed impact assessment pursuant to the requirements of the Act
- Giving notice to consumers in plain language
- Disclosing to the consumer any consequential decisions that a high-risk AI participated in
- Reporting to the Colorado attorney general's office
- Disclosing to the consumer when they are interacting with an AI system
To establish an affirmative defense in response to an enforcement action, the Act requires that organizations comply with a standard risk management framework, such as the NIST AI Risk Management Framework.
Action Steps
If your organization uses a high-risk AI that makes, or is a substantial factor in making, consequential employment decisions, we suggest getting ready now because implementing the necessary AI controls can take time. We recommend that you do the following:
- Determine whether your organization is using a high-risk AI to make consequential decisions. If it is not, continue to evaluate periodically whether it is.
- If your organization does employ a high-risk AI to make consequential decisions, then do the following:
  - Review or, if necessary, create an AI governance policy that conforms to a standardized risk management framework.
  - Draft and implement a risk management policy.
  - Establish a process for detecting and mitigating discrimination bias that arises from AI use.
  - Prepare the necessary documentation for the Colorado attorney general's office.
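For the bias-detection step above, one widely used screening heuristic is the EEOC "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the outcome warrants closer review. The sketch below, in Python, assumes you can export selection outcomes by demographic group from your AI tool; the group labels, numbers, and function names are illustrative, and the rule is a screening heuristic, not a legal standard or a requirement of the Act.

```python
# Minimal sketch of a four-fifths (adverse impact) screen for AI-driven
# selection decisions. Data and group labels are hypothetical; the 0.8
# threshold is the EEOC screening heuristic, not a legal conclusion.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's rate, with their impact ratios."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()
            if r / best < threshold}

# Hypothetical hiring outcomes from an AI screening tool.
outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected
}
print(adverse_impact(outcomes))  # group_b: 0.30 / 0.48 = 0.625, below 0.8
```

A screen like this only surfaces candidates for review; any flagged disparity should feed into the impact assessment and mitigation process the Act requires, with counsel involved.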
Employers Council will update members with future articles on any further developments. In the meantime, for guidance on the use of artificial intelligence, please consult our Employers Guide to Managing AI in the Workplace. If you have any questions, please contact us at info@employerscouncil.org.
Mark Decker is an attorney for Employers Council.