New AI Compliance Requirements for Colorado Employers Prohibit Algorithmic Discrimination
Carl Williams, Paralegal
On June 30, 2026, Colorado’s Senate Bill 24-205 (“SB24-205”), also known as the Colorado AI Act, goes into effect. [1] The law requires employers to exercise reasonable care when using high-risk AI systems to prevent algorithmic discrimination. SB24-205 authorizes the Colorado Attorney General (“AG”) to treat violations as deceptive trade practices and to impose civil penalties.
Employers’ Use of AI in Human Resources Programs and Key Definitions
A high-risk AI system is any tool that makes or meaningfully influences a consequential decision affecting a person’s access to employment, education, healthcare, or other significant opportunities. For employers, this typically includes systems that use AI to screen or rank applicants, evaluate employee performance, recommend promotions, or determine eligibility for benefits or workplace opportunities.
Algorithmic discrimination occurs when an AI system causes unfair treatment of individuals based on protected characteristics such as race, sex, age, disability, national origin, religion, or other legally protected traits.[2] Employers remain prohibited from discriminating against individuals based on these characteristics under federal and state anti-discrimination law.[3]
A developer is anyone who creates or substantially modifies a high-risk AI system. Employers fall into this category if they develop or customize AI tools. A deployer is anyone who uses a high-risk AI system, which may include human resources departments, managers, and compliance teams. SB24-205 does not establish any employee-count threshold; any employer that develops or deploys a high-risk AI system is covered.
Obligations for Employers Using High-Risk AI Systems
Employers that use high-risk AI systems must maintain a documented AI governance and risk-management program that explains how the organization identifies, monitors, and mitigates algorithmic discrimination. SB24-205 recommends the National Institute of Standards and Technology’s AI Risk Management Framework as an appropriate guide for achieving compliance.
Before deploying any high-risk AI system, employers must complete a written impact assessment that addresses the system’s purpose, data inputs, testing procedures, risks, and safeguards. The assessment may be conducted internally or by a third-party auditor, depending on the employer’s resources and risk-management strategy. It must then be reviewed and updated at least annually, and re-evaluated within ninety days of any substantial modification to the system.
Colorado residents must receive clear notice when high-risk AI systems are used to make consequential decisions about them. If the decision is adverse, deployers must provide residents with a plain-language explanation. Residents must also be given the opportunity to appeal an adverse outcome, including the option for human review, unless offering such an appeal would not be in the consumer’s best interest, for example, where delay could result in serious safety risks or harm.[4] Employers must also publish a public transparency statement identifying the high-risk AI systems they use and describing their risk-mitigation practices. Finally, if an employer discovers that its AI system has caused algorithmic discrimination, it must report the issue to the AG within ninety days.
Enforcement and Exemptions
The AG has exclusive authority to enforce SB24-205, and violations are treated as deceptive trade practices, carrying civil penalties of up to $20,000 per violation.[5] Certain entities, such as banks, insurers, and HIPAA-regulated healthcare organizations, are exempt from portions of the law because they are already subject to comparable federal or state oversight.
Employer Takeaways
SB24-205 will significantly shape how companies use AI in their human resources systems, including the hiring process, and will require employers to develop protocols aimed at preventing discrimination. In preparation for the law’s June 30, 2026 effective date, employers should assess how they use AI in their human resources systems, determine which AI tools qualify as high-risk, and develop AI governance policies for those programs, including the required public transparency statements.
Employers using AI in their human resources systems should contact Campbell Litigation to assist with assessing and developing compliant AI governance programs.
[1] Colorado is the first state to enact an AI law that imposes governance obligations on both developers and deployers of AI systems within the state.
[2] See C.R.S. § 6-1-1701(1) (2024).
[3] See Colorado Anti-Discrimination Act (“CADA”), C.R.S. § 24-34-401 et seq., and Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq. (“Title VII”).
[4] See C.R.S. § 6-1-1703(4)(c)(I) (2024).
[5] See C.R.S. § 6-1-112(1)(a) (2024).