Colorado Sets Precedent with Comprehensive AI Regulation

by Fadi Agour, J.D.

In a pioneering move, Colorado has emerged as the national frontrunner in instituting comprehensive regulation of artificial intelligence (AI) technology. On May 17, 2024, Governor Jared Polis signed Colorado Senate Bill 24-205, enacting the Colorado AI Act (CAA) into law. Unlike the comparatively narrow AI legislation enacted in states such as Florida and Utah, the CAA takes a risk-based approach, regulating developers and deployers of AI systems that carry a heightened potential for “algorithmic discrimination.”

Scope of Regulation
The CAA applies to developers and deployers doing business in Colorado that develop or use “high-risk artificial intelligence systems.” Under the CAA, a high-risk AI system is one that makes, or is a substantial factor in making, a “consequential decision”: a decision with a material legal or similarly significant effect on the provision, denial, cost, or terms of critical services such as education, employment, financial services, health care, housing, insurance, and legal services, among others. The legislation also reaches AI systems that generate content, decisions, predictions, or recommendations concerning consumers that are then used as a basis for consequential decisions about them.
 
Exclusions and Definition of Algorithmic Discrimination
The CAA carves out exceptions, however. AI systems that perform narrow procedural tasks, or that merely detect decision-making patterns or deviations from prior patterns, are excluded so long as they are not intended to replace or influence a completed human assessment without sufficient human review. Anti-fraud technology that does not use facial recognition, databases, data storage, cybersecurity tools, firewalls, and generative AI tools such as chatbots that provide users with information are likewise excluded from the high-risk classification, provided that the generative AI tools are subject to a use policy prohibiting the generation of discriminatory or harmful content.
Algorithmic discrimination, as defined by the legislation, occurs when the use of an AI system results in unlawful differential treatment of, or a disparate impact on, individuals on the basis of a protected classification such as race, color, ethnicity, national origin, religion, sex, age, disability, or veteran status. Notably, AI systems used to increase diversity or to redress historical discrimination are excluded from this definition, as are discriminatory acts or omissions arising from the use of AI within private clubs or other establishments not open to the public.
 
Developer and Deployer Obligations
The CAA imposes a duty of reasonable care on developers of high-risk AI systems to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from those systems. To establish reasonable care, developers must follow specified protocols, including providing deployers with documentation describing the system’s purpose, intended uses, benefits, known or foreseeable risks of algorithmic discrimination, and risk-mitigation measures. Developers must also supply deployers with the documentation needed to complete impact assessments of the system, disclose any known or reasonably foreseeable risks of algorithmic discrimination to the Attorney General and known deployers within 90 days of discovery, and publish summaries describing the types of high-risk AI systems they develop or substantially modify.
 
Deployers, in turn, must exercise reasonable care to protect consumers from algorithmic discrimination risks associated with high-risk AI systems. This includes periodically reviewing deployed systems for evidence of discriminatory outcomes, notifying consumers when a high-risk AI system is used to make a consequential decision about them, and providing consumers an opportunity to correct inaccurate information that influenced such a decision. Deployers with 50 or more full-time employees face additional obligations, such as maintaining a risk management policy and completing impact assessments, to fulfill their duty of care.
 
Disclosure Imperatives and Enforcement
Additionally, developers and deployers that use AI systems to interact with consumers must disclose that the consumer is interacting with an AI system rather than a live person. Enforcement of the CAA rests exclusively with the Colorado Attorney General, who is empowered to promulgate additional rules to implement the Act. Violations of the Colorado AI Act constitute unfair trade practices under the Colorado Consumer Protection Act; however, the legislation provides no private right of action.

Looking Ahead
Ahead of the CAA’s effective date of February 1, 2026, developers and companies that use AI to make consequential decisions should familiarize themselves with the legislation’s requirements and prepare to comply. Declining to do business in Colorado remains an alternative; nonetheless, given the evolving regulatory landscape, similar AI statutes are poised for enactment in other states. Connecticut, for instance, has been deliberating on SB 2, titled “An Act Concerning Artificial Intelligence,” which mirrors several facets of the CAA, underscoring the inevitability of comprehensive AI regulation nationwide.