State lawmakers discussed the uses of artificial intelligence in both the public and private sectors during an interim committee meeting on Monday.
The Science, Technology and Telecommunications Committee discussion followed a presentation focused on transparency in how AI programs are increasingly being used by companies to make consequential decisions that can determine whether a person is granted or denied services.
“(Consequential decisions) are decisions that have a major impact on somebody’s life or livelihood. It really changes what opportunities they have access to, what services they have access to,” computer scientist Christopher Moore of the Santa Fe Institute said. “We need a greater degree of transparency to make sure that AI is doing good here rather than harm.”
These consequential decisions include automated hiring decisions made by employers, as well as decisions in health care, education, social services fraud detection, housing and criminal justice.
“The advocates of AI, both inside and outside the industry, argue that unlike human decision making, which the psychologists tell us is often imperfect, AI can be evidence-based. It can be objective. It can avoid some of the stereotypes that humans might use, consciously or unconsciously, and it can be accurate in a way that we can measure quantitatively,” Moore said. “On the other hand, some of the cons are that AI works by being trained on data from the past… and then assumes that those patterns will hold in the future.”
Because AI does not know people as individuals, it treats them like statistics, Moore said.
Many AI systems, called black boxes, “produce some decision, some recommendation, but without any explanation or any ability for the people affected or the people advised by it to understand how and why it came up with that,” Moore said.
During his presentation, Moore told the committee the questions he would ask about the use of a black box program: What kind of data does the program use? Where does the data come from? Do decision makers and others affected by the AI’s recommendations understand why it made its decisions? And is there an independent assessment of its accuracy?
“If I’m a caseworker or a judge, and some AI tells me ‘this is not a high-priority call about a child protection case’ or ‘yes, this person can be released without danger to the public,’ I would like to know what the logic behind that is,” Moore said. “I would like to know what kinds of mistakes these systems can make because they do make mistakes.”
Attempted AI transparency legislation
Committee members also discussed HB 184, a bill introduced during the 2024 regular session that sought to have state governmental agencies submit a report detailing the AI systems used by each agency. The bill passed through two House committees during the regular legislative session but was never heard on the House floor.
HB 184’s drafter, Mark Edwards of the Legislative Council Service, spoke about what the bill would have done.
“During development of the bill, there was a lot of discussion about… (figuring) out whether the software is biased or not,” Edwards said. “It was extremely hard to come up with a definition of bias that everyone could agree on.”
The bill ended up with a definition covering “consequential decision results that may constitute an unlawful discriminatory practice pursuant to” the Human Rights Act.
Colorado example
House Majority Leadership Office Chief of Staff Alisa Lauer highlighted Colorado’s law as an example of AI transparency legislation.
The Colorado legislation was chosen because it was “recognized as the first comprehensive artificial intelligence legislation in the United States,” Lauer said.
The legislation creates duties for AI developers and users “to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of high-risk AI systems,” she said.
The bill places enforcement authority with the Colorado Attorney General’s Office and provides no private right of action.
The bill, although touted for its comprehensiveness, was not without its issues, such as “loopholes that allow companies to withhold information or hide evidence of discrimination, weak enforcement provisions and the law’s reliance on self-reporting and self-assessments,” Lauer said.
The bill was passed and signed into law earlier this year. It goes into effect in February 2026.