After two years of struggle, lawmakers are poised to rewrite -- and scale back -- Colorado's AI regulations

Seth Klamann, The Denver Post


After two years of task forces, collapsed deals, attempted overhauls and an Elon Musk lawsuit, Colorado lawmakers are finally poised to rewrite — and scale back — the state’s beleaguered artificial intelligence regulations.

The latest legislation, in contrast to those earlier stutter-starts, is flying through the legislature. By Saturday afternoon, Senate Bill 189 had met little resistance, had passed both chambers and was headed to Gov. Jared Polis’ desk, as the clock ticks down to the annual legislative session’s adjournment Wednesday night.

The proposal represents a near-total rewrite of the state’s initial attempt to protect Coloradans from being discarded by a discriminatory AI system when they apply for jobs, bank loans or housing. Passed into law in 2024, those rules — which haven’t yet taken effect — have become a piñata for nearly every group with an interest in them. Lawmakers are now preparing to scale them back to a requirement that applicants be made aware when AI is involved in a consequential decision about their lives.

“The amount of interest and money and scale of this stuff begs for some type of regulation,” said Sen. Robert Rodriguez, a Denver Democrat who’s been at the center of the AI negotiations since 2024. “… It’s important that we do safety stuff. Polling, bipartisanly, on data centers and all the other stuff — it is not going well, and people want us to do something.”

But figuring out the right something has been “a complicated piece,” he said.

In truth, the fight over the state’s AI regulations has largely happened outside the House and Senate doors. Only a few lawmakers have been involved in the up-and-down negotiations over the past 24 months.

The seesaw has more regularly been occupied by an array of tech and business groups, local venture capitalists and AI firms, hospitals and schools, and consumer protection and progressive groups. Those interests have sometimes competing and overlapping goals, and they’re also powerful enough to tank lawmakers’ plans: After a would-be AI deal was announced by lawmakers in August, a howling lobby scuttled it within half a day.

Polis, who has called for a moratorium on state-level AI regulations, convened those outside forces in a closed-door task force to hammer out a negotiated armistice that could be delivered to the legislature for passage.

With that framework now drafted into a bill, the legislature finds itself in odd, if not unprecedented, territory. Any changes risk fracturing the fragile truce among the legislative lobby, and the task force members have repeatedly pleaded with lawmakers to leave the bill as is.

Perhaps wary of another late-session fight over wonky AI regulations, legislators have largely obliged, and the bill has sailed through the Capitol at a brisk pace since it was introduced May 1. Through all of its votes in the Senate and House, only eight lawmakers — all Republicans — voted against it. A brief attempt to amend it beyond the bounds of the deal was voluntarily dropped Saturday.

“Hopefully, everybody’s happy,” Rodriguez told reporters Thursday, “and hopefully you’re not coming back next year saying, ‘Oh God, here we go again.’ ”

A bill focused on decisions

At its core, SB-189 is effectively a notification bill for companies and other entities that use AI in consequential decisions.

Starting on Jan. 1, it would require those companies or agencies to disclose to people that AI will play a role in their job application, their college acceptance or their loan approval. The bill requires companies to provide “clear and conspicuous notice” to those consumers or applicants, but how that will work in practice isn’t detailed in the measure itself.

If an applicant’s résumé is rejected, the bill would allow that applicant to ask what personal data the system used. If that data was incorrect, then the consumer could correct that information and request that a human provide “meaningful review” of the decision.

That’s a marked shift from the current regulations, which would require more intensive work by AI companies to curb bias if they took effect next month, as currently scheduled. That effective date has been delayed repeatedly to give lawmakers more time to work on changes.

SB-189 “puts it more on the individual to say, ‘OK, instead of making sure that the systems are free of risk prior to them being deployed, what we’re going to do is we’re going to let you know these systems are used, some of the information about them, what categories of data is being used,’ ” said Travis Hall, the state director for the Center for Democracy and Technology.

“And if there’s an adverse decision against you,” he said, “you’ll have some ability to correct it.”

Colorado’s proposed retreat has drawn mixed reactions. Under the existing rules, companies were required to undertake impact assessments intended to blunt the potential that AI systems could discriminate against someone. But those assessments were among the most criticized parts of the rules — and though Rodriguez privately sought to bring them back into SB-189, doing so risked shattering the peace.

“That’s been the big piece for the (venture capitalists) and the startups,” who would struggle more to comply with the requirements than larger AI firms, Rodriguez explained. He thought that would have been a “nice way to go,” and it would’ve provided some protections to companies that completed the assessments.

Still, he agreed to drop it.

“If you ask me, that’s the wrong choice,” said Mariana Olaizola Rosenblat, a policy adviser at New York University’s Stern Center for Business and Human Rights.
To her, SB-189 “is very limited” without the risk assessments — which, she argued, can prevent bias before it impacts someone trying to get healthcare or a bank loan. She was sympathetic to concerns from smaller companies, but she said the current law “does very little compared to what it could be.”

Risk assessments “might be burdensome, but that doesn’t mean they’re not good,” she said.

Jim Samuel, an associate professor at Rutgers University who focuses on artificial intelligence and data science, was more supportive of the bill’s approach.

He argues that bias is inherent in data because humans are biased. AI can never be perfect, he said, because humans aren’t perfect, and attempting to tweak the information that feeds into artificial intelligence would have unintended consequences.

The best option, Samuel continued, was to provide transparency and education to consumers.

“We need to ensure fairness. We need to ensure there’s no injustice in our system,” he said. “So if something like (AI-driven discrimination) comes to light, we need to already have regulations in place, which will protect the consumer. That’s where transparency comes in. If my credit application is rejected, I need to have an explanation for why.”

He called for continued oversight of AI companies and refinement of laws to protect consumers. Echoing calls from some consumer groups, Hall, from the Center for Democracy and Technology, said SB-189 was a starting point.

“It’s not nothing, and it is a little bit like — crawl, walk, run,” he said. The rules set to take effect next month were akin to walking, “and we’re going back a little bit to a crawl. But we’re still moving forward. And it does provide some traction for accountability in these systems.”

Nationwide interest

As other states weigh AI regulations, Colorado’s SB-189 is simultaneously broader and narrower than what’s been adopted elsewhere, Hall said.

“There’s a smattering of laws across the country, but they’re narrow in scope,” he said.

Some address specific industries, like healthcare or insurance. Texas adopted a law similar to Colorado’s soon-to-be-rewritten rules, though Hall said it applies mostly to government agencies, rather than the private sector. Connecticut lawmakers also just passed an anti-bias AI law that’s mostly focused on employment-based discrimination, and Illinois recently changed its human rights laws to prohibit AI discrimination and require companies to notify consumers when AI is used.

One uncertainty facing all of the states’ attempts at regulation is the potential for federal intervention, whether from Congress or President Donald Trump’s administration. In an executive order last year, Trump sought to undercut states’ individual AI regulations, and specifically criticized Colorado’s rules, as he called for a national AI framework. And last month, after Musk’s xAI sued to challenge Colorado’s earlier AI law, the Department of Justice joined his lawsuit.

Colorado lawmakers this year have taken aim at other areas where AI’s tentacles have reached. The legislature has advanced bills regulating how the technology is used in healthcare and in psychotherapy.

Another bill would place some guardrails around AI chatbots used by minors. All three of those measures have passed the full House and were advancing in the Senate as of Friday.

The increased national and state focus comes as the public grows increasingly wary of AI.

“To a certain extent, there’s also a little bit of a cultural consciousness (and people) understanding the moment that we’re in,” Hall said.

A Pew Research Center poll from last summer found that only 10% of Americans were more excited than concerned about AI, while 50% were more concerned. More recently, a Quinnipiac University poll from March found that though more people were using AI, more people were also growing concerned about it. Fifty-five percent said the technology was doing more harm than good.

In a continuing call-to-arms for lawmakers, 74% of respondents also said the government was not doing enough to regulate AI.

“Does this bill go far enough? Well, it depends on who you ask; I’m pretty sure if you ask literally anybody who worked on it, the answer is no,” Denver Rep. Jennifer Bacon, another of SB-189’s sponsors, said before the final vote Saturday. “But it’s a good starting place because we need to keep an eye out, not only on how businesses are developing but any potential impacts and harms that may be happening to any one of our constituents. And the only way we can do that is by what’s in this bill.”

©2026 MediaNews Group, Inc. Visit at denverpost.com. Distributed by Tribune Content Agency, LLC.