COUNTERPOINT: AI needs rules -- and states cannot be forced to wait
Published in Op Eds
Congress has not enacted meaningful artificial intelligence legislation, yet some in Washington insist that states should be blocked from legislating on AI. This argument asks Americans to accept that the federal government will do nothing to regulate AI and that, meanwhile, states must do nothing either.
For some, this passes as “sound governance.” In reality, it is a failure of leadership.
States have often served as the country’s first responders when new products begin affecting Americans before Congress responds. From consumer protection to automobile safety to labor laws, states have frequently moved first because they are closer to the public, faster to respond, and better positioned to test practical safeguards. Artificial intelligence is no exception.
The case for preserving state authority is especially strong because AI harms are multiplying rapidly. Older Americans are targeted by AI-enabled fraud. Children, primarily young girls, are targeted with nonconsensual intimate imagery. Workers are being laid off by the thousands. Job hunters are rejected by opaque AI screening systems that offer no explanation. Unsafe chatbots have been released to the market, with multiple people dying by suicide. Deceptive AI political content is threatening democracy, with an incoming wave likely to hit the 2026 midterm elections.
Despite these threats to children, the American public and our democracy, Congress has twiddled its thumbs. In the absence of federal leadership, states are doing exactly what federalism was designed to permit — responding to harms to protect their constituents when Congress fails to do so.
Importantly, the public overwhelmingly wants regulations on AI. Nearly 97 percent of Americans say AI safety and security should be subject to rules and regulations, including strong majorities of Democrats, Republicans and independents. More than 80 percent oppose federal efforts to block state AI protections, especially when children’s privacy and safety are at stake.
Moreover, the world’s largest AI firms are among the wealthiest corporations in history. NVIDIA alone is valued at more than $4 trillion. Apple, Google, Microsoft, Meta and Amazon each command trillion-dollar valuations and are armed with enormous legal staffs, compliance teams, engineering capacity and lobbying operations. These are not fragile startups. They are multinational corporations that already adapt products to legal regimes across the world.
In the United States, Big Tech companies do not need to build 50 separate systems, as they disingenuously argue. They build products to comply with the strictest major state standards, often California’s, and use that baseline nationally.
We saw this with privacy law after California enacted the California Consumer Privacy Act. Many firms aligned their products with that law rather than build separate versions for the other 49 states. That is how large markets set practical compliance norms; it is a normal business practice across many industries.
Big Tech companies are also not passive victims of legislation. They are deeply involved in shaping it. More than 3,500 federal lobbyists, one-fourth of all federal lobbyists, worked on AI issues in 2025. The number of AI-related lobbying relationships has increased by 265 percent in three years.
That lobbying extends well beyond Washington. In state capitals nationwide, major technology companies actively influence legislation. A perfect example is OpenAI’s controversial push to steer chatbot legislation aimed at protecting teens in California. In short, the image of state lawmakers bullying or blindsiding Silicon Valley with legislation is fiction. Big Tech is not only in the room when legislation is considered; they often draft the legislation with state legislators before it is even introduced.
Finally, there is no serious evidence that state regulation has harmed, or would harm, innovation. AI investment is booming. Data center construction is exploding in the United States. American AI companies hold the top 10 global tech valuations and dominate the majority of the top 50 richest tech companies. If regulation were truly crushing innovation, one would expect to see anything but record market capitalization, record infrastructure growth and record lobbying expenditures. Instead, Big Tech revenues continue to soar.
State regulation remains the only meaningful safeguard standing between Americans and a rapidly expanding set of AI-related harms. That makes state authority more important than ever. Congress will one day establish federal standards, but until then, blocking states would give AI companies broad freedom to test powerful systems on the public before protections exist.
States were never meant to remain idle while harm spreads and Washington stalls. Federalism was designed to let them act quickly. At this moment they should, because waiting is itself a policy choice, and increasingly, a dangerous one for our society.
_____
ABOUT THE WRITER
J.B. Branch is the AI Governance and Technology Policy Counsel for Public Citizen’s Congress Watch division. He wrote this for InsideSources.com.
_____
©2026 Tribune Content Agency, LLC