While Senator Chuck Schumer convenes star-studded "AI Insight Forums" in Washington, the actual machinery of governance is humming elsewhere. It is a classic American irony: the more we talk about a national strategy, the more we cede that strategy to the fragmented halls of state legislatures.
In the vacuum left by federal inertia, a quiet revolution is taking place in Sacramento, Denver, and Hartford. These states are not waiting for a grand bargain in the 118th Congress that will likely never arrive.
Instead, they are drafting the rules of the road for the most transformative technology of our generation. The result is a patchwork of compliance that may ultimately force the federal government's hand, whether it likes it or not.
The Illusion of Federal Momentum
To watch the proceedings on Capitol Hill is to witness a masterclass in performative concern. We see CEOs like Sam Altman and Elon Musk testifying before committees, nodding solemnly at the risks of existential catastrophe while their lobbyists work overtime in the shadows.
In 2023 alone, the top five tech companies spent a combined $95 million on federal lobbying. This massive investment serves a singular purpose: to ensure that any federal regulation is either toothless or sufficiently delayed to allow market dominance to solidify.
Is it any wonder that despite dozens of introduced bills, not a single comprehensive AI safety framework has reached the President’s desk? The political ossification of D.C. has turned AI regulation into a Rorschach test for partisan grievances rather than a serious policy endeavor.
We see a similar pattern of stagnation in other critical sectors. Housing is the obvious parallel: federal inaction forces local markets to bear the brunt of systemic failures.
When the center cannot hold, the periphery takes command. This is exactly what we are witnessing with the rise of the "State-Level AI Consensus."
California and the Power of the Veto
California remains the undisputed heavyweight in this arena, largely because it hosts the very companies it seeks to regulate. State Senator Scott Wiener’s SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, became the flashpoint for this entire debate in 2024.
The bill sought to mandate safety testing for the largest AI models, those costing over $100 million to train. It proposed a "kill switch" for models that went rogue, a concept that sounds like science fiction but represents a very real concern for existential risk researchers.
Governor Gavin Newsom eventually vetoed the bill, arguing it was too blunt an instrument for a rapidly evolving field. However, his veto message was not a rejection of regulation, but a demand for more nuanced, data-driven approaches.
Newsom’s hesitation reflects a broader tension: how do you protect the public without strangling the golden goose of Silicon Valley? It is the same delicate balance visible in climate policy, where the pragmatic pivot from prevention to adaptation reflects political reality rather than principle.
Even without SB 1047, California has passed nearly 20 other AI-related bills. These cover everything from deepfakes in elections to the protection of digital likenesses for actors and performers.
The message to Washington is clear: if you won't set the standard, the world's fifth-largest economy will do it for you. This creates a de facto national standard, as few companies can afford to ignore the California market.
The Colorado Model: Consumer Protection First
While California focuses on the existential "frontier" risks, Colorado has taken a different, perhaps more pragmatic, path. In May 2024, Governor Jared Polis signed SB 205, the Colorado Artificial Intelligence Act, the nation’s first comprehensive law targeting algorithmic bias.
This law focuses on "high-risk" AI systems—those used to make decisions about housing, employment, healthcare, and lending. It requires developers and deployers to implement risk management frameworks to prevent "algorithmic discrimination."
Why does this matter more than the high-minded debates in D.C.? Because it addresses the immediate, material harms that AI can inflict on marginalized communities today.
Colorado’s approach is a direct descendant of the consumer protection era. It treats AI not as a magical entity, but as a sophisticated tool that must be audited and held accountable to existing civil rights standards.
By focusing on outcomes rather than the underlying code, Colorado has provided a blueprint that other states are already scrambling to copy. Connecticut and Virginia have introduced similar measures, signaling the birth of a bipartisan coalition for algorithmic accountability.
This shift toward localized, outcome-based regulation is a recurring theme in modern American policy: market forces and regional pressures dictate reality far faster than legislative bodies can respond.
The Brussels Effect on American Soil
In international relations, the "Brussels Effect" refers to the European Union’s ability to set global standards through its rigorous regulatory environment. We are now seeing a domestic version of this phenomenon, which we might call the "Sacramento-Denver Effect."
For a multi-billion dollar corporation, building 50 different versions of a product to satisfy 50 different state laws is a logistical nightmare. It is far more cost-effective to build to the most stringent standard and apply it across the board.
Consequently, the laws passed in a handful of influential states effectively become the law of the land. This is how the California Consumer Privacy Act (CCPA) became the baseline for digital privacy in the United States.
But is this "regulation by patchwork" actually good for innovation? Industry leaders argue it creates a "death by a thousand cuts" for startups that lack the legal departments of a Google or a Meta.
They argue that a single, unified federal framework would provide the certainty needed for long-term investment. Yet, these same leaders are often the ones funding the lobbying efforts that prevent that federal framework from ever materializing.
This leads us to a stinging question: Is the tech industry’s preference for federal regulation a genuine desire for clarity, or a strategic move to ensure the weakest possible rules? History suggests that the latter is far more likely than the former.
The High Cost of Compliance and the Startup Gap
The danger of state-level regulation is not just the complexity, but the potential for unintended consequences. When compliance costs skyrocket, it is the smaller players who suffer most, further entrenching the power of incumbents.
A similar dynamic has played out in other domains: regulation intended to empower individuals often ends up benefiting the largest, most well-capitalized entities.
If a startup in Austin has to hire a team of compliance officers just to launch a new generative AI tool in Denver, it may choose to pivot or fold. This stifles the very competition that is supposedly the engine of the American economy.
Moreover, state legislatures often lack the technical expertise to understand the nuances of the technology they are regulating. A well-intentioned law in one state might inadvertently ban a whole category of beneficial software because of imprecise language.
This is where the federal government’s failure is most acute. D.C. has the resources to hire the best technical minds, to conduct deep research, and to create a balanced, nuanced framework.
By abdicating this responsibility, Washington is forcing part-time state legislators to do the work of a national government. It is a recipe for inconsistency and, eventually, a constitutional crisis over the Commerce Clause.
Historical Context: The Long Road to Tech Oversight
We must remember that this is not the first time the states have led the way on technological oversight. In the early 20th century, it was the states that first regulated the safety of automobiles and the purity of food and drugs.
The federal government only stepped in when the chaos of varying state standards became an intolerable burden on national commerce. The Pure Food and Drug Act of 1906 and the creation of the NHTSA were responses to state-level pressure, not proactive federal leadership.
We are currently in that "chaos phase" of the AI lifecycle. The states are the laboratories of democracy, testing out different theories of how to manage a world-altering technology.
Some of these experiments will fail spectacularly, while others will prove surprisingly effective. But the longer the federal government remains on the sidelines, the more difficult it will be to eventually harmonize these disparate rules.
We see this same pattern of delayed federal response across the digital economy, where market corrections routinely happen far ahead of any regulatory or institutional oversight.
The question is whether AI—with its potential to disrupt everything from our jobs to our very perception of reality—can afford a century-long wait for federal competence.
The Looming Crisis of Algorithmic Federalism
What happens when a deepfake created in Florida influences an election in Ohio, using a model trained in California and hosted on servers in Virginia? This is the jurisdictional nightmare that "algorithmic federalism" creates.
Without a federal preemption clause—a rule that says federal law overrides state law—we are headed for a legal quagmire. Tech companies will spend more on lawyers than on researchers, and the public will be left with a confusing, uneven set of protections.
Yet, there is a certain democratic beauty in the state-level approach. It allows for local values to be reflected in the technology that citizens interact with every day.
If the citizens of Colorado decide that algorithmic bias is their primary concern, they should have the right to address it. If California wants to focus on the long-term safety of "frontier" models, that is their prerogative as a global hub of innovation.
But let us not mistake this state-level activity for a functioning national policy. It is a symptom of a broken federal system, a desperate attempt to fill a hole that should have been plugged years ago.
As we move into the next election cycle, the rhetoric around AI will undoubtedly heat up in Washington. But while the politicians in D.C. argue over soundbites, the real work of writing the future is happening in the statehouses across the country.
Whether this patchwork will protect us or merely confuse us remains to be seen. What is certain, however, is that the era of the "Wild West" for AI is coming to an end—one state at a time.