The US could become a grand experiment in AI law—in theory, anyway
The case for a narrower form of the moratorium on state-level AI regulation.
I’m sorry it’s been so long since I posted here. I’ve been pretty occupied with other projects, including some consulting work, mostly for media companies and pro-democracy organizations. And I’m open to more: If you or someone you know could use my services in convening, facilitating, comms strategy, editing, or public speaking, please get in touch. (More about what I offer here.)
But aside from that, I’ve also been publishing more freelance journalism. I argued that the claims that we’re on the verge of “artificial general intelligence” are misleading (at Bloomberg—gift link), that Trump’s recent chip deals with the UAE and Saudi Arabia may threaten the US’s primacy in AI in the long term (also Bloomberg), and that Elon Musk’s term at DOGE made him a useful idiot for Russell Vought, one of the true powers behind Trump’s throne (for Foreign Policy).
Today, though, I’ll talk about a piece I wrote last week for Tech Policy Press, which is reprinted below with slight modifications. It’s about the measure that Congressional Republicans are trying to stuff into Trump’s budget bill, which would bar states from enforcing any law that constrains the development or use of AI for the next 10 years, on the grounds that such laws are creating a regulatory patchwork that will stifle AI innovation.
I argue for a narrower form of the ban: don’t let states regulate development but do let them regulate use. Just as states currently set everything from road safety rules to building codes to water pollution standards, they should be able to impose constraints on what people do with a technology like AI without dictating what the technology itself should be capable of.
My rather idealistic reason for arguing this is that it would effectively create a grand, 50-state experiment in what sensible AI legislation should look like. Within a few years, we should be able to see which laws are unnecessary or badly thought out and which yield actual social benefits, and that knowledge could then lead to templates for better laws at both the state and federal levels.
Why is this idealistic? Because it presumes that someone will actually conduct the experiment. In reality, will anyone, especially lawmakers, have the wherewithal to properly analyze all these laws and their impacts—or even pay attention to such an analysis, if some think tank or university does it? I’m not hopeful. But if we want to imagine a future for democracy, then part of it means imagining a world in which this kind of experiment would be done and the results would be heeded.
(EDIT: Here’s a paper arguing the whole “states are the laboratories of democracy” thing is a myth because it’s third-party organizations like activist groups and funders that really drive policy innovation. And rather than a strict division between what states and the federal government can legislate on, the authors suggest mechanisms by which the federal government can support and incentivize innovations by states.)
Here’s the full piece.
Why Both Sides Are Right—and Wrong—About A Moratorium on State AI Laws
House Republicans’ proposed 10-year moratorium on enforcing any state-level or local AI regulations has caused the predictable uproar. Its supporters argue that the AI laws now passing in dozens of states will create a patchwork of conflicting and often poorly drafted regulations that will be a nightmare for companies to comply with and will hold back American AI innovation. The countervailing view, voiced in an open letter signed by more than 140 organizations, from universities to labor unions, is that the moratorium will give AI companies license to build systems that cause untold social harm without facing any consequences.
Both are right—just not entirely; both are wrong—just not completely. There’s an argument for a moratorium—but a much narrower one than what Republicans propose.
The idea of a “learning period” to let the AI industry develop before imposing laws on it was first floated last year by Adam Thierer at the center-right think tank R Street. He wrote:
An AI learning period moratorium should block the establishment of any new general-purpose AI regulatory bureaucracy, disallow new licensing schemes, block open-ended algorithmic liability, and preempt confusing state and local regulatory enactments that interfere with the establishment of a competitive national marketplace in advanced algorithmic services.
Over at Reason, Kevin Frazier fleshes out the argument:
A hodgepodge of state regulations, however well-intentioned, will inevitably stymie AI innovation. Labs could be subjected to conflicting, sometimes contradictory, compliance schemes. While behemoths like Google or Microsoft might absorb the legal and operational costs of navigating 50 different sets of rules, smaller labs and university research teams would face a disproportionate burden.
Frazier goes on to cite three bills currently before state legislatures. In California, SB 813 would establish a regulator that “certifies AI models and applications based on their risk mitigation plans.” In Rhode Island, SB 358 would make AI developers liable in some cases for harms their systems cause to non-users. In New York, the RAISE Act would require AI developers to prevent their models from causing “critical harm” by maintaining safety protocols and submitting to audits.
Frazier is right that these kinds of laws would burden the AI industry and create a maze of conflicting rules. And he’s right, in particular, to warn that this could disproportionately benefit the tech giants. In the EU, lawmakers are now getting ready to pare back GDPR, after evidence that smaller firms are drowning in the bureaucracy it generates.
But here’s the issue: laws like the ones Frazier mentions are a tiny minority of the enacted and proposed state regulations on AI. The majority put limits not on how AI is developed but on how it’s used. This is like the difference between telling GM and Ford what kinds of cars they can build and telling people how fast they can drive. Speed limits don’t hobble the auto industry. Rather, they help it by making driving safer.
You can see this just by skimming the National Conference of State Legislatures database (for 2024 and 2025) of proposed and enacted state laws on AI. Take, for example, laws adopted in New Hampshire and Alabama that ban political campaigns from using AI-generated deepfakes. Or those in Indiana and North Carolina that prohibit AI-generated revenge porn (much like a federal law Trump signed on May 19). Or Illinois’s update to its human rights act, which says that employers who use AI-based tools to make hiring recommendations may not set them up to infer someone’s race from their zip code.
Are the laws a patchwork? Sure—and so are building codes, environmental regulations, road safety laws, and any number of other rules that states pass because, well, they’re states. The Republican version of the moratorium would rule out nearly all of this category of laws for AI. In fact, it’s much broader than Thierer’s original proposal, which really only addresses rules that would constrain AI developers.
There’s one other argument for a broad moratorium, however. Basically, it’s that laws to prevent bad uses of AI are unnecessary, because bad uses are already illegal. Here’s Frazier again:
[T]he rush to regulate at the state level often neglects full consideration of the coverage afforded by existing laws. As detailed in extensive lists by the AGs of California and New Jersey, many state consumer protection statutes already address AI harms.
And here’s Neil Chilson, a former chief technologist of the Federal Trade Commission and now head of AI policy at the libertarian Abundance Institute:
[C]ivil rights, privacy laws, and many other safeguards are completely unaffected by the moratorium. SOME requirements to tell customers they are speaking to an AI may be affected, but even those could be easily tweaked to survive the moratorium. Just change the law to require all similar systems, AI or not, to disclose key characteristics.
Is this right? In some cases, at least, arguably yes. I’m no lawyer, but it seems pretty clear-cut that Illinois doesn’t need to explicitly ban AI-driven racial discrimination in hiring because the non-AI-driven kind is already verboten.
But does that mean such laws are never necessary? I don’t think so. If you were to transfer Frazier and Chilson’s argument to cars, it would say that speed limits are unnecessary, because killing people is illegal whether or not a car is involved. But we support speed limits because they make road deaths less likely. If a certain use of AI makes a certain kind of harm much more likely, then even if that harm was already illegal, we may want a law to limit it. And we can’t possibly predict all the ways AI can be used, so we can’t say we’ll never need such laws.
In short, I think there’s a case for a narrow, Thierer-type moratorium on laws that impose constraints on AI developers. As Chilson also notes, the idea that this would allow AI companies to ride roughshod over us all is hyperbolic: “[T]raditional tort liability as well as general consumer protections and other laws would continue to apply. Deliberately designing an algorithm to cause foreseeable harm likely triggers civil and potentially criminal liability under most states' laws.”
But for the rest, the kind of laws that aim to prevent harms in how AI is used, there’s a case for the opposite approach: let the states legislate all they want, and watch what happens.
It’s often said that the states are laboratories for American democracy. The US is now running a giant controlled experiment in AI legislation in 50 separate laboratories. Yes, it’s messy; yes, many of those state laws will be poorly drafted and unnecessary; yes, they’ll conflict. That’s the whole point. These various efforts could yield a wealth of data for anyone who actually wants to get AI law right, especially at the federal level.
AI companies in particular should welcome this. Washington is friendly to them right now, but it may not always be. A few years of data from the states could give them some ammunition against future overreach. And if lawmakers want a “learning period” for AI regulation, it’s hard to think of a better way to learn than by running 50 experiments at once.