The machinery of misinformation and the capacity of Congress
Two counter-narratives about things we take for granted.
I didn’t settle on one big theme this week, so I’ve chosen two. Let me know in the comments what you think of this format.
The TL;DR: There’s an argument that the world isn’t, in fact, about to drown in AI-generated deepfakes; I don’t entirely buy it. And a look at how a ruling that guts regulation might conceivably have beneficial consequences.
Is AI-generated misinformation really a threat?
No, say Sayash Kapoor and Arvind Narayanan, the authors of the book AI Snake Oil. They write in a recent piece that, all fears to the contrary, AI deepfakes have not done much to undermine elections around the world this year. (Bruce Schneier and Nathan Schneider recently made a similar point.) And in general, they argue, more and better deepfakes aren’t going to make the misinformation problem worse.
They make a good case but I think they’re missing some things.
Kapoor and Narayanan’s theory is broadly similar to that outlined in an article last year in Harvard Kennedy School’s Misinformation Review, which they also cite. I’ll summarize the latter piece because it makes the case more neatly:
Myth 1: More and better AI means more consumption of misinformation. In fact, misinformation is a demand problem, not a supply problem. How much of it people consume is a function not of how much there is, but of how willing they are to believe it (which in turn stems from things like distrust in traditional media).
Myth 2: More and better AI means more convincing misinformation. In fact, the quality of pictures and videos doesn’t matter much. Most of those that have influenced opinion in elections are cheapfakes—existing images, doctored or selectively edited—not deepfakes created from whole cloth. And in any case the most influential misinformation isn’t images or videos at all, but lies or misleading statements amplified by people with big platforms like Trump or Joe Rogan.
Myth 3: More and better AI means more personalized information. In fact, the limiting factor is distribution, not creation. While the cost of making convincing deepfakes has come down, and they could be personalized to different kinds of viewers, what hasn’t fallen is the cost of getting them out there and microtargeting them to people via social media. Generative AI doesn’t help with that.
Overall, Narayanan and Kapoor conclude, technological advances in AI fakery aren’t the problem. The problems are polarization and the collapse of trust in purveyors of reliable information, like the media and academia, which motivate people to believe misinformation, whatever form it takes.
I find this compelling, but not totally convincing.
First, yes, it may be true that having more AI deepfakes doesn’t mean people see more of them. But they don’t have to. All it takes is for a few big online influencers to peddle an image to their followers, who will trust it because of who it came from. Designing fakes to appeal to a few of those super-users would be a much smarter tactic than trying to flood the zone. Think of it as influencer marketing rather than broad-spectrum advertising. Admittedly, this can be done with non-deepfake images too, but realistic ones will surely have more power.
Second, as AI image creation gets more sophisticated, so, I suspect, will the social engineering of it—in other words, thinking about when people are most vulnerable to trickery, not just making the trick itself more compelling.
Imagine, for example, if among all the cellphone videos from the crowd at the attempted assassination of Donald Trump, there were a fake one that purported to be shot from a different angle and showed a second gunman. Precisely because it's just one video of many, it could be more plausible and harder to spot as fake than if it were the only one, and it could feed conspiracy theories on both left and right. After all, the purpose of misinformation is as much to create confusion as it is to change the story to something specific.
Third, the more convincing fake images there are online, the more work it is for media outlets to sift them out. Flooding the internet with them could lead to journalists running around chasing what turn out to be false story leads, distracting them from real reporting.
In short, I think this is a case of “smarter, not harder.” Even if Narayanan and Kapoor are right that simply pumping out more and better-quality misinformation won’t make much difference, its purveyors can still find other ways to increase its impact by using it more intelligently.
Might the Chevron decision lead to a bigger Congress?
This is—maybe! Unlikely! But maybe!—a story about good unintended consequences coming from something bad.
The US House of Representatives is a classic example of the democratic scaling problem. After growing with almost every census until 1910, its membership was frozen at 435, in part because the chamber was simply running out of space. Since then the population those members represent has more than tripled. The federal spending they oversee has increased nearly tenfold as a share of GDP (!). Each member has about 25 times as many constituents as when Congress was first established. But they still number 435, jammed into offices in five separate buildings with poor interconnections and woefully inadequate facilities, in particular a chronic lack of meeting spaces.
So now Protect Democracy and POPVOX have hired some architects to imagine an expanded House. Naturally, there are renderings of airy courtyards and a shiny new office building. There is a redesigned House chamber with seats for twice as many members. (Another architect’s flight of fancy, commissioned by Harvard’s Danielle Allen last year, found a way to squeeze 1,725 members into the chamber, though it dodged the question of where to put their offices.)
What’s interesting to me, though, is not the design itself but an almost throwaway line in the report about why it might, just might, be adopted.
In 1984 the Supreme Court established a precedent known as Chevron deference, in a case involving the oil giant. Simply put, it says that when Congress writes ambiguous laws, and regulatory agencies like the Food and Drug Administration write rules interpreting them, courts must defer to those interpretations.
Six months ago, however, the Supreme Court decided that actually, it’s the job of courts to say what the law means, and overturned Chevron deference. Now inexpert judges, rather than expert agencies, will be required to interpret laws on things like environmental protection, healthcare, and labor.
That of course is a shock to anyone who cares about these things. There's already been a long list of consequences. But as various people have pointed out, it also puts pressure on Congress to do a better job of lawmaking.
For one thing, when lawmakers write laws, they’ll have to be more specific about what they’re empowering regulatory agencies to do. For another, they’ll have to be more specific about what they intended those laws to achieve. (Sometimes, for political purposes, the ambiguity is deliberate.) If they fail at that, there could be many more lawsuits, which would blunt the laws’ effectiveness, further overload the legal system, and force some laws back to Congress for reconsideration.
But right now Congress doesn't have the capacity to do a better job. Its limited number of representatives each have a limited number of staffers, constrained by their limited office space. Those staffers are typically young, short-tenured, chronically underpaid, and severely lacking in expertise.
So the hopeful, perhaps wishful, idea making the rounds in some policy circles is that eliminating Chevron deference will force Congress to reckon with its own shortcomings. The ruling, says the Protect Democracy/POPVOX report, “has been viewed as a transfer of power from administrative agencies to courts, but it is also a transfer of responsibility from agencies back to Congress, which can no longer rely on well-staffed agencies to fill gaps in the law and may instead finally have to face its own capacity issues.”
This doesn’t have to mean expanding the membership of the House, which members would probably be loath to do since they’ve spent their careers learning how to win their particular districts. Just giving themselves more resources and more space to analyze and draft laws sensibly would go a long way.
As I said, that's the hopeful view. The less hopeful one is: Congress has, for a long time, quite happily let its capabilities stagnate even as its challenges have grown. Indeed, a Republican-led Congress cut back its sources of expertise by eliminating the non-partisan Office of Technology Assessment in 1995. It's unlikely to spend more money on itself now. With Chevron deference gone, it will solve this problem the way it has for decades: by outsourcing more expertise to lobbyists, thus giving special interests even greater power over lawmaking.
Still, one can always dream.
This week’s links
How Polish judges resisted authoritarianism. A long webinar of an even longer report about how the judiciary rose up to defend the rule of law after the (ironically named) Law and Justice Party won election in 2015. Huge as the report is, it’s worth skimming for some good lessons for activists of all kinds, not just judges, on how to create solidarity and ward off state attacks. (International Center on Nonviolent Conflict)
More dangers for DOGE. Another excellent post (see last week) from Jen Pahlka on the Trump administration’s plans for government reform. Message: Democrats utterly failed to pull off such reforms themselves, and the institutional inertia and resistance that foiled their efforts is going to present a huge challenge to Musk and Ramaswamy too. (Eating Policy)
The kids love Luigi. Last week I called for a deeper look at the people cheering on the murder of the UnitedHealthcare CEO, and it turns out a poll had just been done. As you might expect, younger people were overwhelmingly more favorable, and Black and Hispanic people were too. (Center for Strategic Politics)
Why the US’s TikTok ban might backfire. Evidence from past efforts to ban or restrict tech platforms suggests that the effect can be to simply drive what happens on them into the shadows, as users resort to VPNs or migrate to platforms that are even less regulated and transparent. (Tech Policy Press)
Trespassing as democratic practice. As meandering and lyrical as the nature walks it describes, this piece profiles the UK’s “Right to Roam” movement, and others such as guerrilla rewilding and anti-pollution campaigns, as forms of civil disobedience that reclaim the populace’s ownership of public goods and natural resources and challenge the concentration of power in private hands. (Noema)