DeepSeek, demagoguery, and democracy
What Donald Trump, AI, and the future of governance all have to do with each other.
What do Donald Trump and the AI industry have in common? It’s that both of them keep us off balance by drowning us with information. In Trump’s case it’s what Steve Bannon once called “flooding the zone with shit.” With AI it’s the endless hype cycle of new product releases. Either way, it’s hard to find signal in the noise, step back, and see what’s really going on.
This has been particularly on display in the past week, with the administration’s blitzkrieg of executive orders and actions and the frenzy around DeepSeek R1, the hot new AI model out of China. And it’s had an effect on me: I’ve tried to write three or four different posts and run aground each time in my attempts to produce a coherent argument. (That’s also why this newsletter is later than usual.)
So instead this will be an attempt to loosely string together those disparate ideas.
The TL;DR: We’re lacking both the language to make sense of the technological and political changes facing us and the capacity to process all the information about them. It’s not hard to come up with policies for countering the power of Big Tech and populist demagoguery; what not enough people are working on, or even know how to begin on, is solving the attention problem that prevents mobilization around these policies.
1. “We’re all DeepSeek correspondents now.”
So wrote Business Insider’s Pranav Dixit this week. Of the millions1 of words written about DeepSeek, I think what matters can be condensed down to about a dozen: AI will keep getting cheaper, more powerful and more accessible faster than you thought. All the rest—Is DeepSeek R1 better than OpenAI’s o1? How soon will Nvidia stock recover? Is China beating the US? Is this AI’s “Sputnik moment”?—are merely questions about who is “winning” the AI “race” in the short term. The long-term implications for society are all contained in that one sentence.
Those implications are basically that whatever your predictions for how quickly AI will develop and what it can do, you probably need to throw them out. DeepSeek was trained for a fraction of the price of rival Western models, and runs much cheaper and faster too. It shows that AI development is unlikely to run into a wall of insufficient computing capacity or electrical power. The hundreds of billions of dollars now being poured into new chips and datacenters won’t go to waste—that capacity will be used for new directions in AI research and for much more widespread applications. It may become cost-effective, for example, to string a bunch of large language models together to work in collaboration or adversarially, solving harder problems, as Azeem Azhar points out. Either way it points to more AI doing more things and surprising us in new ways.
2. It’s all about jobs
Two applications of DeepSeek that I saw this week caused me to revise my idea of what generative AI can do. One was by an anthropologist who jotted down very rough notes and had DeepSeek turn them into an essay. “first time I've actually thought ‘wow the AI can write,’” she says, and I agree. I tried doing it for this newsletter, and was a little less impressed, but probably because my notes were already closer to a written draft, so all it could do was embellish them. What struck me in her example was its ability to turn inchoate thoughts into a developed argument. If you write any kind of commentary or analysis for a living (like me!), it means the competitive advantage of being a good writer is shrinking. What’s left is the originality of your ideas. If you’ve already published a lot, it will become trivial to train an AI to write in your voice, referencing things you’ve said in the past, and to generate a new post based on just a few short prompts.
The second was this DeepSeek-generated analysis of the impact of a US tariff hike on Canadian GDP, which the author contrasted with a simulation run at the Bank of Canada. The key breakthrough was not the accuracy of the answer—though it wasn’t bad—but the fact that DeepSeek shows its reasoning, allowing a competent economist to see where the flaws might lie. Similarly to the writing example above, then, it empowers someone with expertise to do their job in a much shorter time.
Both uses supercharge experienced workers. The concern is that they might eliminate more entry-level jobs, making it harder for recent graduates and junior employees to forge a career. On the other hand, some studies suggest a “leveling effect,” where junior workers benefit more than senior ones from using generative AI. Either way, what’s clear is that a lot about AI’s future impacts is still unknown.
3. But who’s taking this seriously?
This week I went to a discussion on generative AI and labor in San Francisco, hosted by the Data & Society Research Institute. I love D&S—I did a fellowship there—and they do valuable and important work, particularly in debunking techno-hype and showing how unevenly distributed the benefits of technology are. But I came away from the event disappointed. Much of it was a kind of fist-shaking at the sneaky ways in which employers have been using AI to trim their workforces and cut costs, even at the price of lower quality and lost talent. What didn’t get discussed were solutions.
That was on my mind because of a post this week by Nate Silver which essentially accuses the left, in particular, of not taking AI advances seriously enough. Critics on the left tend to sneer at tech bros, dismiss AI industry claims as hype, and focus on harms like algorithmic discrimination, misinformation, and energy use. Silver argues that “the ‘overhyped’ critique is becoming weaker, almost literally by the day, given the volume of AI news,” and the potential for massive job disruption is becoming harder to ignore. Yet mainstream progressive politicians have little to say about it, while the right just wants the US to build as much AI as fast as possible to stay ahead of China.
It’s not hard to come up with a plausible plan to prepare the workforce for this kind of disruption. DeepSeek or ChatGPT will draft one for you in a second. What’s missing is a strategy to create public pressure around it: to get the public to care, civil society and unions to organize, and politicians to understand the issues. The Democratic Party should be leading this charge; after all, to win another election it will have to reposition itself as the party of ordinary people in contrast to a party of billionaires. Instead, it’s still arguing about why it lost in 2024 and which of a slate of uninspiring political hacks should be its next leader.
So that’s my first democracy tie-in: having an AI policy is going to be crucial to the platform of any party that aims to win and govern with a popular mandate.
4. A problem of language
Another event I went to this week: a talk by Daniel Stone, an Australian political consultant. He’s been researching how the metaphors people use about AI reflect and even shape how they think about it. For instance, people in the AI industry often talk about it as a “race”, with phrases like “catching up” and “taking the lead,” which reflect the industry’s concerns about who’s winning but render the social impacts of AI irrelevant to the discussion. Policymakers use “building” metaphors, like “establishing a foundation” or “laying the groundwork,” which emphasize the idea of AI as something all of society participates in. But their language is often dull and policy-oriented and feels irrelevant to people.
This is a problem because, as Stone wrote last month, “The public doesn’t care about your tech policy.” Nobody—not the politicians, not the Data & Society researchers, not the AI ethics NGOs—has succeeded yet in making people understand and get exercised about how these issues apply to them. That starts with finding language people can relate to. In his talk, Stone cited as an example how the climate movement began to take off when NASA scientist James Hansen popularized the (already existing) term “greenhouse effect” to explain what carbon emissions were doing to the planet. That phrase, Stone said, was the “carrier wave” that then raised up other terms like “global warming.” What’s the “greenhouse effect” description for what AI will do to society?
5. Information overwhelm and the trust economy
Which brings us back to the start of this post: how both Trump and the AI industry keep us distracted with a deluge of news.
In a podcast interview with Chris Hayes and a column this month, Ezra Klein ruminated on Trump’s mastery of the attention game. “Democrats are still thinking about money as a fundamental substance of politics, and the Trump Republican Party thinks about attention as a fundamental substance of politics,” he wrote. All the money the Harris campaign spent on advertising and door-knocking was as nothing compared to the tornado of attention Trump and his allies, most notably Elon Musk, generated at the national level.
How do you counter this? There are two ways to buy attention. One is to make a lot of noise, as Trump has done. Another is to engender trust, to keep people coming back to you because they think you’re best placed to help them make sense of a confusing world.
For a long time the American left, and especially the mainstream media, thought that it could counter the flood of shit by being trustworthy: being reliable, transparent, objective, fact-based, well-sourced. I think that tactic failed with a large share of the electorate because it conflates trustworthiness with trust itself. Trust can be earned by being trustworthy, but there is another currency for buying it. The alternative media ecosystem epitomized by the likes of Joe Rogan deals in that currency: relatability, a sense of tribal identification, fear, shared resentment. In an information-overload environment, that currency has proven cheaper and easier to mine than trustworthiness.
Trump’s success has to do with the fact that he excels at two methods of buying attention: making noise, and engendering trust based on fear, resentment and tribal identification. The left uses only one way to buy attention, and it’s the most costly one: trustworthiness.
So how to change this attention equation? It’s conceivable that there will be a pendulum swing in people’s preferences away from noise and partisanship and towards reliability and trustworthiness. As Hayes and Klein discuss in the podcast, there are some signs of a backlash against the ubiquitous presence of technology, maybe analogous to the backlash against pollution, squalor and urban decay during the Industrial Revolution. But I don’t think we can rely on that. Persuasion has to be not just a matter of coming up with good ideas, but of changing narratives in a more primal, fundamental way. I’d like to know: Who’s working on that?
1. Not a scientific estimate, of course, but when I searched just now, Google returned 284 pages in the past seven days about DeepSeek on Substack alone. That’s easily 200,000 words right there.