Meet your AI politician of the future
A young, no-name outsider is quietly revolutionizing Japanese political campaigning. His experiment may presage what all campaigns look like a few years from now.
Hi everyone! First, a housekeeping note: next week I’m speaking at FWD50, one of the best conferences on government innovation, which is online-only on October 10 and hybrid (online and in Ottawa) November 4-6. I have one free conference pass to give out. Please email me at futurepolis@substack.com if you want it. It’ll be first-come, first-served, but please don’t ask for it unless you know for sure you’ll use it.
Now on with the show.
That’s Anno Takahiro in the screenshot above. Or rather, an avatar of him. Or rather, an avatar of a large language model trained on the contents of Takahiro’s political manifesto.
During the Tokyo gubernatorial race this summer, AI-Takahiro was live-streamed for 17 straight days on YouTube. Users could post questions about the manifesto in the comments and the avatar would speak AI-generated answers back to them (demo video below). There was also a voice-only version, reached by dialing a phone number.
Over that period, “AI me gave 8,600 answers to questions,” the real Takahiro-san told me. “Obviously, physical me cannot answer this large number of questions, but AI can augment my communication with the citizens.”
The TL;DR: By using various technologies to solicit feedback, adjust his manifesto in response, and explain it to citizens, a complete political outsider won more votes than any other unaffiliated newcomer, and now other Japanese politicians are scrambling to follow suit. It’s not hard to see how this could go mainstream.
In person—in Zoom-person, anyway—Takahiro is the antithesis of a traditional Japanese politician, or indeed of politicians anywhere. A lanky, long-haired 33-year-old with a self-deprecating sense of humor, who speaks very good if halting English, he worked as a software engineer and entrepreneur, then began to write science-fiction stories about near-future uses of AI. He decided to run for governor because he was fed up with the generalized, one-way communication between political candidates and voters. Technology, he thought, could both allow candidates to craft more individual messages and give voters more of a voice.
He had already been mulling how to do this when, in the spring, he read Plurality, a new book by the legendary Taiwanese hacker and politician Audrey Tang, American economist Glen Weyl, and a community of hackers and civic activists.
(At some point I will write more about the Plurality movement. The book, which is just a snapshot of a continually evolving document on GitHub, is hard to summarize, but my best attempt is that it’s a diverse collection of ideas about how to better align technology to democratic values, and how to use technology to enable more collaborative forms of civic engagement. I should also disclose that earlier this year I gave the Plurality folks about an hour of free advice on their publicity strategy, and that it was they who first told me about Takahiro.)
The book resonated with him. “I saw that [I and Audrey Tang] share a lot of values,” he said. He spoke to her, and her advice informed some of the system he went on to develop. It has three parts.
Part 1: “Broad listening.” The campaign scraped X/Twitter, YouTube, and news sites’ comment sections for public opinions on Tokyo’s politics and government. It used Talk To The City, an open-source AI system created by the non-profit AI Objectives Institute in San Francisco, to analyze that mass of data and group the comments into thematic clusters. One cluster, for example, expressed preferences for younger politicians; another said that candidates with shared values ought to cooperate with each other. Those opinions helped the campaign build up a picture of the public and its concerns.
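To make the “broad listening” step concrete, here is a toy sketch of grouping public comments into thematic clusters. Talk To The City uses LLM embeddings for this; the version below fakes similarity with simple word overlap (Jaccard) so the example stays self-contained. The comments and the threshold are invented for illustration.

```python
# Toy sketch of "broad listening": group comments into thematic
# clusters. A real pipeline (like Talk To The City) would use LLM
# embeddings; here we use word-overlap similarity instead.

def tokens(text: str) -> set[str]:
    return {w.lower().strip(".,!?") for w in text.split()}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(comments: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedy clustering: attach each comment to the first cluster
    whose seed comment is similar enough, else start a new cluster."""
    clusters: list[list[str]] = []
    for c in comments:
        for cl in clusters:
            if jaccard(tokens(c), tokens(cl[0])) >= threshold:
                cl.append(c)
                break
        else:
            clusters.append([c])
    return clusters

comments = [
    "We need younger politicians in Tokyo",
    "Younger politicians would bring fresh ideas",
    "Candidates with shared values should cooperate",
]
for group in cluster(comments):
    print(group)
```

The first two comments land in one cluster (“younger politicians”), the third starts its own, mirroring the kinds of clusters the campaign reported.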
Part 2: “Brush-up.” Takahiro already had a campaign manifesto. It had a number of specific goals, like creating industrial zones to attract companies, investing more in STEM education, or making it easier for elderly people to get online medical care at night and on holidays. But he wanted citizens to weigh in. So he posted the manifesto on his campaign’s GitHub page, where people could discuss its ideas and propose changes. Once again AI came into play, this time to sort proposals and merge similar ones.
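The “merge similar proposals” step can be sketched the same way. This is not the campaign’s actual code, and the proposal texts are invented; a real system would likely use an LLM to judge similarity, but word-overlap deduplication shows the shape of the task.

```python
# Hypothetical sketch of merging near-duplicate manifesto proposals.
# Proposals whose word sets overlap beyond a threshold are treated
# as duplicates; only the first of each group is kept.

def normalize(text: str) -> frozenset[str]:
    return frozenset(w.lower().strip(".,") for w in text.split())

def merge_similar(proposals: list[str], threshold: float = 0.5) -> list[str]:
    kept: list[str] = []
    for p in proposals:
        ws = normalize(p)
        # Drop p if it heavily overlaps any proposal already kept.
        if not any(
            len(ws & normalize(k)) / len(ws | normalize(k)) >= threshold
            for k in kept
        ):
            kept.append(p)
    return kept

proposals = [
    "Expand night-time online medical care for elderly residents",
    "Expand online medical care for elderly residents at night",
    "Invest more in STEM education",
]
print(merge_similar(proposals))
```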
Part 3: “Delivery.” This was when the AI version of Takahiro was created, with the 2.0 version of the manifesto fed into an LLM that could answer questions from citizens online.
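A common way to build this kind of manifesto-grounded Q&A (I don’t know the campaign’s exact architecture) is a retrieval step: before the LLM drafts an answer, the system picks the manifesto section most relevant to the question and uses it as ground truth. A minimal sketch, with invented sections and naive keyword scoring standing in for embedding search:

```python
# Illustrative retrieval step for a manifesto-grounded Q&A bot.
# The selected section would be passed to an LLM as context;
# sections and scoring here are simplified stand-ins.

MANIFESTO = {
    "industry": "Create industrial zones to attract companies to Tokyo.",
    "education": "Invest more in STEM education.",
    "healthcare": "Make it easier for elderly people to get online "
                  "medical care at night and on holidays.",
}

def best_section(question: str) -> str:
    """Return the key of the section sharing the most words with the question."""
    q = {w.lower().strip("?.,") for w in question.split()}
    def overlap(text: str) -> int:
        return len(q & {w.lower().strip(".,") for w in text.split()})
    return max(MANIFESTO, key=lambda k: overlap(MANIFESTO[k]))

print(best_section("What will you do about medical care for the elderly?"))
# → healthcare
```

Grounding each answer in a retrieved section is also what makes the later point about the manifesto as “a source of ground truth” workable in practice.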
Takahiro credits this process for getting voters’ attention, making them feel listened to, and boosting his vote. He was a complete unknown and the initial reaction from the media, he says, was “so cold.” Yet he placed fifth with 154,000 votes, 2.3% of the total—which might not seem like much until you consider that the first four were all established politicians with years or even decades of experience in public service, and that behind Takahiro were 51 others.
A blueprint for the campaign of the future?
Takahiro’s approach has its seeds in the web 2.0 days. Social media has long allowed campaigns to solicit voters’ ideas and talk back to them directly. But both those things are hard to scale with human operators. Generative AI makes two things easy that weren’t before: parsing and summarizing vast numbers of user comments to pick out their key themes, and auto-generating customized messages for each individual user.
The Takahiro campaign is just an embryonic form of this. There were only a couple of hundred proposed changes to the campaign manifesto on GitHub, and no wonder: how many voters have heard of GitHub, let alone know how to make a pull request?
But it’s not hard to see where this is headed. Here’s how campaigning might look in a couple of years’ time:
Instead of just scraping the web for voters’ opinions, candidates start a campaign with an open comments period asking voters to tell them about their key policy concerns. An LLM condenses that mass of comments into a few useful key points.
Using that, the campaign drafts a manifesto, which goes up not on GitHub but on a website with an easy interface. You can click any section to ask the AI to provide a more detailed explanation, or submit a comment or a proposed change.
Once the manifesto is finalized, voters can quiz the AI-candidate about it. But rather than having a single YouTube channel answering everyone’s questions, each voter gets to spend as much time as they want grilling their own individual AI avatar, just as you now talk to ChatGPT. The avatar would be trained not just on the manifesto but on past speeches and policy papers—at least, on issues where the candidate hasn’t flip-flopped!—so it could answer more detailed questions.
Each voter on whom the campaign has data could, likewise, be sent an AI-generated video of the candidate talking specifically about the issues that voter cares about most.
Meanwhile, the voters’ conversations with the AI avatars would form an extremely rich corpus that can again be fed through an LLM, showing what issues they’re still concerned about. The campaign can use that information to further shape its public messaging.
Can this go wrong? Of course it can! Imagine the headlines if the AI hallucinates an answer out of line with the official messaging—“AI-Kamala Harris Says She’ll Start Deportations Immediately.” That might put some campaigns off using the approach. But the models will improve, and I think people are already becoming inured to the fact that LLMs slip up. There will always be a source of ground truth, the manifesto itself.
The other potential fear is that this will exacerbate a trend that’s been worrying some people for a while: the atomization of the electorate. Personalized messaging, the argument goes, leads to each of us living in our own individual reality bubble. We can’t hold a broad-based political conversation if different people are seeing different information.
That’s a fair concern, but I think that if tools like these make voters better informed and more engaged, that outweighs the effect of them not all seeing exactly the same messages. So long, that is, as the messages are self-consistent—i.e., that the AI avatar is representing the candidate’s actual platform and beliefs. Of course, campaigns could just program the AI to tell each voter exactly what they want to hear, but that might backfire when people realize their neighbors and family members are hearing completely different things.
It will be interesting to see where this goes. Takahiro’s approach seems to already be catching on in Japan. Two other politicians, Yuichiro Tamaki, who leads one of Japan’s political parties, and Kenta Izumi, the former leader of another, have both launched interactive AI avatars of themselves. Izumi’s, like Takahiro’s, is a cartoon, but Tamaki’s is an actual photograph of his face, crudely animated to look like it’s speaking (which, tbh, looks rather creepy).
Takahiro wants imitators. His team is open-sourcing their code and processes to make it easier for other politicians to follow suit. For his next act, he’s now trying to decide between two options: to go full-time into consulting for political candidates, or to try another run for office himself in next year’s national elections for the Diet.
Either way, keep an eye on this one.
Links
Does the US need a new constitution? Louis Menand reviews some recent arguments for tearing it up and starting again, and comes away somewhat unconvinced. (The New Yorker)
Ancient Athens reborn in modern Europe. The “pilot transnational people’s assembly” just held its first meeting in the Greek capital. The goal is to create a standing citizens’ assembly, a new branch of government, like a 21st-century version of the Athenian agora. (Democratic Odyssey and Noema)
Citizens’ assemblies 2.0. Deschutes County, Oregon, is about to finish holding “the world’s first tech-enhanced Citizens’ Assembly,” in which the entire process is recorded. The recordings, anonymized, will be used to “generate outputs grounded in the voices of assembly members.” The topic: youth homelessness. (DemocracyNext)
Autocracy 3.0. Like democracy, autocracy too is evolving. As China’s economy slows, how will the state adapt? Will it relax authoritarianism further to allow the public a safety valve, or—as it already seems to be doing—use technology to tighten the repressive screws? David Yang of Harvard sketches the outlines of what he calls Autocracy 1.0, 2.0, and 3.0. (NBER)
What next for California’s AI bill? Governor Gavin Newsom’s veto isn’t the slap in the face his critics claim; he’s also signed a raft of other AI regulation measures into law, Casey Newton points out. But some of them may fall afoul of the First Amendment. (Platformer)
Software is not like bridges. I suspect UK government CTO David Knott had lawmakers in his sights when he wrote this. Though we use construction-type language to talk about software—“engineering,” “architecture,” etc.—thinking of software projects like brick-and-mortar construction projects is a recipe for bad decision-making. (A Lot to Learn)
Why the US has done little to prevent AI-related election disinformation. Unclear regulatory responsibilities, agency turf wars, and political deadlock mean that—even though most Americans want some kind of restriction on election deepfakes, and there are various bills already in Congress—nothing is likely to happen, certainly before November. (Schneier on Security)
Pre-distribution, not redistribution. The “windfall clause” is a proposal to force any AI company that becomes obscenely—OK, even more obscenely—rich to redistribute some of that wealth. What if instead we set policies that channeled some of the benefits of AI to people as it’s growing? (Collective Intelligence Project)
Let the people set rules for AI. This is a summary of a panel discussion (and related report) last month on how deliberative democracy could allow a global group of citizens to weigh in on AI governance. Read it alongside my recent interview with Aviv Ovadya on using deliberative tools for technology governance more generally. (MERL Tech)