A complete(-ish) guide to keeping Big Tech in check
It's not so much that tech companies are predatory as that governments are lame, says Marietje Schaake in "The Tech Coup."
I think most people grasp that Big Tech has great power. Meta owns all your data, Amazon can predict when you’re pregnant, Instagram is bad for your kids’ mental health, yadda yadda.
What I suspect most people don’t get is how much power Big Tech has over government.
“Most of our critical infrastructure is owned and operated by the private sector,” President Joe Biden told a meeting of tech leaders at the White House in 2021. That includes cloud storage, security systems, databases, risk-assessment algorithms, predictive policing systems, intelligence-gathering software, facial recognition tools, benefits-fraud detection mechanisms… the list goes on and on.
This is very different from a few decades ago, when government largely built what government used. It gives tech companies enormous leverage. They can change systems, pull the plug (see Elon Musk’s threat to cut off Starlink for Ukraine), influence policy, and lobby for friendlier regulation. It means that officials know little about the tools they’re using, and that the companies’ design decisions—which are shrouded in commercial secrecy—determine a lot about how government operates.
But! Marietje Schaake, a former Dutch MEP who has been at the forefront of debates about tech power for more than a decade, says governments can fight it if they choose to. Her new book The Tech Coup (in which I found that Biden quote) enumerates all the ways democracy is vulnerable to the power of Big Tech, and suggests some solutions. I spoke to her.
The TL;DR: Schaake is actually less critical of tech companies for being predatory than she is of governments for being lame. She has a bunch of recommendations, and you may be surprised to know that—though she wouldn’t be opposed—breaking up Big Tech isn’t one of them.
The interview is below—but first, I thought it worth summarizing the book’s key recommendations, because they show that there’s no magic wand, only a whole bunch of partial spells.
Ban or severely restrict the most antidemocratic technologies: spyware, data brokers, facial recognition, and cryptocurrency
Make it mandatory to state whenever AI is being used
Outlaw covert investments—e.g. through shell companies—in tech firms, and make it mandatory to publish tech projects’ environmental impacts
Extend public accountability—i.e., subject tech firms working on government’s behalf to the same kind of oversight as government itself
Use the government’s purchasing power as leverage to force tech contractors to build in more democracy-friendly standards (e.g. transparency, privacy, accountability)
Create or rebuild technology expertise within government—e.g., revive the US’s Office of Technology Assessment
Identify systemically important tech institutions, like we now have systemically important banks, and regulate them more aggressively
Create an arbitration court to investigate cyberattacks between countries
Create a supranational body to set democratic global tech policy—kinda like a WTO or G20 but for tech
Build a “public stack” of digital technologies (e.g., social media platforms) not owned by tech firms to provide alternatives
Make laws on technology less prescriptive and more about general principles, but give more latitude (and resources!) to enforcing them
And now the interview. It has been edited for clarity and concision. Yes, I know, it’s still pretty long.
On Big Tech’s power
You have endless examples of the risks of tech companies having too much power. What are some of your favorites?
Well, the combination of the legal gray zone and the power of companies is incredibly clear when it comes to war and conflict. The fact that it's companies that are building critical infrastructure, that are scanning it for risk, that are securing it, and that are now engaging in offensive operations by way of defense.
You give the example of Elon Musk turning on Starlink for Ukraine and then threatening to turn it off again. Or Safran, the French company whose voting technology for Kenya’s 2017 election failed. The election had to be annulled, which caused a huge amount of mistrust in the political process. They're examples of how a tech company can become a single point of failure for democracy.
Indeed, there was a lot of mistrust in Kenya, but it was also a real failure on the part of democratic states. Because this was a French company. France and EU foreign policy are all very focused on, "yes, we want free and fair elections, yes, we want democracy," but there's just a disconnect, it seems—a lack of willingness to make sure that companies act in line with democracy.
On why governments are being lame
Would you describe yourself as pro-tech, but anti-Big Tech?
I don't think I'm against tech at all. I'm just against unchecked power. I still think that the potential of some of these services is incredible. But there's a lack of transparency, a lack of ability to access information on the part of the public, a lack of accountability, and a lot of discretion, and thus power on the part of companies that simply have a profit [motive].
And we shouldn't be surprised; I mean, that's what companies do. The book is as much, if not more, a critique of democratic governments and leaders as it is of tech companies. These companies have simply taken the space that they've been given, right? The brazenness with which they are taking so many important decisions needs to be challenged. But it's not so much about the technology as such. I think that there's a lot of great potential from tech.
So you're saying governments are not taking it into their own hands to challenge this power?
That, and their dependency on the technology, and their temptation to use technology in dubious ways. Take spyware. I think spyware is an illegitimate product. I think it's very anti-democratic and dangerous. But what's worse is when democratic states think that they can use it to their own benefit and don't look at the collateral damage, don't keep themselves in check.
On what governments can do better
Clearly, governments need the services of these massive companies; they can't build a lot of their own technology. In the book you talk about a lot of different ways to make that dependence less of a democratic threat [see list above]. It seems like there's no silver bullet, but are any of these measures particularly important?
For a lot of these tech companies, government is one of the biggest buyers, and its leverage is under-utilized. Think about writing different criteria and contracts, putting many more public values in the way in which this technology is used. Think about transparency, or think about consequences for negligence or discrimination, or things that are clearly illegal but are just not enforced.
I want to challenge this a bit. The website that tracks federal spending says Microsoft, for example, got about $500 million from government contracts in fiscal year 2023, which is 0.2% of Microsoft's annual revenue. And Microsoft knows the government can't really manage without a lot of its services. So how much leverage do governments really have?
Well, they can still change the contracts. I'm also proposing a three-strikes mechanism. Microsoft would be very eligible for being struck out because it has had so many security breaches. So I do think that the leverage can be there as long as governments assert themselves.
The problem is that they've taken a backseat. I think they could be much more deliberate. They could also have model contracts that can be used by lower-level governments, public hospitals, public libraries, schools... these smaller organizations that are in no position to negotiate with the tech companies, and often won't even know how to make a contract.
Any other measures you think are especially important?
The public accountability extension [i.e. applying more oversight to tech companies when they work on behalf of government], I think, would have a quite systemic impact. The notion of oversight would simply be put on par with how it is now with non-digital services.
It would also force governments to be much more accountable when they outsource functions to tech. For example, in the Netherlands, our tax authority needed an overhaul of the IT system. They got an estimate of six years—six years!—to overhaul this system. And they won't even make that [timeline]. So right now, even if the parliament wants to adopt changes to the tax system, they cannot be implemented because this overhaul is ongoing. It's insane.
And this is really not an area where you can say, "Oh, this is the fault of the tech companies." Sometimes it is, like when they keep pushing more modern versions of software on a hospital that can't afford it and then becomes very vulnerable to cyberattacks. That's a very nefarious dynamic. But the example of the tax authority is much more a product of just blindly trusting that the tech will work, neglecting to have [the government's] own talent and knowledge in-house, and waking up far too late to the depth of the problem. So again, this is an example where I'm not at all anti-Big Tech.
On the EU vs the US approaches to regulation
We've seen a big divide emerge between the EU and the US on tech regulation. What do you see as the strengths and weaknesses of each regime?
I like the fact that the EU is more proactive and also dares to take the first steps. A lot has been made of the "Brussels Effect," the first-mover advantage. But also, of course, the one who moves first makes the mistakes first.
I'm excited about the fact that a lot of EU regulation is very deeply anchored in fundamental rights. I think that's great. Where I'm much more critical is that the EU is a single market, but national security remains a national-level competence. That means that a lot of national security when it comes to tech basically cannot be governed on the European level. It makes what the EU does very skewed towards economic tools and fundamental rights-based legislation, and not so much on national security.
That is where the US is stronger—more willing politically to invoke national security to regulate technology. In fact it's one of the main frames within which technology is regulated.
So there are strengths and good reasons for either approach, but neither is complete.
When I talk to Silicon Valley people, a lot of them will say they're not anti-regulation, but they're convinced that the EU's approach to regulation is just leaving the EU behind on innovation. And they're worried that applying the same approach in the US will have the same effect. How do you answer that concern?
I hear this all the time, and it doesn't add up. So much tech regulation in the EU is so recent, but the fact that there's not a comparable Silicon Valley-type ecosystem or access to capital long predates tech regulation. So the causality there is dubious.
Obviously, badly executed regulation or fragmentation hinders companies. But one could argue that actually, the fact that there is so much difference between EU member states and how they approach things like access to capital is the product of a lack of regulation, because it's a lack of harmonization, which is what a lot of EU regulation is about.
We also just have a lot of different languages, which may sound mundane, but it really matters. It's harder for a Swede to do business in Portugal.
So I do worry about the capability of European innovative companies to flourish, but I think it has a lot more to do with investment and with fragmentation than with regulation. You can see some of those comments also in the recently released Draghi report, where he says that there is a need for more European integration and more investment.
On Section 230 and freedom of speech
You talk a bit about Section 230 in the book, and you say that it's given platform companies a free pass to avoid responsibility for toxic content. But you don't take a position in the book on repealing it or rewriting it. So what is your position?
I think it makes sense to rewrite it. In general, I worry that the debate in the US is way too focused on freedom of expression, and much less on the use of data for targeting, discrimination, and other very valid public needs. I think freedom of expression is a key pillar of a free society, but it's not limitless, and it's also not the only freedom or right that people enjoy.
This clash of different rights and freedoms in the US is just not addressed as much, because it serves the companies really well to keep talking about freedom of expression. Meanwhile, though, if you look at the difference between what companies say and what they do around freedom of expression, there's obviously a big discrepancy.
So how would you rewrite Section 230? There's an argument from internet freedom activists like the Electronic Frontier Foundation that if you weaken it, that will primarily harm smaller-scale internet users. The big tech companies can afford lawyers, they can afford content moderators, but you risk stifling the smaller players.
You think about sanctions, and how to make those sanctions more focused. In the EU we have the Digital Services Act and the Digital Markets Act, where certain-sized companies are in view and others are not. So you can make it so there's a different compliance standard for smaller companies, or you can make resources available to them [for compliance].
Wikipedia is a really good example—it's very big, but it's not commercial, and so it runs into a lot of the challenges that big platforms do when it comes to content moderation, but it doesn't have the resources that a Facebook or YouTube has. So looking at the public function, looking at the nonprofit character, looking at revenue, and making obligations proportional, is one path to make it more fair.
On breaking up Big Tech
Another thing that surprised me a little is that you don't call for breaking up Big Tech, which is something people might expect you to say. Why is that?
Well, because it already is possible within competition law, and so I don't know if I would add much by saying that. What I do encourage is looking at significant societal power. Competition law is ultimately an economic tool—very important, of course, but not always direct enough when it comes to solving for the harms to democracy that we're seeing now. I'm happy that there's good competition enforcement, I'm happy that there's data protection, but the solutions that I've focused on are more directly targeting this lack of democratic oversight and accountability. But I'm perfectly happy, if there's reason for companies to be broken up, for that to happen immediately.
On banning bad stuff
There are some things you talk about banning more or less outright—data brokers, facial recognition, spyware, and crypto. Some would say these technologies are already out there. You can't just abolish them, and even if you do, people will develop them in countries that are less democratic, or develop them covertly. How do you expect such bans to be enforced?
It's really about beginning to create a threshold. Of course, companies can go to non-democratic states, but what's baffling to me is that actually spyware can still be used [in democratic states] without many restrictions. In the EU, I worked on a law that limits the exports of spyware to known human rights violators, because that was what I could influence at the time, [but EU countries can still use it domestically].
The US has done an unprecedented, and to me, quite surprising thing by saying the US government is no longer allowed to use commercial spyware. Of course, that's something that the US government can easily say, because they can develop their own hacking tools quite well. But it does begin to attach consequences to violating the rules, to create a moral threshold, where democracies say, "We're just not going to do this."
It will never be waterproof. But there's a lot that can be done with sanctions. Sanctions get evaded, but sanctions can also be enforced. When Belgian chemicals companies were exporting to Syria and were caught, they were put on trial. If there's no law, then there's no limit.
One problem with banning things, though, is that a lot of the technologies being developed today are dual-use or multi-use. Crypto has mainly been a speculative asset, yes, but it's also helped activists fighting authoritarian governments to gather donations and distribute them. So who makes the call that a technology is just too harmful to exist?
I think any democratic government is legitimately in a position to make such calls.
On how to make government move faster
One of the core problems here is of technology moving too fast for policy and lawmaking to keep up. When the EU AI Act was first being drafted, it didn't even contemplate generative AI. Now we have agentic AI emerging, which is going to create all sorts of interesting problems with tracking where data is going. There'll probably be some other development in the next year or two that nobody could predict. And lawmaking isn't iterative like technology—laws don't get changed for years. What do you do about that?
I propose to rebalance the relationship between legislation and enforcement. Enforcement is too often an afterthought. That's a weakness of the EU, where GDPR was hailed as revolutionary but is enforced so poorly that it's an example of a good law gone bad. There's too much emphasis in the legislation on the technology and how it works, and too little on enforcement. I'm suggesting to emphasize certain principles more strongly and give more discretion and resources to enforcement bodies to assess whether those principles are at stake.
How to make the law iterative is to write in review processes from the beginning. For instance, this is a law for data protection, and we are going to review it every four years and make updates in the spirit of the law. If it turns out after four years that small and medium enterprises are disproportionately hindered, or that Big Tech companies are able to challenge decisions in court again and again, then the spirit of the law isn't being fulfilled. I think this is possible but it demands a different mindset of lawmakers vis-à-vis the technology.
Links
The Oscars for bureaucrats. A heartwarming little story about the Samuel J. Heyman Service to America Medals, aka the Sammies, awarded yearly to US government officials who go above and beyond in their devotion to making things better for citizens. (The New Yorker)
Please take my money. An Austrian heiress decides to give away the bulk of her wealth and convenes a citizens’ assembly to decide how to spend it to alleviate inequality. Fractious debates ensue. (The New Yorker)
The Pact for the Future has a gaping hole. The UN agreement, launched this week to great fanfare, calls for action in several key areas but mostly ignores one of the most important: the decline of democracy. (International IDEA)
The metastasizing of the bureaucracy. A single DoD procurement framework has ballooned from 7 pages to more than 2,000 in half a century. It’s just one example of how regulatory requirements and procedures in the US are multiplying unchecked, bogging down infrastructure projects and action on critical challenges like climate. (Eating Policy)
AI could help restore trust in democracy. Congress could use the technology to organize vast quantities of incoming information, including feedback from voters that is currently lost in the ether. (Tech Policy Press)
Updating economics. Much of the dismal science is still stuck in 18th-century thinking, argues a new book by two economists, who propose ways to reinvent it for the 21st century and integrate it with other disciplines. (Project Syndicate)