Google’s Epic Loss + Silicon Valley’s Curious New Subculture + How 2023 Changed the Internet

Publish Date: 2023/12/15

Hard Fork

Transcript

Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.

Kevin, there are now more Cybertrucks on the road. Yes. And that means that Cybertrucks are starting to appear on social media. Yes. And one of the ways that I have seen these Cybertrucks presented is being like stuck in the mud. Have you seen the stuck in the mud post? The Christmas tree one? There was the Christmas tree one where it's trying to tow a tree and it's gotten itself into a terrible spot of trouble.

And, you know, I referred to these things last week as a panic room that can drive. And over on Threads, my friend James messaged me and said, Casey, there's actually a better name now for the Cybertruck. Do you know what they're calling it? What? They're calling it a sport futility vehicle. What?

It's an SFV, Kevin. That's very good. If you want to accomplish a task, but don't actually want to accomplish the task, Cybertruck could be for you. If you want to go pick up a Christmas tree and you're more worried about being shot at on your way to get the Christmas tree than actually being able to haul the Christmas tree out of the Christmas tree place, get a Cybertruck. Sport Futility Vehicle. It's really good. It's so good.

I'm Kevin Roose, tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, it's an epic win over Google in an important antitrust case. We'll tell you what it means for the digital economy. Then, Kevin investigates Silicon Valley's hottest new subculture, effective accelerationism. And finally, Cloudflare CEO Matthew Prince drops by to talk about how the internet changed in 2023 and what's coming in 2024. ♪

So this week, one of the biggest stories in the tech world is that Epic Games, the maker of Fortnite, just won a big lawsuit against Google. It was an epic win. Yes, an epic win. And this lawsuit was over the Google Play Store and whether it stifled competition and maintained a monopoly. A jury this week decided that it did. This is a big deal. It is a big deal because it

all comes down to how do people make money on the internet? If you are an app developer, if you have a business, these days, you probably need to have an app. And there are only a couple of app stores. And so what rules can those app stores

set for you? How much of your money can they take? That is what was at stake here in this trial. Yeah, so this is sort of a fascinating story. Epic Games is the maker of Fortnite and other popular games. And they're run by this guy, Tim Sweeney, who has sort of made it his mission. He's sort of on this one-man quest to break up these app store monopolies. So for years now, he has kind of been laying the groundwork for this legal assault

on Apple and Google, which run the two biggest app stores. Epic also brought a case against Apple over its treatment of apps in the app store. Epic mostly lost that case. A judge decided that Apple did not have an illegal monopoly in the app world. But a jury found that Google, in this case, actually was operating illegally. So let's talk about this case, because you wrote about this this week in your newsletter, and I've been following it a little bit, although admittedly not as much as you have. So just

Can you just remind us of sort of what the claims were here and kind of what Google and Epic were trying to prove? Yeah, it's really tricky. And in order for Epic to win this, the jury had to agree to a multi-step argument for exactly why what Google did was illegal.

And what it came down to was this idea that for many years, Google positioned Android as the open alternative to Apple, right? We all know Apple has very stringent rules about what you can do on their devices. But Google said, hey, Android is open. You can take it. You can fork it. You can hard fork it if you want to. Live and let live. Let a thousand Androids bloom.

But at the same time, as the years went on, they went from this position of being very laissez-faire to saying, "Well, actually, if you want to have Gmail on your phone, if you want to have YouTube on your phone, if you want to take advantage of Google services, then you're going to have our app store on your device and you're going to need to use our billing system." It tied its billing system together with its app store and it said, "You have to use these two things."

And of course, people didn't like this because they said, Google, we want to live and let live like you told us that we could. We want to have our own app store. We want to run our own billing system. And so that was the spark of this lawsuit was Tim Sweeney saying, I'm going to put my money where my mouth is, and I'm going to introduce my own billing system into Fortnite on Android. And Google said, nope. And it pulled it.

out of the Google Play Store, and that is where this lawsuit began. Right, so Tim Sweeney and Epic Games, they sort of basically set up this direct payment tool which would let Fortnite users pay them instead of going through Google and having to give Google 30% of that revenue. Google says, no, you can't do that.

and so they sue. And basically, my understanding is the thing that they are asking for is to essentially be allowed to sort of make it so that if you are in Fortnite and you want to buy, let's say, a skin for your Fortnite character, they want you to be able to buy that through their app store, their payment processor. They want to

keep the money from that rather than doing what they currently have to do, which is to give a big chunk of that to Google. Right. Potentially as much as 30%. Google generally takes a 15% fee for app subscriptions and a 30% fee for purchases made within apps.

It also says that the vast majority of developers qualify for a fee of 15% or lower, and that it is only the big guys like Epic that have to pay that 30%. But a lot of people are still paying 30%, so we're going to use 30% throughout this conversation when we talk about

what folks are paying. - Right. - Now this does get tricky because maybe you're an Android phone user and you're saying, but Casey, I can already put another app store on my phone because I can sideload it. And that's right, and that was one of the ways that Google attempted to get out of this lawsuit was by saying, look, if you really, really wanna bring in another app store, you can do it.

But the jury found that that actually was not enough and that Google had put enough restrictions on these developers that it did constitute an illegal monopoly. Yeah, that was one of the most interesting things about this case for me was sort of the insight that it has given us into how companies like Google operate.

There was this thing that came up during the trial called Project Hug. So Project Hug was this codename inside Google for this initiative that started in response to Epic sort of coming up with all these ways to bypass the Play Store. Google got worried that other game developers would try the same thing, that they would say, well, why are we giving 30% of our revenue to you? We don't want to do that either.

And real quick, one of my favorite details about this was that Google looked into the possibility that other developers would follow Epic's lead on this. Essentially, if they were allowed to do what Epic did, how many of them would follow? And they estimated that up to 100% of top developers would do this. Because why would you give Google 30% if you didn't have to?

Totally. So because Google is worried about losing all this revenue from all these mobile game developers, it did what it called Project Hug, which is to basically go around to all of the top mobile gaming developers and essentially pay them off, right? Like give them some sort of deal to launch in the Google Play Store to basically incentivize them to want to stay within Google's

Google's garden and not go out and build their own thing. Yeah. The basic idea was, okay, to prevent this full-scale revolt against the Play Store, we're going to run around. We're going to cut a bunch of sweetheart deals. It was just their way of trying to buy off everyone who was about to run screaming out the door. Yeah. Yeah. So what was Google's defense to all of this? Why did they say that this kind of thing was legal? It

really came down to this. Their argument was, we are not a monopoly. We are in a duopoly. We compete against Apple. And so we can essentially do what we want because Apple exists. Right. Yeah. And the jury did not seem to buy that. They did not. And this is interestingly one of the reasons why the outcome was different here than in the Apple case. Because in the Apple case, which was decided by a judge rather than a jury, the judge found that Apple and Google were part of the same market.

And if you know one thing about antitrust, and that's about as much as I know about antitrust, it all comes down to how you define the market, right? Because if I want to come in and I say, hey, you have too much control over this market, the first line of defense is always to say, well, the market is actually huge, right? Like if you were to say, you know, well, Amazon clearly has like a monopoly on its, you know, over its sellers, Amazon would come along and they would say, but look at all

of the other e-commerce companies that exist, right? And this is how they wind up getting out of antitrust issues. This is what Apple was able to do in its own antitrust trial. They were able to say, like, look, we're in this very competitive market, we don't have a monopoly. But in the Google case, this argument didn't work. The jury was convinced that the market could be limited to just Android, and when you do that, it's pretty clear who has control over Android. Totally.

What happens now? I mean, is it now that this case has been won by Epic Games, does this mean that suddenly, like, Google's not allowed to charge, you know, a percentage of revenue to app developers? Are app developers allowed to make their own app stores? What are the ramifications of this? The answer is we don't know yet. So in January, both sides are going to come back and they're going to write up these sort of post-trial briefs where they begin to talk about

what remedies they think might be appropriate. Google has told me that, yes, they are planning to appeal this case. So this will drag on for some time. But at the same time, in February or so, we might hear from the judge about what he thinks the proposed remedies are. The most extreme case is that, yes, he would come along and say something like, Google, you're not allowed to charge a fee on third-party billing, something like that. My guess is that he will not do something that extreme. But

you know, if the verdict is upheld, life will probably become at least a little bit easier for Android developers. Right. So Epic, they've won this case, but we still don't know what they're going to get out of it. And obviously, this will probably get appealed.

So I asked Google what it made of all this. It shared with me a comment from Wilson White, its vice president of government affairs and public policy. And Wilson said, quote, the trial made clear that we compete fiercely with Apple and its app store, as well as app stores on Android devices and gaming consoles. We will continue to defend the Android business model and remain deeply committed to our users, partners, and the broader Android ecosystem. So this is obviously not the result that Google wanted, right?

they wanted to win this one. But how bad is this for them? Like, I don't really have a sense of how much of Google's money is earned from the Google Play Store versus like its search ads or something like that. So how much do they stand to lose from this decision? So I mean, I think the best way to get a sense of how bad this is for Google is to look at the stock market, which after this was announced, Google stock declined less than 1%. Okay, the Google

Play Store is big by the standards of most businesses. You know, Epic's expert estimated during the trial that Google earned $12 billion in operating profit from the Play Store in 2021. And its profit margin on that, by the way, was 71%. So this is just sort of a pure profit machine.

If that $12 billion were to go away or to be cut in half, well, now Google has a, what, $4 or $6 billion hole it has to fill somewhere. But the nice thing about being Google, Kevin, is that you own 4,600 different businesses, you have monopoly control over the web, and you'll probably be able to scrounge that up in your couch cushions or just throw another ad onto mobile search, and there's your $4 to $6 billion and everything's fine. And I just want to say...

This is one of the reasons why I find Google's behavior in this trial so exasperating. There are multiple reasons, but one of them is just they don't need this money. This is a company that earned $19.7 billion in profit in the last quarter alone, and they are going to nickel-and-dime these developers to death. And when you ask them how they justify it, all they really say is like, Casey, these are industry standards. 30% is the industry standard, right? Like these are

obviously just arbitrary numbers. If Google's making a 71% profit margin, that tells you that they're not reinvesting most of this back into the Google Play Store. This is just a very rich company that wants to get even richer, and I'm not here for it. Generally, when companies like Google and Apple are asked, like, why do you charge so much money to developers in your app stores? They'll say, like, well, you know, we invented the app store. This is our platform. You know, we spend a lot of money trying to keep it safe and make sure that

people aren't submitting apps filled with malware or that will scam them in some way.

And like, I can see the rationale for that up to a certain point. Like it doesn't cost zero to maintain a big app store, but it also doesn't cost $12 billion a year either. Yeah, that's right. And look, I mean, Google absolutely should be able to collect something from these developers. It has invested many billions of dollars into Android. It should be able to recoup that investment in some way. It's just also clear that 30% is an arbitrary number. And given

the size of some of the businesses on its platform, Epic included, I just don't know how you justify taking hundreds of millions of dollars from these folks over time. Yeah.

Now, Casey, you're a gamer. How is the Fortnite community specifically reacting to this news? Are they flossing in the streets? They're flossing in the streets. They're doing every emote you can think of. They're dabbing again, Kevin. They are. They're wearing their best skins. And yeah, it's a real party on that bus with a parachute on it.

So I guess if we step back, do you think that we're starting to see these kind of massive app stores crumble? Do you think we're starting to see the beginning of the end of kind of the big app store monopoly that just has a tollbooth on it that takes 30% of whatever comes in and out? I do. I believe that we are beginning to see the end of 30%. In fact, there was a story on the day that we record that Europe is set to take a similar action against

Apple after Spotify complained about its rules. Of course, Spotify has to give a good chunk of money to Apple for all of these subscriptions that flow through that app. It's not allowed to point people to its website where it can just get them to sign up there without having to pay Apple. And according to Bloomberg, Europe is about to crack down on that. So look,

Apple and Google are going to be dragged into this new world, kicking and screaming. They're going to fight for every single cent because they have no incentive not to. But little by little, this world is starting to crumble. And I just hope that more of this money starts to flow back into the small and medium-sized businesses that want to build on these app stores. I think it's an interesting question. If instead of having to give 15% to 30% of all of your revenue to these two companies that don't need the money, could you just keep that for yourself? Would we maybe have a more vibrant

internet? I bet we would. Well, it seems like this case and these cases by Epic Games have been sort of framed as like the underdogs taking on the evil empire, right? Like Epic Games and the small developers of the world sort of taking on these app stores.

But it's also true that these are not plucky small developers. Epic Games is a huge... I mean, they make one of the most popular games in the world. And so they also have leverage when they want to send people to their own app store or to pay through their own billing system. They can do that. So I wonder if small developers are actually going to benefit from this or if it's mostly going to be kind of these medium to large size developers. Look, I mean, there's no doubt that Epic would benefit hugely from this. That's the reason that they undertook this whole thing. But I think...

I think it's worth saying that they probably could have gotten a sweetheart deal too if they wanted one, right? Like they did not have to choose this fight. Anytime you take on a big legal case like this, it is its own distraction. Epic's business is like it's doing okay, but Fortnite is pretty mature. They have not really pulled another rabbit out of their hat in a long time. So I'm sure that they would love to have those extra millions or hundreds of millions of dollars to be able to use for R&D to come up with something new.

Yeah. Another thing that stuck out to me about this case is that one of the things that the judge and Epic Games took issue with was Google's habit of having its employees hide their chat logs. Basically, like in Google Chat, you can set it to auto-delete after 24 hours. And there were a number of examples cited in the trial

where, you know, executives or employees of Google would be having some discussion about antitrust. It would get a little spicy. And then someone would say like, hey, everyone, the chat history is turned on. And then like the transcript would go dark after that because presumably they turned on the auto disappearing mode. That did not

go over well with the judge. No, it did not. This judge, James Donato, called Google's behavior, quote, the most serious and disturbing evidence I have ever seen in my decade on the bench with respect to a party intentionally suppressing relevant evidence. He also called it, quote, a frontal assault

on the fair administration of justice. He has promised to investigate. So one more thing to add about this case is that Epic was able to prove its case while still missing probably

most of the relevant evidence because Google had destroyed it. Right. Now, what Google would say is that a lot of the material that we're talking about that was deleted was deleted because in Google Chat, where these conversations were taking place, by default, the conversations auto-delete after 24 hours.

And I guess executives are sort of changing those defaults now, so this doesn't happen anymore. But like, come on. Oh, whoopsie. Accidentally auto-deleted my incriminating antitrust conversation. Hate when that happens. Yeah. So the whole thing is, it's giving chicanery. It's giving antics, hijinks. And it is. I mean, look, when was the last time we had a frontal assault on the fair administration of justice on the show? Yeah, I'm not a lawyer, but I think when a judge says that to you, you're having a bad day. Yeah, bad day.

So, naughty Google. Yeah, Google is on the naughty list for the Play Store this year. And Epic Games is on Santa's nice list. That's right. Instead of getting 30% of revenues in their stocking this year, they're getting a lump of coal. When we come back, Kevin insists that I learn something about a person known as Based Beff Jezos.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us/ai.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

So, Kevin, this week you wandered into another wild San Francisco subculture. Some might say a religion. And these people are called the Effective Accelerationists, or EAC for short. And your story opens with a scene at a party

where the EACs are putting up banners that say things like, accelerate or die. So I guess my first question is, how worried do I need to be about these people? So I was not actually at this party, but I did hear a lot about it from people who were there. Once again, a party you were not invited to. I know.

I know. Well, I actually was invited to this one, but it was like late on a school night. Was that like 7 p.m.? No, you know, parties, they start at like 11, and I'm too old for that. Oh, that's fair. So this was a party that was thrown by this subculture that is calling itself EAC. And this is something that I've been tracking for a while, about a year, actually. Do you have an EAC tracker? Yeah, I have an EAC tracker. Okay.

And this sort of was born on Twitter. A lot of people who are kind of in the AI world or adjacent to it in some way were sort of getting annoyed around the same time with all of this AI doomerism, or what they saw as AI doomerism, a lot of it coming out of the effective altruism movement, which we've talked about on this show. This is a group of sort of, I would call them like data-driven do-gooders who like to kind of research like how to do philanthropy, but

in recent years have been very, very concerned about AI safety. And so a lot of the kind of more worried folks in that world have ties to effective altruism. So there was this group of people on the internet who were basically like all these effective altruists, they're kind of taking over the conversation. They're getting all this attention. They're raising all these alarms about AI and how it could go rogue and kill us all.

And like, we don't believe that. And actually we feel like that's a dangerous ideology. And so we're going to start our own ideology. That's effectively the opposite of EA. And we're going to call it effective accelerationism. And our platform is basically going to be that we think AI and other technologies should just be allowed to go as fast as possible. And that we are sort of heading toward this glorious utopia of sort of

AI and superhuman intelligence and that we should just kind of get out of the way and let it happen. Got it. So at its root, then, EAC is a reaction to...

Effective altruism. Yes. I actually, there's a funny line this week. A writer, Zvi Mowshowitz, who covered this, said that basically, EAC is functionally a Waluigi to effective altruism's Luigi at this point. It's basically sort of the opposite movement with a lot of the opposite beliefs.

And, you know, when I first encountered this, it was like a few dozen people, most of them kind of anonymous accounts or pseudonyms, who would just gather in these kind of late-night Twitter Spaces, and they would talk about like politics and philosophy and A.I.

And it didn't seem at the time influential enough or important enough for me to write about. But I would say that started to change over the past few months. You have people like Marc Andreessen, whose sort of techno-optimist manifesto we've talked about. He has declared himself part of EAC.

And he has also cited some of the founders of EAC as his sort of inspirations for some of his ideology. Garry Tan, the president of Y Combinator, the influential startup incubator here in San Francisco, has also declared that he is part of the EAC movement. And you're just seeing a lot of people, you know, change their like display names on X or put EAC in their bio somewhere.

they're throwing these parties. It just, it seems to be gathering momentum in a way that made me feel like, okay, maybe it's time to write about these guys. Well, so let's try to take their core argument seriously for a second here. Do we think that pessimism about AI is getting in the way of progress and stopping a bunch of wonderful things from happening? Well, a couple ways of answering that. One is,

you know, is it limiting the rate of AI progress? And I would say the answer to that is pretty clearly no, right? Companies are racing ahead with this stuff. I mean, you could argue that, you know, maybe we would have gotten GPT-4 six months earlier if OpenAI hadn't had all these like effective altruists inside it trying to like make the systems as safe as possible. I think they're not really sort of materially slowing down overall AI progress because obviously, as we've talked about, there's this huge race going on.

I think it's more the culture and the discussions and the discourse around AI that they object to. You know, they see things like regulators in Washington and Europe being very worried about, like, these catastrophic risks coming from AI. You know, they see these open letters going around calling for six-month pauses on AI development. They hear people like, you know, us in the media being worried about AI. And they just think, like, all these people are blowing this stuff out of proportion, you know?

And so that is sort of the idea that they have risen up to promote is like, don't slow this stuff down. Every time you slow this stuff down, you're just delaying the inevitable. Right. And when you say it like that, that sounds reasonable. I could see how somebody would think, okay, these doomers are a little out of control. I think we should move faster on some of this stuff.

But at the same time, when you look deeply at this subculture, there are some pretty radical ideas in there, right? I was struck in your piece by how many people do not seem like they would be at all bothered if some sort of artificial general intelligence did emerge and actually just overtake human beings, right? Like, talk about kind of that religious aspect of these folks. Yeah, so it's complicated, right? Because all these groups have sort of

you know, bundles of ideas in them, and not everyone subscribes to everything. So like I talked to a bunch of EACs while I was writing this story and, you know, some of them were sort of like,

I just kind of like to go to the parties. I like the vibes. These are like fun people to hang out with. They're more optimistic than the effective altruists who are always talking about doomsday scenarios. Some of them were sort of basically just kind of libertarians who think that like capitalism is good and regulation is bad. And in general, the government should stay out of like regulating AI and

And some of them have these like very sort of, I would say, dark ideas. The idea of accelerationism itself is actually not a new idea. It has been around for decades and it was popularized by this philosopher, Nick Land, who basically believed that there were these forces of capitalism and of

AI and technology that were going to collide and produce something called the techno-capital singularity. And that would be sort of this point where, like, technology just runs the world. It overtakes us; we can't control it anymore. And that is an idea that some of the EACs have run with. And so the EACs that I've talked to, they kind of actually agree with the effective altruists that we could have superhuman AGI very soon.

They're just not worried about it. Some of them think like, well, this is just sort of the natural evolution of things. Like they have this idea of the successor species, which is that like maybe our job as humans is to sort of birth this thing, this form of intelligence that is smarter than us.

And if it wipes us out or subjugates us or makes us its slave, maybe that's just kind of the natural order of things and we shouldn't be too worried about it. I will say that's not something that a lot of EACs believe, but that is something that the movement's leaders, who we should talk about, have actually said.

Well, so let's talk about that. So one of the movement's leaders is this pseudonymous gentleman who goes by Beff Jezos, and he is one of the people who has said that the goal of EAC is to, I believe his quote was, usher in the next evolution of consciousness, creating unthinkable next-generation life forms. So who is this guy? Yeah. So EAC was started last year by this group, this sort of small group of people. They all had these like pseudonyms, like

Based Beff Jezos, and Bayeslord was another one. And Forbes earlier this month revealed the identity of Based Beff Jezos, who is a guy named Guillaume Verdon. He is 31. He's French Canadian. He used to work at Google X, which is sort of their like experimental lab.

He made some money on NFTs, strangely enough, and used that to bootstrap his new hardware company. He runs a company called Extropic. And, you know, he's just kind of like an engineer-philosopher guy who has some ideas. You know, I was sort of going back through some of their old conversations and their old Twitter Spaces. And, you know, at one point, Guillaume, this guy, starts talking about why he decided to start this movement.

And he basically says that it's because, you know, he and his friends who work in tech are constantly just being told that they're the bad guys, right? Like you're creating this stuff, this technology, this AI that's going to hurt society in all these ways. You're bad. You're irresponsible. You should slow down. And he was basically like, I wanted to create a movement where the engineers and the builders would be the heroes, right?

And so that's what he tried to do. Finally, we could center engineers and builders in Silicon Valley instead of the HR departments and the, you know, vice presidents for business. Right. And, you know, I think there's a little bit of like a persecution complex going on. A little bit? Yeah.

But so Based Beff Jezos is this guy, Guillaume Verdon. The rest of the EAC founders are still under pseudonyms and I think prefer to stay that way. I did ask Guillaume for an interview and he declined, although he did say he's going on the Lex Fridman podcast. So we'll be hearing more from him. So tune into that and let us know what he says. We'll be interested to hear. I am really struck by...

Two things. One is you compared this a minute earlier to libertarianism, and I have to say listening to you describe it, reading your story, it really does feel, at least for some segment of the EAC believers, like this just is rebranded libertarianism. I want to say that 100% of Marc Andreessen's interest in EAC is just that it gives a new coat of paint to an old idea, which is that you should not –

regulate capitalism because that reduces the amount of money that you can make as a venture capitalist. Does that seem like a fair assessment? Yeah, I think a lot of it is if you sort of dig one level below the memes and kind of the social media of it all, a lot of it is very standard libertarian stuff. And I should say it's also like not a new idea in Silicon Valley. There were all these groups that sort of popped up during the dot-com boom and even

earlier, like there were these groups like the transhumanists and the extropians. There were sort of all the early internet, kind of the Whole Earth Catalog era people. And a lot of those people were kind of techno-libertarians. They believed that the internet was this, you know, liberating thing and that the government should stay away from regulating it. And there was sort of this strain of idealism that basically said,

you know, technology deserves to be free and we should not regulate it. And so to me, EAC is kind of fusing the sort of hardcore libertarian economics of, you know, people like Hayek or Milton Friedman with these kind of Silicon Valley subcultures like the extropians and the transhumanists, with a healthy dose of just sort of pure rage against the effective altruists. Yeah.

The second thing that strikes me, you also mentioned, which is that the EACs and the EAs really are two sides of the same coin. And what is notable about that to me is that you do have these growing and relatively powerful contingents that both agree that AGI is coming and might be here soon. And that just sort of seems worth saying, right? That for all of the ways that the EACs might want to make fun of the decels, they do share

a lot of core beliefs. - Yeah, so I wasn't able to interview anyone who sort of defended the most extreme version of the EAC argument, which is like the AGIs are gonna take over and kill us all and that's good, or that's sort of the natural order of things if it happens.

What most of them would say is some version of, you know, AI is going to just be an incredibly positive thing for humanity. And the sooner it gets here, the sooner we can, you know, cure diseases, the sooner people can live longer, the sooner we can like fix all these problems with our society. And so the people trying to slow AI down are really just preventing all of those things.

those good things from emerging quickly. And, you know, I hear that, and I think that's actually something that some effective altruists also believe,

but they're also weighing the risks. And I think the EACs just don't think the risks are that serious, or at least as serious as people are making them out to be within the EA community. From the way that you're describing this, Kevin, it sounds like these folks believe that technology is the one and only solution to all of our problems, and that if we just sort of build enough tech, all of our problems will take care of themselves. Is that a fair read, or is there a political and social dimension to their thought too? Yeah.

It depends who you ask. I think, you know, I talked to a bunch of EACs, and I think they would answer these questions in slightly different ways. I think among the most hardcore EACs and some of the leaders, there is this feeling that sort of technology and capitalism are kind of these inevitable forces, that you can stand in the way or you can get out of the way, but ultimately...

they are too powerful to be permanently resisted. And so there's just kind of this idea that there are these currents that are just pulling us in the direction of the techno-capital singularity, and that we can delay it, but that ultimately it's inevitable. And I do think you're right that that's limited in some ways, because, you know, if you just think about something like climate change,

That's something that a lot of AI optimists will say that AI could help us fix, but

I don't think anyone is saying that it's going to fix itself, right? We need humans and politicians and governments and companies to come together to solve this. It's a social and a political problem, not just a technological problem. And so I think that's one place where a lot of people sort of disagree with the EACs, is just this sort of belief. And I would say this also exists to a certain extent in the effective altruism community, is this belief that there is this kind of inexorable march of technology, right?

that, you know, sort of our only options are to, like, stand in the way and hold up our hands and say stop, or get on board. Yeah. So you also write that there are a number of splinter groups that are forming out of EAC, including AAC, BioAC, and D-Acc. How do they differ? And do I really have to remember all this?

Yes. As with any good, you know, religious movement, there are splinter groups. So Grimes, who is also known as Elon Musk's ex and is a musician, actually DJed the EAC rave that I wrote about in my story.

You know, she has proposed something called AAC, which stands for Aligned Accelerationism, which is basically, what if we just accelerated, but like a little more carefully and making sure that the robots actually want us around? Let's just accelerate the good parts. Yes, exactly. There's also something called BioAC, which is sort of like

taking effective accelerationism to the world of biology and, like, putting chips in our bodies and augmenting ourselves so that we can more effectively, you know, compete and live in a world with lots of superintelligence in it. Sure. And then there's D-Acc,

which Vitalik Buterin, the founder of Ethereum, proposed. I think it stands for defensive accelerationism or decentralized accelerationism. He didn't really specify which of those it stands for, but basically it was kind of a compromise. It was kind of like, what if we accelerated the good parts, but also didn't stop worrying about the potential bad parts?

Got it. Okay, so I'm just going to forget about all of those things immediately, but we congratulate everyone who's coming up with them. I actually identify as an L-Yak. Have you heard of this? No. That's a fan of Linda Yaccarino, the CEO of X. I think she's one of the most interesting CEOs in Silicon Valley. Yeah. So please attend our L-Yak rave. It'll be later this month.

All right. So at the end of all this, Kevin, where do you shake out on the EACs? Do you think they have some good ideas that are worth paying attention to, or should we hope that they disappear? So, you know, I've said before that I think we should be celebrating progress in technology and other fields more in this country. Like, I think we should have had parades for the people who invented the COVID vaccines earlier.

And I think there's something to the kind of aesthetics of EAC, where they are sort of taking a conversation that has been very, I would say, dominated by negativity and pessimism and kind of injecting some optimism into that. I think there's something appealing about that for a lot of people, especially in Silicon Valley. The thing that I'm sort of worried about, and that I'll be very

interested to see how people react to, is this kind of idea that we should celebrate progress even if that progress hurts people, right? We know that AI is already starting to harm people in vulnerable communities, and that the smarter it gets, the more it potentially could cause job losses and things like that. So I just think it's going to be a very different conversation when people can actually see the harms from AI in their own lives and communities.

And so I do think there's some kind of natural limit to the number of people who are going to sign up for something like EAC. I don't think their most extreme versions of their ideas are very popular at all.

That said, I do think it's an interesting social phenomenon. And I think we are headed into the era of the AI religion, where you will have just these factions, these sects that are kind of, you know, working, operating, functioning as kind of online tribes, people declaring their allegiance to them, you know, prophets sort of rising up within these movements to like give everyone directions. I just think we're headed into like a very interesting time of

people not just having sort of political identities, but also kind of identities around how they feel about progress and technology and AI. That makes sense. My feeling about all of this is I think it's okay to want to accelerate like certain kinds of projects. If you're working on an AI system that is going to help identify cancer at earlier stages, by all means, go as fast as you possibly can. And maybe we do even need to tweak some regulations so you can go a little bit faster with some of that stuff, right?

But I just want to keep our accelerationism really, really specific. I think a broad-based movement that just says accelerate everything simultaneously is bound to cause really bad harms. And so to the EACs, I unfortunately do have to say, knock it off. All right, that's the EAC. When we come back, we talk about how the internet changed in 2023 with Cloudflare CEO, Matthew Prince.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Well, Kevin, it's been a big year on the internet. It sure has. And there have been many trends that have emerged and there have been people observing those trends and writing about those trends. Yeah, including us. Including us. So today we're going to have a conversation about how the internet changed in 2023 and what we can expect to change in 2024.

And I wanted to bring in Matthew Prince. Matthew is the CEO and co-founder of Cloudflare. And you may be wondering what the heck is Cloudflare. I would say it's one of these companies that does something that on the surface seems incredibly boring, but if it like disappeared overnight, the entire internet as we know it would basically collapse.

Cloudflare is an online security and data company. They sort of help websites operate. They help data move around the internet. And they also provide security services that help websites protect themselves from hackers and DDoS attacks, things like that.

One of the most important things Cloudflare does is serve as a free security guard for a huge chunk of the web. So a reason that more sites are not just taken down by random attacks is because Cloudflare has stood up and said, we are going to protect these sites. Yeah, so I've known Matthew Prince for a while. He is an unusually thoughtful tech CEO. He's been around for a while, and he just has...

Because of his position at Cloudflare, overseeing this vast chunk of the internet, he just has a very expansive view onto how the internet is changing, what is going on, and what we should be aware of and worried about. So today we're going to talk to Matthew about how he sees the internet changing and what he thinks the next year will hold. ♪

Matthew Prince, welcome to Hard Fork. Thank you for having me. So I always say that you guys are like the plumbers and bouncers of the internet. Cloudflare protects like a vast chunk of the internet from things like hackers, but also helps to route information around the internet in ways that I only kind of understand. But I know that your company is very important and also has a very broad view of the internet and what's going on on the internet. And I want to dive into that because you all just put out some really interesting research on this. But

First, I just think we should define what we're talking about when we talk about the internet in the year 2023. I think some people think the internet is just web pages. It doesn't include, you know, walled gardens like TikTok or Instagram. But what do you think the internet is? How would you define the internet in 2023? You know, I think that anything that you're doing on your phone, anything you're doing on your laptop, a lot of things that you're doing with your smart refrigerator or your

You know, your smart vacuum cleaner, all of that behind the scenes is getting connected to the Internet in one way or another. And at Cloudflare, you know, somewhere between 20 and 25 percent of the Web, but a huge percentage of the Internet as well, runs through our pipes. And that gives us the ability to see a lot of what's going on online and just understand the trends of how 2023 was different than 2022. Yeah.

So let's talk about that. So you all just put out this year-in-review report, which is one of my favorite things to look at every year, because it's just, it's like stuff that I don't really think about. Like, how much did internet traffic grow this year? And I was sort of shocked by this. Internet traffic grew by 25% overall this year. I didn't know we could spend any more time on the internet, but apparently that's true. So how do you all measure that, and what does that tell you about where we are on the internet's life cycle? Yeah, so I think the first thing is that while...

All of us who live in the U.S., and we're here in San Francisco, are using the Internet just almost continuously, frankly, almost pathologically. It's amazing that still half of the world's population isn't connected and online. And I think the biggest driver of growth over the last year has just been that, of the 4 billion people who weren't online, some of them got online this year. And that turns out

to be one of the biggest ways that you can drive more growth. So we saw significant growth across India, Africa, a lot of Southeast Asia, and that was driving a lot of what that growth was. How we're able to see that is, you know, as you are, you know, sending a text message on your phone or as you're, you know, interacting with your smart vacuum cleaner, there's a good chance that that is actually traffic which is passing through Cloudflare. And we don't see all the details of that, but we do see enough to be able to measure general trends.

And is it the case then that you're still seeing growth in the United States as well? Places where people are already online. Is the fact that they now have the smart vacuum cleaners and the smart refrigerators, are you seeing more growth there? Yeah, I think we're continuing to see more people spend time online. It does not look like we've seen a drop-off.

Actually, out of the pandemic, we definitely did see some services start to decline. You didn't see as many people on streaming services. That's continued to be the general case. But if anything, that's leveled out, and we are seeing that as more things do connect to the Internet, that's just more traffic across the network. Again, even in developed countries that are highly connected, we're still seeing growth in overall Internet usage.

And sort of when you step away from the specific statistics, you know, we could go through a list of these things. Google is, again, the most popular internet service, with TikTok now in fourth place after Facebook and Apple. Facebook is the number one, you know, website in the social media category, followed by TikTok, Instagram and X slash Twitter.

But I just want to step back from that and ask you, as you look back on 2023 and sort of what's been happening, not just on the part of the internet that Cloudflare services, but just the entire internet and the entire sort of online ecosystem, what were some of the biggest changes that you think we went through in 2023? You know, 2023 almost feels like what I would have predicted 2022 was going to be like. Mm-hmm.

I think that the big story out of 2022 was obviously the Russian invasion of Ukraine.

And we anticipated that in 2022, there would be a massive rise in cyber attacks originating from Russia and Russian-born hackers going after Western allies of Ukraine. And that then largely didn't happen. It was actually quite quiet on the sort of cyber front. And it kind of had us scratching our heads, asking, like, why has this been the case?

2023 made up for that. So we saw a dramatic increase, especially after July of this year, in the amount of attacks that were going on online. That even accelerated more with the Hamas attack on Israel. Attacks meaning cyber attacks, meaning hacking into websites. Yeah, hacking into websites, trying to disrupt websites, trying to do various things. And while 2022 was very quiet, 2023, especially the second half of 2023, was extremely busy. And I think what

we're seeing very much is that whatever is happening in the physical world gets reflected very much in the digital world. Almost simultaneous with the Hamas attack on Israel, we saw a substantial increase in cyber attacks.

And over time, I think the digital world is really reflecting what it is that we're seeing. And in a very tumultuous world that we're seeing today in 2023, I think it's been a very tumultuous world online as well. How resilient are the Western allies turning out to be against these sorts of attacks? Have you seen sort of anything really scary and new, or are people mostly just trying the same tactics that they've been using for years?

I think that the attacks range. We've had, you know, what I would call a series of just kind of patriotic Russian kids that are launching sort of just really kind of not very sophisticated, but disruptive attacks. And they can cause damage because they can knock things offline. But they're sort of the equivalent of a caveman with a club in terms of sophistication. On the other side, I think that as there has been

more distraction around kind of what is happening in the Middle East, what is happening in Ukraine, we're seeing that there are attackers out of China, there are attackers out of North Korea. They're launching much, much, much more sophisticated attacks, oftentimes out of North Korea targeting the crypto space, China oftentimes targeting either critical infrastructure in the United States or various platforms,

places where there's a lot of intellectual property. And in those cases, those are extremely sophisticated attacks. And even some of the most fortified organizations that are out there have problems with that. You saw with the attack against Okta that happened this year. A lot of sophistication going into that. And so I think that as there is this sort of general noise around what's going on online in the cyberspace,

the more sophisticated attackers are using that as almost cover to be doing much more damage. The big story of the year, obviously, in the tech world has been AI and all kinds of predictions out there about how AI could reshape the internet, you know, fill everything up with spammy AI-generated garbage, help people create new cyber weapons, change the open web in all of these different ways.

What do you see AI doing to the internet this year, and what do you think we should look for next year? Yeah, again, I think it almost is the big story of this year, but I think it will actually have the big effects next year. I don't think we've seen a huge amount of change yet.

I think there are a bunch of headlines and things to worry about. There's, you know, headlines of parents getting tricked into sending fraudsters money because they think their daughter's in a Mexican prison, when it's not even their daughter, or, you know, fake news and sort of the ideas of what people can create. Some people are manipulating Google's algorithm with SEO, trying to inflate their own rankings. But I think these are sort of the

leading indicators for what is going to become a real challenge next year. And I think the thing that we're looking to the most is regardless of what your politics are, the 2024 election is going to be really a fulcrum where a lot of these things come together. And so that's a place that we're spending a lot of time. I think it's a place where if I were working for The New York Times, I would be trying to sort of say, how can I help tell

what is human-generated versus what is machine-generated. And I think we're sort of seeing the early indications of what those headlines are. We're seeing the risk, but we haven't seen a ton of the actual effect yet. I'm actually generally pretty positive on how AI will affect cybersecurity.

At some level, Cloudflare has always been an AI company. We would never describe ourselves that way. But the whole theory of the company was, if you can get enough traffic passing through your systems, then you can look at it, analyze it, and make predictions on what the next cyber attacks are going to be.

And in the same way that in the last 18 months it felt like AI systems went from, you know, sort of jokes to being really interesting, internally, about 18 months ago was the first time that our systems started to pick up new cyber threats and new attacks that no human had ever identified before. And that went from something that was really novel at the time to something that's now happening on a relatively regular basis.

The good news is that I think that those systems are really good at helping us protect it. And if you look at a lot of the AI companies, they actually are Cloudflare customers where we're using our AI systems to protect their AI systems. That's sweet. It's like friends looking out for friends, bots just looking out for other bots. That's exactly right. Matthew, you and I have talked about AI.

a bunch before. I would describe you as sort of an AI centrist. Like you're not one of these people who thinks we're doomed and we should go into the bunkers and start hiding from the robot apocalypse. But you're also not like,

a sort of wild-eyed, you know, techno-optimist who thinks everything's going to be okay. I'm surprised that, again, I think I'm sort of a centrist on a lot of these things. And so it was surprising to get an invitation to be on this, because the way you get ratings these days is to be on one extreme or the other. But one of the ideas that you've talked with me about that I found interesting is this idea that AI could actually be less global than we

think, that it could sort of balkanize or splinter the internet, with different countries running their own AI systems. So explain what you meant by that, because I think that was a really interesting idea. I think that,

Today, if you look at AI systems, about 95 percent of the infrastructure that is running AI, so the Nvidia GPU chips, the systems that are actually cranking out these AI models and actually running inference on these AI models, is being deployed in the United States today.

And I live in the United States, and we're sitting here in the United States right now. And if you're in the United States, you should be like, wow, that's really cool, we're leading that innovation. But I think if you look at what came out of the EU this last week, in terms of their AI Act,

And you talk to regulators around the rest of the world, what you hear time and time and time again is, we don't want to make the same mistake with this next technological revolution that we did with cloud, that we did with the internet, that we did with mobile. This time it's going to be different.

The mistake being letting American companies run the whole thing? Correct. Yeah. And so I think that there's very much a sense that if this is another sea change in terms of technological movements, they want to make sure that they are a part of it, that data is going to stay local in their own regions, and that they're going to be able to either take advantage of it or maybe shut it down. And so I think that this is one of those periods of time.

And there's a real force to say, we want to, for some really noble and good reasons, but also for some just purely protectionist reasons, we want to be able to control these new systems as they come online. And so someone joked the other day that AI is going to be the first industry that's regulated before it becomes an industry. And it has that feel. And I think that's actually kind of a dangerous way

for things to go down. We don't know what this is going to turn into. This is, you know, pick your best metaphor, but, you know, if you're a baseball fan, we're at the, I don't know, top of the first inning in terms of AI and what this is going to be. And so I think

there is a rush to be able to control this, in part because there are so many extremes around here. I think that smart regulators will hang back a little bit, see what's going on, and then let this develop before it goes forward. But I do think that more and more,

regulators around the world are saying, we want more control of how the internet works. And they're using AI as a way to try and put the internet effectively back in a box. I mean, even before AI, we had seen the internet starting to splinter into zones, right? And it seems like over the past decade or so, we've gone from having maybe like a Western and a non-Western internet to like

an American internet, a European internet. India sort of has its own internet, right? This seems like a trend that is accelerating to me. I wonder, do you just see the continued balkanization of the internet accelerating? And does AI wind up playing a role in that? Yeah, you know, I...

If you think of the first 40 years of the Internet, traditional sources of power, whether that's media, religion, education, family, government, like the Internet was a massive disruption to those things. And I think 2016 was this turning point. And depending on where you are in the world, you see it as a turning point for different reasons. In the U.S., it was the Trump election. In Europe, it's

Brexit. In Asia, it's a lot of consolidation of Xi's power and a number of other conflicts that happened in that region. I tend to look at something that's much, much more mundane, which is that July of 2016 was when the Associated Press said you no longer had to capitalize the I in the internet anymore. And again, that's not the cause, but it's actually an effect. It's that point in time where we were like, oh yeah, we just take this for granted. It's like oxygen. It's everywhere.

And what also happened at the same time is that the problems of the internet were always there, but as we took it for granted, we started to say, oh, let's look at all of the downsides. And I think the next 40 years are exactly that. I think that that's what we're in the midst of. And I think that those traditional sources of power are very much

trying to put the internet back in a box. And right now, historically, there have been two internets. As you said, there was sort of the Chinese internet and China was smart in a lot of ways to say, we're just never going to let this in, recognizing the threat to the systems and traditions and culture that they had. We're never going to let the internet in. I think the race right now

is Russia able to recreate the Chinese internet? Is Iran able to recreate the Chinese internet? Is Turkey, is India, is Brazil? And if the answer is yes, then I think we are Balkanized. I think the good news is that while a lot of people are talking about doing that,

In Russia, if you want Western media, you can still get access to it. They have not figured out how to rebuild China. It's very hard, once the horse is out of the barn, to get it back into the barn. And that, I think, is the race right now: is the rest of the world going to be able to figure out how to balkanize the internet or not? And I think that's the struggle of

you know, at least the next 35 years. Fascinating to note that Russia has still not banned YouTube, which, like, you would assume that at this point in this war they would have, but they haven't. Maybe Vladimir Putin just really likes mukbang videos. He's like, don't take away my YouTube. What will that MrBeast get up to next? Matthew, I want to ask you about

content moderation, which is a subject that we talk a lot about on this show. Cloudflare is not a social media company, but you all have had your share of run-ins with content moderation. And I think we should just briefly explain why that is. Basically, and correct me if I'm wrong, my understanding is that the internet is just swarming with hackers looking for websites to take down all the time. And if you don't have a service like Cloudflare protecting you

especially if you are like an extremist website or something that a lot of people have strong feelings about. It's going to be trivially easy for someone to come in and hack your website, DDoS your website, take it down. And so basically,

if Cloudflare takes away protection from, you know, some extremist site, you're basically taking away their security so they just get hacked and die. And for a long time, you had this kind of absolutist stance that you would protect any website no matter what was on it. But we started talking after 2017, when this white nationalist rally in Charlottesville happened. And in response, Cloudflare decided to ban the Daily Stormer, the neo-Nazi website.

And you did that kind of thing a couple more times, once in 2019 after the El Paso mass shooting, when Cloudflare took away security protections from 8chan.

And then last year, you all banned Kiwi Farms, which is a site where people were violently harassing and doxing and stalking trans people. So you have been sort of a kind of unwilling content moderator, but you also felt really weird about it. I think you said something like, "I woke up in a bad mood and decided someone shouldn't be allowed on the internet." And you basically didn't think that you should have that power.

And I bring this all up because I think we're now at a really interesting moment with content moderation where kind of everyone wants to be a content moderator. You know, governments want to moderate content. Elon Musk wanted to moderate content so much he like bought Twitter so that he could change the rules.

You couldn't pay me $44 billion to run Twitter, just to be totally clear. I can't explain to you why. So like, where are we with content moderation in 2023? And kind of, do you think, I don't know, the pendulum is swinging away from that being a solution to problems? Or do you think more and more people are just going to start trying to influence what can be seen online? Yeah, you know, so first of all, while this is something we end up talking about

from time to time. And it actually doesn't end up being a hard issue for us all that often. And I think the reason why is, for the most part, governments are good at regulating these things. And they're good at taking things which are harmful and making sure that they're illegal. Now, the U.S.,

it's hard to overstate how radically libertarian the US view of speech is. And it is not the view of speech around the rest of the world. And we have to operate around the entire world. And so there's content which you can access in the US on our network that you can't access in Germany, or you can't access in Turkey, or you can't access in Egypt. And we have to follow those rules. And for the most part, that's pretty straightforward. I think the challenge is that every once in a while,

there's something which is technically legal but clearly extremely harmful. And you listed three incidents of this. We've been around for about 12 or 13 years, so the mean time between us seeing these things is about once every four years. Something really bad crosses into

that zone. And at some level, I think that's a challenge for policymakers, a challenge for people who have some political legitimacy. We don't have that; most people have no idea who we are. So it's, again, surprising you had me on your show. But in those cases, every once in a while, I think we will have to take some action. It's pretty rare, though. What I think has been interesting has been to watch, for example, what Elon is doing at X,

which is, you know... when I've struggled with this, I've gone back and tried to ask: how should we think about the right way to approach these questions?

And I actually pulled down a whole bunch of philosophy books from college and went back and read my Aristotle, and then read Madison, because I think that when... Like, while you were trying to decide whether or not to kick off 8chan? No, no. You're like, I have to read some Rawls first? Usually afterwards. And Rawls is a little bit later. But I think the interesting thing is, when a platform gets to a certain size and scale, it starts to almost approximate the

trust challenges that governments have. If you think about Facebook, Facebook has the population of the Southern Hemisphere in it. And so as they think about how to continue to have trust

with that large population, it's the closest thing you have to something like a government. And so going back and reading Aristotle's Politics, going back and reading what Madison wrote on the Bill of Rights, I think that gives you some information on how you actually build that trust. And what I found interesting is Elon turning to the question of whether to let Alex Jones back online or not, and putting a poll out.

I mean, that's sort of this almost democracy-like system: listen, I'm going to let the people decide. And there's a lot you can criticize about that, but

I'd be a little surprised, but I wouldn't be completely surprised if Elon was out there reading his Aristotle and thinking, okay, how do I have some level of legitimacy as I'm making these decisions? Because if it is just one person making that decision, that's a very uncomfortable place to be and it's very hard to create trust.

Doing something like a poll is as close as we've gotten to how we've actually assembled governments to have legitimacy over time. So I think this is a fascinating time. But again, I would never want to run Twitter, or X, or whatever we call it now.

Well, I mean, there are much more sophisticated ways of doing this. Like, over the past year or so, Meta has experimented with this thing they're calling platform democracy. And the basic idea is: we'll take a policy issue that we haven't decided yet. We will put together a panel of our users, selected basically at random. We'll pay them for their time. We'll bring them in. We'll educate them about the issue and we'll let them deliberate. And then at the end, they will present us with their recommendation for what our policies should be around, say, climate change.

That seems really smart to me. That's good, right? Like, I would agree with you that the spirit of letting users have a voice in who belongs on the platform is a good idea. I just think throwing it open to an X poll is not as good of an idea. Again, I think these all map to systems of government that we would be familiar with. I mean, the X version is direct democracy, which is what we have here in California. I mean, those people who hand out petitions?

And we famously have a lot of crazy stuff on our ballots. There might be some downsides to that. What you just described with Facebook, I mean, that's a republic. They've essentially selected a group of people. They've created a senate.

And then they've used that in order to create systems. I think what everybody in these tech systems is struggling with is basically rule-of-law challenges. Again, they're not governments, but they have the scale and size that they start to behave in almost that way. And I think thinking about what forms of government work.

And how do you build trust and stability around that? I think that's a lot of what these organizations are thinking about, even if that's not how they frame it. Here's my content moderation question for you. And by the way, I really do appreciate how thoughtful you guys are. I always really enjoy reading the stuff you put out on this subject, because I can tell... It is strange. Strange how many people are like, I just hope some more neo-Nazis sign up for Cloudflare so they'll write another blog post. Well, like...

The thing that's really unusual about your role in this ecosystem is that you are not a web host. You are not GoDaddy. You are not a traditional content moderator, right? It makes sense to us that Facebook is going to have to make calls every day on what posts stay up and come down. But for somebody that is just sort of protecting the general traffic of the web, that's much more unusual. Yeah.

In some of these recent high-profile cases, my understanding is that the core service you were providing was anti-DDoS protection, right? You were preventing these websites from being hit by traffic from many, many machines simultaneously, which has the effect of bringing them down. This is a service that you choose to provide for free to most sites on the internet, or to basically anyone who wants it, right?

My question for you is: why did you make the decision, we want to be a free security guard for everyone on the internet, and we're only going to stop being your free security guard in the most extreme circumstances? You know, I think that maybe it's a little bit of penance. In college, I wrote my thesis, back in '95 or '96, on why the internet was a fad. Wow, and we think we made some bad predictions on this show. And that was clearly wrong.

And I think that this is one of the great inventions of human history. I think that there are clearly harms that come from it. It turns out that if you connect everyone in the world, some bad things are going to happen. But by and large, I see it time and time and time again. We started by talking about why the internet grew 25%. Largely, that's because

some of the people who haven't had access to all of the resources of the internet now have access to them. And I think that that has

genuinely made their lives better. And so I think that if we allowed a system where anyone could basically take anyone else offline, unless you had the money to pay for protection, then we are denying what is really great about the internet. Yeah. Last question, and then we'll let you go. I think one of the defining questions of 2023, at least for me, is really who should be in charge of this.

This is the question that came up in conversations Casey and I have had around content moderation, around AI, around crypto. And there are just so many different answers to this question. And it seems like power is really shifting between various groups. Now, a lot of governments, including state governments in the U.S., are starting to try to be in charge of what appears on social media platforms. I would not say that's been going well.

When you think about who should be in charge, do you think we need new kinds of governance to make sure that the internet works for us? Or do you think our existing institutions have proven capable of governing this thing? You know, I think this is an interesting challenge. And I think that the fight of the future is

how far a local institution's regulation ends up applying. So, for instance, there was a German court recently that said that a certain set of providers (we were one, and a service called 9.9.9.9 was another) needed to block certain websites from using our infrastructure.

Which, again, if it only applied in Germany, that would be one thing. But they said that because a user in Germany could use a VPN to pretend they were coming from Austria or Sweden or Mexico or wherever, we actually had to apply that regulation globally. So this Leipzig court said: you have to follow this rule on a global basis. At the same time, we have the Montana legislature saying

they're going to ban TikTok.

And again, if that just applies to Montana, and the people in Montana say that's what they want, maybe that's fine. But using the exact same rationale, a Montana court could say, well, somebody could use a VPN to pretend they're coming from Mexico, and so we actually have to ban TikTok globally. I think there's real danger if we get to the point where there are, not just at the nation-state level, but down to the individual locality level,

organizations or governments or institutions saying that their rules have global effect. And if that happens, then we're going to fall to what I've been describing as the Teletubby internet, where everything falls to the absolute lowest common denominator. And it's actually interesting to look at... Wait, wait, what do the Teletubbies have to do with this? Well, the story there was that, you know, Jerry Falwell tried to get them banned.

But Tinky Winky was accused of being gay. That's right. That's true. And I'll say not without evidence. Yeah. I mean, if you think about television: television was this new technology that came along. And the concern, if you were one of the television broadcasters in the United States, just looking at the United States, was not so much about competition, because, for reasons of physics at the time, there were only three broadcast

stations; the real threat to your business was regulation. And so, by and large, you had newscasters who came from the center of the United States. Kansas was overrepresented. They were all men. The newscasts were basically all the same. You covered every political convention from opening speech to balloon drop, which is terrible TV if you think about it. Like, why would everyone do exactly the same thing from exactly the same pool feed?

And yet that was the best way to avoid regulation. What I worry about on the internet is, if we all have to play to the lowest common denominator, it's not going to be Kansas anymore; it's probably going to be somewhere in Mumbai that gets to set what the global internet looks like. And if you're sitting in San Francisco, or you're listening to The New York Times, that's probably not a world you want to live in. So I think the fight is going to be how we make sure that local regulations stay local, and that the people who have the authority

are answering to that authority in a local space. And yeah, if the Leipzig court says something is illegal in Leipzig, then we should block it in Leipzig. That's easy. But that Leipzig court shouldn't be able to have that same rule apply to Montana. If we can do that, I actually think we have the right institutions in place. I think the problem is going to be when...

countries start to say, you know, our rules apply on a global basis. And that's going to be, I think, the real fight of the next period of time. Yeah. Any predictions for 2024 on the internet?

You know, unfortunately, I think it's going to be a really difficult year. I think the election is going to catalyze a lot of the worst of what happens online. Hopefully it turns out to be a lot quieter, but I think it's going to be a busy 2024 on the internet, and especially in the cybersecurity space. Given how hard a year you think it's going to be, would you say it's even more important that people listen to Hard Fork in 2024? Absolutely. I think this is

the only way that you can save democracy and save the internet. And the only thing I would ask is that you push back and try to get The New York Times to capitalize the I in internet going forward. And I know you think I'm kidding, but if you believe in the internet, there should be one. If you don't want it to be balkanized, it should be a proper noun. It's like Earth or Mars. It should be capitalized. So capitalize the I in internet. I'll take that up with the style editors. Yeah.

Well, Matthew, thank you so much for coming on. Thank you so much. Thank you. It was great. And happy holidays. Happy holidays. Happy holidays. This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you, helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us slash AI. Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love.

Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. If you haven't already, check us out on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com. Let us know what kind of hack you are.

I'm a hacky sack. Ooh, you're a hack, all right. You're an H-hack. That's a hack, baby.

Ever dream of a three-row SUV where everything for every passenger feels just right? Introducing the all-new Infiniti QX80, with available features like biometric cooling, electronic air suspension, and segment-first individual audio that isolates sound right to the driver's seat. Discover every just-right feature in the all-new QX80 at infinitiusa.com. 2025 QX80 is not yet available for purchase. Expected availability summer 2024. Individual audio cannot buffer all interior sounds. See owner's manual for details.