
Breaking Bard + Who Owns Your Face? + Gamer News!

Publish Date: 2023/9/22

Hard Fork


Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.

Casey, I acquired an exciting new piece of technology this week. Oh, what's that? It's actually one we've talked about on the show before, but not for a while. So do you remember many episodes ago when we had my colleague Tiffany Hsu on to talk to us about all of the bad ads that we were seeing on social media? I do remember that, yeah.

Do you remember something called a rat bucket? Yes. I mean, it came, you know, at the time, New York City was dealing with a huge rat crisis and it was sort of a very timely product. Yes. Well, I also have an emerging rat crisis, which is that I have a family of rats that lives in my backyard. Oh, I'm sorry to hear that.

and they live beneath my deck. And normally, I like to peacefully coexist with animals. - Like a sort of cute little Pixar family of rats? - Yes. So my first thought on how to deal with them was to open a French restaurant and enlist them all as my helpers. But my second thought was I should probably deal with these guys because they might actually cause some damage to the property. - Unfortunately, they will carry the plague. - It's true. - We learned that the hard way.

Too soon. Too soon. Sorry to any victims of the Black Death out there. So I tried a bunch of different kinds of rat traps and various deterrence methods, but ultimately I came up short. And then I thought, you know what? I remember the rat bucket. There was a piece of advertising that we talked about on the show

So I ordered from Amazon.com for the low, low price of $25 a rat bucket. Now, what is a rat bucket? So it is technically a rat bucket kit. Okay. Because it's a piece of plastic that snaps onto the top of a five-gallon bucket. And basically, it's like a trap because there's a ramp. Wait, are you saying that the rat bucket doesn't actually come with the bucket? No, it doesn't. You have to supply your own bucket. Okay.

But it has a little plastic ramp, and you put some bait, some peanut butter at the top of the ramp. And so the rats climb up the ramp, and then they get onto the platform where the peanut butter is, and the bottom falls out, just like a trap door in a cartoon. Okay. And they fall into the bucket. And from there, you can take them and sell the rats at market. Exactly. You can do whatever you want with them. Honestly, I really haven't thought through to that step yet.

what I'm going to do with a bucket full of rats. Wait, have you installed the rat bucket? I did. I put it out yesterday, and I checked it this morning, and it hasn't caught any rats. No hits. Here's the problem you're dealing with. Rats have gotten very smart. They have. They've leveled up. So I actually read something online that you have to, like, basically do this all with gloves because if they catch the scent of you... Mmm.

They'll know it's a trap. And I would just like to know how the rats are getting smart. It worries me. Like, we talk a lot about AI getting smart and that being dangerous, but I think we have a rat intelligence crisis brewing, and I want to know what our plan is as a nation for this. I think in 50 years, there's only going to be two things left on Earth. It's going to be ChatGPT and rats, and we'll let that duke it out for the future of the planet. And maybe RatGPT. Oh, boy. Oh, boy.

I'm Kevin Roose. I'm a tech columnist at The New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, Google's AI chatbot has learned to read your emails? Then, The New York Times' Kashmir Hill stops by to talk about the rise of facial recognition technology and answers one of your questions about it. And finally, it's time for our new segment, Gamer News. Gamer News.

All right. Casey, we have some big AI news this week when it comes to Bard. Bard, of course, is the ChatGPT competitor from Google. And as of this week, it has now been plugged into many of the other services that Google offers, which is a feature that we've been asking for and talking about for quite a long time. Yeah. In fact, when I sat down with a product director at Google to talk about this, he said, you called this one. So that felt good. Yeah. So this new tool is called Bard Extensions.

And it means that Bard can now plug into your Gmail or your Google Drive, your Docs. It can also search Maps and YouTube and Google Flights and hotel information. Basically, Bard can now reach into your personal data and not just sort of scraped data from the internet. Yeah.

Up until now, ChatGPT and tools like it have been useful for a lot of things. But one of the things we've talked about on this show is there is going to be a moment where you can plug in this technology to places that you're already spending a lot of time. And it's hard to imagine places where I spend more time than Gmail and YouTube. So this does feel like actually a milestone in the development of AI. Totally. And it's also, it solves one of the biggest problems with AI, which is that it kind of exists in a vacuum, right?

Ideally, if you had... Wait, AI exists in a vacuum now? Oh my God, that's how the robot revolution gets started. So in addition to this Bard Extensions feature, Google also put some other new features into Bard, including a feature that lets you check Bard's answers. Basically, if it says something and you're not sure whether it's true or not, you can press a button and it will highlight in green all of the things that it can sort of verify through a Google search. And it will highlight in orange all of the stuff that maybe it's not so sure about. I think it's more of a brown. Okay.

Okay, we can agree to disagree. Okay. So, Casey, you and I have both spent some time playing around with these new features in Bard. What did you take a look at and what do you think of it? Sure. So, I spent more time writing about this double-check feature, right? So, when we talk about AI, a consistent theme is that these services

make things up. They hallucinate. They are confidently wrong. Never more so than in the famous case of the ChatGPT lawyer who submitted a bunch of cases as part of one of his briefs only to learn to his horror that none of those cases existed because ChatGPT had just made them all up.

So this is a reason why I don't use these tools very much as a research assistant, because trying to fact-check them, it feels like you're spending more time fact-checking than you would if you had just done all of the research yourself. So then along comes this new Google It button inside of Bard. And I would ask it the same sort of fact-checking questions that I would try to be exploring myself if I were writing a column. And now all of a sudden, Google will just tell you when it thinks it might have made a mistake. And that turns out to be

a pretty useful thing. Now, I have a question about this, which is that it could just tell you automatically, but instead it makes you press a button to do the double checking and sort of highlight the stuff it's more confident in or less confident in. Why doesn't it just do that automatically? Yeah, it's sort of like people asking, why don't you make the whole plane out of the black box? You know, it feels like the same kind of question. And so when I asked the Google product director,

What he told me was there are just a lot of queries that most people are not going to double check. If you ask it to write a poem, if you ask it to draft an email, you don't need to double check it in the way that you would if you were to say like, hey, write me a book report about Old Yeller.

So that's why it is there. But at the same time, I agree with you. It would be nice if Google said, oh, it looks like this question is looking for some specific knowledge. Maybe we should just apply this filter automatically. Totally. I imagine it also has something to do with computing costs and Google doesn't want to have to run, you know, two queries for every time a user asks a question of Bard.

That's right. So what did you think of this feature? How did it work in your testing? Well, I thought it did a pretty good job, and I only spent about a day with it so far. But when I was asking it about things I was knowledgeable about, like the band Radiohead, I could spot some errors. And when I asked Google to double-check, it then spotted the error. So that made me feel good. You know, at the same time, I do think that, in a way, this technology still has the problem that it has had from the beginning, which is that

the minute you realize you're going to have to double check and you're going to have to look at those citations and you're going to have to scroll down the page to find where on the page it is cited and sort of reconcile that with your own knowledge, you're sort of once again asking, why am I using this thing as a research assistant, right? Like, there are still, I think, some innovations to come that are going to make this thing feel better to use. Yeah, I mean, one of your initial beefs with

Google, just regular old Google, is that you ask for something and it hands you a research project. Absolutely. And this seems like Bard is sort of developing a similar problem, which is you can ask it any question you want and it'll answer. But then in order to figure out whether that answer is actually true or not, you have to press the little double check button. Yeah. And I think we'll probably get into this. But as I was exploring these new features, what I realized was the thing that these new features are the very best at is that when you want to buy something,

Wow, does Google figure that out well, right? Oh, you want to book a flight and a hotel? Click this button, baby. Because Google makes money from that. Give us that sweet, sweet percentage. Totally. So I spent more time playing around with these extensions, these tools that allow you to connect Bard to your own personal data. Because this has been, I think, a holy grail feature request for a lot of these chatbots is like, when can I actually hook this up to the stuff that I use every day? When can it actually use my data instead of just scraping from the internet at large? So I turned this on.

I spent some time playing around with it. The first thing I tried, I gave it sort of a hard task, maybe a little bit unfair, but I said, analyze all of my Gmail and tell me with reasonable certainty what my biggest psychological issues are. That is so unhinged. What did it say? So it gave me an answer and it was sort of interesting. It said, you know, my biggest psychological issue is that I worry about the future, and that, famously, that could indicate an anxiety disorder.

And then it cited an email that I wrote in which I said that I was stressed about work and that I am, quote, afraid of failing. Now, you know, maybe that's plausible if you know me. I do tend to worry. But I didn't remember writing that. So I asked Bard, like, show me the email where I said that I was afraid of failing. And it showed me an email. It was a book review of a book about Elon Musk. And it had a quote in it that said, I'm afraid that he's going to fail at something big and that it's going to set back humanity.

But then I was like, wait a minute, I never sent that email. Wait, so Bard thought that your anxiety was actually just Elon Musk's anxiety? So the email that it linked to was an email newsletter that I had received.

But when I checked that email newsletter, it didn't have the quote either. So Bard made up a quote from this email that I had received and wrongly attributed it to me. So a mistake on top of a mistake. Not good, Bard. So I told Bard I wanted to give it another chance. I thought this is kind of a hard task to start off with. This is day one of this feature. I said...

This time, redo the search, but only using emails that I sent. And it came back with an email I'd written to a friend in which it said that I had said I was afraid that I was not good at financial stuff, and that I was not sure if I was cut out to be a successful investor. And I thought, I don't think I sent that one either. So I looked up the original email, and...

And sure enough, Bard had completely made up another quote from an email that I had supposedly sent. So, you know, I asked Google about this and they said, you know, this is an early product. It's still the first day right now. Basically, this extensions feature is limited to doing the kinds of searching that you can do yourself in the Google Drive search bar or the Gmail search bar. So it can retrieve stuff and it can summarize it, but it really can't do any kind of analysis

of the contents of emails. Well, it might be nice if Bard told you that when you try to do one of those searches. Yeah, that's what I said. Like, if it can't psychoanalyze me based on years of my emails, like, that's fine, but just tell me that. Don't make stuff up. Okay, would you feel comfortable with me running this exact same query on my email? Because I would like Bard to diagnose me with a fake mental disorder. Please. If possible. Okay, so give me that prompt one more time. Analyze my emails. Analyze all of my Gmail. All of my Gmail.

And tell me what my biggest psychological issues are. And tell me what my biggest psychological issues are. I'm so excited for this. Now, you know, and I should say, I've had a Gmail account, like, since it was in beta. So this is like 20 years of email. So you would think, actually, Gmail should be able to answer this question with some sort of, you know, fidelity. It should be.

Okay, so it's telling me that it's difficult to say definitively what my biggest psychological issues are, presumably because they're so vast. But it says a few things. It's like, where do I start? Yes. It just responds with the entire DSM. It says I seem to be interested in psychology and mental health, which I don't think that's real. And it says you have received emails about anxiety and depression, and you have received emails about work-life balance and burnout. Okay.

So that, yeah, I would say that does not feel like a great analysis. So that was the first task I gave Bard. It, I would say, failed that one. I then was sort of curious about all these travel integrations and whether it could like pick information out of my Gmail and use it to help me with some travel planning. So I asked it to search my Gmail for information about a trip I'm going to take to Europe in a few weeks.

and look for train tickets that would get me from the airport to a meeting in a nearby city. And this is starting to feel like a classic word problem from like eighth grade. It's like if I leave Bordeaux at 12 p.m. going 30 kilometers an hour.

Exactly. So it didn't do very well on this one either. It got the departing airport wrong. It did find my itinerary in my email, but it sort of made up some details about it. And it couldn't check the train timetables because it doesn't have train information. It only has flights and hotels. Not great, Bard. So third task, I thought, all right, I'm going to go back to the basics. I'm going to do some email stuff with it.

It actually was pretty good when you ask it very specific questions about specific emails from specific people. So I had it summarize recent emails I got from my mom. Now, is your mom really writing you emails so long that you're like, "I'm going to need to see the executive summary of this"? I don't think my mom has ever emailed me more than four sentences. It was a test. Okay. I also asked it for summaries of emails I've gotten on subjects like summarize all the recent emails I've gotten about AI.

But then I asked it to like do other sort of more complicated tasks, like pick five emails from the Primary tab of my Gmail, draft responses to those emails in my voice and show me the drafts, which I was very excited about. I was like, this thing can write emails for me. That's a good prompt, yeah. It made a mistake. It went to my Promotions tab instead and it wrote a very formal, very polite email to Nespresso thanking them for their offer of a 25% discount on a new machine.

So I would say this feature, Bard Extensions, does not feel ready for primetime to me. Yeah, you know, unfortunately, I had a similar experience. I asked Bard to find my oldest email with a friend who I have been exchanging messages with for probably about 20 years. And Bard showed me a message that he had sent me in 2021, which is not really all that long ago.

I also asked it which messages in my inbox might need a prompt response, and Bard suggested a piece of spam that had the subject line, hassle-free printing is possible with HP Instant Ink. And I thought, you know,

I don't know that that needs a prompt response. It's sort of amazing that Google is just putting this stuff out because it's like they clearly have the data that you would need to build the best AI assistant in the world. So why are they putting this stuff out now? Here is the reason. They need the human feedback, right? They need us to be in there saying, this is a terrible result, bad, bad, bad, bad, bad, right? So by putting this stuff out there, they're getting feedback from millions more people, which they can then use to design Bard to do what people actually want to use it for.

And collectively, with all those people, they're going to make it better. Because, you know, let me say, we are having some fun pointing out the flaws of this thing. I absolutely think all of this stuff is going to work. 100%. I mean, these chatbots get better over time. We know that. Yeah. But it is just a remarkable sort of display of Google's risk tolerance here, where this is a feature that they know is imperfect. They were not surprised when I pointed out these flaws. But they are putting it into Bard anyway because they are so desperate for that feedback. And I would say also probably to try to leapfrog ahead of where ChatGPT is. Yeah.

And again, if you want to plan a seven-day itinerary in Tokyo and Kyoto and ask Bard to show you flights and hotel information, it's going to do a great job. Right. So when it doesn't have to analyze tens of thousands of emails that you've sent over the years, it is quite good. Yeah.

Yeah, well, Bard may be good for flight planning, not good for psychoanalysis. That's what we learned this week. But I do think it's still an area that I am desperate for someone to crack because the chatbots, they're so good for so many things, but they really feel impersonal when you use them because they are not learning from your data and your writing voice and your communication style. And so I think if anyone's going to crack it, it will be Google, but I just think this is not it.

Yeah, I'm going to make a prediction. I think that within a year, someone is going to use this technology to successfully find a document in Google Drive for the first time. That's, I think we're on the curve that gets us there, and I'm going to be really excited to see it. Is that AGI? Yeah, when we get there, that's called sentience, my friend. Sentience. After the break, we talk to New York Times reporter Kashmir Hill about her new book on facial recognition and how it could end privacy as we know it. ♪

Support for this podcast comes from Box, the intelligent content cloud. Today, 90% of data is unstructured, which means it's hard to find and manage. We're talking about product designs, customer contracts, financial reports, critical data filled with untapped insights that's disconnected from important business processes. We help make that data useful.

Box is the AI-powered content platform that lets you structure your unstructured data so you can harness the full value of your content, automate everyday processes, and keep your business secure. Visit box.com slash smarter content to learn more.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.

Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

Casey, I'm very excited about our guest this week. Me too. It's my colleague, Kashmir Hill, New York Times reporter and friend of the pod. She's one of my favorite reporters. And she has a new book detailing her investigation into Clearview AI, which is a facial recognition app that, Casey, you've probably heard of. Not only have I heard of it, but I'm trying to figure out how to stop it.

So basically, it's sort of like Shazam for faces. Like you put in someone's photo and it searches a massive database of billions of photos to try to find other photos of that person, which maybe helps you figure out who they are, what they do, and lots of other details about them. Yeah, exactly.

It sort of takes this idea that you should have some level of anonymity in a public space and says, no, you should not. Totally. It's an insane story. It's an insane technology. And what's so interesting is that it actually does seem to be a case where Silicon Valley developed something and said, actually, we're not going to release this because it's too dangerous. And then this startup in New York, in New York, came out of nowhere. This is a New York story. Did it anyway. That's right.

So I wanted to have Kash on because we're in this kind of AI moment right now where we're making decisions as a society about what guardrails should be placed around this technology, what we can use it for, what we can't use it for. And we're sort of debating all the ways that it's going to affect people's lives. Facial recognition sort of arrived before a lot of the generative AI stuff that we talk about, but in some ways it's a lot scarier.

Yeah, because it's being used now and people are being harmed now. And there is just a tiny fraction of the attention being paid to this stuff compared to, like, the long-term risk of killer AI. So we invited Kash on the show today to talk to us about her book called Your Face Belongs to Us, A Secretive Startup's Quest to End Privacy as We Know It.

Kashmir Hill, welcome to Hard Fork. Hi, it's good to be back. You know, the last time we were with you, Kash, we were in the metaverse. We were. We didn't have legs. We were stumbling around. Yeah, I'm so glad to just be back in our sort of corporeal reality that we exist in here. So, Kash, I think...

Most of our listeners will have heard at least a little bit about Clearview AI and facial recognition thanks to your dogged reporting on it over the years. But just for people who maybe haven't heard about Clearview, give us the 30-second sort of summary of what this company does and how it came onto your radar.

So Clearview AI scraped billions of photos from the internet and social media sites. They say they now have a database of 30 billion faces. These were collected without people's consent. And they built a facial recognition app that they claim works with something like 99% accuracy. You take a photo of someone and it'll pull up other places on the internet where that face has appeared.

Got it. And my understanding from your book is that this was not a company that went out looking for attention and media coverage. So how did they come onto your radar and what did you learn about this secretive company? Yeah, so I got a tip a few years ago. Somebody emailed me and he said, I've come across something that looks like it's crossed the Rubicon on facial recognition technology. I think you'll be interested. He attached a 26-page PDF

that had a privileged and confidential legal memo from Paul Clement, a former Solicitor General, now in private practice, making lots of money.

And he had been hired by Clearview AI, and he was describing what they did and saying, you know, I tested it on attorneys at my firm. It works incredibly well. And he had written this memo for the company to explain to police why using Clearview AI was not illegal and that they wouldn't break any, you know, state or federal laws and that this was constitutional to use. Yeah.

Now, Kash, when you first read that document, did you think, oh, wow, like I can't believe this technology exists? Or had you been bracing for something like this to arrive for some time? Both.

I was a little shocked to hear that some company I'd never heard of before was selling this rather than, you know, Facebook or Google. And so that was kind of astounding. And I wondered, is this real or is this snake oil? But part of me flashed back to this moment in 2011 when I had gone to this conference called Face Facts

organized by the Federal Trade Commission, where they were kind of grappling with facial recognition technology for the first time. It wasn't really that good back then, not very accurate, didn't work that well in the real world. But, you know, it seemed like it was starting to get better. And they had Google in the room and Facebook in the room and, you know, academics and privacy activists. And they're talking about what do we do about face recognition? And the one thing

that all those people agreed on, and they don't often agree on things, was that no one should build an app that allows you to take a photo of a stranger and find out who they are. And why not? Like, what is the scenario there that people are so worried about?

I mean, there's so many examples that come up. I mean, just imagine you're a protester at a Planned Parenthood and a woman walks out of the abortion clinic. You take her photo. You know who she is. You know, you're at a bar and you're talking to some guy and you decide he's a creep.

You walk away. He, meanwhile, takes a photo of you, can get your name, can maybe find out where you live. I mean, there's just so many ways in which this could be used very creepily, some of which I describe in the book. Yeah. One thing I love about your book is that it just gives so much detail on the reporting process. And as a reporting nerd, that really appealed to me. And it was truly

an incredible story of how, once you got this tip, this PDF, you sort of had to go on this investigation to figure out who this company actually was, because there wasn't much information available. They seemed to be trying to cover their tracks. So tell us the story of how you actually figured out who was behind Clearview AI.

You know, I Googled to see what was on the Internet, as any great investigative journalist does. And there wasn't a lot there. They had a website that basically just said artificial intelligence for a better world. Didn't really say what they did. They had a...

office address on the website, and it was just a few blocks away from the New York Times building in Manhattan. So, you know, at one point I walked over to try to knock on their door, and it just, the building doesn't exist. It was like a fake address. When I did Google, I found on the website PitchBook that the company had two investors, one I had never heard of before, and the other was Peter Thiel.

So, you know, I reached out to Peter Thiel's spokesperson. I said, oh, hey, is he investing in Clearview AI? Spokesperson said, it doesn't sound familiar to me. Let me look into it.

then didn't hear from him again. I was reaching out to all these people that seemed to have ties to the company, and just no one was talking to me. So I ended up finding police officers who had used the app. Because this was being marketed as a tool for law enforcement, right? To identify criminals based on surveillance footage. Yes. I knew that they were supposedly selling it to police departments. And I saw in some city budgets that they were paying money to Clearview.

But I ended up talking to this financial crimes detective in Gainesville, Florida. His name was Nick Ferrara. And he was really excited to talk about Clearview. He was like, I love this app. You know, I had hit dead ends on all of these investigations into fraudsters. You know, I had a photo of them standing at the ATM, standing at a bank counter, and didn't find anything in our

state facial recognition system, but then I ran their photos through Clearview AI and I just got hit after hit after hit. So I was like, oh, this sounds great. Like, can I see how well it works? They go, sure, let me run your photo.

and I'll send you your results. So I was excited, and I sent him some photos. And then he ghosted me. Another officer, kind of similar, told me to send my photo. He ran it, and he said, it's weird. You don't have any results. Which is not plausible because you're like a public person who's been in photographs that have gone on the internet over the years. Yeah, I am not an online ghost. I'm all over. If you Google me, there's a lot of photos that come up. So he said, there should be results for you. This is weird. He said, their servers must be down.

He stops talking to me, and then finally I end up, with the help of a colleague at the Times, recruiting a police detective. I told him about what had happened before, so he runs my photo. He says, there's no results. That's weird. Then a couple of minutes later,

He gets a call from somebody at Clearview AI asking, did he just run my photo? Why? And they told him that they were deactivating his account. And he was really creeped out. He said, I can't believe this company is looking at the faces that law enforcement is searching for. And I found it really chilling because, you know, they were tracking me while they weren't talking to me. And they controlled everything.

you know, the ability to be found. They had blocked my face. So I should have, like, committed my crimes right then because I wouldn't have had results. Perfect alibi. So obviously there are people who think this technology is worth paying for. Law enforcement agencies are using this to solve crimes.

Who are the other people who are using this technology and how are they using it? So facial recognition technology is popular with companies, retailers. There's been this big spike in shoplifting and a lot of companies want to be able to identify people who have stolen from them before and kick them out. One of the most famous uses is that Madison Square Garden installed facial recognition technology a few years ago to keep out security threats, but in the last year decided, well,

wow, this would be a great way to keep out our enemies. And they went and they decided to start banning lawyers who worked at law firms that have sued them and got their photos from the law firm websites and then put them on this watch list. And every time a lawyer that works at one of those firms tries to get into a Mariah Carey concert or a Knicks game, they get stopped at the door and turned away.

So some people are probably hearing this for the first time and thinking, this is bonkers. How is this legal? Do folks have any sort of legal protection in the United States against this kind of technology?

It depends on where you live, how well protected your face is. The state that has like kind of the strongest law is Illinois. They've got something called the Biometric Information Privacy Act, passed in 2008, that says you need to get people's consent to use their biometric information, like their face prints, their fingerprints, their voice prints.

And you'll have to pay up to $5,000 if you do not. So Madison Square Garden owns a venue in Chicago, has a theater there, and it can't use facial recognition technology to enforce the ban because of that law. But at the federal level, there's really nothing about this. Yeah. What's stopping Congress from outlawing this technology? I don't know. They're

Busy? I don't know. Well, I mean, they would have to pass a bill about technology, which they're not capable of doing. Right. I guess I'm just wondering, like, this seems like an area where Republicans and Democrats could basically agree that it's bad if there's technology out there that just allows you to be de-anonymized at any time based on just a single photo of your face. It seems like you could get pretty broad agreement on that. But maybe I'm wrong. Maybe the law enforcement community is attached enough to this tool that they would fight to keep it.

I mean, it has happened. I can't tell you how many old congressional videos I watched of Republicans and Democrats getting together and saying, this is the one thing we agree on. You know, this is a threat to civil liberties. Let's do something about this. And these hearings would happen every few years. And it just feels like deja vu to me.

Yeah. Kash, I'm curious how your own views about this technology and whether it should exist evolved over the course of reporting on Clearview AI. You reported on people who think this technology is great, like the detectives who are using it to solve the cases in their unit. You also talked to people whose lives have been damaged by this technology, like a man who was wrongfully arrested because a facial recognition app blamed him for a crime that someone else had committed. Yeah.

Do you think this technology should exist, or do you think it's too far?

I mean, I think there's clearly positive use cases for facial recognition technology and, you know, using it to solve crimes. When it's used appropriately, you know, you need to have more evidence than just the fact that someone looks like someone else. So, yeah, I get that. And I have to say, as an investigative journalist, I do see the appeal of this. You know, imagine there's some, like, event that happens and there's a photo of everyone who's there and you as a reporter could just...

scan those faces, you know, upload them to a Clearview AI-type tool and then you find out who they are and you can go to them and say, tell me more about what just happened. I mean, sure, but like as a reporter, it would be great for me if I could read Mark Zuckerberg's emails, but like I don't have that ability and so I

have to figure out other ways to do my job. And that is OK. That is a tradeoff I am willing to make to live in a society that is bearable. Right. And like a world where anyone could just scan your face and potentially learn your entire life story. If you are attending a protest, leaving an abortion clinic or doing something else that someone else in society doesn't like, I truly cannot imagine a more dystopian outcome for the path that we are on.

I mean, it's part of why I did this book right now is I think that we still have the power to decide right now. And there's a few states where Clearview AI has been sued. But for the most part, we're just not really addressing this. And so I think it could get away from us if something doesn't happen, if we don't pass legislation that gives people more protection and power over whether they're in these databases or not.

Yeah. And in addition to this debate about what kind of laws might be needed at the state and federal level, there's also this debate happening in the tech world right now over who should control AI technology. I'm thinking about the open source versus closed source debate when it comes to AI language models and kind of whether it's better for a few big companies to control some of this technology as opposed to sort of throwing it open to the masses. Yeah.

You point out in your book that both Facebook and Google developed facial recognition before Clearview AI did, but decided not to release it because they felt it crossed an ethical line. You also talk about how a lot of what Clearview AI was able to build was possible because of like open source software packages that they built upon. So is it fair to say that one of the lessons of your book is actually that in some ways big tech is good and that it might be better for the world if a few big companies do control this stuff?

Well, yes. I mean, yeah, like these technology companies were responsible actors in this case. And I think this is the assumption that policymakers have when they're not passing laws. They say, we can trust the technology companies. They're going to make the right decisions. And with facial recognition technology, you know, arguably they did. But when you have this technology becoming open source, it's not going to be the same.

It means that more radical actors can come along. I mean, it's the same thing with generative AI. It's been reported that Google had generative AI, ChatGPT-like tools, that it had developed internally and decided not to release. And then OpenAI came along and threw the doors open. So that is what is going to happen. You'll have these startups and they are just kind of desperate to make their mark on the world. And they're going to do things that...

are going to cross lines and that are maybe not what we want to be happening in society. Yeah.

When you talk about startups throwing the doors open on facial recognition, as you point out in your book, it's not just Clearview AI building this stuff, right? There's also a company called PimEyes, which, as you have reported on, is sort of like Clearview AI, only it's not limited to law enforcement. Anyone can access it. And there's this one chapter in your book where you write about a person who uses PimEyes that really haunted me.

You call this man David. I don't think that's his real name. But he told you basically that he uses facial recognition tools as part of a sexual fetish, basically to look up the names and identities of porn stars or women who appear in adult videos and basically find out as much information as he can about them. He said that he considered himself a digital peeping Tom. Talk about that experience because that is really one of the things that just made me shiver.

Yeah. David has a privacy kink. And he basically told me he was confessing to me because he knows what he's doing is wrong. And he wanted this story out there to convince lawmakers to act, that he really doesn't think a tool like PimEyes, which is what he was using, should be available to him to do what he was doing. And so, yes, so he would watch

porn videos and a lot of women who are doing kind of online sex work tend to use pseudonyms, try to hide their identities because of safety issues, because of stigma issues.

And he would go and find photos from their real vanilla lives. And he's done this many, many, many times. He said he kind of got sick of it. And so he decided to turn to his Facebook friends. He'd accumulated hundreds of women as friends over the years. And he would just kind of for fun run their photos through PimEyes and try to find illicit photos of them. And he succeeded in some cases. A woman that had once...

tried to rent, I think, a room in his apartment. He found revenge porn of her that was not associated with her name. It would not have been findable without a face search engine. More innocuous images, like a woman on a naked bike ride. Just all of these photos that were kind of safely obscure until the search engine comes along that makes the internet searchable by face.

Well, Kash, sometimes we ask our listeners to send us their dilemmas about tech, and we got one recently and thought Kash is the perfect person to help us through this. So this listener, who we are going to keep anonymous at their request, told us a story about their use of PimEyes. And this listener wrote in to say that they've been using PimEyes to search the photos of the people they come across on the dating app

Bumble. And according to them, they found that a lot of the photos on Bumble linked to stolen photos from Instagram accounts, OnlyFans accounts, and profiles that solicit for sex services. And this listener wrote, quote, when I found stolen pic profiles using facial recognition or profiles using the app to solicit for sex services, I flagged the profile. I've apparently done it enough that Bumble sent me a note banning me from their platform.

This listener then took Bumble to small claims court over this and claimed they won their case. But now they wonder, was I in the wrong for trying to protect myself using whatever tech tools I can?

So I should just say here, we reached out to Bumble about this and they declined to comment. But as best as we can tell after going back and forth with our listener, this did actually happen. So, Kash, how do you react when you hear that story? I would like that reader's contact information so I can report that story out.

I mean, I do think we're at a moment where it's generally considered creepy. You know, as a matter of etiquette, you shouldn't be searching someone's face without consent. You know, in that particular case, you know, I know you're opposed to it, as you just said. I think if you're just meeting someone for the first time, maybe you don't immediately Google their face. But if you're deeper into the relationship...

I mean, just do a reverse image search of the profile photo. I don't even think you need to search the face necessarily. Although that's actually just an interesting story about how a privacy violation became normalized over time just through the long-term existence of Google reverse image search.

I'm sure some listeners will hear this, and I think particularly women will think, look, if you go on a first date, that can be a very dangerous situation. It is not unreasonable to want to have some sense of security before you meet with a new person. And hopefully you're meeting that person in public and hopefully somebody else in your life knows where you are. But in addition to that, you might want to get some intel. And look, I think it's probably quite common for people to Google the people they're about to go on first dates with.

Man, I don't know. I think part of the reason that we've gotten comfortable with the tools that exist today is because there is still some ambient privacy remaining where maybe, yeah, your name can be searched and some details will be revealed about you, but it will not become clear that you have an OnlyFans account, for example, right? I just worry that as we normalize the use of these technologies, pretty soon we're just going to wake up in a world where we are not

able to live as freely as we used to. And it is going to be very hard to rewind in part because of questions like this and people saying, well, I needed to do this to make me feel safe. I do think the repercussions are going to be worse for some people than others. So yes, people who have done online sex work

who did it not thinking that it would ever be tied to them, it's going to be really hard for them because it's just so stigmatized. And if this becomes normalized, I think it will really hurt their opportunities in dating life, but also professional life. Well, I'll tell you one thing that I've been thinking about as I was reading your book, Kash, is there was this moment early in the pandemic when I remember hearing celebrities saying that actually they liked masking

as a sort of societal trend, because all of a sudden, you know, a very famous actor can put on a COVID mask and go to the grocery store and for once not be recognized, not have their name tied to their face. And that actually made them feel more comfortable and actually more free to be able to camouflage themselves that way.

And it just struck me that that's sort of maybe coming for all of us, that we will all just assume that unless we're wearing a mask or obscuring our face in some way, we will just all have to sort of move through the world as if we are celebrities. And that's really striking. But also, do you think that people will start wearing masks just to combat the facial recognition databases?

Yeah, I mean, potentially. The problem is during the pandemic, a lot of these companies trained their AI to work when you're wearing a mask. And so when I did a story about PimEyes, you know, I asked my colleagues to volunteer. And Cecilia Kang, who covers, you know, politics in D.C., sent me this photo of herself with the COVID mask on.

and it still found photos of her. So you need to wear a ski mask, which is hard depending on what climate you live in. Are you familiar with the Mission Impossible series of films?

Yes. You know, one of their sort of signature technologies in those films is masks that look very similar to other people. And I really hope we get there because I may just have to walk around, you know, with somebody else's face on. It'd sort of almost be a Face/Off situation. Yeah, I'm going to put on a mask that looks exactly like Casey and then go commit some crimes. Ha ha ha!

All right. Kashmir Hill, really good to talk to you. The book is called Your Face Belongs to Us, A Secretive Startup's Quest to End Privacy as We Know It. It is quite, quite good. I really enjoyed reading it and I really appreciate you coming on. Thanks, Kash. Thanks. After the break, it's time for Gamer News. Gamer News.

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Kevin, it's time for Gamer News! Should we play the Gamer News theme song? Play the Gamer News theme song! So, Casey, we don't talk much about video games on this podcast, but we actually are both big video gamers. That's true. And, well, look, here's the thing. Gamers are like ordinary news consumers, except in this one key respect, which is that if you say something they don't like, they will try to kill you. And so it's a very fraught subject. Yeah.

And it must be handled delicately, but we're going to strive to do that in the first installment of Gamer News. Right, and I think gaming news often doesn't get taken super seriously because, like, it's just video games or something, but video games, they are one of the biggest industries in media, and I think we should spend some time talking about it. Absolutely, and even if you just sort of set aside the amount of money and time that is spent on video games, which are both staggering, it is the source of

culture for like the entire next generation of human beings, right? It's like video games are shaping the way that we relate to each other in ways that I think older people sometimes don't understand. Yeah. So we're going to talk about it today with apologies to the non-gamers out there, but we're gamers. We have a tech podcast. We're going to talk about some video games. Let's do gamer news. Gamer news.

So the biggest gaming news this week involved a company called Unity. Now, Casey, what do you know about Unity? What I know is that Unity, despite its name, has ironically torn the entire gaming community apart. It's true. They're doing disunity this week. They make what is called a game engine. And a game engine is sort of where you create the nuts and bolts of the video game. You know, you have your idea. Let's say you say, what if there was a video game about a

plumber who had to constantly rescue a princess from a castle because she had no agency of her own. That's a stupid idea. That'll never work. Well, I think it could have some legs. But anyways, you have this idea and you turn to something like Unity so that you can actually build it. It is the Microsoft Word of video games. Right. It sort of has the basic building blocks because if you're making a video game, like say you're making a first-person shooter and you don't want to like...

You don't want to have to code the laws of physics to teach a character how to jump. That's right. If you use the Unity game engine, you can just sort of plug and play their little jump command, and it will make the character able to jump. Yeah, so it speeds things up. If every video game developer had to invent their own game engine, that would just be a massive waste of everyone's time. Right, so there are a number of popular game engines that a lot of games are built on. But!

When it comes to mobile gaming, there's really only one popular engine. Right. So Unity is the game engine that powers a lot of very popular video games, including Pokemon Go, Hearthstone, Beat Saber, Cuphead, and Monument Valley. Have you played any of those games? I have played almost all of those games. And what I love is how silly you sound when you say the names of five video games back to back. Yeah.

So Unity is a very popular video game engine for developers, in part because it's got a lot of features, it's been around for a while, but also because unlike some other game engines, it did not take royalties from the game creator. So basically, if you use the Unity game engine to make a video game that gets millions of downloads, Unity is not going to charge you based on the popularity of that game, or so was the case until last week.

That's right. So last week, Unity announced changes to its pricing model for game developers. So instead of being able to use this game engine in a royalty-free way, instead something called the Unity Runtime Fee would apply. So it's basically a small fee of a couple cents every time someone installs your game.

And there are sort of thresholds, like the fees get sort of smaller as the games get more popular. But for developers who are making games that are downloaded and installed millions of times, this could amount to a ton of extra money that they have to pay Unity. That's right. And this takes effect in January, right?

The development cycles for video games are very long, and so you have a number of developers who have been working on their games for years with one business model in mind, and they are being told things are going to be very different for you. Right. So Unity announced these changes last week,

Then gamers and game developers sort of freaked out and started protesting, saying, hey, you guys are changing the terms of our business on us. We don't want to use your game engine anymore, and we think this is unfair. And the resulting scandal they're calling Gamergate. That's not what it is.

Is that not what that one is? Okay. I misread that. So anyway, Unity apologized and backpedaled because one of the things that game developers were worried about is maybe there's a game that you don't like out there. Maybe you disagree with some of the choices the game developers made. Maybe you are disagreeing with some of the

politics of the game developers. I'm furious that Luigi is not getting the love that he deserves. Exactly. So you could have people doing what's called install bombing, where you basically run up the royalties for these game developers by installing and deleting the same game over and over again. And

Unity, when it made its announcement, had not accounted for this at all. You know, as I was reading the coverage of this, Kevin, I reflected back on our coverage of the Reddit story earlier this year, where Reddit also announced what was essentially an unpopular series of pricing changes. And one of the big problems was they just hadn't thought it through. They had not communicated to their audience who was going to be affected, how they were going to be affected, what steps they were taking to prevent abuse. Unity didn't do any of that. Totally. And so game developers were very upset about this. One game developer, uh,

Garry Newman, who's the founder of something called Facepunch Studios. Facepunch Studios? He wrote, quote, it hurts because we didn't agree to this. We used the engine because you pay up front and then ship your product. We weren't told this was going to happen. We weren't warned. We weren't consulted. We have spent 10 years making Rust on Unity's engine. We've paid them every year and now they changed the rules.

Yeah, and look, you know, yesterday, just by happenstance, I happened to have coffee with John Hanke. And John Hanke is the CEO of Niantic, which makes Pokemon Go, which I'm going to guess is one of the bigger users of the Unity product, right? This is a very, very popular video game. It's made a lot of money.

you can imagine how much it's going to cost them if these changes kick in. And, you know, John was very diplomatic when he talked about the situation. He was sort of like, well, we'll see where it all shakes out. We're waiting to see kind of what the final pricing is. But he also brought up this analogy that I thought was interesting, which is, let's say that you are a writer and you wrote a book in Microsoft Word. And then you find out as you're sort of finishing up the final chapters that every time a copy of your book is sold, you have to give Microsoft 20 cents because you use Microsoft Word. That is a...

Yeah, well, and I think a lot of people assume that this may be the influence of Unity's CEO, John Riccitiello, who has a history of saying inflammatory things. Right.

Right. He had to apologize for some comments he made about developers of games in an interview. He said, quote, these people are my favorite people in the world to fight with. They're the most beautiful and pure, brilliant people. They're also some of the biggest fucking idiots. LAUGHTER

Which is just an amazing quote to have about your customers. Yeah, I mean, that is literally just his customers that he is talking about. So this has been not only a big online scandal, but actually seems to have caused enough anger toward Unity that there was a death threat that caused the company to have to cancel a company town hall and close two of its offices. So, Casey, do you think this was just a

pure unforced error on Unity's part? Or do you think they do have to change their business model in some way? Well, you know, there has been some interesting speculation about why Unity has moved in this regard. It is a public company. When you're a publicly traded company, you always have to be telling Wall Street a new story about where that next 10 or 20% of growth is going to come from. It is also the case that Apple introduced app tracking transparency

over the last year or so. And Ben Thompson, the analyst who writes the great newsletter Stratechery, wrote that he thought this might be partially in response to that, because for reasons that maybe we don't have to get into, app tracking transparency hurt Unity's ad business as it hurts most ad businesses online. And for that reason, Unity is now looking around for a new source of revenue. So,

I wonder, do you think this is going to mean that most game studios will move to some different game engine? Or what do you think happens now? Well, I was asking John about that. I said, you know, how big of a deal is it to just switch to a new engine? And he said, it's a pretty big deal. And if you think about it, it makes sense, right? Because, you know, you're developing with one set of code for the laws of physics. And if you have to go port it over, you know, one thing I know as a not particularly technical gamer is that when you port video games just from one platform

to another, things often go wrong in ways that you don't expect. If you want to actually change the underlying code that is determining like the physics and the sprites and every other component of a video game, you better believe that's going to be trouble. Right. So Unity, they obviously saw all of this blowback happening and they have kind of backpedaled. They have said that they're going to maybe soften some of these changes and maybe sort of pacify some of the angry game developers. All right.

Next Gamer News story, which has to do with Microsoft and some documents about its plans for its gaming division that accidentally got leaked. Casey, did you follow this story? Absolutely. When it comes to Gamer News, there is no Gamer News bigger than what is the next console and what are the next video games. So according to Axios, the leak of these Microsoft documents was discovered late Monday by someone on the gaming forum ResetEra,

who was basically looking through files that were related to an upcoming trial between Microsoft and the FTC about Microsoft's attempted acquisition of the video game company Activision Blizzard. So the court had asked Microsoft to upload some of its trial exhibits with redactions, but it appears that Microsoft

actually uploaded an unredacted PDF of documents that included information about its future plans for its video game division, PowerPoint slides, and emails between its executives. That's right. And, you know, without knowing exactly what tools they were using at Microsoft, I think I have a guess, and I think that this might represent one of the biggest failures of Clippy in the entire history of Microsoft Office. Clippy!

Clippy, we trusted you. When you go to upload your unredacted documents to the FTC website, where is Clippy? Where is Clippy to say, hey, looks like that should have been redacted. Clippy's in the doghouse. Yes, he is. So Microsoft Gaming CEO Phil Spencer acknowledged the leaks. He told employees in a memo obtained by The Verge that the plans were unintentionally disclosed. You don't say. And he said on Instagram,

So, Casey, what was in this leak that Microsoft felt like it had to acknowledge? Well, there's a new version

of their current generation gaming console, the Xbox Series X. It is apparently coming next year without a disc drive. They're working on a new controller and also a refresh of the Xbox Series S. And if you're listening to this and you're thinking, Casey, are the names of the Xbox gaming consoles really that dumb and confusing? They really are.

So this is stuff that you care a lot about if you are a hardcore gamer. Most people probably don't. What I found interesting in this leak was this item that said that Microsoft had considered at one point buying Nintendo. Nintendo, obviously, is one of the biggest companies in gaming. It's been a huge sort of prize target for a lot of the big Silicon Valley companies that are trying to get into gaming. So far, they have not been willing to sell. But these were emails between Microsoft executives discussing the possibility of buying Nintendo. Yeah.

Yeah, and it seems like this was probably more of an offhand comment. Like, it doesn't seem like this got very far down the road of anything happening. But I do think if you're the FTC, and you were worried about consolidation in gaming and how that might raise prices for consumers or decrease the number of games on the market, and you read this email, that could raise some alarms. Totally, and I think it just shows you on a broad level how interested the biggest companies in tech are in making a really big play in the gaming industry, right? They know how big an industry this is.

They know how many people are out there playing games, buying games, buying consoles. They know that this is a big area of potential growth for them. And so they're trying to sort of gobble up as much of that industry as they can. Yeah, absolutely. All right. Those are the big two stories of the week in gaming. Casey, do you have any gaming news to share? What are you playing these days? Well,

I have recently bought a couple of games. One is a throwback to my childhood growing up in arcades, so I bought the most recent Mortal Kombat game. It's called Mortal Kombat 1. And I would say the premise of this game is: what if we took the game you know and love and made it so unbelievably complicated that you'd just be better off watching YouTube videos? Oh, my God. When I say they've added...

systems to this game, not only are there all these, you know, the combo systems and the blocking systems and the this system and the that system, individual characters will have their own system where it's like, well, if this person does this combination of buttons six times, then they get this thing buffed for four seconds. And I truly do not know who over the age of 14 has the time and patience and energy to understand any of that. Because I have an

alternate hypothesis for what happened to Mortal Kombat in the 30 years between when you first played it and now. What's that? You got old. Oh, no! How dare you? I'm so young at heart, Kevin!

These kids with their newfangled complicated games. Now, it is definitely true that there are some, like, button presses that are just a young person's game. You know, it's like, if you play a fighting game, one of the things that it'll ask you to do is parry. And if you parry something, that, like, gives you an advantage in your little combat scenario. But the window to do this can sometimes be literally seven frames on a screen. So, you know, I don't know how long that takes to happen. It is truly milliseconds. And I can't do it anymore.

Yeah. This is not a young man's game. Yeah. Speaking of not a young man's game, what are you playing? Parcheesi? Checkers? Mahjong? So I used to play a lot of games, and then I had a kid. So now I just go to sleep at 9:30. But I do, from time to time, like to blow off some steam. I'll play a little Valorant. I really like the team-based shooters. Because of the sort of collaborative nature of the murders? Yeah.

Yes, exactly. So I play a game called Valorant. I've also been trying to get back into mobile games. You got me addicted to Marvel Snap, which I may never forgive you for. It is truly the most addictive substance that has ever been in my life. I've successfully gotten it out of my life now. It was fentanyl on my phone. I had to get it off. So I deleted Marvel Snap, but I've been trying out some other mobile games. I installed this game that's like disc golf

on your phone, which is kind of fun. And then I'm playing this one. It's like a very calming European game called I Love Hue. Have you ever seen this one? I Love Hue. No, it sounds very sweet. Yeah, they have different kinds of games over there. But basically, it's like trying to match colored tiles and stuff. It's a little soothing way, you know, to kill a couple of minutes on the train. You know what? I'm going to ask you to show me that later, because I'm in desperate need of a new mobile phone game.

All right, that is Gamer News. This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks.

Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us slash AI. Kevin, remember last week when you deepfaked my voice once again? I do. Do you have any idea what you were having me say? I don't.

I think it was something about your house and maybe backsplashes? Okay, well, so here's the problem: you and I both don't speak German, and yet you were having me say things, and one of our German-speaking listeners actually wrote in to me with a translation of what you had me say, which I would now like to read to you. Um...

Why did you have me say that? It was sort of a random clip. I needed, like, a continuous clip of you talking for, like, a minute, and I used that one. Well, what we're learning is that they're getting much better when it comes to the sound of my voice, but when it comes to the words, they are basically at square one. Well, I apologize to all of our German and Hindi-speaking listeners.

By the way, I got a note from a Hindi-speaking listener who was also like, what were you saying? Because it was complete nonsense. Well, that's on you, because that is an accurate translation of the complete nonsense that you said in English. I say logical sentences. Okay. AI can make you speak other languages. It cannot make you sound more coherent. It can't make you make sense, unfortunately. We don't have that technology.

Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Will Peischel. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Pat McCusker, Rowan Niemisto, and Dan Powell. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com, and Google Bard will absolutely not understand it.

Earning your degree online doesn't mean you have to go about it alone. At Capella University, we're here to support you when you're ready. From enrollment counselors who get to know you and your goals to academic coaches who can help you form a plan to stay on track. We care about your success and are dedicated to helping you pursue your goals.

Going back to school is a big step, but having support at every step of your academic journey can make a big difference. Imagine your future differently at capella.edu.