
The Trust Engineers

Publish Date: 2023/2/24

Radiolab

Transcript

WNYC Studios is supported by Zuckerman Spaeder. Through nearly five decades of taking on high-stakes legal matters, Zuckerman Spaeder is recognized nationally as a premier litigation and investigations firm. Their lawyers routinely represent individuals, organizations, and law firms in business disputes, government and internal investigations, and at trial. When the lawyer you choose matters most. Online at Zuckerman.com.

Radiolab is supported by Progressive Insurance. Whether you love true crime or comedy, celebrity interviews or news, you call the shots on what's in your podcast queue. And guess what? Now you can call the shots on your auto insurance too with the Name Your Price tool from Progressive. It works just the way it sounds. You tell Progressive how much you want to pay for car insurance and they'll show you coverage options that fit your budget.

Get your quote today at Progressive.com to join the over 28 million drivers who trust Progressive. Progressive Casualty Insurance Company and Affiliates. Price and coverage match limited by state law. Listener supported. WNYC Studios. Wait, you're listening? Okay. All right. Okay. All right. You're listening to Radiolab. Radiolab. From WNYC. WNYC.

Rewind. All right. Hey, I'm Jad Abumrad. I'm Robert Krulwich. This is Radiolab, the podcast. So here's a story we've been following for a while. Yes.

Comes from a friend of mine, Andrew Zolli, who is a great thinker and writer. He wrote a book called Resilience: Why Things Bounce Back. And he's a guy who thinks a lot about technology. I have been interested for a long time in the relationship between technology and emotion. And because...

Well, I've thrown more than one cell phone to the ground. Andrew and I were having breakfast one day, and he pitched me on this idea of doing a story about Facebook. I remember. I am not a huge believer in doing stories about Facebook, but this story was wickedly interesting. I know.

Yeah. And profound in its way. So he and I have been following it for a couple of years, up and down through this roller coaster of events. It really begins in 2011. Well, let me back up for a minute. One of the challenges talking about Facebook is just the scale of the thing. So, you know, there's 1.3 billion people on Facebook as of March 2014. Yeah.

Those are active monthly users. There's a billion people who access the site through mobile devices. Just to put that in perspective, there's more Facebook users than there are Catholics. That can't be true. Yeah. No. Yeah. It turns out it is true, but they're neck and neck.

Anyhow, the overall point is that when you have one out of every seven people on the planet in the same space trying to connect across time and geography... You are bound to create problems sometimes. Facebook making headlines again tonight. The issue this time... Before we go there, we should introduce you to the guy in our story who is the problem solver. My name is Arturo Bejar, and I'm a director of engineering at Facebook. The story begins Christmas 2011. ♪

People are doing what they do every holiday season. They're getting back together with their families and they're going to family parties and they're taking lots and lots of pictures. And they're all uploading them to Facebook. And at the time, the number of photos that were getting uploaded was going pretty crazy. In fact, in just those few days between Christmas and New Year's, there were more images uploaded to Facebook than there were in the entirety of Flickr. Wait.

You're saying more images were uploaded in a week to Facebook than all of Flickr, all time? Yeah. Which created a situation. The number of photos was going up, and along with the number of photos going up, the number of reports was going up. What he means by reports is this. Back in 2011, if you saw something on Facebook that really upset you, you could click a button to report it. You could tell Facebook to take it down, which from their perspective...

is a really important mechanism because if you're Facebook, you don't want certain kinds of content on your site. You don't want nudity, you don't want drug use, hate speech, things like that. So a day or so after Christmas, Facebook engineers come back to work and they find waiting for them literally millions of photo reports. Yes. The number of people that would be necessary to review everything that was coming in

It kind of boggled the mind. How many people would you have needed? I think at the time we were looking at it, which was two years ago, and again, all this has grown a lot since then, we were looking at like thousands. Like some giant facility in Nevada filled with nothing but humans looking at Christmas photos. We were actually joking about this, but we found out later there actually are thousands of people across the world who do this for Internet companies all day long,

which clearly warrants its own show. But for our purposes...

Just know that when a photo is reported, a human being has to look at it. Exactly right, because there needs to be a judgment on the image, and humans are the best at that. So Arturo decided, before we do anything, let's just figure out what we're dealing with. And so we sat down with a team of people, and we started going through the photos that people were reporting. And what they found was that about 97% of these million or so photo reports were drastically miscategorized. ♪

They were seeing moms holding little babies. Reported for harassment. Pictures of families in matching Christmas sweaters. Reported for nudity. Pictures of puppies reported for hate speech. Puppies reported as hate speech? Yes. And we're like, what's going on, right? Hmm. So they decide, let's investigate. Okay, so step one for Facebook, just ask a few of these people. Why don't you like this photo? Why did you report this?

Responses come back and the first thing they realize is that almost always the person complaining about the image was in the image they were complaining about. And they just hate the picture. Maybe they were doing a goofy dance, someone snapped a photo and they're like, why did you post that? Take it down. Maybe they were at a party. They got a little too

drunk, they hooked up with their ex, somebody took a picture, and that person says, oof, you know, that's a one-time thing, that's never happening again. Take it down. Arturo said there were definitely a lot of reports from people who used to be couples. And then they broke up, and then they're asking to take the photos down. And the puppy? What would be the reason for that? Oh, because it was maybe a shared puppy. You know, maybe it's your ex-wife's puppy. You see it, makes you sad. Take it down. So once we've begun investigating, you find that there's all of this...

relationship things that happen that are like really complicated. You're talking about stuff that's the kind of natural detritus of human dramas. And the only reason that the person reporting it flagged it as like hate speech is because that was one of the only options. They were just picking because they needed to get to the next screen to submit the report. So we added a step.

Arturo and his team set it up so that when people were choosing that option... I want this photo to be removed from Facebook. Some of them would see a little box on the screen. That said, how does the photo make you feel? And the box gave several choices. The options were...

Embarrassing. Saddening. Upsetting. Bad photo. And then we always put in an other. Where you could write in whatever you wanted about the image. And it worked incredibly well. I mean, like 50% of people would select an emotion. Like, for instance, embarrassing. And then 34% of people would select other. And we read those. We sit down and we're reading the other. And what was the most frequent thing that people were typing into other? It was, it's embarrassing.

It's embarrassing, but you had embarrassing on the list. I know. That's weird. I know.

Arturo was like, okay, maybe we should just put "it's" in front of the choices? As in, please describe this piece of content: It's embarrassing. It's a bad photo of me. It makes me sad. Et cetera. And when they wrote out the choices that way, with that extra word, we went from 50% of people selecting an emotion to 78% of people selecting an emotion. In other words, the word "it's"...

All by itself, it boosted the response rate by 28 percentage points. From 50 to 78. And in Facebook land, that means thousands and thousands of people. Let me just slow down for a second. I'm trying to think. What could that be? Do people like full sentences? Here's one thought: it's always good to mirror the way people talk. Arturo's idea, though, which I find kind of interesting, is this:

When you just say embarrassing and there's no subject, it's silently implied that you are embarrassing. But if you say it's embarrassing, well, then that shifts the sort of emotional energy to this thing. The photograph. And so then it's less hot and it's easier to deal with. Oh, how interesting. That thing is embarrassing. I'm fine. It's embarrassing. It is responsible, not me. Good for Arturo. That's a subtle thought. It's very subtle.
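(A quick aside for the statistically curious: a jump from 50% to 78% is the kind of difference an A/B test confirms with a standard two-proportion z-test. Below is a minimal sketch of that check; the sample counts are invented for illustration, and this is the generic technique, not Facebook's actual tooling.)

```python
# Two-proportion z-test: did variant B ("It's embarrassing") really beat
# variant A ("Embarrassing")? The counts below are hypothetical.
import math

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Return (z, two-sided p-value) for H0: the two response rates are equal."""
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (hits_b / n_b - hits_a / n_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# 50% of 10,000 users responded to "Embarrassing";
# 78% of 10,000 responded once the options began with "It's".
z, p = two_proportion_ztest(5_000, 10_000, 7_800, 10_000)
print(f"z = {z:.1f}, p = {p:.3g}")  # at this scale, the lift is unambiguous
```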

But it still doesn't solve their basic problem because even if Facebook now knows why the person flagged the photo, that it was embarrassing and not actually hate speech.

They still can't take it down. I mean, there's nothing in the policy, the terms of service, that says you can't put up embarrassing photos. And in fact, if they took it down, they'd be violating the rights of the person who posted it. Like, there's nothing we can do. I'm sorry. Oh, so they'd actually fenced themselves in a little bit. Yeah. For me, I'd always put in an "other." I would just be like, go deal with it yourself. That's what I would say.

Talk to the person. No, honestly, that's the solution. He wouldn't put it that way, but what he needed to have happen was for the person who posted the picture and the person who was pissed about it. To talk to each other. To work it out themselves. So Arturo and his team made a tweak where if you said this photo was embarrassing or whatever, a new screen would pop up.

And it would ask, Do you want your friend to take the photo down? And if you said, Yes, I would like my stupid friend to take the photo down. We put up an empty message box. Just an empty box that said, We think it's a good idea for you to tell the person who upset you that they upset you.

And only 20% of people would type something in and send that message. They just didn't do it. They just said, I'd rather you deal with this. So Arturo and his team were like, okay, let's take it one step further. When that message box popped up... We gave people a default message that we crafted. To start that conversation. Just get the conversation going. And it's kind of funny, the first version of the message that we did was like, hey, I didn't like this photo. Take it down. Hey, I don't like that photo. That's a little aggressive. It is.

But when they started presenting people with a message box with that sentence pre-written in... Almost immediately. We went from 20% of people sending a message to 50% of people sending a message. Really? It's surprising to all of us. We weren't expecting to see that big of a shift. So this means that people just don't want to write. They'll sign up for pretty much anything. No, not necessarily. Maybe it's just that it's so easy to shirk the responsibility of...

confronting another person that you need every little stupid nudge you can get. I see. Okay. That's how I see it. So they put out this pre-written message. It seems to really have an effect. So they're like, okay, if that works so well, why don't we try some different wordings instead of, hey, I didn't like this photo. Take it down. Why don't we try, hey, Robert, I didn't like this photo. Take it down. Just putting in your name works about 7% better than leaving it out. I mean,

Meaning what? It means that you're 7% more likely either to get the person to do what you ask them to do, take down the photo, or to start a conversation about how to resolve your feelings about it. Oh, we're now measuring the effectiveness of the message. So if I'm objecting, will the other party pull it off the computer? Pull it off, or just talk to you about it. Okay. They also tried variations like, hey, Robert, would you please take it down, throwing in the word please, or would you mind taking it down?

And it turns out that "would you please" performs 4% better than "would you mind". They're not totally sure why, but they try dozens of phrases: "would you please", "would you mind", "I'm sorry to bring this up, but would you please take it down", "I'm sorry to bring this up, but would you mind taking it down".
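(Mechanically, an experiment like this needs each user to be assigned one phrasing, at random but consistently. Here's a toy sketch of the standard trick, deterministic hash bucketing; it's a generic pattern, not Facebook's system, and the variant list just echoes the phrases from the story.)

```python
# Deterministic bucketing: hash (experiment, user) so every user always
# lands in the same variant, with an even split across variants.
import hashlib

VARIANTS = [
    "Hey, I didn't like this photo. Take it down.",
    "Hey, {name}, I didn't like this photo. Take it down.",
    "Hey, {name}, would you please take it down?",
    "Hey, {name}, would you mind taking it down?",
]

def assign_variant(user_id: str, experiment: str = "report-message-v1") -> str:
    """Map a user to one of the candidate phrasings, stably."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-42").format(name="Robert"))
```

Then, per variant, you count how often the recipient deletes the photo or replies, and compare the rates.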

And at a certain point, Andrew and I just wanted to see this whole process they're going through up close. So we took a trip out to Facebook headquarters, Menlo Park, California. This was about a year ago, so it's before the hubbub. We met up with Arturo, who sort of walked us through the campus. It's one of these sort of socialist-utopia Silicon Valley campuses where people are, like, in hammocks and there's volleyball happening. They had foxes running around at one point?

So we were there on a Friday because every Friday afternoon Arturo assembles this really big group to review all the data. I mean you've got about

15 people crammed into a conference room, like technical folks. Mustafa, software engineer, trust engineering at Facebook. Dan Farrell, I'm a data scientist. Paul, I'm also an engineer. A lot of these guys call themselves trust engineers. And every Friday, the trust engineers are joined by a bunch of outside scientists. Dacher Keltner, professor of psychology at UC Berkeley. Matt Killingsworth, I study the causes and nature of human happiness. Emiliana Simon-Thomas, and my background is neuroscience.

This is the meeting where the team was reviewing all the data about these phrases, and so everybody was looking at a giant graph projected on the wall. It's kind of supporting your slightly U-shaped curve there in that, especially in the deletion numbers, the "Hey, I don't like this photo, take it down" and the "Hey, I don't like this photo, would you please take it down" are kind of the winners here. It's kind of interesting.

that you see the person that's receiving a more direct message is higher, 11% versus 4%. One of the things they notice is that anytime they use the word sorry in a phrase, like, hey Robert, sorry to bring this up, but would you please take it down? Turns out the I'm sorry doesn't actually help. It makes the numbers go down. Really?

Seven and nine are some of the low points, and those are the ones that say sorry. So, like, just don't apologize. Just don't apologize. Because, like, it shifts the responsibility back to you, I guess. No, it doesn't. It's just... No, I mean, it's like, it's a linguistic psychology subtle thing. You're making that up. I am, kind of. But one of the things that really struck me at this meeting on a different subject is that the scientists in the room, as they were looking at the graph, taking in the numbers, a lot of them had this look on their face of, like, holy...

I'm just stunned and humbled at the numbers that we generally get in these studies. That's Emiliana Simon-Thomas from Berkeley. My background is in neuroscience, and I'm used to studies where we look at 20 people, and that's sufficient to say something general about how brains work. Like, in general at Facebook, like, people would scoff at sample sizes that small.

That's Rob Boyle, who's a product manager at Facebook. The magnitudes that we're used to working with are in the hundreds of thousands to millions.
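(To see why 20 draws scoffs, it helps to run the standard sample-size arithmetic. The sketch below uses the usual two-proportion power approximation; the 50%-versus-54% figures are illustrative stand-ins for the roughly 4% differences discussed above, not Facebook's numbers.)

```python
# How many people per variant do you need to reliably detect a small lift?
# Standard approximation: two-sided alpha = 0.05, power = 0.80.
import math

def n_per_arm(p1: float, p2: float) -> int:
    """Approximate sample size per variant for a two-proportion test."""
    z_alpha, z_beta = 1.96, 0.84          # 5% significance, 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_arm(0.50, 0.54))  # ~2,450 per variant for a 4-point lift
print(n_per_arm(0.50, 0.78))  # a few dozen for a huge lift like the "it's" change
```

A lab study of 20 people has no hope of resolving a 4-point difference; a platform with millions of users resolves it casually.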

It's kind of an interesting moment, because there's been a lot of criticism recently, especially in social science, about sample sizes, how they're too small and too often filled with white undergraduate college kids, and how can you generalize from that. So you could tell that some of the scientists in the room, like, for example, Dacher Keltner, who's a psychologist at UC Berkeley, they were like, oh, my God, look at what we can do now. We can get all these different people. Of different class backgrounds, different countries. To him, this kind of work with Facebook...

This could be the future of social science right here. There has never been a human community like this in human history. And somewhere in the middle of all the excitement about the data and the speed at which they can now test things. The bottleneck is no longer how fast we can test how things work. It's coming up with the right things to test. Andrew threw out a question. What is the statistical likelihood that I have been a guinea pig in one of your experiments? I believe 100%.

If we look at the data, any given person... That's Dan Farrell, data scientist. When we look at the data, any given person is probably currently involved in, what, 10 different experiments? And they've been exposed to 10 different experimental things. Yep. That kind of blew me back a little bit. I was like, I've been a research subject, and I had no idea. Coming up, everybody gets the idea,

and the lab rats revolt. Stay with us. This is Radiolab, and we'll pick up the story with Andrew Zolli and me sitting in a meeting at Facebook headquarters. This was about a year and a half ago. We had just learned that at any given moment, any given Facebook user is part of 10 experiments at once, without their knowledge. And sitting there in that meeting, you know, this was a while ago, we both were like, did we just hear that correctly?

That kind of blew me back a little bit. I was like, I've been a research subject and I had no idea. And I had that moment of discovery on a Friday and literally the next day, Saturday. This is scary. The world had that experience.

Facebook using you and me as lab rats for a Facebook experiment on emotions. Barely a day after we'd gotten off the plane from Facebook headquarters, the kerfuffle occurred. Facebook exposed for using us as lab rats. As lab rats. Lab rats, shall we say? Facebook messing with your emotions.

You might remember this story because for a hot second it was everywhere. Facebook altered the amount of... It was all over Facebook. The story was that an academic paper had come out showing that, working with some scientists, the company... Had intentionally manipulated user news feeds to study a person's emotional response. Seriously, they wanted to see how emotions spread on social media. They basically tinkered with the news feeds of about 700,000 people. 700,000 users to test how they'd react if they saw more positive versus negative posts and vice versa.

And they found an effect that when people saw more positive stuff in their news feeds, they would post more positive things themselves and vice versa. It was a tiny effect. Tiny effect, but the results weren't really the story. The real story was that

Facebook was messing with us. "It gives you pause and scares me when you think that they were just doing an experiment to manipulate how people were feeling and how they then reacted on Facebook." People went apoplectic. "It has this big brother element to it that I think people are gonna be very uncomfortable with." And some people went so far

as to argue. I wonder if Facebook killed anyone with their emotional manipulation stunt. If a person had a psychological or psychiatric disorder, manipulating their social world could cause them real harm. Make sure you read those terms and conditions, my friends. Always. That's the big takeaway.

What you hear is a sense of betrayal, that I really wasn't aware that this space of mine was being treated in these ways and that I was part of your psychological experimentation. That's Kate Crawford. I'm a principal researcher at Microsoft Research. Visiting professor at MIT, strong critic of Facebook throughout the kerfuffle. There is a power imbalance at work. I think when we look at the way that that experiment was done, it's an example of highly centralized power and highly opaque power at work.

And I don't want to see us in a situation where we just have to blindly trust that platforms are looking out for us. Here I'm thinking of an earlier Facebook study, actually, back in 2010, where they looked at whether they could increase voter turnout. They had this quite simple design. They came up with...

you know, a little box that would pop up and show you where your nearest voting booth was. And then they said, oh, well, in addition to that, when you voted, here's a button you can press that says I voted. And then you'll also see the pictures of six of your friends who'd also voted that day. Would this change the number of people who went out to vote that day?

And Facebook found that it did: if you saw a bunch of pictures of your friends who had voted, and you saw those pictures on election day, you were then 2% more likely to click the I voted button yourself, presumably because you too had gone out and voted. Now, 2% might not sound like a lot, but... It was not insignificant. I think on the order of 340,000 votes is what they estimate they actually shifted by getting people to go out.

These are people who wouldn't have voted and who did? Who wouldn't have voted, and who, they have said in their own published paper, they increased the number of votes that day by 340,000. Simply by saying that your neighbors did it too? Yeah, by your friends. Now, my first reaction to this, I must admit, was, okay, I mean, we're at historic lows when it comes to voter turnout. This sounds like a good thing. Yes. But what happens if someone's running a platform that a lot of people are on, and they say, hey,

you know, I'm really interested in this candidate. This candidate is going to look out not just for my interests but the interests of the technology sector, and I think they're a great candidate. Why don't we just show that get-out-the-vote message, and that little system design that we have, to the people who, because we already have their political preferences, we know kind of agree with us? And the people who disagree with that candidate, they won't get those little nudges. Now that is a profound democratic challenge that you have.
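(The arithmetic behind that worry is blunt. Treating the story's 2% figure as a real turnout lift, and inventing an audience size, a one-sided nudge scales like this:)

```python
# Back-of-envelope: apply a small turnout lift only to users whose
# politics match yours. The 2% lift comes from the story; the audience
# size is hypothetical.
def expected_extra_votes(users_nudged: int, lift: float) -> int:
    """Expected additional voters if each nudged user is `lift` more likely to vote."""
    return round(users_nudged * lift)

print(f"{expected_extra_votes(20_000_000, 0.02):,}")
# -> 400,000 extra votes for one side: the same order of magnitude
#    as the 340,000 estimated in the 2010 study.
```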

Kate's basic position is that when it comes to social engineering, which is what this is, companies and the people that use them need to be really, really careful. In fact, when Andrew mentioned to her

that Arturo had this group, and the group had a name... He actually runs a group called the Trust Engineering Group. His job is to engineer trust between Facebook users. That's his job. When Andrew told her that? You're smacking your forehead. I think we call that a facepalm. She facepalmed really hard. These ideas that we could somehow engineer compassion, I think to some degree have a kind of hubris in them.

Who are we to decide whether we can make somebody more compassionate or not? How do you want to set this up? A couple months after our first interview, we spoke to Arturo Bejar again. At this point, the kerfuffle was dying down. We asked him about all the uproar. I know this is not your work. This is the emotional contagion stuff. But literally, like, hours after we got back from that meeting, that thing erupted. Do you understand the backlash? No, I mean, I think that...

I mean, we really care about the people who use Facebook. I don't think that there's such a thing as... I mean, if anything, I've learned in this work is that you really have to respect people's response and emotions, no matter what they are. He says the whole thing definitely...

made them take stock. There was a moment of concern of what it would mean to the work. And there was like, is this going to mean that we can't do this? Hmm.

Part of me, like, being honest here, is I actually want to reclaim the word emotion, and reclaim the ability to do very thoughtful and careful experiments. I want to reclaim the word experiment. Do you want to reclaim it from what? Well...

Suddenly, like the word emotion and the word experiment, all these things became really charged. Well, yeah, because people thought that Facebook was manipulating emotion and they were like, how could they? Yes, but in our case, right, and in the work that we're talking about right now, all of the work that we do begins with a person asking us for help. This was Arturo's most emphatic point. He said it over and over that, you know, Facebook isn't just doing this for fun. People are asking for help. They need help.

Which points to one of the biggest challenges of living online, which is that, offline, when we try and engineer trust, or at least just read one another, we do it in these super subtle ways, using eye contact and facial expressions and posture and tone of voice, all this nonverbal stuff. And of course, when we go online, we don't have access to any of that. In the absence of that feedback, how do we communicate? What does communication turn into online?

I mean, I think about what it means to be in the presence of a friend or a loved one. And how you build experiences

that facilitate that when you cannot be physically together. Arturo says that's really all he's up to. He's just trying to nudge people a tiny bit so that their online selves are a little bit closer to how they are offline. And I got to say, if he can do that by engineering a couple of phrases like, hey, Robert, would you mind, et cetera, et cetera, well, then I'm all for it. Why not take the position that to create a company that stands between two people

who are interacting, and then gives them boxes and statuses and advertising and so forth, is not doing a service? This is just, this is,

This is a way to wedge yourself into the ordinary business of social intercourse and make money on it. And you're acting like this group of people now is going to try to create the moral equivalent of an actual conversation. First of all, it's probably not engineerable. And second of all, I don't believe that for a moment. All I'm thinking is they're going to just go and figure out other ways in which to make a revenue enhancer. No, I don't think it's one or the other. I think they're in it for the money. In fact, if...

they can figure this out and make the internet universe more conducive to trust, less annoying. It could mean trillions of dollars. So yeah, it's the money. But still, that doesn't negate the fact that we have to build these systems, right? That we have to make the internet a little bit better. That's fine. This idea, however, that you're going to have to coach people into the subtleties of the relationship...

Tell him you're sorry. Tell him this. You know, here's the formula for this: he did something, you need to repair that, here are the seven ways you might repair that. To do all that...

It's as if the Hallmark card company, instead of living only on Mother's Day, Father's Day, and birthdays, just spread its evil wings out into the whole rest of your life. And I don't think that's a wonderful thing. I think, you know, I have a slightly different opinion of it.

I mean, you got to keep in mind how this thing came about. I mean, they tried to get people to talk to each other. They gave them the blank text box, but nobody used it. Right. So they're like, OK, let's come up with some stock phrases that, yes, are generic. But think about the next step.

After you send the message saying, you know, Jad, I don't like the photo. Please take it down. Presumably then you and I get into a conversation. Maybe I explain myself. I say, oh my God, I'm so sorry. I didn't realize that you didn't like that photo. I just thought that was an amazing night. I didn't realize you thought you looked so bad. I'm sorry. I'll take it down.

It's cool. See, now, presumably we're having that conversation as a next step. Why do you presume that? How many of the birthday cards that you've sent to first cousins have resulted in a conversation? Maybe not. See, that's the thing. Sometimes these things are actually not, they're really the opposite of what you're saying. They're conversation substitutes. Maybe. Maybe they're conversation starters.

Maybe that's the deep experiment. Are they conversation starters or substitutes? Well, I hope they're conversation starters. Yeah. Because maybe that would be a beginning. It kind of, in my mind, goes back to like the beginning of the automobile age. This is how Andrew puts it. There was a time when automobiles were new. And, you know, they didn't have

turn signals. The tools they did have, like the horn, didn't necessarily indicate all the things that we use them to indicate. It wasn't clear what the horn was actually there to do. Was it there to say hello? Or was it there to say get out of the way? And over time, we created norms. We created roads with lanes. We created turn signals that are primarily there for other people,

so that we can coexist in this great flow without crashing into each other. And we still have road rage. And we still have road rage. We still have places where those tools are incomplete.

Thanks to Andrew Zolli. Many, many, many, many thanks. Yes, definitely. For bringing us that story and for reporting it with me for so long. And to Arturo, who you kept bringing back into the studio. Yes, thank you very much to Arturo and the whole team over there. And by the way, they have changed their name. It's no longer Trust Engineering. It is the Facebook Protect and Care team. Really? Yeah.

Yeah. We had some original music this hour from Moonanite, thanks to them. Props to Andy Mills for production support. And also, Andrew Zolli put together a blog post. If you go to Radiolab.org, you can see it, which covers some really interesting research that we didn't get a chance to talk about. And if you've ever sent an email with a little smiley face, you're definitely going to want to read this. Radiolab.org. I'm Jad Abumrad. I'm Robert Krulwich. Thanks for listening.

Radiolab was created by Jad Abumrad and is edited by Soren Wheeler. Lulu Miller and Latif Nasser are our co-hosts. Dylan Keefe is our director of sound design.

Our staff includes... With help from Andrew Vinales. Our fact checkers are Diane Kelly, Emily Krieger, and Natalie Middleton.

Hi, this is Ellie from Cleveland, Ohio. Leadership support for Radiolab science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.

NYC Now delivers breaking news, top headlines, and in-depth coverage from WNYC and Gothamist every morning, midday, and evening. By sponsoring our programming, you'll reach a community of passionate listeners in an uncluttered audio experience. Visit sponsorship.wnyc.org to learn more.