
"MIT Professor Max Tegmark: LIVE in Boston"

Publish Date: 2023/7/13

SmartLess


Transcript

This is officially a cold open, I guess. This is a cold open. Right? This is like a... Like on the podcast, we just kind of talk before we say what the name of it is. Like an intro, and then we've got to be like... We've got to be like... Professional. Yeah. And like, oh, we... We usually just copy what I'm saying. You're just copying what I'm saying. You're just copying what I'm saying. I don't know how to tap into this. That's okay. Just say the only thing that you know how to say.

- Welcome to Smart Less! - We're so happy to be in Boston. - We are so happy. - Thank you for being here tonight. - And you guys rolled out your nicest weather for us. - Yeah. - Oh, felt like being home in Canada almost.

But thank you, thank you, thank you, not only for listening to our garbage, but coming out and looking at our garbage. There's a better way to say that. There's got to be a better way to say that. So, all right, let's sit down. Yeah, we're just excited. So...

I, uh... Oh, wait, somebody left their phone here. That's me. I brought it. It's in my pocket. I don't know why. If I do get a call, though, we're just gonna take a quick break, okay? Is that your phone? It is mine. Sorry. The camera was on, too, Grandad. Yeah, my flash was on. Here, put it over here. And I'm chewing gum. It's a really professional operation here. Um...

All right, so... Let me ask you something. Yes. Do you guys... How is traveling going for you with all of this? Great, great interview question. What about... Sean did a little preparation. I know. No, I really want to know that because as of this morning, Will started calling me Katy Perry. Well... Because I bring so many outfits with me. Okay.

And this is the best you could use. It was just... I know. Well, the true story is the sweet girl who picks out my clothes for me because I don't know how to do that, clearly. She sent me over... God love her. She's a really close friend. But she sent me over two of the exact same outfits today. Right. So I kind of put this together willy-nilly. It is a new thing for us because we do this thing on our laptops and we wear... No, we don't. We're together. We're together. We're together. We wear...

- I have pajamas. - Yeah. So this whole notion of having to dress and actually have a specific time and all that stuff, it is odd. And that there's people here. Yeah, yeah. And usually one of us is a few minutes late

I am common, I am commonly late. Oh, that's you. I am. Okay. But here's the, I don't like to be, if you're early, you're, you've wasted time, right? Do you have, you have an issue? No, I've just, we've got a lot of milling around going on there. By the way. I'm interested. I too, I go to a doctor's office now late so I don't have to wait the 20 minutes they make me wait. Right. I just go.

Right? By the way, you still have to wait. But do they like that? They love me for that. This sounds familiar. I feel like we talked about this maybe. I don't think we've ever talked about that. I don't think we've ever. You're never early or late, are you? No, I'm right on time. You're always right on time. What is that? Just because I have, just because I respect you.

Primarily. The implication is I don't respect him. Here's the other thing I discovered the other day. Go on, what were you going to say? Well, no, I was just going to say about calling you Katy Perry, it wasn't just that. I just kept going, I don't know, ask Katy Perry over here. Like,

Like that. So it wasn't just bullying. And then we all have "You're gonna make me roar" in our heads. Yeah. And Jason, you've been singing that all day. I can't... I'm so... I can't... Is everybody like this? When you get a song in your head, you can't get it out? Am I the only one? But, like, it lasts an abnormal amount of time, like a week, I'll get something stuck in my head. They say to count down backwards from a certain number, and that will make you stop thinking of the...

Are you doing it right now? Does it just work for songs? Yeah. Because there's a lot of shit I'd like to forget.

Like that kind of... Yeah, so I would just... If that works, I want to do that. Well, the other thing that happened to me yesterday was I found... My back has been itching, and then we have... Wait, are you going to... Yeah, so we have... He's backing into a plug for Hypochondriac or his other podcast. So, Sean, we're in conversations in the hotel room, and Sean does a lot of this. He's like, uh-huh, what's going on? And he gets up against the door, and he's going, really? Yeah. Like a dog scooches across the rug and scratches his...

Yeah, so then our friend Eli is with us, who we love and is a very good friend of ours. And I'm like, "What's going on with my back?" He goes, "Take your shirt off, let me see." So I took the thing and I'm back and he goes, "That's shingles." - Truly? - Yeah, he said I have shingles. - Oh, were you not there? - No. No, he went, "Oh my God, that's shingles." And I'm like, "What? I just got the vaccine for shingles. How could I have shingles?" Right. Because if there's a vaccine for it, it's in my body.

Yeah. Yeah. Right? Yeah. Good health isn't political. No. Wow. Say that again. Thank you. Thank you. Who's got a pen? I used a pen earlier to write my intro, and since we're talking about science, it's a great segue. Fellas, I wanted to tap into the brainpower of this city.

Okay? We got a big brain coming out. Uh-oh. This fella has a master's and a PhD from Berkeley. He's a fellow at Princeton. He has tenure at Penn. He arrived here at MIT in 2004, where he still works today. He does it all, from physics to cosmology to quantum stuff and computers. He's gonna explain what it is. Stephen Hawking. Will Ferrell. Stephen Hawking.

Please welcome a guy who can definitely make us all more smart, not less smart. Smart less, you get it? Max Tegmark! What? Come on, Max. Get out of here. There he is. Max!

This is Matt Stegmark. How are you, man? It's so nice to meet you. Pleasure to meet you, sir. Please, please. This is so exciting. Well, see, we have the same stylist. Wow. He wears a little better than you do. He certainly does. Now, can you do a better job than I just did of explaining what it is that you do? First of all, how do you introduce yourself? Call yourself what you do. By the way, you look like a rock star.

Well, it depends on what I want. Like if I'm on a long flight and I just want to be left alone, and the person asks me what I do, you say you're a pedophile. Physics.

That was my worst subject in high school. Five hours of silence. Right, right. But you are a... If I want to talk, I'll be, oh, astronomy. I'm like, oh, I'm a Virgo. Oh, right, right. Or maybe if I say cosmology, they'll be talking about eyeliner and makeup. Okay, so the class you teach is...

Oh, it's whatever they want me to torture the students with that year. So it can be anything from torturing the freshmen who came out of high school with the basic physics of how stuff moves, to torturing the grad students with some advanced topics about our universe. Okay. Or most of my time I spend actually torturing my grad students doing research on artificial intelligence.

Okay, okay, good. By the way, this is everything I'm for. Yeah. Will you marry me? No, I'm kidding. I want to... Go, you probably have... You'll have to ask my wife over there. Well, that's...

I saw this documentary on artificial intelligence and what I was surprised to learn is that it's not about robots like the Steven Spielberg movie. It's more about the amount of computing speed that we now can do such that, like I think they said in this documentary, you can put all the books that have ever been written into a computer now.

And you're gonna tell me whether I'm right or wrong. I bet I'm close to right, but probably wrong. You can put all the computers in-- all the books into a computer, and the computer will ingest all that information, separate it out, and be able to give you an answer about anything that you can ask them if the information was in any of those books, from languages to rocks to... I mean... Isn't that called Google, though? Well...

I'm sure you could explain that, but that's artificial intelligence. That's... Well, yes and no. So on one hand, yeah, you can take all the books that were printed and put them on a memory card so small that you might have a hard time finding it in your pocket. But that doesn't mean that a computer necessarily understands what's there, just because it can store it and kind of regurgitate it back to you, right? And I think...

The truth is, despite all the hype, that artificial intelligence is still mostly pretty dumb today compared to humans or even cats and dogs. But, you know, that doesn't mean it's going to remain that way. I think a lot of people make the huge mistake of thinking just because...

AI is kind of dumb today. It's always going to be that way. Right. Well, shouldn't we keep it dumb? Because if we let it get too smart, etc. What is that threshold, the point of no return? Yeah, because remember that thing about Facebook where they started, I don't know if this is true, they started doing AI technology and they started talking to each other and they shut it down. Is that true? Because they were gossiping. Yeah. Yeah.

It's true, but I think Hollywood and media often make us worry about the wrong things. What do you mean? First of all...

People often ask me if we should fear AI or be excited about it. The answer is obviously both. AI is like any technology. It's not evil or good. If I ask you, what about fire? Are you for it or against it? What are you going to say? It can hurt if you use it incorrectly. Exactly. And the same thing with all tech. The only difference...

is that AI is going to be the most powerful technology ever. Because look, why is it that we humans here are the most powerful species on this planet? Fucking A. Is it because we have bigger biceps, sharper teeth than the tigers? No, it's because we're smarter, right? So obviously, if we make machines that are way smarter than us, which is obviously possible, and most researchers in the field think it's going to happen in our lifetime,

then it's either going to be the best thing ever or the worst thing ever. My question is, when it's the worst thing ever, by the time it becomes the worst thing ever, then we're fucked. Then it's too late. So let's not let it be the worst thing ever. So that's the catch, though. We humans have had to play this game over and over again where technology got more powerful. We're trying to win this race, making sure the...

The wisdom with which you manage the tech keeps pace with the power of the tech. And we always use the same strategy, learn from mistakes. But it seems like the big safeguard that we have as humans that we don't yet have with machines is that we have ethics, we have empathy, we have emotion. And what is the computer program that you would need to put together to inject that into this new machine with all of this information? Can we put some snuggles in it?

That's a fantastic question. What's the snuggle recipe? So you're hitting exactly my wish list. If you want to have ever more powerful AI that's actually beneficial for humanity, so you can be excited rather than horrified about the future, there are three things you're going to need. First, you're going to need the AI to understand our human goals and then get it to actually adopt the goals and then to actually keep those goals as it gets smarter. And if you think about it for a little longer, each of these are really hard. So suppose you have a...

tell your future self-driving car to take you to Logan Airport as fast as possible. And then you get there covered in vomit and chased by helicopters. And you're like, no, no, no, no, no. That's not what I meant. And the car goes like, that's exactly what you asked for. You know, it clearly lacks something. See, they do talk like that. Literally. Literally, he's the Terminator. He sounds like the Terminator, and he's talking about Terminator stuff. So we humans have so much more background knowledge, right, that a machine...

doesn't, because it's like a very alien species of a sort. So that's hard for starters. And then suppose you can get that right. Now, you have to-- But let me stop you there. Is there any chance of getting that right? In other words, the formula, the equation that equals emotion, responsibility, ethics, can you even create a computer equation for that?

I think right now we don't know how to do it. It's probably possible. We're not working enough on it yet. The catch is, though...

Computers are just like, you know, if you think of a baby that's six months old, you're not going to explain the fine details of ethics to them because they can't quite get it yet. By the time they're a teenager, they're not going to want to listen to you anymore, those of you who have kids out there, right? But you have a window with human children when they're smart enough to get it and maybe still malleable enough to hopefully pay attention. That's fascinating. With computers, though, they might blow through that window.

so quickly, which makes it harder. Did you see Ex Machina? Did anybody see Ex Machina? I did. I did. What did you think of that? Yeah, let's give it up for Ex Machina. No, no. What is the most accurate film to the science? Like, is it HAL in 2001 or Ex Machina? Those are my top two, actually. Because...

HAL emphasizes this key thing: that the thing you should really fear in an advanced AI is not that it's going to turn evil, but that it's going to just turn really competent and not have its goals aligned with yours. That's what happens in HAL's case. No spoilers. Right. And like that taxi I mentioned. 60 years in, we're good, but...

But then the other thing you should also worry about is even if you can solve all these things, and I think it might take 30 years to figure this out, which is where we should start now, not the night before some folks on too much Red Bull switch it on. I've got two cans in me right now.

Not in the can. Not in my can, but in... Keep the super intelligence away from him. But the other thing is, even if you manage to solve those technical problems, which we should have more research on, you also have to worry about...

human society, because just think for a moment about your least favorite leader on the planet. Don't tell me who it is so we don't offend anyone in the audience. Thank you for thinking he's a leader. But just imagine their face for a moment here, okay? And now imagine they are the one who controls the first super intelligence and take control over the whole planet and impose their will. How does that make you feel? Not good. None of it makes me feel great. Listen, after a lifetime of doing all this stuff with the robot, how does it feel to talk to an actual robot? Like, that must feel...

And we will be right back. And now, back to the show. So is there, do people have proprietary right over certain stuff? Or does one country control a lot? Like, who's leading? China's leading the AI, are they not?

US and China are both very strong. I mean, most research suggests that the US is still kind of ahead, but there's a lot of hype around. Both countries, of course, try to-- If anybody starts in on "USA! USA!", I'm going to lose it. And I'll bet when they say the USA, they're talking about MIT, and they're probably talking about him. Well, you know how it goes.

Both countries are trying to claim that the other one is ahead so you get more funding. That's how we researchers always do it. But seriously, the interesting key here, I think, is that ultimately it's not really going to matter which country gets it first. What's going to matter most is whether it's going to be

us who control the machines, or them who control us. I mean, but it really is, all joking aside. I'm obsessed with the Terminator movies and anything sci-fi. Sure, all joking aside. Yeah, joking aside. Yeah, let's put the jokes aside and let's talk about it. No, but that's kind of the idea behind a lot of Hollywood movies and stuff: what if the AI has become more intelligent than the humans? But here's the thing. I saw something on 60 Minutes years ago which fascinates me to this day, and I'm not going to get this right, but it's some guy... Andy Rooney's eyebrows. That's it.

No, so... You ever wonder why? You ever wonder why? I opened my old desk drawer and I got a tie from the 1968 Democratic Convention. It's got soup on it. I don't like soup. Nobody knows who Andy Rooney is. Yeah, yeah.

It's also his Dax Shepard impression. And it was true. But anyway, so there was this guy, the interviewer was interviewing the scientist who claimed to have come up with this idea. This thing was like wrapped around his ear and it was like tied to the side of his head. And the interviewer was asking him a question like, what's the population of Utah? And all he had to do was think of the answer.

and it popped up on the screen. Do you know what I'm talking about? Well, this sort of stuff you can already do if you have a connection to Google.

Oh, yeah. Ergo, you saw it 15 fucking years ago. What are you talking about? No, no. Sean. No, but that you... Oh, you mean Peter Jamby? No, that you can... If you think of a response... He got it. Okay. He heard you. Answered it. Let me move on to another question. Something current. So, you know, we were talking earlier about...

We're gonna kiss it out later. What do you mean when you read out people's brain waves? Well, I found it fascinating. I saw the segment where the guy was thinking an answer and it popped up on the screen. Again, the third time is really clear. Hey, Sean, your best robot voice. Go. Quick. He's gonna sing Katy Perry. Watch. Do you want to play a game of war? No. Do you want to play tic-tac-toe? Another 15-year-old...

Do you want to play a game? That's it. He's teeing himself up. This is the voiceover artist here. Let's have it. Do you want to play a game? No matter how I say it, it's just the gayest computer ever. I'm sorry, Dave. I can't do that. Do you want to play a game? Like, yeah. Okay, yeah, sorry. And what's your best computer voice?

I'm sorry, Dave. I just love that house scene again because it points out what you really should worry about. But coming back to it, there are two things. One, again, to summarize, we need to make sure the machines can actually align their goals with ours because if you have something much smarter than us that has other goals, we're screwed.

It's like playing chess today against a computer that has a goal to win when you have... It's no fun. What's the next... Because when somebody says AI, all I picture are those mechanical dogs that walk around. And they don't really do anything. They're just like, oh, look, we invented a robot dog, and it doesn't really do anything. So I want to know, like, what's the next thing that we can use that's... Is it mechanical cats? Mechanical cats?

Yeah, no, you know what I mean? Like, what's the-- Oh, imagine the Mechanical Cats doing cats. Oh, my God, that would be amazing. Sorry, we'll get back-- You're saying what's the next thing we can look forward to enjoying out of science? Yeah, like, in the pop sense. Okay, in the pop sense. So first of all, so just to finish off what you were talking about,

You know, Hollywood, it makes us associate AI so much with robots and the Boston Dynamics dogs. You should check them dancing, by the way, if you haven't. Dancing dogs? The dancing Boston Dynamics robots. Super cool. But the biggest impact right now AI is having is actually not robots at all. It's just software, right?

I mentioned this improvethenews.org project we're doing, which is just a little academic thing. But if you think about social media, that's all about AI. One of the reasons people hate each other so much more in America now is the effect of AI. Not AI that had the goal of making people evil, but just had the goal of making people watch as many ads as possible. But the AI was so smart, so

good at figuring out how to manipulate people into watching ads, that it realized the best way to do it is to make them really angry and keep showing them more and more extreme stuff, until people were so angry the country would fragment. And if you get really, really pissed off, you then research that thing even more, and then you get more ads and all that stuff. Boom. Yeah. So that's one: all of social media, media in general. Another one is, let's talk about some positive things, because AI, intelligence, right?

It's human intelligence that's given us everything we like about civilization. So clearly, if we can amplify it with artificial intelligence, we can use it to solve all sorts of problems we're stumped on now, like cancer and lifting everybody out of poverty and so on. Will there ever be a... Go ahead, sorry. So I was just going to say, another pure software thing that has nothing to do with bots is...

use AI for doing better science, better medical research, for example. I was just going to ask about that. So is there any, like, I read a long time ago about, like, you know, like you put a locator chip in your dog or your cat, whatever. Can you, I heard that they might be making, like, a chip that has all your medical files and you put it under your skin so you can just scan it. Because filling out all those fucking forms over and over...

It's like, I just filled out the form and now you're asking me the questions all over again. Read the thing I just spent 20 minutes on. Stop talking to the doctor once a week. I'm dying. Right, I'm like, what? And it's like, what's your name? I just filled three forms out that say what my name is. I think I'm personally going to pass on that chip implant and just ask the hospital to have a less stone-age computer system. But seriously, of course. Did you hear about Sean showing up to doctor's appointments late so that he doesn't have to wait? LAUGHTER

It's riveting. I don't know if that's AI or whatever. Something huge that happened this year, for example, is that biologists have spent 50 years trying to figure out, just from the DNA, what the shapes of the different proteins our bodies make are going to turn out to be. It's called the protein folding problem. And then Google DeepMind solved it. No way. Yeah, with an AI. And now you can develop medicines faster. So this is a fantastic example, I think, of AI for good. Another one.

But then the robots that are probably going to have the biggest effect, I think, on us and the job market the next five years are probably cars, actually, just autonomous vehicles. That's pretty cool. I'm worried about everybody with their cars and automatic driving or whatever, and then they show up, and then people are just going to show up at the valet dead in their car.

It's gonna be really... You know what I mean? Like, people are gonna get in their car and then they're gonna be like... The pizza guy shows up and he's like, slumped over. - You open that... - Oh, fuck! - They'll open the door and just... - Yeah, just out. You know what I mean? Like, that's... That's what I'm worried about. Now, with the combination of your... of what you know about computers, what you know about space, what you know about intelligence,

I know, I know what you want to know, honey. - Go for you, okay. - Guess what Sean wants to know. He wants to know if there's aliens, but we're not gonna ask him that. - No, not that. - What I want-- - I can say something about aliens. - What I want to know from you is, is it, based on your knowledge of all those areas, does it seem possible to you that there is the requisite amount of intelligence and technology at a place other than Earth? - Ooh, see, he asks it a different way than I would.

Well, of course it's possible. Although, you know, my guess based on spending a lot of years dealing with telescopes and thinking about these things is that when we, for the first time, get way more advanced technological intelligence showing up on this planet, it's probably going to be in our lifetime, and it's probably not going to come from outer space. It's going to be something we built.

What do you mean? What do you mean? So you're saying that we're building something to bring them here? What are you talking about? No, I mean, we're basically... The goal of artificial intelligence has always been to build stuff that's way smarter than us. Yeah, they just don't have cars. They need to get here. And they're all going to be dead when they get here. No, wait, keep going. No, but really, if

If you basically build a new species, a new life form that's way, way smarter than us, right? That's alien. That's alien. It's incredibly alien. It's much more different from us than a chipmunk or a tiger is, in that it really has nothing in common with our evolutionary history. Nothing. It doesn't necessarily care even about food or reproducing itself. So, yeah.

If we do that, it's going to be just as big an event on Earth as if aliens show up. And that's why I'm kind of weirded out that people talk so little about it. That's what I'm saying. As despicable as human beings are, all of us, including me, everybody, we're just so despicable that if

the announcement came that, like, oh my God, there are aliens from another planet visiting our planet, people would be like, oh, okay. I gotta check my Instagram. Like, I don't think people would be like... based on that. It seems to me that what you would want to do is make sure that somehow built into all this stuff is some kind of a kill switch,

and that the wise men who are... And women. You know what I mean, but like a group, right, probably made up of you and your other colleagues, male or female, from around the world that are the leading scientists in this area would get together on some encrypted platform and say, let's make sure we, only us five, know about this one thing that we could press...

to shut all these AI robots down that we've created on Earth. I won't tell anybody. All four kill switches. That's what I'd do. I'd say, let's go sidebar real quick. I know you would. This sounds a little too elitist for me because the idea that somehow, you know, a bunch of the dudes who are like,

know a lot about AI, should decide humanity's future. I want it in your hands. That's how it is now. If the rest of us, everybody doesn't get engaged in these things, who's going to make all these decisions? It's going to be probably a bunch of dudes in some back room who have not been elected. Are people who are super AI nerds like specialists in human happiness?

Only they would know the ramifications of getting in the wrong hands. But are they the ones who should be deciding what makes you choose the bad or the happy? Yes, not some elected weirdo. Yeah. Look what happens.

I don't trust particularly elected people, but I also don't trust tech nerds with being experts in psychology and what... Then who can we trust? Then we've got to shut it all down. Us! Everybody! You can't trust it to the nerds! What are you talking about? We should all talk about it. So let me ask you guys. Suppose you have a magic wand that you can wave, okay, and create this future...

35 years from now, okay, when there is this very advanced AI and you get to decide how the planet is organized, what it's used for, and what it's not used for. So it's not gonna be your standard dystopian Hollywood flick. What is this future like? What do you want it to be like? - Well, that makes me wanna take a nap.

Well, you would... Yes, and what do you want it to be like? I would want all the technological advances that we have to go to the bettering of the living experience, which means health and kindness and all that stuff. You know, you point it all in that direction, and then good decisions come from that? I don't know. Yes.

Brave new world. Well, that sounds great. A frighteningly tepid response. Let's compare it with what we're mostly spending AI money on now, right?

So a massive amount of money is spent on advertising, which ends up making teenage girls anorexic. And then we have a massive amount of-- an enormous amount of money now building robots to kill people. For the first time, and they were used in Libya last year, they hunted down these fleeing people and just killed them because the robots decided that they were probably bad guys. And I think-- I did not know about that. I don't know.

Did you guys all know about that? About the robots that hunted the people down? Yeah, let's not gloss over that. What happened? This is the shit that I'm talking about, man. So it actually has some dark comedic value, I think. Yeah, it sounds hilarious. The current policy, actually, of the U.S. government on killer robots and slaughter bots is... Slaughter bots.

Three things. First of all, the US says, you know, these are the... That's like murder hornets. First of all, we're saying, this is nasty stuff that we don't ever want to delegate kill decisions to machines, so we're not going to do it. Second,

Second, it's going to have a decisive impact on the battlefield. And third, we're going to reserve the right for all other countries to build them and export them to whoever they want. So there was this Turkish company that decided to sell them to Libya against the arms embargo, and that's why they hunted these people down. We went, like, in a really short span, we went from replicating sheep to slaughter bots. Like, it seems like...

That happened really quickly, man. So, is there... We were talking to a guest earlier today about time travel. And now, so it's on our minds. And we do have to ask you, like we did before, did you time travel here? I did. You did? Is there any chance

And I won't bore you with the same question that I asked that astronomer that we had about the mirrors. Neil deGrasse Tyson. It was a real highlight. But in the... I'm going to try here. Will said, Neil deGrasse Tyson was on, and Jason asked a really long question about time travel, and Will said, hey, do you think we could put enough mirrors and go travel back in time to the beginning of Jason's question? And it's a valid... It's hard.

- It's a valid... - Okay, so the sun... The light we get from the sun has been traveling seven minutes. - Eight minutes. - Eight minutes. Okay. So we're basically feeling something that's eight minutes old. - We're back on this. - That's right. - Right? Okay. - What the... So isn't there a way to have a mirror...

that creates... Anyway. Yes, actually, there is. Well, hang on. There is. He says yes. There is. He likes my thinking. Nature already built one for us, actually. In the middle of our galaxy, there's this monster black hole that weighs about 4 million times as much as the sun. And it's black. Oh, that's mine. No, it's a black hole. And if you look at it really carefully...

light that went from you actually was bent by its gravity so much that it comes back on the other side of the black hole, like a mirror. So if you looked with a really good telescope, in principle, you could see your own reflection, except so long ago that you weren't born yet. Are you serious? And now, a word from our sponsor. And now, back to the show.

So that's basically time travel. In other words, these telescopes, like the one we just launched off, they're looking so far back, they might actually see the Big Bang at some point or something. We're seeing the galaxy at an earlier stage than us. So eventually, if you get a telescope strong enough, you could see the start of Earth.

Potentially, so it all came out of the big bang. We all took mushrooms before. Oh, that's just funny. Somewhere in there, is there an answer about the possibility of time travel? Yeah, so this is a kind of see-but-not-touch time travel. The sky is a time machine, just like that. You see me three nanoseconds ago, you see the sun eight minutes ago, and you see the stars at night, if it's clear, as they were...

So long ago that people over there looking at us would see maybe the Boston Tea Party and we can see things that happened 13, over 13 billion years ago. Right. You can also travel forward in time for real. No bullshit, real time travel. You go to this black hole here. That's mine. I was told actually when I moved to the US that in America possession is 95% of the law. I think so. Is that true? Yeah.

You know what? Now it's yours. So anyway, if you just orbit around this black hole really close, which you can actually do. I give this as a homework problem to my MIT students. Then your time will actually slow down so much that if he's on Skype with you, you'll be hearing him go like, hello, I'm here. And then you're going to hear him say, oh my god, I'm so worried about what's going on in that area. Well, that's accurate. Because--

Your times were actually running at different rates. And then when you come back, you'd just be like, "What? You look so good, so youthful." 'Cause you're actually younger than you would have been otherwise. So are we gonna be alive when we start to see any of this stuff that's gonna really blow our minds? Yeah, because... And Mars. Are we living on Mars? Go.
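The slowed-down clock described above can be put in numbers with the gravitational time-dilation factor for a static observer hovering near a non-spinning black hole, sqrt(1 - r_s/r). The mass and hovering radius below are assumed round values for illustration; a real close orbit needs the full relativistic treatment:

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                 # speed of light, m/s
SUN_MASS = 1.989e30         # kg

def schwarzschild_radius(mass_kg):
    # Event-horizon radius of a non-spinning (Schwarzschild) black hole.
    return 2 * G * mass_kg / C**2

def dilation_factor(mass_kg, r_meters):
    # dtau/dt for a static observer hovering at radius r (Schwarzschild metric).
    return math.sqrt(1 - schwarzschild_radius(mass_kg) / r_meters)

# The galactic-center black hole, roughly 4 million solar masses.
m = 4.0e6 * SUN_MASS
r_s = schwarzschild_radius(m)

# Hovering at 1.1 Schwarzschild radii, your clock runs at roughly 30%
# of the rate of a faraway friend's clock.
factor = dilation_factor(m, 1.1 * r_s)
print(f"r_s = {r_s:.2e} m, clock rate = {factor:.2f}")
```

So the faraway friend ages about three years for every year you spend hovering there, which is why you come back younger than you would have been otherwise.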

So this is the upside of artificial intelligence. - Throw that into this. - That's not a follow-up. It's a different subject. By the way, you mentioned before Boston Tea Party. Too soon, dude. This is the upside of artificial intelligence. It's just for us. Too soon, huh? This is the what? This is the upside of artificial intelligence. Because either we can use it to go extinct in our lifetime, or we can use it to bring all these awesome things about in our lifetime. We used to think, oh, this is going to take 10,000 years to get, like, the sci-fi novels, because we humans have to figure out all the tech ourselves. No.

If we can build this incredibly advanced AI, then build more advanced AI, et cetera, et cetera.

we might be able to build this tech, you know, 30, 40 years from now. Right. And suddenly we're not limited by our own pace of developing tech. We just go, "Boom!" And we're limited by the laws of physics. Well, and by the laws of ethics. Like, if just because we can, should we? Like, how do we know when we as a society are mature enough to handle some of the technology that we can access? I think going and having some fun, joyriding around black holes,

is ethically okay. It's my inner nerd speaking here. As long as you don't force other people to go with you. But on the other hand... That's one of the... Don't be a nerd. We're just going to the black hole. Like, we're not forcing, but it is peer pressure. That's one of the questions that they asked in Jurassic Park was just... Oh, good, Jurassic Park, good. Yes, I'm in. I'm in.

Just because you can create this island with these dinosaurs, should you? Now, based on the science of that, the amber that was frozen in there with the DNA of the dinosaurs, is that real?

Is that real? Well, now we have my friend George Church down at Harvard here. He's talking about already bringing back the mammoth by just taking the DNA, assembling it, error-correcting it, then basically DNA-printing the mammoth genome, and boom, mammoth. We can do a lot of these things.

Leaving... We can come back to the ethical questions. Who's stopping him? Exactly. Who's... What's the... That's me. What's his name again? George Church. What's the... To put him on the show. What's the council that's going to say yes or no? Let me just say a bigger thing first, though, just to get the controversies out of the way, you know. I think we humans...

To really get the ethical decisions right and forbid some dumb stuff, you have to remember how much upside there is also. We're living on this little spinning ball in space with almost 8 billion people on it, and we've been spending so many years killing each other over a little bit more sand here and a little more forest there.

We're in this huge universe, right, that we thought was off limits. Well, with AI, it could be within reach. We could go to the Alpha Centauri system in a lifetime. We could have a future where life is flourishing in our galaxy and in other galaxies, where there's just such an amazing abundance that people are going to be wondering, like, why?

Why did these guys futz around for so many years on this little planet and fight, squabble about breadcrumbs instead of going out here? Most of this universe, despite all the Star Trek episodes out there, no offense, so far really doesn't seem to have woken up and come alive in any major way. And I feel that we humans...

have a moral responsibility to see if we can help life, our universe wake up some more and help life spread. But what if we run out of time with our use on this planet because of environment where we don't... I'm reading your mind. Yeah, we're getting to Mars. So can we point the AI to our challenges here regarding the environment, fix that real quick, and then we can explore everywhere else? That's right.

I think we need to fix things here in parallel. The problem, the reason that the rainforest is partly gone, the reason we're messing up our climate and so many other things isn't because we didn't know 10 years ago what to do about it. It's because we've kind of already built another kind of AI. These very powerful systems,

corporations, governments, etc. that have goals that aren't so aligned with the rainforest and maybe the overall goals of... If we can use the AI to tell them how they can make more profit doing things that don't kill the earth, then they'll stop chopping down the forest. Well, maybe we should take a bigger step back. The whole point of having...

You've got stock in Exxon, don't you? He's not comfortable answering this. My undergrad was in economics, so I'm very sympathetic to the free market, to doing things more efficiently. But the whole point of the free market is that you should...

get done efficiently things that you want to get done. And then you should have, of course, some guidelines. That's why we decided to ban child labor in the US. That's why we decided to invent the weekend. So you know-- Wait, you don't like the weekend? I think you should change-- But right now, if you create something, whether it be

a super powerful dictatorship or a tobacco company that tries to convince you that smoking doesn't cause cancer or whatever. It has its own goals and it's going to act. It's good to think of these things a little bit like an AI, even though it's not made out of robots, it's made of people, because there's no person in a tobacco company that can single-handedly change its goals, right? If the CEO decides to stop selling cigarettes, he's just going to get fired, right? So

We should start thinking about how we align the incentives of all the companies. I want to keep private companies. Align the incentives of people, the incentives of companies, the incentives of politicians with the incentives of humanity itself, to get what you were asking for, you know, a society in the future where people are happy. The change is more cultural rather than scientific, if you will. Yeah, although you do need to geek out a lot about the whole business with incentives. Like, why did we invent the legal system in the U.S.? Well, because...

We realize it's not so smart that people always kill each other every time they get into a squabble about a hot dog, right? Right, create consequences. So you change consequences and now they'll think twice and they'll just punch each other instead or settle it some other way. We...

Alignment is kind of the big slogan a lot of us nerds have for this. You want to align the incentives, not just the machines, but also the organizations with what's actually good for humanity. And we're in this unfortunate situation now where whenever an entity gets too powerful, it doesn't have to be a machine or a dictator, it could even be a company, that they start to now like...

take over whoever was supposed to regulate them and turn them into like a rubber stamp, then they're certainly not going to be so aligned with what's good for America anymore, or good for humanity. And this problem...

We cannot wait for AI to solve it. We have to start solving that in the meantime. Amen, but you know, I mean, that's been the modus operandi up till now and we're running out of time and people are taking their profits because they figure they're going to be dead before the ramifications of it really... So I think the computers have to help us out. Yes, yes, absolutely. So this is why...

I'm so into this AI empowerment thing, why I want to think about how can we use AI and put it into the hands of people so that they can very easily catch other powerful entities that are trying to screw them over. It's a way of using technology to strengthen democracy. Are we going to live on the moon at all?

You want to? Yeah. Is that possible? Are we planning on that? Would you want to, though? Can you believe that? Do you want to? Yeah, I would. I would totally live there. All of these things are certainly possible. I'm very much of the opinion that it's easier to make a really comfortable and pleasant life on this planet. So I would also like to make sure we don't ruin it. Is there one project that you're working on right now, just one that you feel extremely passionate about right now that you could share with us? It's actually improvethenews.org, this thing I mentioned earlier. Yeah, yeah.

It's called improvenews.org. Improvethenews.org. It's just this free little news aggregator, but it's all powered by machine learning, so that's why it can actually read 5,000 news articles every day, which I can't. And then what we're doing is instead of just saying, okay, today we have a lot of news sites. You can go there and read about all the good things that Democrats have done and all the bad things Republicans have done. And then there are other ones.

where you can read about all the great things that Republicans have done and the bad things Democrats have done. With this one, the AI figures out which articles are about the same story. Maybe it finds now 62 things about the new US national debt passing 30 trillion or whatever. And then it's like, OK.
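That grouping step, deciding which of the day's articles cover the same story, can be sketched with a toy bag-of-words similarity. Everything here (the headlines, the threshold, the greedy strategy) is an illustrative assumption, not how improvethenews.org actually works:

```python
import math
import re
from collections import Counter

def tokens(text):
    # Lowercase word counts; a crude stand-in for real text preprocessing.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_headlines(headlines, threshold=0.4):
    # Greedy single-pass clustering: attach each headline to the first
    # cluster whose seed headline is similar enough, else start a new cluster.
    clusters = []
    for h in headlines:
        vec = tokens(h)
        for c in clusters:
            if cosine(vec, c["seed"]) >= threshold:
                c["members"].append(h)
                break
        else:
            clusters.append({"seed": vec, "members": [h]})
    return [c["members"] for c in clusters]

headlines = [
    "US national debt passes 30 trillion dollars",
    "National debt in the US tops 30 trillion",
    "New telescope sends back first images",
]
groups = cluster_headlines(headlines)
print(groups)  # the two debt headlines land in one group
```

Once the stories are grouped, the facts-versus-narratives split described next operates within each group.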

Then you can come in and say, okay, here are the facts that all the articles agree on. Boom, boom, boom. If you're a fact kind of guy, you can now click away and go to the next story. But if you want to know all the narratives, it separates out. Here is this narrative, that narrative. Does it have photos? Would it have a photo of Will last January 6th?

You mean in the capital on that photo? No, wait, with the goggles and the thing? There's no way you're getting that photo. How much will you pay me again? Yeah. What's your best Wordle score? Go. And be careful. He got an unbeatable score today. Got it in two, this guy. Not bad. Yeah. Not bad. That sounds amazing. But it's so exciting. Also, just all the emails you get from people. Because I think...

Yeah, bits are free. You can give them away to the world. And AI sounds fancy, but it's just code. Yeah, yeah, yeah. Well, listen, I want you with a fresh mind tomorrow when you get back at it, so I don't want you to stay up any later today. Thank you for joining us. Do you guys feel a little smarter? Yes, I hope so. A little bit smarter? Amazing. I feel smarter. I definitely do. Please say thank you to Max. Thank you, Max. Thank you. Thank you very, very much, buddy. Here, pal.

Thank you very much. Wow! Max! Thank you, Max! Now here's the thing. Now how... I... Go ahead. How much dumber do you think you are than him? Like on an IQ score, what do you think his score is versus yours? He reminded me how much smarter I am than you, which was great. That's fair. Actually, I feel very buoyed by that whole experience. Do you think it's double his intelligence over mine? His over mine? Over mine.

Oh, easily. No, no, no, no. I mean, he's just, he has a very big brain, obviously. I could talk to him for hours. I don't know that he would listen to me for more than five minutes, but I could talk to him for hours. I love all of the... Fantastic guests. Like, right up my alley. Right up my alley. And, and...

I think that if we all spent more time thinking about that kind of stuff, just a little bit, that maybe we could get around to solving some big issues tonight. Let's solve it tonight, guys. Everybody huddle up.

That was so cool. Thank you. I know you, we all kind of share, we love all of our guests. Those are nice pops because there's stuff that we don't usually cover on the podcast and I'm just repeating myself but I just love that stuff. I could ask him a million more questions. Well, it's the original conceit of this thing, ergo the title. We thought we'd, you know, bring people on that can educate us

a little bit more on things that we don't know about. We happen to get lazy and ask some of our famous fancy friends to come on. This is a real treat to be able to access these, you know, big, big thinkers in this incredible town. So, thank you for having us. And now, like, it's sort of, it's...

incumbent upon us to really kind of do something about it. We can't sit around all day in our pajamas, you know, and in our slippers, you know what I mean? Or then just go to the golf course and then get in our Teslas. We have to. Right? Don't you think that's important for us? And I don't want to single anybody out. I will never be one of those people. Yeah.

They're ruining things. But, no, it is true. Thank you for, you've educated us a little bit more. It's pretty rad. And I think, you know, I could talk to him, I want to talk to him about the Webb Telescope because, you know, those kinds of things. Oh, God, here comes a bye, everybody. Can't you feel it when he starts to ramp up the engine? Sean, if you don't land it, you can't do it. No.

You just got to get into it more subtly. I can smell you a mile away. I'm just saying, like, a telescope like that is much better than any, you know, thing like this. What are these called? Oh, those are... Are you kidding? Bye, Michael! Bye, Michael! Thank you, Boston! Thank you so much! Smart. Live. Smart. Live.

SmartLess is 100% organic and artisanally handcrafted by Michael Grant Terry, Rob Armjarv, and Bennett Barbico. SmartLess. If you like SmartLess, you can listen early and ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts. Before you go, tell us about yourself by filling out a short survey at wondery.com slash survey.