
Episode #185 ... Should we prepare for an AI revolution?

Publish Date: 2023/8/10

Philosophize This!


Transcript

Hello everyone! I'm Stephen West. This is Philosophize This. Thanks to everyone who supports the show on Patreon: Patreon.com slash Philosophize This. The show's on Instagram at Philosophize This Podcast, Twitter at IamStephenWest. Posted a reading list for Simone Weil on there this week. Gonna start doing posts for individual philosophers that have really impacted my thinking over the years, trying to give people a roadmap for taking a journey through a particular philosopher's work. So if that's something you want, check it out on there, I guess.

That said, I have an announcement. Hardcore fans of the show will know that over the last few years, this podcast has sometimes only had one episode per month. And as some of you know, it really hit me a few weeks ago how much good this podcast could be doing if I was able to do more of it. I know none of you are going to be complaining about more episodes of the show. Long story short, the way I can do it is if I hire somebody to do a lot of the back-end work of the show that takes up most of my time. And the way we can do that, just with how the economy's been the last year-plus,

is by running some ads. The good news is that this now makes it possible to produce three to four times more of these episodes for you. You have no idea the amount of time I would spend in navigating technical stuff that I have absolutely zero talent for. Patreon subscribers at any level will always have an ad-free experience of the show. Check out your account on Patreon for a Patreon-only RSS feed. And for everybody supporting the show by supporting sponsors, I'll make sure to express my gratitude to you every episode, but thank you right now.

Last thing I want to say is that I'd never talk about a sponsor on this show if I wasn't already using the thing they're selling and having it add value to my life. So if one of these sponsors is doing something that you want to try out, thanks in advance for using the promo codes of this show. It allows the podcast to keep going. You guys know how it works. It's the only thing that makes them want to advertise again. So before we get into the episode, here are the sponsors for this week. This episode is brought to you by BetterHelp. BetterHelp is an online service that helps to match people like you with the right licensed therapist.

I was saying to somebody on an email the other day that philosophy, part of what we like about it is that it is fundamentally destabilizing. It destabilizes rigid ways of thinking about reality. It can prevent you from thinking about stuff too narrowly and getting stuck in spots in life that you end up sitting in for years. That's part of what we love about it.

And if you're a fan of philosophy, something you always got to be careful about is using philosophy responsibly. Sometimes you got to know when you've been destabilizing your worldview a bit too much. Sometimes you got to know when to take a break. And if you love philosophy, you have to find a way to recenter yourself after shaking things up all the time. Therapy has been that for me for a very long time. Talking to somebody in a therapy setting is something you can do for yourself and your family that truly can change your life.

It did for me, at least. So if you're thinking about trying therapy or starting back up after taking a break, but you're dreading going down to some stranger's office and staring at a box of tissues for an hour, BetterHelp is entirely online, designed to be convenient, flexible, suited to your schedule. Just fill out a brief questionnaire to get matched with a licensed therapist and switch therapists anytime for no additional charge. Let therapy be your map with BetterHelp.

Visit BetterHelp.com slash PHILTHIS today to get 10% off your first month. P-H-I-L-T-H-I-S. That's BetterHelp, H-E-L-P, dot com slash PHILTHIS. And now for the last sponsor of the episode today, Element. Philosophers don't agree on much. Okay, in fact, they usually disagree about everything.

But one thing I've never seen them disagree on is whether or not people should be hydrated. They just don't argue about it, turns out. And part of being hydrated is replenishing electrolytes after you sweat. I'm gonna leave you to fill in your own reasons for wanting to replenish electrolytes. For me, it's exercise at this mid-30s stage in my life. I've genuinely tried every flavor of these salts that Element sells, and it's just what I drink before my coffee every morning now. It tastes great, it's salty, not sweet,

But it feels great, because I know I'm getting all them sweet, sweet electrolytes. 1,000 milligrams of sodium, 200 milligrams of potassium, 60 milligrams of magnesium. Electrolytes facilitate hundreds of different functions of the human body. Nerve impulses, hormonal regulation, nutrient absorption,

I mean, I don't know, for the roles I play in this world, philosophy person, dad, I tend to think a lot, and I never want to be tired all the time or feeling like I'm at 80%. Element makes me feel like I'm firing on all cylinders. So it's all risk-free. Right now, Element is offering a free sample pack with any purchase. That's eight single serving packets free with any Element order.

It's a great way to try out all eight flavors or share Element with a salty friend. Try it totally risk-free. If you don't like it, they'll give you your money back. No questions asked. Get yours at DrinkLMNT.com slash philo. P-H-I-L-O. This deal is only available through my link: go to D-R-I-N-K-L-M-N-T dot com slash philo. Thanks to everybody who tries these out as an alternative way to support the podcast. Now with that said, I hope you love the show today.

So to start out the podcast today, I want you to try to imagine a hypothetical person living in an agrarian society in Western Europe right before the Industrial Revolution starts to take off. Imagine someone who's a peasant farmer, deeply religious, someone who has children and a family. In other words, imagine someone who has no way of knowing, in terms of the Industrial Revolution, anything that's about to happen to their world. Now for the sake of the example, let's pretend this person and their family had access to books every now and again.

And I fully realize most peasant farmers wouldn't have been able to read back then. Bear with me, it's supposed to be a ridiculous example. But let's pretend this person had access to books. And let's say this person got their hot little hands on a new book that just came out, and that the book was something like a dystopian, futuristic sci-fi novel. Instead of the book being called 1984, let's pretend this one's called 2024. And as this person's reading it, the book paints a vivid picture of a future world that's controlled by machines.

Like any good sci-fi novel, everything starts out very innocently. Machines are seen as this amazing technology that's going to make the lives of people better. Scientific rationality applied to economics is going to make things possible for people that have never existed before. A montage begins in the book, telling the story over the decades. People are moving from farm work to working with these machines on the assembly lines of these factories. People are becoming consumers of the things that are being mass-produced by these machines.

It's all very fun to imagine for someone living in the 1700s. Until eventually, the people in the book start feeling the cultural effects of this massive change. They start to feel the alienation. Now the book goes on to talk about these philosophers that come along. Weird names on these guys. Nietzsche, Adorno, Weber. They start talking about this new feeling of malaise that starts to overtake the people living in these societies.

Fast forward to the beginning of the 2020s, and this book is telling a story about how the world is transformed into something, where these machines are not just economically entrenched, they have now managed to take control of the psychology of the population. And again, this is a sci-fi novel, so grain of salt here, but in this book, the life of the average person is spent consuming media chosen for them by machine algorithms.

Many of these people are addicted to this media in some capacity. Many of them suffer in their mental health because they spend their days being fed content that's optimized to get them upset and commenting on it. People doom scroll in this book. People get trapped in media echo chambers. Free speech and censorship become pressing issues in this society in the face of exactly how these machines are divvying out content.

And then in the book, right towards the end, there's a new invention that comes out. It's a new version of these machines. People are calling it generative AI. And then our peasant friend has to slam the book shut and stop reading, because their mom comes into the room and starts screaming at them. She says, are you wasting your time reading all that science fiction nonsense again? Go milk the cows.

Go pickle some stuff for the winter. I don't know what a mom would say to a kid back then. Point is, imagine she tells her kid to stop worrying about this fantasy world so much and to go out there and actually get serious about life. If you just read the headlines of the major newspapers in today's world, they will tell you that we are on the verge of an artificial intelligence revolution. It's not uncommon to hear people start to compare this revolution to the Industrial Revolution in terms of its potential impact on people's lives.

And what people are talking about when they say these sorts of things is not AGI like we covered last episode. This isn't about robots taking over the world. These people are simply talking about artificial intelligence as it exists right now. They say right now this technology puts us on the verge of a new technological revolution. But a skeptical person could say back to all that, how is anybody really thinking that way? I mean, have these people writing these articles even used this technology?

Look, I get it. If you're writing an article and you want to generate some excitement, fine. But what exactly is it you think this technology is going to disrupt right now? I've talked to this ChatGPT, asked it to write something. I mean, this thing is hallucinating...

Like it's on a giant hammer and sickle float at Burning Man. This thing will write a bedtime story for a six-year-old. It'll write a thank you letter to grandma for sending you $5 and a birthday card. What do these people think? This is going to disrupt the bedtime story industry? How is this in any way comparable to the Industrial Revolution?

And I think the most optimistic people on the other side of the argument would say that you're right, it is nothing like the Industrial Revolution, because AI has the ability to change things at a far greater level than the Industrial Revolution ever did. And to start to explain why, they may start with a little historical context on the state of AI right now, a story that begins in the year 2017.

See, because before 2017, artificial intelligence research was done differently than it is today. Back then, there were many different fragmented compartments that all tried to improve their AI research from within their particular field.

What you had was one group of brilliant people working on image recognition, another group of smart people working on conversational AI, another group maybe working on music-related AI. And if you were someone at the top levels of any one of these particular groups, you wouldn't be able to hang in discussions at the very top levels of any of these other groups. Everybody was more or less doing their own thing. But then in 2017, there was the emergence of something called a Transformer. Transformer is the T in the name ChatGPT, by the way.
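For the technically curious, the transformer under the hood is built around one core operation, scaled dot-product attention: every token scores its relevance to every other token and then takes a weighted average of their representations. Here's a minimal pure-Python sketch of just that one operation, with toy numbers nothing like a real model's scale:

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    then returns a weighted average of the corresponding values."""
    d = len(keys[0])  # dimensionality of the key vectors
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Toy self-attention: two "tokens" with 2-dimensional embeddings,
# where queries, keys, and values are all the same vectors.
q = [[1.0, 0.0], [0.0, 1.0]]
out = attention(q, q, q)
```

A real transformer learns these vectors, runs many attention heads in parallel, and stacks the block dozens of times, but this weighted-averaging step is the mathematical center, and it works the same whether the tokens are words, image patches, or audio frames, which is part of why the fields could unify around it.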

And the use of transformers as the engine of AI, coupled with a change in strategy that massively consolidated how natural language processing tasks were approached, allowed for these formerly compartmentalized fields to unify their efforts towards the development of AI in a way that is completely unprecedented. See, if before 2017 someone in image recognition came up with a breakthrough, that was generally just an advancement in image recognition.

But after 2017, in the fields using these transformers, which are almost all the ones you've heard about in the news lately, a breakthrough in one compartment of AI becomes a breakthrough in every compartment of AI. This is why, since 2017, improvement in artificial intelligence has skyrocketed. This is why the amount of money going into researching this stuff has skyrocketed. Which brings me back to the person that's skeptical of the technology and what it can possibly revolutionize as of now.

You know, the skeptic's obviously not saying that technology's incapable of revolutionizing the world at all. They're just saying, "Please show me the evidence of this AI disrupting anything, and I'll be on board with you." In fact, it seems that both sides can agree that the level of revolution that this technology is capable of directly corresponds to its capabilities.

And that as these capabilities improve and as people get better and better at using the technology, the more possible areas of human life we may start to see this tech bleed into. The more tasks these things are capable of replicating one-to-one that a human being is currently doing, the more impact we may see it have on replacing human beings. None of this seems too controversial to say on either side. Again, the skeptic would just say that the burden of proof is on the person claiming these things can replace human beings. So get to work.

Well, the optimistic person could say back to that, look, first things first, these artificial intelligences, whatever they are, they're not just writing bedtime stories and thank-you cards to grandma. Also, the application of this technology is not just creative either; you're thinking too narrowly. There are economists who say, just as the technology exists right now, that it has implications for 300 million jobs around the world.

And when you take that figure and you consider the rapid level of change that's gone on since the move to Transformers, the burden of proof is on me? No, the burden of proof is on you to explain why you don't think this tech is going to continue to improve and change more and more about the world. Now, most skeptics don't go that far. Most of them acknowledge that the technology is improving, but they ask, how is this any different than any other technology that's come out in the past?

People have literally said this about everything. Every new, exciting technology that has a little buzz around it is going to change the world. Oh my God, it's wonderful.

But what always happens is eventually it becomes integrated into people's lives. It becomes a subtle part of the landscape. And then the people that were hyping it up that can't seem to function if they're not hyping up something in life, those people just move on to the next technology and forget all about it. I mean, just in the last five years, it's been Web 3.0 and then it was crypto and then it's NFTs. Can't these people just relax on this stuff for two seconds? But is artificial intelligence different than those things?

Well, the only way we're going to find out is if we try to look at it in as unbiased a way as we possibly can. With the express intent to be able to look at it in a totally biased way after we're done with that.

But just to start out, try as hard as you can right now to not bring in any connotations about what artificial intelligence is. Because you either run the risk of bringing in what AI used to be prior to 2017, the NPC in the video game running into a box for 20 minutes, or you run the risk of bringing in AI religious fantasies from people LARPing in the woods with their friends. No, let's try it first to see it as generally as we possibly can, as simply a technology.

Let's examine it by considering the different affordances that it brings about. What does AI allow people to do now that they couldn't do before? And what areas of life does it prevent or make obsolete? Well, this is exactly what the thinkers Asa Raskin and Tristan Harris have been trying to do with AI for years now. And when considering this most recent advancement in the capabilities of AI, they give three criteria that you gotta consider if you're looking at any piece of disruptive technology.

They say the first thing you've got to acknowledge about a tech is one, that whenever you create a new technology, you bring about a new class of responsibilities along with it. Two, they say if that technology that's being introduced confers power, then naturally a race will begin to try to possess that power

And three, if there's no coordination by the people that are racing for it, that race will usually end in tragedy. They say we've already seen a version of this when it comes to our relationship to AI in an earlier form. To Asa Raskin and Tristan Harris, first contact with artificial intelligence was the way that it affected us through social media. See, up until very recently, the AI we all lived with was what you could call ranking artificial intelligence.

Meaning that it was people that produced the content. People made the articles, videos, social media posts. And then the AI would sift through that mountain of content and then rank it in some sort of way. Think of Google taking 10 million results and giving you the most relevant ones to you in three seconds. Think of your Facebook feed ranking and delivering the most likely piece of content that's going to keep you scrolling and clicking. This has been the type of artificial intelligence that we've all become familiar with.
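To make "ranking AI" concrete: at its simplest, the pattern is just scoring every candidate piece of human-made content with an engagement predictor and sorting. Here's a deliberately crude sketch; the scoring function is a hypothetical stand-in for the enormous trained models real platforms use:

```python
def predicted_engagement(post, user):
    """Stand-in for a trained model: a crude overlap score between
    a post's topics and this user's interest weights."""
    return sum(user.get(topic, 0.0) for topic in post["topics"])

def rank_feed(posts, user, top_k=3):
    # Rank all human-created posts by predicted engagement, highest first.
    return sorted(posts,
                  key=lambda p: predicted_engagement(p, user),
                  reverse=True)[:top_k]

user = {"politics": 0.9, "cats": 0.4}
posts = [
    {"id": 1, "topics": ["cats"]},
    {"id": 2, "topics": ["politics", "outrage"]},
    {"id": 3, "topics": ["gardening"]},
]
feed = rank_feed(posts, user)  # the politics/outrage post ranks first
```

Note what the system is optimizing for: not accuracy or well-being, just whichever existing content this particular user is most likely to engage with.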

And this ranking form of artificial intelligence completely changed people's lives. In many ways, we're still figuring out how to deal with all the impacts that it had. Think of the dystopian future laid out in the hypothetical sci-fi novel. Addiction, echo chambers, polarization, doom scrolling, censorship.

simply by ranking and distributing content that people were already making. Artificial intelligence as it existed at the time was capable of messing with people's mental health, massively influencing their worldview, and undermining major pieces of the democratic system in the process.

As Yuval Harari pointed out recently, in the United States, in what some may see as one of the most advanced information delivery machines in the world, simply with AI deciding which content people get to see over others, optimizing for engagement, think of all that happened. We can't agree on who won the last election. We can't agree on whether or not climate change is real, whether vaccines actually prevent illness.

Again, we're not talking about what the truth is here. We're talking about the level of disagreement about basic facts that the current way of doing things has managed to make possible. The reality is, you already live in a world where you can be sitting on a bus next to someone and they are effectively living in an entirely different universe than you are. And that is made possible simply by having this more basic form of artificial intelligence ranking human-created content.

And now comes the very recent innovation in the field of AI of what's called generative AI. This is a leap forward. And there's a lot of people out there who are saying that this particular leap forward is different than the previous ones. A lot of people predicting that if we're talking about this version of AI and whatever lies beyond this, that people in the future are going to look back on this time in history and think of there being a time before generative AI and a time after it.

So what is generative AI? Well, it's right there in the name, really. At its core, generative AI is about getting trained on masses of data, learning a probability distribution, and then generating new content that's similar to the content it was trained on. If it's ChatGPT, it'll produce similar text. If it's Stable Diffusion or Midjourney, it'll produce similar images. But it's important to note that it really isn't limited to just artistic stuff. With an API, this thing can generate a shopping list for you.

It can generate a list of the most relevant pieces of information from 100 different emails. The limits of the possibilities are truly unknown. And the big thing to take away from this is that the whole point of this leap forward in the technology is that AI can generate things now, not just rank them like in a search engine.
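That "learn a distribution, then sample new content from it" loop is easier to see in a toy than in a trillion-parameter model. Here's about the smallest generative model one can write, a character-level bigram sampler; it is many orders of magnitude simpler than anything like ChatGPT, but the shape of the idea is the same:

```python
import random
from collections import defaultdict

def train(corpus):
    """Learn a distribution: for each character, record which
    characters tend to follow it in the training data."""
    counts = defaultdict(list)
    for word in corpus:
        word = "^" + word + "$"  # start and end markers
        for a, b in zip(word, word[1:]):
            counts[a].append(b)
    return counts

def generate(counts, rng, max_len=12):
    """Sample new text, one character at a time, from the learned distribution."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(counts[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

model = train(["banana", "bandana", "cabana"])
word = generate(model, random.Random(0))  # a new string shaped like the training words
```

Run repeatedly, it spits out strings that were never in the training set but statistically resemble it. That's the whole trick, scaled up unimaginably, behind generating essays, images, and the rest.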

So then you gotta ask at that point, if something can generate things for next to zero cost, way faster and more efficiently than a person could ever hope to do, the question becomes not whether there's anything it can replace one-to-one that takes 100% of a human brain to accomplish. In other words, this thing doesn't need to be able to write the new works of Shakespeare. In order to be disruptive to the world as it is, this thing only needs to be able to do repetitive, data-intensive things that take 10% of a person's brain to do.

But this thing does them at zero cost with no wait time. Think of how word processing used to be. When your grandparents hit the workforce, being able to use the Microsoft Office suite was not a required skill for getting hired. But by the time they retired, along the way, they needed to become literate in Microsoft Word. The technology disrupted how they were doing their job. If, when Microsoft Word came around, your grandpa had refused to use it, had gone into his boss's office and said,

No, I'll work twice as hard as everyone else. I'm the fastest writer on planet Earth. I'll show you. I'll stay late. Please, please. The dude would get fired. Because no matter how fast of a writer you are, just between copying and pasting, formatting, the sharing of files, there was a world of word processing before digital computers and after digital computers.

Even a mediocre newspaper delivery person in a car can deliver newspapers five times faster than the best person on a bike. In other words, the technology raised the former standard for what the bare minimum was in terms of efficiency. And everybody else either had to play catch-up or get left behind. Now I don't want to focus too much on the economic side of this here. Fact is, there's a lot more to talk about with the impact of AI that doesn't have to do with people's jobs.

But the economic side of this is something that a lot of these optimistic people in the conversation are talking about. And people like Emad Mostaque, CEO of Stability AI, he has said that while there's all these people out there who may be worried about AI replacing human beings in the workforce, that's not necessarily true in the short term. He says AI won't replace people. People with AI will replace people.

Fellow optimists might say, take what was just said about Microsoft Word and apply the same thing that happened to the inefficiency of how word processing used to be and apply that to every single piece of your career where you do anything repetitive, data intensive, or time consuming. Think of your job right now, whatever it is.

Real question, they would ask. What percentage of your time at work is not you utilizing your full expert capacity on what you're good at, but is instead dealing with rudimentary, simple, time-consuming tasks that could be automated if AI improved even a little? How much of your job could be made more efficient just at where the technology is now?

Now, a lot of people don't work jobs that are on a computer. These people would say that doesn't mean they can't optimize certain aspects of their job with AI. And if they don't, their employer certainly will. And eventually, AI literacy will be mandatory, like Microsoft Office literacy.

See, this optimistic perspective just needs to be said at some point in this episode. There are voices in this conversation going on about generative AI that say, in a very motivational sort of way, that we are living in amazing, amazing times, man. Like, you have the opportunity to learn about this technology early on and then be at a huge advantage over other people around you as you watch this stuff restructure what life is for a human being.

They say there's a relative handful of people even thinking about AI right now. And these people think this is one of those areas where, in the next year or two of learning about this stuff, you could truly become part of the decision-making process of how to roll this stuff out, because you'd genuinely be one of the most educated people on it in the entire world. Just go to aidouchebag.com and get their five pillars to success.

And we're going to talk about the good effects this could potentially have on society. We're going to talk about all the bad. I just want to say again, before we get into this, my job here is to give you different takes. Your job is to figure out where you land on all this. It just seems tempting for people to fall into either that contrarian echo chamber camp where they're endlessly skeptical about AI, or to fall into the camp of the fanatic religious person that's committed to the computational theory of mind.

Just make sure as we talk about the good and the bad here that you try to decide what you think may be based in reality and when these people talking about it go off and start LARPing. Because there's definitely some LARPing going on. And I want to let them LARP. It's fun.

But I don't want to waste your time with too much of that, so here's some applications for the good it can do in the world that seem relatively reasonable. Really, all you gotta do if you want to find one is picture any area of society where one person's expertise isn't scalable to thousands of people, where then there's bottlenecks that are created in terms of access to those services. Generative AI, people say, can massively help with these sorts of things. So the obvious low-hanging fruit here is going to be healthcare and education.

The average doctor visit takes people two hours between scheduling, waiting in line, and transportation. And that is only because people are navigating a system where there's a small number of qualified doctors who need to be able to help everybody.

But in most of the sessions where these doctors are seeing people, the doctors aren't using 100% of their creative expertise. Depending on the type of doctor, they're looking at a pimple and telling people it's a pimple, not skin cancer. They're doing routine follow-ups. There are equivalents in the world of medicine of busy work. But imagine a world, these people say, where you could feed an AI your entire medical history on your phone, everything that goes into a chart.

And then imagine, in between episodes of your favorite TV show, you could be sitting on your couch, send it a picture of your pimple, which the AI would then compare to billions of other pimples it has in its pimple database, and it could diagnose that you have a pimple without taking up a spot in the hospital, without any feelings of embarrassment that often stop people from seeking out information. Imagine the countless potential applications in healthcare when it comes to diagnostics.

or sifting through mountains of data. And imagine the service being provided basically for free to people. This is something that an optimist in this conversation might say. Think also of how education could change. Think of just the applications in the area of tutoring when it comes to lower income kids that don't have access to great teachers or test prep.

This AI, this thing doesn't have to be teaching the highest levels of quantum mechanics; it's just teaching basic academics. And think about how a teacher's time usually has to be fragmented between students. It's not scalable. Class sizes get larger and larger, and then there's that one kid in the corner that's struggling, that needs the extra help.

Imagine one of these algorithms learning everything about that kid and then generating, in the generative AI sense, generating lessons for that kid specifically tailored to them. Knowing how they learn, knowing their biases, the way they get distracted, knowing what things are easy for them, what things are more difficult. Again, an optimist would say, imagine that being available to everybody at next to zero cost. What might the world start to look like?

Lots of people talking right now about how this could be applied to weather predictions, floods, earthquakes, hurricanes, when it comes to traffic predictions. I mean, imagine the increased levels of efficiency you could have if you had a person directing traffic at every streetlight in a city. Yeah, that'd be great, but we just can't do it right now because it's not scalable. But what if we could? Oh.

On that same note, people say, "How about farming?" It is not currently scalable for a farmer to go out into the fields like a scarecrow and just stare at every square foot of their crops. But imagine how this might change the world of irrigation and waste. Imagine how it maybe changes the entire way that we talk about the environment and what's acceptable in terms of how those resources are used. Lots of people talking about how much more efficient this makes drug discovery.

Just a few months ago with this technology, protein folding prediction work that would have taken people in a lab thousands of years took artificial intelligence only a couple of weeks. That actually happened. I'll save you some time here. Virtually any area of society where there's one of these bottlenecks when it comes to the scalability of someone's time, where the thing they're doing doesn't take all their expertise, these people would say generative AI is not far away from automating it.

And this extends to your personal life as well. As Yuval Harari asks, why would anyone sit around for an hour and read the newspaper drinking their coffee when they can just ask an oracle what the relevant stuff is that's going on in the world? Think of how this improves the life of someone who's lonely, someone who just wants someone to have a conversation with. Think of how this helps senior citizens. Think of how this changes intimacy. The point is, there's an optimistic way to be looking at all this, that this technology could fundamentally change the entire world.

It could make the human race so much more efficient and intelligence so scalable for the average person that it creates an economic abundance where nobody has to go without anything anymore. And in that world, these optimistic people say, what it is to be a human being would fundamentally change as well. It would go from having a mindset that's constantly focused on how do I acquire the means of survival today to provide for my family to more like, what do I want to do with my life?

Which, no doubt, would create new problems for people. The utopian LARPing starts to get pretty real right around here. All this stuff certainly needs to be said, okay, but it should also be said that for basically every one of these things we just talked about, where the technology could be used to produce something good, the same capabilities could be used equally effectively in a predatory way, in some other sector, some other application.

And then imagine the impacts of that being in the hands of virtually everyone for free. For example, the personalized education we just talked about. Wouldn't it be great if the AI could know everything about you and personalize lessons just for you? Wow.

Well, the flip side of that is to imagine an AI that knows everything about you and then uses all that information to sell you stuff. I mean, if you think it's weird now when an algorithm knows what to recommend for you to buy, imagine a world where these things know every interest that you have, every bias that you have, every fear that you have.

And not just when it comes to sales. Imagine something that's optimized to personally take advantage of your psychology in any way that it can. And something that can generate content that then reinforces that story that it's telling you.

Aza Raskin and Tristan Harris talk about a lot of things that could go wrong if we're not careful with this generative AI. But one of the scariest things they talk about is that when you consider the capabilities of this technology, the fake news stories, the deepfake videos, the bots that are able to take over comment sections and potentially influence elections, the big fear is that people will go from where they are now, where they have a hard time trusting sources, to not being able to trust sources at all.

In fact, it all gets pretty philosophical fast in a world like that, because how does anybody know whether they can trust anything about the state of the world they're receiving through content? How can anybody know whether they're ever talking to a real person? Hate to bring up old people in this episode again, but look, everybody knows what it's like to have a relative or someone you know show you a video of something crazy they saw on their phone the other day.

And to anyone with basic critical thinking skills who's been around the internet their whole life, to you, it's obvious what they're looking at is CGI. To you, this is CGI on the level of Star Wars: The Phantom Menace, 1999. But to them, they can't tell the difference. They think we're living in a world where they just interviewed an alien on the news last week.

What I'm saying is, we can all see how helpless you would be if you truly couldn't tell the difference between reality and a synthesized video. And it doesn't take much to imagine a world where this technology improves enough that you can't tell the difference either.

And short of giving up, short of just accepting you can't know anything about the world that isn't immediately in front of you, short of us creating counter technology of some sort, what could you do in a world where AI can generate content that manipulates your attention and then creates a false reality that imprisons you? It's already happening to a lot of people just through algorithms and their political biases.

You know, it just reminds you of Guy Debord and The Society of the Spectacle, and how he said all the way back in the 1960s that reality is no longer something that people are participating in. For thousands of years, what was relevant to you as a person was what was going on right around you that you could see. But he says now we live in a world where your social role is not to participate in reality anymore, but just to contemplate the spectacle that's given to you.

With generative AI in the mix, this would be that spectacle of his, taken to an exponentially greater level. Think of all the implications this could have on democratic systems in particular. The whole thing that makes modern democracies work is that we can rely on there being free citizens, educating themselves, having real discussions with each other, trying to come to a progressively more accurate understanding of the general will of the people.

But in a world where every piece of information people consume is being curated for them, and they can't trust anything that they read, and they can't even know if the person they're talking to online is a real person or just some persuasion bot deployed by an ideology, one that progressively gets to know you and convinces you to join its side, in that world, how does democracy even continue?

How do democratic systems built around checks and balances and slow-moving progress to safeguard against tyranny regulate something like generative AI that potentially changes faster than you can regulate it? There's a great metaphor I heard a long time ago from, of all people, Dan Carlin, the history podcaster. Shout out to the goat, by the way. He compares the state of modern democratic systems like the United States to a couple that buys a house. One weekend, they're out looking at their house, and they notice there's some mold growing on the outside of it. Now they have a problem to solve. They have mold.

So they start talking about options for how they're going to solve it. But they can't agree. They can't agree on a solution, so they just put it off until next weekend. And then next weekend, the two parties, as it were in this example, keep arguing about it. Heated arguments went on for years, decades even. Great ideas on both sides. But they eventually get to a place where the mold they were arguing about has gotten worse. Now it's spread into the foundation of the house.

Now, what originally was a problem that could have been solved in a weekend, now this is something that's going to require a radical change to be able to fix. One of the questions he asks is, how are these modern democracies that are designed to change slowly to prevent radical groups from overtaking them, how are they supposed to fix problems that are systemic? And now, how can something that is this slow moving ever work efficiently enough to keep up with something that improves as efficiently as generative AI?

Makes you wonder if democracy needs to be reoriented to be able to deal with the pace of change. Makes you wonder if people will lose faith and move more in the direction of electing powerful people that can implement sweeping change and regulations. Just something to keep your eye on in the coming years. But another thing that needs to be mentioned on the potential negative side of this is that the people on the optimistic side were very hopeful about the possibility of this improving people's efficiency and bringing about economic abundance that changes what it is to be a human.

But if we're being realistic, isn't it also possible that this just eliminates jobs? I mean, if one person can now do the work of three people by automating basic stuff with AI, what's more likely? Are companies just going to get three times as much work done? Or are they going to fire two-thirds of their workforce and have 10 people do the work of 30? See, that's the thing about this technology: it takes any task it's capable of disrupting and then amplifies it, good or bad.

Does this just take the flaws that are already present in the existing system and amplify those? Does this technology lead to a tech-driven, resource-abundant utopia, or does it just make the rich richer and the poor poorer? Do these super-rich just accept a super-high tax rate to be able to fund a universal basic income?

Would that even fix the problem? There's plenty more bad I could talk about here. Copyright infringement in training data, plagiarism. Algorithmic bias alone could be an entire episode, and how AI is less a technology than a social phenomenon when you consider just how many people are being affected by this stuff. Kate Crawford's work is pretty illuminating on the subject if you're looking for something cool to read.

But I don't personally want to sit here and speculate for hours about all the bad that could happen, because I think you guys get the point. As Aza Raskin and Tristan Harris say at the Center for Humane Technology, we have a window here where the ground rules for how this generative AI is going to be rolled out have not yet been established. They say we have to do something different than we did with our first contact with AI, back when it was solely ranking algorithms on social media. That didn't turn out too well. And consider the fact that as we're trying to figure out how to handle all this,

there are a handful of CEOs in the tech industry who are essentially deciding the fate of all of humanity. That has to change. Even the CEOs say it has to change. And if you care at all and you want to find out more about what you can do, you can go to the Center for Humane Technology on the internet, or you can look for it in your yellow pages. But to return back to our agrarian friend at the beginning of the episode, reading a sci-fi novel about the coming industrial revolution and the age of machines,

As we said at the beginning, there is no way someone could have known back then how alien life was going to look for someone living just two generations in the future. And it's important to document the benefits and the challenges people faced living in the world after the Industrial Revolution, which changed what it meant to be a human being in the Western world. And if we're on the verge of a revolution similar to the Third Industrial Revolution (the World Economic Forum says all this AI business is just part of what they're calling the Fourth Industrial Revolution), if we're on the verge of that,

Then as someone who's living through it, who's a fan of philosophy and thinks about large-scale historical trends and thinking, you find yourself in a pretty unique position here. You have the ability to prepare yourself for something like this. Most people aren't even thinking about this stuff. Most people don't have the luxury to. Most people are just doing their best, working hard, trying to earn a paycheck and spend some time with their family.

When a new AI feature shows up on an app on their phone, they just use it. They're not thinking, what does it mean for the future of life on planet Earth when I use this app?

No, that's just you. Okay, all of us listening to this, and this guy saying it, we are all truly, truly insufferable. Okay, together, we're insufferable. But it comes with some benefits to be insufferable sometimes, if you think about it. You know, as someone who's into philosophy like this, you are in a unique place where you can see the fall of Alexander the Great's empire before it actually happens. You know, a lot of people have been talking about Stoicism lately.

Well, Stoicism was a school of philosophy that emerged after the death of Alexander the Great. He dies, his empire is broken up into four giant pieces, those pieces get even more complicated, and the life of somebody in the Mediterranean Sea region at the time becomes one of nearly constant change. So what emerges are schools of thought that try to deal with that constant change, for Stoicism in particular, with your inability to control anything other than your response to things.

Point is, how do you deal with a world that is changing so fast you can never find your footing in it? How do you ever feel a strong sense of who you are? If generative AI continues to improve and then continues to disrupt and replace the skills of human beings, think about it. People are going to be living in a world where they live long enough to witness their own obsolescence.

Skills that people spent tens of thousands of hours developing, mastering. Things that make up a large part of their identity and how they fit into the world. People will live long enough to see an AI able to do it in under three seconds. You want to talk about a malaise that affected people after the Industrial Revolution.

You want to talk about philosophers like Nietzsche having the philosophical context to see it coming. Just imagine the malaise for people in this new world that may be created. And you are the philosopher that decides to be insufferable and educate yourself about this stuff. You're the one that can see it coming.

It already happens to old people in the world. God, what is it with old people this episode? Sorry. What I'm saying is old people will often see their skill sets become obsolete because of new technologies or just the world changing. But they've often worked for their entire career and can retire and ride off into the sunset. But what if you witness your own obsolescence in your mid-20s? What if somebody signs up for a four-year degree and by the time they're done getting it, the entire field has been replaced by AI?

That is going to happen to someone if this stuff keeps improving. The only question is, on the other side of that, will we be living in an economic utopia? A panopticon? Somewhere in the middle? I don't know. But what I do know is that philosophy can help people see it coming. See, we're not like the peasant farmers that lived before the Industrial Revolution. Just like the technology we're facing, we are something different. Thank you for listening. Talk to you next time.