Authored by AI

4. CLICK HERE IF SCARED

Episode Summary

In this episode we dive into our feelings surrounding the sudden acceleration in AI capabilities and what it means for us humans if it starts writing movies and tackling creative and cognitive tasks.

Episode Notes

In this episode we dive into our feelings surrounding the sudden acceleration in AI capabilities and what it means for us humans if it starts writing movies and tackling creative and cognitive tasks. https://authoredby.ai

Hosted by Stephen Follows and Eliel Camargo-Molina

Guests (in order of appearance): Corrie Vaus, Jake Vaus, Pei-Sze Chow, and Dan Rockmore

Edited by Jess Yung
Music by Eliel Camargo-Molina and GPT-3
Mastering by Adanze "Lady Ze" Unaegbu

Episode Transcription

Episode 4 - CLICK HERE IF SCARED

 

Corrie Vaus:

Hey, Jake.

Jake Vaus:

Hey mom.

Corrie Vaus:

How you doing?

Jake Vaus:

Good. How's it going?

Corrie Vaus:

It's good. Good. What's up?

Jake Vaus:

Not much. Just wanted to call you for a few things. I think I'm probably going to come home, either on Monday or Tuesday, if that works?

Corrie Vaus:

Perfect. Yeah. The sooner you can get here, the better. I think I'm going to make pumpkin bread today.

Jake Vaus:

Okay, cool.

Corrie Vaus:

What are you doing today?

Jake Vaus:

Not much. Just did a little writing this morning.

Corrie Vaus:

Good for you.

Jake Vaus:

But I talked to dad about something that I'm curious for your thoughts on. So Eli and I are making all those AI videos and that sort of thing, and then recently, this podcast reached out to me and they're kind of interested in doing cool stuff with AI. I don't know, it feels like a bunch of different opportunities are coming up about AI and that sort of thing. I don't know, Eli and I are being used a lot more by different people, are being seen by people. So it's coming to a point where I feel like I kind of have to figure out my ethics with it or whatnot, or what I believe. And I know that you, for a while, I don't know if... We haven't talked in a minute about it, but I know that you felt for a little while that you were worried about AI. So I'm just curious. I don't know.

Corrie Vaus:

Yeah, I would say worried is even a bit of an understatement.

Jake Vaus:

Why, do you think?

Corrie Vaus:

I just look to the future. So when I look to the future, and I don't know what that means, is that five years, 25 years, 100 years? I think, if I'm honest with you, my greatest fear is that we're going to outsmart ourselves to oblivion.

Jake Vaus:

What does that mean to you?

Corrie Vaus:

That means the end of us.

Jake Vaus:

Why?

Corrie Vaus:

Because we're going to hand our human intelligence off to something that's going to become smarter and faster than we are, that will eventually in some way, overwhelm us or turn on us. Now that's like worst case, end of the world, apocalypse kind of thing. But I think there's a very slippery slope between then and where we are right now. And so it's those one step at a time things that are concerning to me.

Jake Vaus:

So do you think... Because Eli and I are making scripts with AI and that sort of thing, and it's writing short films, which are decent, or sometimes they're bad, but sometimes they're decent. I don't know. What do you think is?

Corrie Vaus:

Well, I mean, the things that you've done with it that I've seen have been fascinating, and I'm amazed at how good they are, how good the content is. So that, in itself, is concerning to me. Because it's decent stuff. It works, with a little tweaking here and there. Anything like this can be used for good and it can be used for evil. So your intent, I know you and Eli and all of that, is for good, right? For entertainment, for experiment. Let's see where this goes. But it's the same thing that can be put in the hands of someone else and turned on us for evil.

Jake Vaus:

I was listening to this podcast and they were saying that there's the potential that once an AI is smart enough to just make itself smarter, it can become infinitely smarter than us. But on the other side of that, at least before that happened, there would probably be a few years where AI developments, I don't know, made things a lot better, cured diseases that we don't have the knowledge to cure now or that sort of thing.

Corrie Vaus:

So it'll cure us before it kills us, that's terrific.

Jake Vaus:

I guess what I'm wondering is, do you see any possibility for it to be a tool? Or do you think it's the Tower of Babel where it's like, don't mess-

Corrie Vaus:

Well, this is the pessimist in me. All right. I just don't believe that when something is created, that there's a way to contain it within good.

It's not that it won't happen, it will happen. We're not going to stop it. And I think, I'm not pointing the finger at you, but you're dabbling in it. In a way you're moving it forward, in a good way, in a fascinating way, but at the same time, you're moving it forward. There, sleep with that tonight.

Jake Vaus:

So what would you say to Eli and I about making... Do you think there's a way to do it responsibly where it's like, oh, I'm doing this in a way to shed light on how much it's capable of, rather than doing it to... Because I mean, Eli and I aren't AI engineers or anything like that.

Corrie Vaus:

Yeah, Jake, actually, that's really helpful. I think too, along with informing and educating and sharing with people, why not share the, "Hey, this could be scary and here's what they're talking about if..." I think that's a very responsible thing to do.

Jake Vaus:

Okay.

Stephen Follows:

So, your understanding of AI emotionally, your feeling about AI and the oncoming AI world: how has that changed in the last year, as you've been actively using it and talking to people? Do you find yourself more at ease, less at ease, fearful?

Eliel Camargo-Molina:

I think I should ask you that question. Because my answer is, it hasn't changed at all.

Stephen Follows:

That's fascinating. But is that because you were right at the beginning and you haven't changed or because you're not paying attention or because you have one opinion a year and damn it, you've already decided what it is?

Eliel Camargo-Molina:

No, to be fair, my specific opinion about the details of what AI is and isn't has changed quite a bit. But those are details; my emotional response to it has always been one of wonder, excitement, and real eagerness to see what this is going to do. But I am a very particular individual. I'm a theoretical physicist who grew up reading things about rockets and robots and science and the future. I had these tiny books when I was a kid that were just like, "The world in 2020." And it was a bunch of robots and people getting fake organs and-

Stephen Follows:

Well, you should be hugely disappointed then because you've been promised a huge amount.

Eliel Camargo-Molina:

I mean, we still have, what? Seven years, for the 2020s to finish? I don't know.

Stephen Follows:

You're such an optimist. I can't knock it. I'm trying to find a chink in your armor and you're like, "Nope, nope, nope."

Eliel Camargo-Molina:

No, but that's the thing, my perspective is like, okay, yes, AI, it's incredible. There are a lot of possible pitfalls, there are a lot of problems, but so have there been with any kind of technology that came and revolutionized what it means to be human. And yes, we have to be careful. And part of the reason I'm doing all of this with you is because I want to be part of the people being careful about it. And I want to play my part and give my grain of sand to help whatever AI is going to do be the most beneficial to everybody. And I'm not scared because I understand that, by definition, what makes us, I shouldn't say by definition, by my definition, what makes us human is that intricate connection between the technology we develop and how that changes what we are. I mean technology in the broadest sense, where I include things like language, the spoken word, the written word, clothing, civilization, as technologies.

I mean, there are countless examples, but one good one is electricity and the story of Tesla and Edison, AC versus DC. People weren't sure what electricity would do. There was a lot of interesting, I know at this point it's probably folklore, mysticism and myth around electricity and life. People doing experiments in the street, reviving animals, bringing them back to life, which was just applying electricity so that the muscles twitched, and people were like, "Oh my god." People really believed in and pursued the kind of pseudo-scientific, though at the time probably considered scientific, some of the science anyway, not all of it, project of trying to bring things back to life, of understanding the life force behind electricity, or simply what electricity could do. And there was a lot of that. And it wasn't centralized in anyone's hands. I mean, making electricity is not that hard. We understood it from way before; I don't know what the oldest battery is, but it's pretty old.

Stephen Follows:

Yeah, well, I think there is a Syrian... But there are certainly very, very old batteries. But that's a really good point actually, because that hits on the thing that's really interesting for me, which is barriers to entry. The barrier to entry to be involved in this AI revolution is internet access. That's it. Because you can sign up to OpenAI, they'll give you enough free credits, and you can keep getting free accounts if you want, but you can have conversations and make discoveries.
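
*NOTE: For a sense of how low that barrier sat, here is a minimal sketch using the OpenAI Python SDK of the GPT-3 era (pre-1.0). The model name, prompt, and key are illustrative placeholders, not anything used on the show.

```python
import openai  # pip install "openai<1.0", the SDK contemporary with GPT-3

openai.api_key = "YOUR_API_KEY"  # placeholder: sign-up came with free trial credits

# One completion call: the whole "conversation" is just this.
response = openai.Completion.create(
    model="text-davinci-003",  # an illustrative GPT-3-family model
    prompt="Write a logline for a short film written by an AI.",
    max_tokens=60,
    temperature=0.9,  # higher values give more surprising output
)

print(response.choices[0].text.strip())
```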

Now that's not the same as Google perhaps, with their compute power and their engineers. I'm not suggesting it's flat and everyone does the same. But I have definitely, in my mind, thinking this is one of the first technologies that... I mean, even when we were kids, we would be programming on computers, which were kind of open, but they were expensive items. But if your example of electricity, that's a good one, I haven't heard that. Then I guess if there is power coming through or it's easy to self-generate without having to... It's not a huge barrier to entry, then yeah, I guess there must have been lots of people mucking around on a Thursday evening with their own electricity to see what it can do.

That is what I'm thinking AI is at the moment, or some aspects of AI, and that's the bit I find slightly terrifying. But I guess you're right: every generation thinks they've invented sex, every generation thinks they've fixed war. And actually, it's just our version now.

Pei-Sze Chow:

There's always going to be an initial fear with these things and it just takes time for people to get used to it, to slowly accept it for what it is. A really nice calculator.

Stephen Follows:

Would you like to introduce yourself?

Pei-Sze Chow:

Yes. So I am Pei-Sze.

Stephen Follows:

That's Pei-Sze Chow.

Pei-Sze Chow:

Lovely to meet you. Thank you very much again.

Stephen Follows:

She's an assistant professor of media and culture at the University of Amsterdam.

Pei-Sze Chow:

And I primarily research the cinemas of small nations. And I look at questions of representation, identity, and not just onscreen, but also representation offscreen. So in terms of the cast, the crew, the stories. So now, where is AI in all of this? (laughs) Where is AI in all of this? What's a film scholar got to do with AI, really? Nothing. But this is where the connection is. So while I was in Denmark, I was attending lots of industry events. I was researching the people who work in film, television. So I was attending these industry events and it was there, around 2018, 2019 that this topic of AI in the film industry, or AI in the media industry more broadly, was being brought up at these events.

I think at the same time, yeah, 2018, '19, there were lots of headlines in the trade press, in Variety, in Forbes, with very doomsday-ish framing: AI is now going to decide what films to make. Very sensationalist. And so I noticed that these headlines were getting a bit more frequent. So this was a thing, and I thought, as a film scholar who is interested in representation, this is something that's totally relevant. There was a great intersection there between my interests and what's happening out there in the industry.

So my interest in this really, is how the use of these tools, these AI platforms and tools, how they impact the stories that are being created basically, and how that affects the way we see ourselves through film. So that was kind of my big question. How is this affecting film culture? So not just in terms of the stories it tells, but also in terms of the labor that it potentially replaces.

Stephen Follows:

I guess could you tell us the little story about ScriptBook first, the little case study of that?

Pei-Sze Chow:

Yeah, sure. So ScriptBook was, is... Is, was? Was a company that formed in around 2017, I believe, based in Antwerp in Belgium. Their product basically, was a platform in which you as the user, would upload a screenplay and their platform would sort of analyze the screenplay and produce all sorts of data analytics to show you what your screenplay is comprised of. So it breaks it down to the genre, it predicts the genre, it predicts the box office return, the potential box office performance. It also breaks down the gender makeup of the characters. It gives you a nice graph showing you the intensity of the action throughout the screenplay. It basically takes everything from the screenplay and produces nice pretty graphs, so that you can see through data visualizations, what your story looks like.

But I think the main thing that they were pushing was this predicted box office performance, not just in an overall sense; there was quite a fine level of granularity there. It gave you an idea of where the film would be positioned against other existing releases. So this was all the sort of data that would be useful for producers in making a decision as to whether this screenplay would be a box office smash hit or not. So, do I green-light the screenplay or not?

Cinelytic is a US based company that does something very similar. So their platform asks the user to input, I think up to 90 data fields, could be more, from the film's title to its budget, to its running length, genre, even suggested casting. And it crunches all of that information and again, gives you very nice graphs and predictions about its performance, predictions about how well the film would do if you replace a certain actor with another. Their selling point, very interesting, was that it's very accurate, super accurate.
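
*NOTE: As a purely hypothetical illustration of the kind of tool being described, here is a toy screenplay-analytics predictor in Python. Every field, multiplier, and number below is invented; ScriptBook's and Cinelytic's actual models were proprietary systems trained on historical data.

```python
from dataclasses import dataclass, replace

# Hypothetical project record. Real platforms reportedly took up to
# 90 such fields; these five are invented for illustration.
@dataclass
class FilmProject:
    title: str
    budget_usd: float
    runtime_min: int
    genre: str
    lead_actor: str

# Invented stand-ins for a trained model: hand-made multipliers.
GENRE_MULTIPLIER = {"horror": 3.1, "action": 2.4, "drama": 1.3}
ACTOR_DRAW = {"A-lister": 1.5, "newcomer": 0.9}

def predict_box_office(film: FilmProject) -> float:
    """Toy worldwide-gross estimate in USD (not a real model)."""
    base = film.budget_usd * GENRE_MULTIPLIER.get(film.genre, 1.0)
    return base * ACTOR_DRAW.get(film.lead_actor, 1.0)

# The "what if we recast?" comparison described above.
draft = FilmProject("Untitled Thriller", 20e6, 110, "horror", "newcomer")
recast = replace(draft, lead_actor="A-lister")
print(f"{draft.lead_actor}: ${predict_box_office(draft):,.0f}")
print(f"{recast.lead_actor}: ${predict_box_office(recast):,.0f}")
```

The sketch is only meant to show the shape of the interaction: structured fields in, a predicted number and a recasting comparison out.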

Stephen Follows:

So what happened with ScriptBook then? Because if they came out in 2018, 2017 and they were like, "We've solved it." And assuming all their tech was accurate, which I have no reason to think it wasn't, why are they not now millionaire unicorns living on private islands? What happened after that?

Pei-Sze Chow:

Yeah, the pushback from the creative communities, that's something that a lot of these companies faced, right? And they've had to address it, not just at these big industry events, but they've eventually had to address it on their websites in the FAQ sections. And they're very emphatic about the fact that, we are not here to replace creatives. We are not here to replace the human in the loop, so to speak. We are merely assisting in business decision making. We're basically a very glorified calculator, to put it very crudely.

And I guess, that's where Cinelytic succeeded and a company like ScriptBook failed because ScriptBook I think, in their communications, as you said in their PR, they really pushed the idea that we could be an essential tool in your creative process, that we are going to enhance your creativity. We're going to make human creativity even better with data. When all these headlines came out in 2018, 2019, the way it was framed in the media was very alarmist, that very, very Terminator, Skynet's going to take over the film business, sort of thing.

So yeah, the media has had a hand in really framing the discussion about these tools. I don't think they're a bad thing. I don't think they're a bad development in the film industry. I see it as something that's very natural. It's a new technological tool that will now become very much a part of the whole chain, the creative process in the film industry. But it's as with any new technology, as we've seen with the wheel, the car, digital video, digital editing suites. I mean, it's basically speeding up something that's already being done with spreadsheets.

Eliel Camargo-Molina:

I think there's another aspect of it that I want to hear what you think about, what you have to say about it. And it's the fact that AI right now is stacked up against a whole mythos, a whole story of the evil machine, of the dystopian intelligence from a computer.

Stephen Follows:

The Terminator.

Eliel Camargo-Molina:

Terminator, so many movies, so many stories, so much in the public consciousness. Our common story box has the idea of a rogue AI, a bad computer, 2001: A Space Odyssey to start with. Why, where does that come from?

Stephen Follows:

Well, it's accurate, isn't it? No, I'm joking. (laughs)

Eliel Camargo-Molina:

We don't know yet.

Stephen Follows:

It's funny, I did a study a while back where I was looking at the content of movies, various things to do with whether they had objectionable material or swearing or violence. And then I also have looked at different emotions in movies and things. And it led me to realize that when you look at positive messages and positive role models, actually the films that, on average, have higher levels of positivity when it comes to aspirational characters or stories tend to make more money, or rather, they're more likely to be profitable, I think it was, rather than make more.

And that led me down this route to think, okay, sci-fi is something that I'm particularly interested to analyze, because I think it is more open to interpretation. If you're doing a historical film, there are so many more things that are locked down. The west was the way the west was. Maybe you can bring a fresh perspective, but maybe not. But sci-fi is an open field, because you can just invent far more, which is why a lot of the pioneering stories we are told happen in sci-fi. The stories about communism in movies and popular culture were best told through sci-fi, The Body Snatchers, or the first interracial kiss was on Star Trek because it was between two... an alien and a human.

And so sci-fi has often played that part where we can talk about things without talking about them, in a way that everyone knows but no one says. And so I wanted to study whether dystopian sci-fi movies do better than utopian sci-fi movies. I just thought that was interesting. And I had a personal theory that... I'm getting tired of bleak sci-fi movies, like the new Blade Runner film that came out a few years ago. I'm just like, oh, can I just have an inspirational, aspirational sci-fi movie? And I couldn't study it. And I even put it to my community, I said, "Look, can anyone think of how we overcome this barrier?" And the barrier is: what is dystopian and what is utopian when it comes to fictional stories?

Because all stories need some form of drama or conflict, even a kids' TV show, because if you just sit there, or you just go and have a nice time, or you go and eat a bagel, that's not a story that anyone wants to watch. You have to have, "Oh my God, we're out of cream cheese. Where's the nearest place?" Even micro-drama can work: "Am I going to get back in time to watch my TV program?" We've seen all sorts of small moments become fascinating stories. So you need some conflict. And if you have a utopian world, then actually it's harder to have conflict, because everything's fine and sorted.

But it actually goes one deeper than that, which is that one person's utopia is someone else's dystopia. I couldn't do this study of dystopian versus utopian because it was subjective and because almost everything was dystopian. So you asked, why are the narratives about AI in movies, et cetera, almost always negative? I would say, A) you need to have negativity of some kind for a story. "A utopian AI is created, the perfect AI is created" is not an interesting story; a bad one inherently is. We also want to hear negative stories because they are warnings, and with horror films, people want to be scared so they feel like their freeze, fight, or flight mechanism is being activated and tested. We feel like we're being tested.

But also because there's an extra component, which is, I think, that the negativity of something new, a stranger comes to town, is the essence of most stories. It is easier to assume that stranger might be a bad thing. We have a more vivid... That's one of those things in behavioral economics: the more vivid one of the options is, the more likely we are to believe that's the case. The availability bias: I can think of the thing, and it's easier to think of the negative consequences of a super smart thing that has control than it is to think of the healthcare benefits.

Even though we are not many steps away from that. Many of our friends, family, or even ourselves will be afflicted by negative health outcomes. And if I told you there was an AI that could fix that, solve cancer, whatever, then actually, that would be great. But it feels more intangible than someone coming with a gun to try and kill you. And so the vividness of it is playing a role there.

Eliel Camargo-Molina:

And that makes a lot of sense. And I think in particular, the idea that the stranger in town is always perceived as a dangerous thing. And it points back again at our relationship with new technologies historically, which is what I always try to look back at. It's not always a perfect analogy, because we have the extra spicy ingredient that things are going much faster now, which means that we have less time to figure out how to do it in a good way for humanity. But I remember reading, this might be apocryphal or not, maybe you also know about this, that when trains were new, people were afraid to get on them because a lot of them believed that if the human body went that fast, you would just die. Your heart would stop or something like that.

Stephen Follows:

That's true. But it's even worse than that because they believed that it would affect women's wombs. So not only was it dumb, it was also sexist. And so-

Eliel Camargo-Molina:

Oh my God.

Stephen Follows:

Yeah, it's amazing when you think about it: how could you get it so wrong in so many ways? And yeah, there was a sexism to that, as well as what we would now regard, in our current understanding, as stupidity. Yeah.

Eliel Camargo-Molina:

All right. But that feeling we tend toward, I mean, maybe it's a good defense mechanism. Maybe it's better to be wary than to embrace something. Maybe it's a survival mechanism.

Stephen Follows:

Oh yeah. I mean, throughout the whole course of human evolution, it's always been successful to be paranoid and assume the worst. That's the Rorschach test: you see a random thing, you see a face, because if you are wrong, 99 times out of 100 you just look like an idiot or just very worried. But that hundredth time, you survive and pass on your genes. So paranoid people live much longer lives, worse lives, but longer lives. And so the paranoia of assuming the worst about something that is irrevocable, once you've created this super smart AI that can create more smart AIs, it's one step. It makes more logical sense to fear it and destroy it, in the human-nature way. I'm not saying we should, but that makes sense.

I had an interesting moment today, actually. I went for a walking meeting with someone I wanted to catch up with. It's such a lovely idea. We were like, "Oh, let's have a coffee. It's been a while. Let's catch up. We've got some work we need to do." And he was like, "Can we do it as a walking meeting?" I was like, "Yeah, that's great." So we went for this long walk on the canal, walked for a few hours. And then he realized he'd left his bike chained up where we met. So we had to take the shortcut back, and we'd taken such a meandering walk along the canal that actually there was a much quicker route back, once we decided to go back. So I got out Google Maps and I put in the location and I was like, "Okay, great. It's about 40 minutes walking back. Easy."

And while we were carrying on chatting, I have my phone out and the route only had about four or five major turns. That's it. Because it's pretty much as the crow flies, because London is, it's not a grid, but there are so many places you can just walk along big roads. But I found myself constantly checking the route. And the thing is, I was in the Boy Scouts, I really liked hiking and I feel like I had a very good sense of direction and navigation, instinctively. And also, even living in London all my life, I get the sense of you need to go in that direction and you take too many lefts, you got to take a right. I feel like I had that, but I felt really paranoid that I could take one turn wrong and end up in the wrong place or whatever.

So while I was talking to him, funnily enough, about AI, I said, "Oh my God, this is actually an example I've been giving as an idea generally, but it's happening to me live in this moment." Which is that we've offloaded the concept of navigation and direction to these devices. And there's nothing inherently bad in that, in the sense that I don't worry about calculation, I don't worry about remembering phone numbers. The storing of information is not something I need to worry about, which in theory, and maybe in practice, frees me up to think about other things. But it worries me slightly as someone who... I am intelligent, I care about intelligence, and I try to use my intelligence to help people.

The idea that intelligence is the next frontier to be offloaded is really kind of terrifying to me because it's almost like I was strong, or very fast at doing something before the industrial revolution, or I had a very good memory before the computer hard drive was invented. And I actually, as someone who thinks that that is part of my identity and part of my skill and part of what I can contribute to the community, the idea that that will become ubiquitous is scary. It's not scary as in, I feel like I'm going to lose my place, but what does it mean when everyone can do that?

I'm sure artists are feeling that now, with these AI image generators, where they're like, "All my skill was to conceive of something no one's thought of before and then execute it well. It would only take me a month." And now I'm like, "Yeah, I just did four while you were saying that." And so realizing I'd lost that navigational stuff and then trying to apply it to what that might mean for intelligence within my lifetime, that scares me, I think.

Eliel Camargo-Molina:

You're a harborer of both intelligence and location awareness. You feel like that.

Stephen Follows:

Yeah, totally. Because I've got something to lose.

Eliel Camargo-Molina:

I have the opposite example with the location thing, because I famously have none, since before GPS was something you could carry cheaply in your phone.

Stephen Follows:

Was it made for you?

Eliel Camargo-Molina:

The minute you could have GPS in your phone, I saved up money. I used all the money that I got by teaching when I was a student in university. I taught other students and whatever. I bought this Nokia N95 phone in, I guess it's 2006, 2007, that had GPS on it.

Stephen Follows:

And dude, you were so cool. I mean back then, obviously.

Eliel Camargo-Molina:

But driving in the streets of Caracas in 2006, 2007, with a GPS, because there was no other way I could drive in Caracas. But yeah. So to me, now we are equal. At any functional level, it doesn't matter anymore that you're good at it and I'm bad at it. So I think in that sense, if we're going to offload cognitive work, or at least some slice of what we call intelligence today, because that's the other conversation, how much of what we call intelligence today is based on what we need to do to get things done? But I mean, that lever is going to go away and it's just going to equalize us a bit more.

Stephen Follows:

Yeah, but that doesn't worry you, though? And I'm not suggesting it's a bad thing to be worried about, but maybe uncertainty worries me more than... I don't have a concrete, negative, dystopian view of AI. I have an unsettled feeling of change. It's anxiety, I guess, more than fear. Fear is more directed, right? Anxiety is undirected, it's omnidirectional. I don't like it. I don't like the change.

Dan Rockmore:

What is the anxiety that we have over machines doing creative work that we might enjoy?

Eliel Camargo-Molina:

That's Dan Rockmore, he's a professor of math and computer science at Dartmouth.

Dan Rockmore:

My interests are in network science and lately in machine learning, especially around text analysis and text generation. The problem I'm super interested in, for some reason I can't quite figure out, is whether or not a machine can write a good poem, and whether or not these large language models are encoding anything poetic, anything artistic, anything literary, or if it's just kind of word salad. And I think I've always been interested in the problem of creativity. And so for me, it's a problem. And the way that machines now interact with the creative process is an interesting subject to me. The philosophical dimensions as well as just the creative possibilities. So anyway, that's me.

In the '70s there was this line of work from Barthes, and it was all about the death of the author. He wrote about this, and Calvino wrote about this. It was all about the fact that the stuff that any one of us writes is some redigestion of all the cultural memes that have been swimming around in our heads, spit out by my collection of choices about how to organize those thoughts. I would argue that GPT-3 is a realization of the Barthes model of writing, and that people don't like to acknowledge that that's the way we write.

So that's a conversation. But I think there's pretty good evidence that we do write stuff that we've heard and we've read, with modifications based on who we are, I suppose inflected by our experience. But at the end of the day, we're spitting out words. And so the way in which we organize those words, absolutely 100%, has been affected by the words that I've read and the words that I've heard. And GPT-3 has all those words... You could have more of those words, or you could retrain one of these models to have a particular set of words or conversations and texts and things like that.

So I think there's a pretty valid argument to be made, that GPT-3 is writing kind of the way we write. So I guess, this is a long kind of way of saying, it's not obvious to me that machines aren't writing the way we write. And the fact that we want to say that they're not producing something that's human, that gets at a very different question. I mean people worry about, are machines going to be like humans? But I think that the real worry we have is that we're concerned that we're like machines.
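
*NOTE: A crude way to make the "redigestion" point concrete is a toy bigram model: words recombined purely from words previously read. This is not how GPT-3 works internally (it is a large neural language model); the tiny corpus below is invented for illustration.

```python
import random
from collections import defaultdict

# Tiny invented "reading diet" for the toy model.
corpus = (
    "the author is dead the reader writes the text "
    "the text rewrites the reader and the author"
).split()

# Learn which words have followed which: pure redigestion.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Spit out words organized only by words already read."""
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Everything it generates is "new," yet every word-to-word transition was read somewhere first, which is the Barthes-style argument in miniature.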

Stephen Follows:

So are there any aspects of AI that do worry you?

Eliel Camargo-Molina:

Yeah, as mentioned, I think there's a lot that can go wrong. A lot of it has to do with the fact that it's in some way an amplification of ourselves. And every time you amplify something, you run the risk of also amplifying the bad parts of it. So these models, as we have seen and will understand even more, are making the data that we have produced as humans do things. That's another way of saying that they're learning from what we are teaching them, but in many instances they're learning from a somewhat cleaned-up, and sometimes not really cleaned-up, set of data, in a way that is sometimes unsupervised, both in the technical and non-technical sense of the word.

So there is a risk that a lot of the things that we are trying to get rid of get amplified. Things like sexism, racism, extremism, a lot of powerful things that we, as a civilization, are still understanding and learning how to deal with, and that we are not yet through understanding, are going to get amplified, as well as the good ones, in a sense. But all of this means that we have to think about it and keep in mind that it's a mirror. And I think that's very dangerous.
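
*NOTE: One small, well-known way such amplification can happen, sketched here with invented numbers: a model that always emits the most likely continuation turns a mere skew in its training data into a uniform rule.

```python
from collections import Counter

# Invented training set: a 60/40 skew in how an occupation is written about.
training_examples = ["the engineer... he"] * 60 + ["the engineer... she"] * 40

counts = Counter(training_examples)

def greedy_completion() -> str:
    # Greedy decoding: always emit the single most frequent pattern.
    return counts.most_common(1)[0][0]

# The 60/40 skew in the data becomes 100/0 in the output.
print([greedy_completion() for _ in range(5)])
```

Sampling strategies and curated data can soften this, but the mirror-and-amplifier dynamic is the one being described.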

The other thing that can be very dangerous: it's such a powerful thing, who's going to control it? Who's going to be in charge of the best ones? And are we introducing a new economy, making a new hierarchy of who is powerful and who has things and who has resources and who doesn't? And you can think of a million dystopian scenarios. Countries that only get access to the old models that nobody's using anymore, and the other countries having the good ones. And maybe not even countries; maybe we're talking worldwide, where there are no countries, but whoever can afford to have the good ones has the good ones.

And there are so many things like that. And of course, they are there, because I think one of the things that plays a role here is the fact that we are not, each of us, going to be able to develop a state-of-the-art AI, because it requires millions of dollars and access to the data. So I think it's already a given. I agree that we don't know yet, but what is a given is that, at least for a while, the best models are going to be in the control of a few people. Only a few people are going to be able to make the best ones, for a while.

Stephen Follows:

Well, that's the thing, "for a while," because one of the more hopeful things I have is the historical example of moving from Web 1.0 to Web 2.0. So Web 1.0, the internet revolution, a lot of that had physicality to it. Laying down cables, having routers, you had to have your own servers, things like that. So it really further entrenched the people who had access to wealth in some way, or who had the business faith of people who could give them access to wealth. Then there was the dot-com crash, and then Web 2.0 was built on that. But you could then just spin up a server, and now you can pay for server space. You don't need any physicality. So out of the wreckage of that, those cables were still there. New businesses could come on.

Now, there was a new orthodoxy, and that allowed Amazon to exist, and Amazon's now a behemoth and whatever. But fundamentally, AI will have, I think, a faster half-life. So it may well be that you have a better model than me, but if I can use your model temporarily to write my own model, or whatever the version of that would be, I can't pretend to know how that would work, I might be able to unseat you. In the same way that at one point MySpace was number one, then Facebook was number one. Maybe it will be again, but probably not. And I don't know yet whether we've already hit that entrenchment point, as in the people who have most of the Bitcoin are not really going to change, a new inequality baked in the way that privilege and power and wealth in the fiat world already are. Or whether it's the Web 1.0 to 2.0 thing, where there will be these cycles where people can build on the shoulders of others and then unseat them through innovation, through some technical idea.

And I'm hopeful for the latter, because yes, let's say you have the best model right now. If you sell me access to it, I can use that to make some leap or discovery or something. Or I can even make the discovery elsewhere, because a lot of the core AI technology is actually in the open; it's the training that needs the money. And if I develop a faster way of training, or whatever it might be, I'm hopeful that it will be a more unstable power structure, meaning more refreshing. Which in theory, and there's a lot of optimism going on in each of these leaps, allows for greater movement. And therefore it's less likely that you will have people who are disenfranchised being further disenfranchised.

But I have no basis for that. I'm not saying I'm predicting that. I just hope that the uncertainty and the instability of it creates a fresher cycle than what would be in the past, where buildings and money and power would create more buildings, money and power. The information previously needed access to the library. And I might be saying, "Yeah, I have the greatest library, I'll give you the key and you can come in for a hundred thousand hours." Now, you have Google and Wikipedia, you don't need me. So the irrelevance of the current gatekeepers is the root of optimism for me, not the facilitation of those gatekeepers. It's not their permission, it's their relevance.

Eliel Camargo-Molina:

The speed at which the rules change is so great that the old power holders cannot adapt.

Stephen Follows:

Yeah, exactly. And you see that in business a lot, when you look at the top 100 companies from 100 years ago and you're like, I know three of them. And really, those three are just logos that have been bought and sold; they're not really the same companies, because some of those would be petrochemical companies. And countries as well. The countries that hold power now and have financial power, some of it is long-term stuff, but even that's geography-based. Some of it is to do with size, they've got a lot of people, and some of it's because they happen to have oil, and oil has become, coincidentally to them, they didn't make that happen, essential. When oil dries up or becomes less essential, that will undermine some of their power. Or even geographic: they sit between two nations, or whatever the thing might be. There is a certain belief I have that, as the requirements change, who's benefiting from those requirements will shift, not for the better or for the worse, but will shift.

And AI has a faster half life on this stuff and has so much more unknown, that I wouldn't say I am optimistic. I would choose to be optimistic. I choose to believe the uncertainty because if I thought it was certain, I'd almost think of a bad certainty. I can't think of a positive certainty, but I can believe in the possibility of certainty.

Eliel Camargo-Molina:

It's fair to point out that that also depends on assuming that development will just continue to be as fast as, or faster than, it is right now. And to be fair, one of the things that I have completely learned, by talking to the experts we've talked to and by diving into this project and reading a lot because of it, is that we are uncertain. We don't know if there's going to be a ceiling. And a lot of people in this world of hypotheses and opinions, educated opinions, people who know much more than I do about these things, some of them think that we are going about it wrong, that we are putting ourselves in a place where things look very great now because we are throwing computing power at it, but that it's going to stagnate because the way we are doing it is not right. So that might also happen.

Stephen Follows:

Yeah. One of the people we spoke to, and I don't know if we've used the clip yet, but we definitely will in future episodes, was Gary Marcus, who's a knowledgeable skeptic, if you will, not on all AI stuff. And I'm perhaps painting with a simplistic brush; his views are more nuanced. But he cited an example, which I thought was a good one, which is that we've built this incredible ladder to the top of the Empire State Building and everyone's going, "Great, we can do it to the moon," not understanding that that's a different type of challenge. It doesn't mean we can't get to the moon. It just means that you can't necessarily keep doing what you've done, keep adding a new step, and eventually get anywhere. And I thought that was a nice example. As an allegory, it doesn't prove you won't. It just means that you've got to check your thinking, that just because you have gone up, up, up, you will go up again, line goes up.

Eliel Camargo-Molina:

And all of those are interesting things. But at the end of the day, my feeling about the fear I see in most lay people I talk to about AI is that it is a good thing, as long as it doesn't drive you to inaction. As long as it doesn't drive you to separate yourself and not participate in whatever way. Whether that is using it, whether that is learning how to code, because I am a preacher, I tell everybody, it's almost a meme among my humanist friends, I always say, "Go learn to code. It's easy." Or whether it is just being a user, which is actually a very valid way of exploring these things and of designing sensitivity training for future models: how do we teach them to be not racist?

Stephen Follows:

I think we have to teach humans first, but...

Eliel Camargo-Molina:

I think we have to teach both. I think that's the thing. We have to welcome this. In some ways, it's not like us, in some ways it is a little bit like us and we have to then problem solve in the same way that we problem solve with humans.

Stephen Follows:

It makes me think of, there's a really good comedian Eddie Izzard, where (s)he talks about parents deciding things for kids and he's like, "You shall play the violin because I never had the chance." And the kid's like, "Well, you could learn now." He's like, "Shut up."

*NOTE: Eddie Izzard announced in 2020 that she prefers female pronouns. Stephen was unaware of this at the time of recording; we apologize for the mistake.

Eliel Camargo-Molina:

That's a very... I love that.

Stephen Follows:

We need to make sure that these AIs are not racist. How about you not being racist? We need to make sure these AIs are not racist because they are the future.

Eliel Camargo-Molina:

That's amazing.

AI Voice:

Authored By AI is brought to you by Stephen Follows, Eliel Camargo-Molina, Isadora Campregher Paiva, Bob Schultz, Jess Yung and GPT-3. Audio post-production from Adanze "Lady Ze" Unaegbu, with thanks to Rob Cave and Ina del Rosario. Find out more at authoredby.ai.