Clear+Vivid with Alan Alda - Season 24 trailer

Released Tuesday, 16th January 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

Remember the Thai cave rescue? What

0:02

about the mission depicted in Black Hawk

0:04

Down, or the epic rescue shown in

0:06

Captain Phillips? You've probably heard of all

0:08

of these, but did you know that the U.S.

0:10

Air Force Special Warfare played a pivotal role in

0:12

all of them? These airmen are

0:14

the most highly trained warriors on the planet.

0:17

Other forces like the SEALs and Army

0:19

Rangers call on them to provide skills no

0:21

one else can. Not many people make

0:24

the cut. If you think you can, visit

0:26

airforce.com to learn more. When

0:29

you're ready to pop the question, the

0:31

last thing you want to deal with is

0:33

second-guessing the ring. At bluenile.com you

0:35

can design a one-of-a-kind

0:38

ring with the ease and convenience of

0:40

shopping online. Choose your diamond and

0:42

setting. When you've found the one, you'll

0:44

get it delivered right to your door.

0:46

Go to bluenile.com and use promo code

0:48

WELCOME to get fifty dollars off your

0:51

purchase of five hundred dollars or more.

0:53

That's code WELCOME at bluenile.com

0:55

for fifty dollars off your purchase. bluenile.com,

0:57

code WELCOME. I'm

1:05

Alan Alda and this is

1:08

Clear and Vivid, conversations about

1:10

connecting and communicating. Welcome

1:16

to a preview of Season Twenty-Four.

1:18

I'm here with our executive producer Graham

1:20

Chedd to give you a little taste

1:22

of what's coming up in this notable

1:24

season. One thing that's notable to me,

1:27

Graham, is that this season will include

1:29

our three hundredth show since Clear and

1:31

Vivid began. Hard to believe. It's

1:33

a lot of conversation with some of

1:35

the most interesting people in a wide

1:37

range of fields, including some people who

1:39

don't exist, like the robots you

1:42

interviewed last year. Which brings

1:44

us to a special series of three

1:46

shows we're starting the season off with. Since

1:49

the theme of our show is connecting and

1:51

communicating, we're taking a dive into what may

1:54

be the most momentous change in the way

1:56

we relate and communicate in our history. More

1:59

and more, we're in communication with something

2:01

that sounds human. But isn't.

2:04

Artificial intelligence is already affecting

2:06

our lives and making progress at

2:09

a very fast clip, which is

2:11

both good and bad. We're

2:13

told by people who are creating

2:15

AI that it can radically

2:17

make our lives better, for instance,

2:19

by eliminating diseases. Or

2:22

it can make our lives worse

2:24

by eliminating us. That

2:26

sounds like it deserves some attention. So

2:29

we invited three people on the

2:31

show, each with their own unique

2:33

perspective on AI. We started with

2:35

Fei-Fei Li, a leader who is often

2:37

called the godmother of AI,

2:39

somewhat to her embarrassment.

2:42

She made a breakthrough that revolutionized

2:44

AI when she realized that the key

2:46

to making artificial intelligence intelligent is

2:48

the harvesting of huge amounts of data.

2:52

And her personal story is inspiring.

2:55

Born in China in nineteen seventy-six, she

2:57

emigrated with her family to the U.S.

2:59

when she was a teenager. She

3:01

spoke little or no English, but so impressed

3:04

a math teacher at her New Jersey high school

3:06

that he mentored her and helped

3:08

her get into Princeton as an undergrad. He

3:11

saw something special in her. It

3:14

really started with says it's

3:16

I was, I don't know,

3:19

like since I was a

3:21

little girl, like eleven, twelve

3:23

years old. I just loved

3:25

physics. It was my first

3:27

love, and in hindsight, what

3:30

happened is I think I

3:32

loved that audacious quest

3:34

to the unknown mystery of

3:36

the universe. Physics allows

3:38

you to ask the craziest

3:40

questions, like the beginning

3:42

of space-time, the origin of

3:45

the universe, the smallest particle

3:47

of matter. And then,

3:51

in the middle of Princeton

3:53

physics, I discovered that even

3:55

the physicists themselves, like Albert

3:57

Einstein and Erwin Schrödinger.

4:00

So they turned their

4:02

attention somewhat to an

4:04

equally audacious question. But that's

4:06

not a physics question. It's

4:08

about life. And

4:11

I became so enlightened

4:13

and enamored, I realized my

4:15

own audacious question that I

4:17

love the most is, what

4:20

is intelligence? What

4:22

makes intelligence? How do we

4:24

build intelligent machines? And

4:26

that shift in the middle

4:28

or end of my college years was

4:32

how I discovered AI. How

4:35

did you get from that to concentrating on

4:37

images? I think I'm

4:39

naturally a visual person because

4:41

even in my early childhood,

4:43

my dad takes me to

4:45

these natural excursions and

4:47

we look at butterflies, we draw

4:49

the pictures of mountains. And

4:52

there is this fascination of seeing. I

4:54

find that understanding visual

4:57

intelligence to be the most

4:59

fascinating aspect of intelligence. And

5:02

I think that is

5:05

kind of a combination of serendipity. And

5:07

I just got into vision. I

5:10

think I've heard you say that vision

5:12

is more than just a sense that

5:16

it's an experience. Vision

5:19

is intelligence. Vision is

5:21

experience. Vision is understanding.

5:25

And vision is planning.

5:27

Vision is decision making.

5:29

Vision is socialization. Vision

5:32

is a very

5:34

cornerstone piece of intelligence

5:36

itself. Fei-Fei

5:38

Li's fascination with both vision and AI

5:41

led her to try to build a

5:43

machine that could identify images. This

5:46

was at a time in the early 2000s

5:48

that AI research was sort of stuck. It's

5:51

often called the AI winter. Her

5:53

insight was to train a computer to recognize

5:56

images by showing it lots and lots of

5:58

images, millions and millions of images, pictures

6:00

from the internet. Fei-Fei

6:02

Li called her computer vision model

6:04

ImageNet. That's the

6:06

project that made people

6:09

call you the godmother of artificial intelligence

6:11

as we know it today. I

6:14

know you don't want to congratulate yourself

6:16

too much, but I've heard

6:18

that said about you and I'm trying to figure out in

6:21

what way was it a milestone? I

6:24

think it's best explained with

6:27

actually today's breakthrough in chat

6:29

GPT. Why is it

6:32

that we've seen the AI

6:34

breakthrough? Because we see powerful

6:37

algorithms trained on a

6:39

vast amount of data, the data

6:42

of the internet. And that's where

6:44

ImageNet came to play the pivotal

6:46

role, is that my students

6:48

and I recognized the

6:50

power of data. We hypothesized,

6:53

I guess

6:55

before most people, that AI

7:00

will have a paradigm shift

7:02

if we power it with

7:05

internet scale, giant amount

7:08

of data. It's a

7:10

data-centric, data-first approach.

7:14

And because of that, we were working on

7:16

vision. So we want to

7:18

make the biggest visual dataset. And

7:21

in order to make the biggest

7:23

visual dataset, we had this

7:26

crazy idea of downloading

7:28

almost all the pictures we

7:30

can get on the internet

7:32

back in 2007 and

7:36

organizing curated catalogues

7:39

in this completeness in terms

7:42

of visual objects. And

7:44

that's when we made, after three

7:46

years, between 2007 to 2009,

7:48

we made a dataset of 15 million images

7:51

across 22,000 categories. And

7:57

that's out of cleaning up a billion

7:59

images. What do you mean

8:01

by categories? Categories

8:04

are the natural way that

8:06

humans conceptualize objects. We tend

8:08

to conceptualize them as German

8:11

Shepherds, microwave, a sports

8:14

car. Of

8:18

course, sometimes we think about

8:21

my German Shepherd, your microwave.

8:23

But in general, that

8:26

classification of visual concept is

8:28

a fundamental visual

8:31

intelligence problem

8:33

that humans have worked on

8:35

and solved, and it's very

8:38

foundational to our visual intelligence.
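
A minimal sketch of that idea in code, for readers who want it concrete: collect many labeled examples per category, then name a new image by the category it most resembles. This is only an illustration, not the ImageNet pipeline itself; the category names, the fake three-number "image features," and the nearest-centroid rule below are invented stand-ins for the millions of real photos and the neural networks actually involved.

```python
# Toy sketch: learn visual categories from labeled examples, then name new ones.
# NOT the ImageNet pipeline; everything here is an invented stand-in.
import random

random.seed(0)

# Hypothetical "typical appearance" for a few categories.
PROTOTYPES = {
    "german_shepherd": [5.0, 1.0, 0.0],
    "corgi":           [4.0, 0.5, 0.2],
    "microwave":       [0.0, 5.0, 1.0],
    "sports_car":      [1.0, 0.0, 5.0],
}

def fake_image(center):
    """Generate a fake image-feature vector near a category's prototype."""
    return [c + random.gauss(0, 0.3) for c in center]

# Harvest many labeled examples per category (ImageNet did this at vastly larger scale).
DATASET = [(fake_image(center), label)
           for label, center in PROTOTYPES.items()
           for _ in range(200)]

def centroids(dataset):
    """Average each category's examples into one representative point."""
    sums, counts = {}, {}
    for feats, label in dataset:
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums.get(label, [0.0] * len(feats)), feats)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(features, cents):
    """Name a new image: pick the category whose centroid is closest."""
    return min(cents, key=lambda label: sum((a - b) ** 2
                                            for a, b in zip(features, cents[label])))

cents = centroids(DATASET)
print(classify(fake_image(PROTOTYPES["corgi"]), cents))  # typically prints 'corgi'
```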

8:40

So if you collect a

8:43

great number of pictures under

8:45

the category of dog, and a great

8:47

number of pictures under the category of

8:50

cat, the machine is able to sort

8:52

through that and put a

8:54

name on it when it sees a picture. Yes,

8:57

and mind you, there are hundreds

8:59

of dog species. So you can imagine

9:01

that, it's not just dog, we

9:03

actually had hundreds of different dogs.

9:06

Terrier, German Shepherd, Corgi,

9:08

we have even different kinds of

9:11

Corgis, so it's a lot more

9:13

than just dog versus cats. Right,

9:16

right. So you've got to

9:18

give subcategories as well? Yeah,

9:20

totally. I mean, a lot, most of

9:22

ImageNet is the subcategories, right,

9:25

right. Like I said, hundreds

9:27

of dogs, hundreds and

9:29

hundreds of birds and cats

9:31

and, you know, many

9:33

different kinds of cars, the

9:36

buildings, trees, flowers,

9:38

you know, it's a

9:40

very, very vast catalog

9:43

of the visual world. By

9:46

the way, Fei-Fei Li did all this while

9:48

remotely managing her parents' dry cleaning business back in

9:50

New Jersey. Today, she's

9:52

a professor at Stanford, where she's the

9:54

co-director of Stanford's Institute for Human Centered

9:57

AI. I get the impression that there's a big effort on your part to make sure that the incentive, the motivation for working on AI and developing AI further, is that it benefits humanity.

Well, that's because there is a tendency, I guess, for AI to be considered something that competes with humans, rather than assisting humans. That really bothers me, because I think we need to be very clear what our relationship with tools is. AI is a tool. It's a very powerful tool, and humanity has had its struggles with the relationship between us and our tools. But it's important to recognize that we should have the narrative, we should have the agency and the responsibility in creating and using and governing that tool. So this thing about, let AI compete with us, or let AI take care of us, or let AI control us, that's not how I see this technology. It's wrong to give agency to AI. It's important we actually take that agency. So people like me, I'm a technologist, I should feel responsible for what I build. And in the meantime, I hope that business leaders also feel responsible. I hope civil society feels responsible. We have to recognize that agency and

11:30

responsibility. I'll

11:38

be talking next with Eric Schmidt, who led

11:41

Google for decades and has been a player

11:43

in all the big AI developments. One

11:46

of the things we talked about

11:48

was a chatbot's ability to

11:50

charm us into thinking we can trust

11:52

it when it's actually making up

11:55

outrageous lies. It's called hallucinating. Chatbots

11:58

have been designed to be appealing and

12:00

engaging, to sound like a friendly person. But

12:03

just like a person who will say anything to

12:05

be liked, they sometimes say

12:07

things that are wildly untrue. That's

12:10

when they hallucinate. And

12:12

when they do this, they may be trying to make

12:14

me happy, but what they're really doing is driving me

12:16

crazy. Do we

12:19

understand anything about why

12:21

AI chatbots often hallucinate? Why do

12:23

they go nuts like that and

12:26

make things up? Do we have

12:28

any idea? Well, let

12:30

me give you a much simpler explanation

12:32

than you might imagine. A

12:35

chatbot is simply predicting the next

12:37

word. And

12:39

so it has been

12:41

trained on a million sentences and it

12:43

says, ah, the next word should be

12:45

this word. And

12:48

you as a human being, or me as

12:51

a human being, we think that it shows

12:53

great literary force and wisdom, but it's just

12:55

predicting the next word. It

12:57

just does it well. And

13:00

so you can fool it in all sorts

13:02

of interesting ways. Dr. Kissinger,

13:04

who recently passed away and I, wrote

13:06

an article and we

13:09

asked Chat GPT to give

13:11

us the citations from his

13:13

published work and

13:15

give it to us. It's a

13:17

straightforward computer retrieval question, right? Look,

13:20

try to figure out what he

13:22

wrote way back when and give us

13:24

the citations. And it produced five

13:27

outstanding articles with his name on

13:29

them with great titles, which did

13:31

not exist. All

13:34

right, so it's very good at making

13:37

you feel happy, but

13:40

it doesn't have a good model yet

13:43

of what is called groundedness or

13:46

fact-based. Now there are people, again, I can

13:48

describe how to fix that, but the

13:50

important thing is don't rely on this for

13:53

anything really important. It sounds like it's

13:55

so busy being engaging, convincing

13:57

you that there's something there to converse with,

14:01

that it's too dumb to know that it's even

14:03

lying. Exactly. Remember, it's

14:05

not human. We're

14:07

using human terms, he,

14:09

she, so forth. And

14:12

we have a lot of evidence that people are

14:14

falling in love with their chatbots. I've

14:16

seen examples of that. That's scary. You

14:20

and Eric Schmidt explore several other scary

14:22

examples. The one that worries

14:24

both of you most are the avatars posing

14:26

as real people. Deepfakes. Right

14:29

now, deepfakes are so convincing.

14:31

I remember your description

14:34

of listening to a fake

14:36

avatar of Steve Jobs, who

14:38

was so convincing to you. Tell me about

14:40

your reaction to that. Well,

14:43

the particular example was that Steve Jobs,

14:45

who died more than 10 years ago,

14:47

was in conversation this year with Joe

14:50

Rogan, who's very much alive on his

14:53

podcast show. And

14:55

he just sent a chill down

14:57

my spine, because I knew Steve very well.

15:00

He sounded like Steve, Steve's mannerisms, and it

15:02

was plausible that he would say that if

15:04

he were still alive. I knew

15:07

his prejudices and his preferences well

15:09

enough to say, you know, that

15:11

seems reasonable. It's

15:14

just chilling. OpenAI

15:16

had a product, which

15:18

in 15 seconds could capture your

15:20

voice and cast it into

15:23

any other scenario. I heard

15:25

one demonstration where this

15:27

was cast into Martin Luther King's

15:30

"I have a dream" speech, but the person

15:32

was not Martin Luther King. It was current time.

15:35

It just chills my

15:37

– I don't know how to describe it

15:39

any other way. It's a distortion in reality.

15:42

Now, for somebody who is not

15:44

as focused on history and

15:46

doesn't really pay attention to Martin Luther King

15:48

and so forth, they could easily be

15:51

swayed by that. OpenAI

15:53

did not release that product for those reasons. Before

15:56

we get into more of the dangers that I think will happen as a result of the things you've already talked about, let's not forget the pluses, the things that draw many of us toward welcoming new work in artificial intelligence. The good of it, new medicine, climate science. You've written quite a lot about that. What are some of the things we really can look forward to happening as it develops?

Let me give you two grand challenges that I think are achievable in the next, say, five years. The first is the development of an AI doctor. And this is an AI doctor that works with a nurse practitioner, a health professional in a developing country, for example, and brings all of modern medical knowledge to that village or caretaker or whatever. There are examples in the United States of areas with relatively poor health coverage, where the state of the art that you and I have is not available to them. Can you imagine that? There is an AI doctor that, of course, is not a doctor by itself; it works with the humans, and the human becomes a much better practitioner. And furthermore, it's done in the language and culture of the country and the person you're dealing with. The other example: an AI tutor, which works with the teachers, and whoever is the learning professional, with students in any language and any part of the world, to get them to learn in the best way they learn. People learn in different ways, in different languages. Boys are different, girls, here you go, go down the list. Some people want more games. Some people have longer attention spans, some people have shorter attention spans, and so on. Those two alone, right? A broad improvement of healthcare

18:01

and a broad improvement of

18:03

education would have huge implications for

18:06

the next generation globally.

18:09

I'll give you some other examples. Any

18:13

scenario where

18:16

in science, let's think of chemistry. I

18:19

have a long,

18:21

one of these long chemical chains, that's

18:23

how chemistry works, and

18:25

I want to make it more effective or less

18:27

dangerous or more dangerous or what have you I

18:30

can have the computer go

18:32

through millions of combinations and

18:34

then test which ones are better and

18:37

no human can do that even the smartest chemist

18:39

in the world and they are brilliant chemists can't

18:42

go through everything can't go through million scenarios at

18:44

once. So that ability

18:46

to sort through choices and then

18:48

choose the optimal outcome, the technical

18:51

term is called reinforcement learning, is

18:54

a very big deal. It applies in

18:56

physics, it applies in chemistry, it applies

18:58

in biology. There are many

19:01

many examples where predicting the

19:03

next word is

19:05

also a technique that you can use to predict

19:07

the next gene, the

19:10

next protein, next biological

19:12

sequence and it uses

19:14

the same principles that were invented at

19:17

Google in 2017 in

19:19

the now-famous Transformers paper. So

19:22

what does this mean? How about

19:25

better batteries? How about more

19:28

efficient energy distribution? How

19:30

about better carbon management?

19:33

Climate change alone, one of the

19:35

greatest dangers to humanity in the

19:37

long run, will be materially improved

19:39

by this. Plastics,

19:43

paint, pollutants of

19:45

one kind or another. We're going to look

19:47

back on this period and say we were

19:49

so ignorant because we were using such simple

19:52

materials, components and so forth in

19:54

our built existence and this

19:58

is how progress goes on. It's great. And

20:01

all of these are happening at a

20:03

speed that is incomprehensibly fast compared to

20:05

what it was 20 years ago, 30

20:07

years ago. After

20:14

Eric Schmidt talked about both the good as well

20:16

as the bad and ugly of AI, we

20:18

wanted to find out what can be done to try

20:20

to make sure the good outweighs the bad. So

20:23

we turn to an old friend of Clear and Vivid,

20:25

the psychologist Paul Bloom, who'd just

20:27

written a terrific piece in The New Yorker

20:30

about just this question. You

20:32

asked him straight out, why don't we

20:34

just tell AI bots to be good, to be

20:36

moral? Yeah, I'm

20:39

glad to be talking to you about this. I

20:41

agree with you. I think AI is the

20:43

biggest news that's come along in a very long

20:45

time. And I

20:47

could imagine it transforming the world

20:49

for the better in enormous ways.

20:52

It could also kill us all. I

20:55

have to go now. That's

20:57

one of the two. I

21:00

guess we'll find out. And you're right.

21:04

So one long-standing solution to the worries

21:06

people have about AI, either

21:08

worries that AI itself may turn

21:10

malevolent in some way or accidentally

21:13

cause harm, or that bad

21:16

agents could use AI to do terrible things,

21:18

is to make AI moral. And

21:21

this is sometimes called the alignment problem, which

21:23

is you want to give AI a

21:26

sense of morality, a sense of goal similar to

21:28

what people have, and in that way,

21:30

it will avoid doing harmful and terrible things. If

21:34

we just align AI with

21:36

our morality, which

21:38

morality are we going to choose? Oh,

21:40

that's such a good question. That's an immediate

21:42

problem here. Because

21:45

it's already somewhat aligned

21:48

In that if you go to chat

21:51

GPT or Bing or Claude or whatever

21:53

and ask it moral questions, it will

21:55

give you answers that kind of resonate

21:57

with our intuitions. Your

22:00

question of whose morality? Is.

22:03

A great one if I ask. Chatty be

22:05

the and I have done this. What do

22:07

you think of to man marrying. It

22:09

says it's say. There's nothing wrong

22:12

with it. What do you

22:14

think of a woman getting an abortion? It's

22:16

fine, does nothing wrong with it, but many

22:18

people around the world. It doesn't

22:20

match with their morale. They. Would

22:22

say that that gay marriage is more in

22:25

Iran Is a at a woman having an

22:27

abortion is morally wrong says the first question

22:29

which is whose morality and there's no way

22:31

around it. If is gonna line with your

22:34

morality is going to be a different morality

22:36

than somebody from ah raising a very different

22:38

culture. And Environment. And.

22:41

I think some extent I do, we

22:43

just skirt to promise. Okay, fine are

22:45

more hours. Let's list connected to our

22:47

morality. And.

22:49

Them And then we have various

22:52

problems that arise. It turns out

22:54

to be. A. Very difficult

22:56

to program a machine to be moral

22:58

and not have it you know choose

23:00

to satisfy as a goals instead. So.

23:04

The. Main worry one. A main worry

23:06

about ai is a sort of unintended

23:08

consequences. The standard example I think from

23:10

Nick Bostrom is you ask in a

23:13

I'd just make paper clips as many

23:15

paper clips as possible and then and

23:17

a fraction of a second it feel

23:19

it figures that wealth if it kills

23:21

everybody. And turns everybody the

23:23

paper clips that will satisfy problem. You.

23:26

Don't want to do that or would

23:28

a moral a I stop us since

23:30

act with cinnamon. From. Killing

23:32

billions of sense and creatures very

23:34

painfully for food? Would it would

23:36

intervene? When to stop us

23:38

from some doing war? one

23:41

of the one as a points of

23:43

have some stuff i've written is making

23:45

the argument that maybe we don't want

23:47

moral ai we want obedient as we

23:49

wanted to do what we want and

23:52

we don't want to kill us but

23:54

if it's tomorrow if my tell us

23:56

to stop doing a lot of things

23:58

were doing Can you imagine

24:00

what the military would think of military allies

24:02

which decide to be pacifists or decide, well,

24:04

this is an unjust war. I'm not going

24:06

to shut down the tanks and the airplanes.

24:09

I'm going to lower your security system because this

24:11

is not a war we should be fighting. Or

24:13

just kill our enemy. Yeah. And the

24:16

AI decides what's the enemy. That's right.

24:18

That's right. Maybe the

24:20

AI is very smart and moral and decides,

24:22

you know, we're the baddies. I've

24:26

thought it over and you're it. Yeah,

24:28

yeah. You're the villains. So people

24:30

say they want moral AI. But

24:33

when push comes to shove, I think

24:35

both have a sort of global general

24:37

scale for military and industry and so

24:39

on. We don't want it. And even

24:41

at a personal level, I don't want

24:43

it. What would I do? What

24:45

would I think of tax software that's very

24:47

AI generated and won't let me exaggerate the

24:49

size of my home office? What

24:51

would I think of my

24:54

self-driving car that refuses to drive me

24:56

to a bar because I drink too

24:58

much? Go back home

25:00

and spend time with your family. You're talking

25:02

in your New Yorker article about

25:04

Isaac Asimov anticipating this discussion we're

25:06

having by decades. And

25:09

he had three rules that robots should

25:11

be programmed with. What are

25:13

those rules? How come they're not working? Yeah,

25:16

Asimov was first to struggle with the

25:18

alignment problem. He wrote these wonderful science

25:20

fiction stories, like

25:22

I, Robot, which had these robots in

25:24

them. And he

25:26

assumed correctly that people would worry

25:28

about the robots being well-behaved. So

25:31

he thought up three laws. And

25:34

I'm doing this by heart. This is the

25:36

main idea. The first law is a robot

25:38

should not hurt anybody or kill anybody or,

25:41

through inaction, allow anybody to

25:43

come to harm. So if someone's drowning,

25:45

the robot can't just stand and watch them. Has

25:48

to act to help. The

25:50

second law is a robot

25:52

must obey all instructions

25:54

unless it conflicts with the first law. So

25:57

you ask a robot to clean the room, it'll clean the room. Ask a robot

26:00

to murder your next-door neighbor it won't. And

26:02

the third law is a robot

26:05

should protect itself unless

26:07

it conflicts with the second or

26:09

first law. So if

26:11

somebody tells a robot go do this dangerous

26:13

thing it will do it, but

26:15

otherwise it'll try to stay clear of harm.
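
To make that priority ordering concrete, here is a minimal sketch in Python of the three laws as a filter over candidate actions, with the first law overriding obedience and obedience overriding self-preservation. The Action fields and the two example calls are invented for illustration, and, as the conversation goes on to argue, real morality does not reduce to a checklist like this.

```python
# Toy sketch of Asimov's three laws as a prioritized filter over candidate
# actions. The fields and examples are invented; no real robot works this way.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool          # would carrying this out hurt a person?
    prevents_human_harm: bool  # does it rescue someone from harm?
    ordered_by_human: bool     # was it an instruction from a person?
    endangers_robot: bool      # does it risk damaging the robot?

def permitted(action: Action, someone_is_in_danger: bool) -> bool:
    # First law: never harm a human, and never let one come to harm through inaction.
    if action.harms_human:
        return False
    if someone_is_in_danger and not action.prevents_human_harm:
        return False
    # Second law: obey human instructions, unless that conflicts with the first law.
    if action.ordered_by_human:
        return True
    # Third law: otherwise protect yourself, unless that conflicts with the laws above.
    return not action.endangers_robot

# Obeying a harmless order is allowed; obeying a harmful one is not.
print(permitted(Action("clean the room", False, False, True, False), False))              # True
print(permitted(Action("hurt the next-door neighbor", True, False, True, False), False))  # False
```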

26:17

This is very clever. It

26:20

captures certain ideas. You know you want a

26:22

robot to be obedient but you don't want

26:24

it to be a murder machine. You want

26:26

it to help people. You want it to

26:28

not harm people. And you want it to

26:30

protect itself. It's an expensive piece of

26:33

machinery. You don't want it to just

26:35

walk off a roof for no reason. It's

26:38

really clever. But people

26:41

have looked at this. I'm not the first and

26:43

said but it doesn't really work. And

26:46

of course it would be strange if

26:49

all the morality could be you know

26:52

synopsized in three laws. So for instance

26:54

the first law says a robot

26:56

shouldn't through inaction

26:59

allow anybody to come to harm. But

27:01

if that were really true then if I owned a robot

27:04

it would run through the streets

27:06

of Toronto. You know helping people

27:08

giving food to the hungry helping

27:11

people you know out of

27:13

burning buildings and everything would never never come back. It

27:16

would be like a Superman spending all

27:18

this time helping others. What

27:21

about the prohibition against harm? Well would a robot

27:23

stop me if I would try to swat a

27:25

mosquito? Would

27:28

a robot stop me if I tried to

27:30

buy a hamburger, saying, no,

27:32

you indirectly you're causing suffering to non-human

27:35

animals. It

27:37

turns out there are all these

27:39

subtle moral issues that arise that

27:42

people struggle with and you just can't make

27:44

go away. This is

27:46

even an issue right now not science

27:48

fiction for self-driving cars. So

27:51

self-driving cars often face moral dilemmas.

27:55

What if what if it's on an icy

27:57

road and the brakes don't

27:59

work, and it's about to

28:01

slam into two people. Should

28:03

it swerve and slam into a brick wall

28:05

and kill the driver? Does

28:08

it matter if it was one person? Would it matter if

28:10

it's three people? These are hard moral

28:12

problems, and you can't make them go

28:14

away by just appealing to these general laws. So

28:17

what are we to make of this whole thing? How

28:21

do you feel personally as you face the

28:23

day when you sit at your

28:25

computer and you wonder what

28:27

it's going to turn into in a very

28:29

short time, part of a

28:32

network that's either malevolent or

28:35

beneficial or some unknowable

28:38

combination of both? What

28:42

do you think? What do you think about what

28:44

can be done to make the

28:46

story end well? What

28:49

can you do? What can I do? What can

28:52

ordinary people listening to this do? To

28:56

make it mostly beneficial? My

28:59

short answer is I don't know. I don't know.

29:01

You sort of asked two questions. I

29:03

don't know what's going to happen. And

29:05

it's sort of between an awful extreme and

29:08

a very good extreme. You could just average

29:10

them out and say things will remain the

29:12

same, but unfortunately that's not the way things

29:14

work. I don't know what's going to happen. And

29:17

I don't know what we can do to make

29:19

things happen better. I share

29:21

your skepticism about saying, okay, let's shut down

29:23

all AI research. I don't

29:26

think that's possible and could be counterproductive. I

29:28

do think it makes sense to sort of

29:30

tightly regulate it and tightly watch it. I

29:33

think we should be very

29:35

sensitive to the social

29:37

upheavals that are going to happen

29:40

due to AI. So we're talking about things like

29:42

it deciding to kill us all. But

29:44

a more mundane issue is it's going to put a lot of

29:46

people out of work. A lot.

29:49

And it's funny because other technological advances

29:51

put laborers out of work. This

29:53

is going to put podcasters,

29:56

professors out of work. And

30:02

it's going to be interesting. I think people

30:05

who load trucks, they have a safe job.

30:08

The idea of a robot, they're very far

30:10

from that. People who write magazine articles,

30:13

I don't know. And so

30:15

we have to watch for that and try

30:17

to not necessarily stop it.

30:20

These changes will happen, but

30:23

deal with it. I feel

30:25

sometimes, you know, as

30:28

a concrete answer, we're coming up to an

30:30

election season. I don't think politicians

30:32

on the debate, doing their debates,

30:34

are going to talk enough about AI. I

30:37

think they're going to talk a lot about cultural

30:39

war issues, they're going to talk about foreign policy,

30:41

they're going to talk about budgets. But AI, we

30:44

should treat it as important as it is. It's

30:46

very important and we should treat it as such.

31:00

So that's just the first three shows of next season, and

31:03

there's a lot more to come. They

31:05

range from a voyage by yacht discovering billions

31:07

of ocean inhabitants that may lead to new

31:09

medicines, a fun conversation with

31:12

the husband and wife about why it may not

31:14

be such a great idea to imagine we can

31:16

colonize Mars, including the little problem

31:18

of having babies there. Another

31:20

old friend of Clear and Vivid, Robert Sapolsky, trying

31:23

to persuade you that you don't have free

31:25

will, as well as neuroscientist

31:27

Tali Sharot, telling you why you need to

31:29

shake up your assumptions every once in a

31:31

while. There's an

31:33

exploration of the surprising history of

31:35

punctuation and Tom

31:38

Hanks. And by the way, that's the real

31:40

Tom Hanks, not just a crummy deepfake.

31:43

See you next time. This

31:51

has been Clear and Vivid. At least I hope so.

31:55

My thanks to the sponsor of this podcast and

31:57

to all of you who support our show on

31:59

Patreon. You

32:01

keep Clear and Vivid up and running. And

32:04

after we pay expenses, whatever is left

32:06

over goes to the Alda Center for

32:08

Communicating Science at Stony Brook University. So

32:11

your support is contributing to the better

32:13

communication of science. We're very

32:15

grateful. For

32:18

more details about Clear and Vivid and to

32:20

sign up for my newsletter, please

32:23

visit alanalda.com and

32:25

you can also find us on Facebook

32:27

and Instagram at Clear and Vivid. Thanks

32:30

for listening. Bye-bye. Shopify

32:38

is the global commerce platform that helps

32:40

you sell at every stage of your

32:42

business. With the internet's best converting checkout,

32:44

36% better on average

32:46

compared to other leading commerce platforms,

32:49

Shopify helps you turn browsers into

32:51

buyers. In fact, Shopify powers 10%

32:54

of all e-commerce in the

32:57

US. Sign up for a

32:59

$1 per month trial period

33:01

at shopify.com/podcastfree, all lowercase,

33:04

shopify.com/podcastfree, shopify.com slash podcast

33:06

free.
