Episode 6 – Shut it down?

Released Thursday, 21st March 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00

This is The Guardian. Today

0:11

on Black Box, two

0:13

visions for the future of AI. Why

0:17

humanity may be doomed

0:19

and why that may be

0:22

an overreaction. Finding

0:29

your perfect home was hard, but thanks

0:31

to Burrow, furnishing it has never been

0:33

easier. Burrow's easy to assemble

0:35

modular sofas and sectionals are made

0:38

from premium durable materials, including stain

0:40

and scratch-resistant fabrics. Plus

0:45

every single Burrow order ships free right to

0:47

your door. Right now, get

0:49

15% off your first order at

0:52

burrow.com/Acast. That's 15% off

0:55

at burrow.com/Acast. This

1:03

is a story about the end of

1:05

the world, and it starts with

1:08

a joke. About

1:12

two years ago, the scientist Geoffrey Hinton,

1:15

one of the godfathers of AI,

1:17

was working at Google, tinkering with

1:19

a program most people haven't heard

1:21

of. There was a model in

1:23

Google called Palm, P-A-L-M. It

1:26

was basically a Google version of

1:28

ChatGPT, never released to the public.

1:30

It was just research. And

1:33

Hinton was checking it out when it did something

1:35

that terrified him. Palm could explain

1:37

why jokes were funny. Not all jokes, but quite a

1:40

few of them. Here's one of the

1:42

jokes it could explain. It's someone saying,

1:45

I was going to fly in to visit my family on

1:47

April 6th. And my

1:49

mum said, oh great, your stepdad's poetry reading

1:51

is that night. So now, I'm

1:53

flying in April 7th. I

1:56

know, not a very good joke.

1:59

But the point is, when you asked Palm

2:01

why this joke was funny, it

2:03

could tell you. The joke is that

2:05

the speaker's mother is trying to get them

2:07

to go to their stepdad's poetry reading, but the

2:10

speaker doesn't want to go, so they are

2:12

changing their flight to the day after the

2:14

poetry reading. To

2:16

Hinton, this was

2:18

really frightening. He has a

2:20

really good sense of humor

2:22

and to him that's something

2:24

innately human. To get a joke,

2:26

you need to understand so many

2:29

nuances of language, human nature, how

2:31

people think. He told himself, if

2:34

these things he helped to create could get a joke,

2:36

they'd have crossed the barrier. I'd

2:38

always sort of just for myself used

2:40

that as a criterion of whether these

2:42

things were really getting to human levels

2:45

of understanding, that they didn't

2:47

just understand language literally, they understood

2:49

why a joke was funny. So

2:51

we maybe were much closer to human

2:53

level intelligence than I thought. Maybe we're not

2:55

a hundred years away, maybe only

2:57

five to twenty years away. I

3:00

was always worried about

3:02

things like fake news, corrupting

3:05

elections, surveillance, things

3:07

like battle robots, things

3:09

like AI replacing jobs. But

3:12

that's when I got really scared about

3:14

the existential threat. In

3:17

that moment, he realised that

3:19

the technology he helped bring

3:21

to life was becoming catastrophically smart,

3:24

getting smarter than he

3:26

ever expected. And

3:29

that was going to be a problem.

3:34

They will get a lot smarter than us, and then they will take over.

3:41

Geoffrey Hinton was late to this

3:43

party. Someone had gotten here

3:45

much earlier, two decades earlier, and

3:48

tried to warn the world. The

3:53

first episode of Black Box was about

3:55

the godfather of AI, the man

3:58

who spent his career trying to

4:00

bring machines to life. This

4:03

final episode is about Hinton's

4:05

shadow, the godfather of AI

4:07

Doom, a guy who

4:09

spent his entire career trying to

4:12

kill this technology, before he says

4:14

it kills us all, and

4:17

who now thinks it might just be

4:19

too late. From The

4:21

Guardian, I'm Michael Safi. This is

4:24

Black Box. By

4:31

the year 2030, human computers

4:34

can carry out the same amount

4:36

of functions as an actual human

4:38

brain. But

4:41

humanity is accelerating into a future that

4:43

few can predict. Well, is artificial intelligence

4:45

out of control? Open the pod bay

4:47

doors, please, HAL. You've spoken out

4:49

saying that AI could figure out a

4:51

way to kill humans? How

4:54

could it kill humans? I'm sorry, Dave. I'm

4:57

afraid I can't do that. We've

4:59

come quite far. We

5:01

need to rethink. We need to rethink.

5:12

Eliezer Yudkowsky grew up in

5:14

an Orthodox Jewish household in California.

5:17

He never went to school. He was educated at

5:19

home. I remember being pretty

5:22

studious, much more wrapped

5:25

up in my parents'

5:27

book collection than the

5:30

rest of life, really. But

5:32

the kinds of things Yudkowsky was reading about as

5:34

a young kid were, I would guess, pretty different

5:36

from the kinds of things you and I were

5:39

reading at that age. Nuclear

5:41

power, space travel, asteroid

5:43

mining, the idea that the

5:46

solar system could support a lot more than

5:48

the number of people currently on Earth. How

5:50

old were you when you were getting into that book? I

5:53

think nine years old. Okay, that's pretty

5:55

young to be getting into those sorts of topics.

5:57

I mean, I think I turned out okay. By

6:00

11, he'd moved on to

6:02

even more serious stuff. Ideas like,

6:05

can you scan an entire human

6:07

brain and move the person into

6:09

a computer? Can you build

6:12

artificial intelligence? And this, he

6:14

realised, this is what he was into. So from

6:16

16, I knew that was what I was going

6:18

to be spending my life doing. Yudkowsky

6:21

was coming of age in the late 90s,

6:23

just as the internet was taking off, and

6:25

spending most of his time arguing with people

6:27

on online forums and email threads. And

6:30

that's when he first started to get noticed. He

6:32

mostly emerged from the internet, from the

6:35

very early internet. Tom Chivers

6:37

is a journalist. He's written a book about

6:39

Yudkowsky and his ideas. And there

6:41

was this one essay Yudkowsky wrote when he was

6:43

just 17 years old that caught

6:45

a lot of attention. It was full

6:48

of teenage angst about the state of the world.

6:50

I've got the quote here, like, I

6:52

have had it. I've had it with

6:54

crack houses, dictatorships, torture chambers, disease,

6:56

old age, spinal paralysis and world

6:59

hunger. I've had it with a planetary death rate

7:01

of 150,000 sentient beings per day. I've

7:04

had it with this planet. I've had it with mortality. None

7:07

of this is necessary. We can end this. And

7:10

he said, there was a solution. One

7:12

day soon, humanity would create

7:15

super intelligent AI. His

7:21

point is, if you build something really, really

7:23

clever, then it can

7:25

solve these problems. The problems we have are the

7:28

problems of humans not being clever enough to solve

7:30

their own problems. That is

7:32

what I hoped for. Everyone lives happily

7:34

ever after, but literally ever after.

7:37

Up until the universe ends, and I wasn't too sure

7:39

we couldn't find a way to get out of that. Super

7:43

intelligent AI isn't ChatGPT.

7:45

It's not AI that

7:47

can recognise a face or see cancer.

7:50

It's something that can do everything we

7:52

can. Think faster than us, knows

7:55

more, knows everything every human

7:57

who's ever lived has known.

8:00

It would literally be beyond our comprehension,

8:03

like an ant trying to understand an

8:05

iPhone. A human being thinks

8:07

for a hundred years and then dies.

8:10

What if you have thought for ten thousand years? What if

8:12

you have read everything that's ever been written? And

8:15

then some. This is the

8:18

notion of super intelligence. Yudkowsky,

8:24

again just 17 years old,

8:26

was writing these essays and

8:28

discussing them online with philosophers,

8:30

professors, big time thinkers, and

8:32

holding his own. He

8:35

started gaining a following and persuaded donors

8:38

to give him money to start an

8:40

institute dedicated to solving all the world's

8:42

problems by bringing super intelligence to life.

8:46

Over the next six years he's doing this

8:48

work and begins to engage with another school

8:50

of thought. One that

8:52

says, actually super intelligence wouldn't

8:54

save humanity, it wouldn't usher

8:56

in some utopia. It

8:59

would be dangerous. And

9:01

these arguments that at first he

9:03

dismissed, now they start to nag

9:05

at him. As a young

9:07

lad, I thought that if

9:09

you built something very smart, it

9:12

would automatically be very wise and

9:14

thence very moral. But what

9:17

if super intelligence isn't

9:20

automatically nice? And

9:23

as a result of staring at that issue,

9:25

my entire false belief that very

9:28

smart and capable systems are automatically nice

9:32

started to fall apart. In

9:34

other words, the AI he thought

9:36

would save us all was much

9:38

more likely, in fact logically certain,

9:41

to kill everybody instead. So

9:44

in 2003 Yudkowsky decides he's going

9:46

to turn his life's work around.

9:48

He's no longer going to focus

9:50

on trying to bring super intelligent

9:53

AI into existence. Now

9:55

he's going to try to convince the world

9:57

this stuff shouldn't exist. He writes a collection

10:00

of essays and blog posts that

10:02

are collectively longer than War and

10:04

Peace, they're just short of the

10:06

length of the Bible, and

10:09

they come to be called The Sequences. The

10:14

core of Yudkowsky's message wasn't just

10:16

that super intelligent AI won't be

10:18

nice. It won't be nice,

10:21

but it won't be cruel or evil either.

10:28

He said, you need

10:30

to forget everything you

10:32

think you know about

10:34

killer AI. Forget about

10:36

the Terminator. This

10:38

won't be a scenario where the AI

10:41

becomes conscious, decides it doesn't want to

10:43

be ruled over by humans and launches

10:45

a war against us. Great

10:47

movies, but apparently very

10:49

inaccurate depiction of the

10:51

AI-pocalypse. If

10:54

you're going to use a film for the analogy, the film you should use

10:56

is Disney's Fantasia. The Mickey

10:58

Mouse movie. Yeah, the

11:01

Mickey Mouse movie. If

11:06

you haven't seen Fantasia, it's this old Disney

11:08

movie. Mickey,

11:12

as in the mouse, is some

11:14

kind of trainee wizard, and one of his

11:17

jobs is to fill a cauldron with water.

11:20

But that's hard work, and it's boring, it takes

11:22

a long time. So Mickey enchants

11:24

a broom and asks it to take

11:26

some buckets and fill up the cauldron

11:28

instead. And then the broom

11:30

grows little arms, it grabs

11:33

a bucket in each arm and toddles

11:35

off to the well. A few hours

11:37

later, the castle is completely flooded. The

11:40

broom had followed Mickey's instructions

11:42

perfectly, filled up the cauldron, but

11:45

once it was full, it just kept

11:47

going. For

11:52

Yudkowsky, the key thing here is

11:54

that the broom wasn't evil. The

11:56

broom had no opinions, it was just doing

11:59

what Mickey asked. What

12:01

Mickey asked was the problem. When

12:03

he said fill up the cauldron,

12:05

he expected the broom to understand

12:07

all of these unspoken nuances. Like

12:10

at some point you'll need to

12:12

stop. If the room starts flooding

12:14

there's a problem. But

12:16

why would a broom know that? The broom

12:19

just does exactly what you tell it

12:21

and so would an AI. It just

12:24

seems mad, but that's because we have human

12:26

brains with human values and

12:28

human philosophies and all

12:31

these things which it simply doesn't have. So it's

12:33

sort of our model of

12:35

human wisdom does not apply to it. The

12:37

sort of fundamental idea of AI risk is that

12:41

the AI will do exactly what you tell it to do and

12:43

that it can only do exactly what you tell it to do

12:46

but what you tell it to do is

12:48

not what you want it to do. The

12:50

point is you can't just tell an AI

12:52

what you want it to do. You need

12:55

to tell it all the things it shouldn't

12:57

do to achieve that goal and

12:59

things can very quickly get out of control.

13:02

Let's take another domestic chore. Say

13:05

you build a super smart all capable

13:07

AI robot and say clean

13:09

my house. Simple. And

13:12

at first it does. It sweeps up, it

13:14

mops the floors and then

13:16

it decides an even better way to achieve

13:18

this goal would be to have

13:20

thousands of versions of itself spread

13:22

out across the whole house cleaning

13:24

all day every day. So

13:27

much cleaning that eventually you can't even

13:29

live in your house anymore because every

13:32

surface is being cleaned all the time.

13:35

You just have to leave, lock the door and

13:37

find somewhere else to live. That's

13:39

not the end of it. At some point

13:41

this robot realizes at this rate the world

13:44

is going to run out of cleaning products

13:46

and if that happens it can't achieve its

13:48

goal. So it goes and

13:50

strips the planet's resources and puts

13:52

them all into creating a nearly

13:54

endless supply of Mr. Muscle and

13:57

the last evidence of human civilization

14:00

on this earth, the

14:02

last thing standing is your

14:04

astonishingly clean house. I

14:14

know, this all sounds a bit mad, but

14:17

the idea here is, it's really

14:19

hard to align what you want

14:21

with what AI actually does. And

14:24

once you create something incredibly

14:26

smart, incredibly powerful,

14:29

you'd only get one go at getting it

14:31

right. So

14:34

maybe you're thinking, okay, that's

14:36

not great, but why don't we

14:39

just give it an off switch? If

14:41

the AI starts to get a bit crazy, pull

14:43

the plug, turn it off. But

14:45

Eliezer and others say, no, that

14:48

wouldn't work. It is very hard to fulfill

14:50

your goals if you're switched off. So if

14:53

the AI has reason to think that you will be

14:55

coming to switch it off, then it will take steps

14:57

to avoid that. If I decided that you were dangerous,

15:00

and I should switch you off, you would

15:02

try and resist my doing that, I

15:04

imagine. And for the same

15:07

reasons, an AI would try and

15:09

resist me switching it off if I thought it was going to be dangerous, and

15:11

it would be better at doing it than you are. The

15:13

point is here, if you're in a position where this thing's

15:15

trying to outsmart you, it's going to outsmart

15:18

you. I mean, by definition, it's super intelligent. Yeah,

15:20

yeah, that's exactly it. Maybe

15:23

its next step is to make copies of itself around

15:25

the internet. If it's really, really powerful and has control

15:27

of nuclear codes, maybe its next step is to nuke

15:29

everybody. Okay,

15:31

so you can't count on being able

15:33

to turn it off. But

15:36

everything Tom just said assumes that the

15:38

AI is somehow connected to the real

15:40

world. It wouldn't be able

15:42

to copy itself to the internet or access

15:44

the nuclear codes. If you just kind of

15:46

stick it in a box, put

15:49

this Oracle-like super intelligence on

15:51

a single computer, lock

15:53

it in a room, and disconnect it from the

15:55

internet. And if you wanted its help in

15:57

solving a problem, you just go into the room and have

15:59

a conversation with it. Sorted.

16:02

Right? Apparently not.

16:04

There's a danger in just talking

16:06

to a thing that understands you

16:08

far better than you understand yourself. His

16:12

argument here boils down to one question.

16:14

Do you really think the

16:17

smartest entity that's ever existed

16:19

couldn't convince you, manipulate

16:22

you into setting it

16:24

free in the world? Really.

16:26

It's not like there's a sort

16:28

of mechanical lifeless mind trapped in

16:30

a computer. It's like there's

16:32

an alien civilization that thinks a thousand times

16:35

faster than Earth trapped inside a

16:37

computer. If you were

16:39

in those shoes, how would you get into the

16:41

real world? In any case,

16:43

Eliezer says, the boxing is probably a

16:45

moot point because that's not how things

16:48

are going. OpenAI does

16:50

not keep GPT-4 in a box unconnected

16:52

to the internet. They trained it

16:54

connected to the internet from the very start because

16:56

the computers that train things are connected to the

16:58

internet. And then what

17:01

was profitable was to connect GPT

17:03

to the internet and sell it for $20 a

17:05

month for everyone. You're describing

17:07

a world where humanity

17:10

can somehow just decide

17:12

en masse. This technology is so

17:15

dangerous that needs to be kept in a tiny

17:17

box. We do not have that

17:19

capability at the moment. We should, but

17:21

we don't. I

17:23

don't know. Maybe all of this sounds

17:25

a bit far-fetched. But Eliezer says, if

17:27

you sit down and think about it, you'll

17:30

see it's all rational. And

17:33

over the years, a lot of people

17:35

have taken Eliezer's ideas about how

17:37

to think rationally about AI, how

17:39

to think rationally about every problem

17:41

in the world very seriously. They

17:44

spawned a whole movement. People

17:47

who call themselves rationalists, devoted to

17:49

the ideas laid out on his

17:51

website. It was read by

17:53

people very much in Silicon

17:56

Valley or in computer

17:58

science, names who were to

18:00

be involved in DeepMind or who wrote

18:02

AI textbooks and things like that. It

18:05

was an influential thing. One thing that

18:07

I find amazing about Eliezer's story is that

18:10

when he was outlining these ideas, warning

18:12

the world about AI, AI

18:14

itself was a bit of a joke,

18:17

barely able to tell the difference between a dog

18:19

and a cat. But he

18:22

was absolutely convinced and

18:24

worried that much smarter AI

18:26

was coming. And

18:29

on that at least, he turned

18:31

out to be exactly right. There's

18:34

no person who made these

18:36

captions. It was machine made.

18:38

Yes, it's true. Gradually.

18:41

And then rapidly. A computer program has

18:44

just beaten the 9-dan professional. With

18:47

Yudkowsky warning at every step,

18:50

slow down. This is bad news.

18:52

That you can actually screw up

18:55

and the entire human species stands

18:57

at risk. Geoffrey Hinton's

18:59

vision came to life. Now

19:01

spoken conversations can be

19:03

translated between two languages almost instantly.

19:06

Neural networks started to work. Alexa,

19:08

what do you do? I

19:10

can play music, answer questions, get the

19:13

news on weather. Powerful AI

19:15

systems capable of deep learning

19:17

and like the human brain,

19:19

too complex for us to

19:21

understand. Black boxes. While

19:25

Eliezer watched on in horror.

19:33

Eliezer, in the years that you were

19:36

writing about these topics, gaining followers and

19:38

influence, we also started

19:40

to see significant progress when it

19:42

came to AI, especially with neural

19:44

networks. What was it

19:46

like for you to watch the

19:48

field start to progress? Pretty

19:51

horrible. Deep learning

19:53

is pretty much a worst case scenario

19:55

from the perspective of being able to

19:57

align or even understand AI systems. That

20:00

was a death sentence back when I

20:02

was getting into this field. Then,

20:04

nobody knew that it was going to be giant

20:07

inscrutable matrices that were going to win all the

20:09

prizes But there was still a

20:11

hope back then that by the time you

20:13

got to something like ChatGPT, you'd

20:15

have some idea what it was thinking in there, and

20:17

that is not what we have. What we have is AI

20:20

that is, you know, barely a

20:22

step above how natural selection built humans, completely

20:24

blind. We do not have AIs

20:26

that are built. We have AIs that are grown. And then

20:30

at that point it's hard to control

20:32

because it's hard to understand. A bit

20:34

over a year ago, all of

20:36

a sudden, the world started

20:39

talking about AI. And right now

20:41

ChatGPT is rattling the AI

20:43

world. ChatGPT arrived.

20:45

Get ready to laugh, cry, maybe

20:47

even throw your computer at the

20:49

window, because the latest AI technology

20:51

ChatGPT is here to shake

20:54

things up. And just like that, AI that could

20:56

have a conversation, pass the

20:58

bar exam, discuss the plot

21:00

of any book. Artificial

21:02

intelligence that actually seemed intelligent. It

21:05

is writing better than most of

21:07

my students write at this point

21:09

you know, college freshmen. And as

21:11

people started to understand we had

21:14

entered a new age, Yudkowsky

21:17

kept trying to raise the alarm, now to a

21:19

much bigger audience. I tried to get this

21:21

very important project started early so we'd be

21:24

in less of a drastic rush later. I

21:27

consider myself to have failed. Nobody

21:31

understands how. Modern audiences didn't

21:33

always know what to make

21:35

of his apocalyptic message. But

21:37

that they would... why would they want to go

21:39

in that direction? Like

21:42

that AIs don't have our

21:45

feelings of sort of envy and jealousy

21:47

and anger and... But he had one

21:49

high-profile person who last

21:51

May announced he had come over

21:53

to Yudkowsky's way of thinking. Geoffrey

21:56

Hinton says he wants to be free

21:58

to talk about the possible dangers of

22:00

technology that he helped to develop. Not

22:03

about everything, but about the bigger

22:05

point that AI could

22:08

be, and in fact was likely

22:10

to be, extremely dangerous. Given the

22:12

rate of progress, we expect things

22:15

to get better quite fast, so

22:18

we need to worry about that. And in

22:20

the months since then, fears around

22:22

AI have become just as prominent

22:24

as hopes of what it could

22:26

one day do. We face a

22:28

genuine inflection point in history. One

22:31

of those moments where the decisions we make in the

22:33

very near term are going

22:35

to set the course for the next decades. The

22:39

Biden White House in October issued a

22:42

major executive order that will force the

22:44

biggest AI companies to share safety data

22:46

with the government before they release their

22:48

models to the public. A

22:51

few months ago, Rishi Sunak gathered global

22:53

leaders at a summit dedicated to the

22:55

existential risks of AI. Get

22:58

this wrong, an AI could

23:00

make it easier to build chemical or biological

23:02

weapons. Terrorists

23:05

could use AI to spread fear and destruction

23:07

on an even greater scale. Criminals

23:10

could exploit AI for cyber attacks,

23:13

disinformation, fraud or even child

23:15

sexual abuse. And

23:17

in the most unlikely but extreme cases,

23:19

there is even the risk that humanity could

23:22

lose control of AI completely. Last

23:30

year was this like watershed year

23:32

in AI safety. The things you'd

23:34

been talking about for two decades

23:36

suddenly became mainstream. Was

23:38

that gratifying to you as someone who'd

23:41

advocated for those issues or not gratifying

23:43

at all? It is not

23:45

particularly gratifying in the way that I think you're

23:47

thinking. People

23:51

who need the gratification of seeing

23:53

others agree with them will

23:55

not go into a field like

23:58

AI alignment and work on it while being

24:00

largely ignored for 20 years. They

24:02

will pick something that delivers more of that

24:04

gratification earlier. I have a low

24:06

need for that form of gratification. It

24:09

was, in a way, humbling. Because

24:11

I had written off humanity's response to this

24:13

issue. I thought that everybody was going to

24:16

just sort of never think about it or

24:18

talk about it sensibly at all, the way

24:20

that people in Silicon Valley and the companies

24:22

building these things had never thought about

24:24

it at all. I thought

24:27

that humanity was going to do worse

24:29

than Silicon Valley. Humanity did better. And

24:32

when you're cynical and you're wrong, I think that's meant to

24:34

be a bit humbling. For

24:36

Yudkowsky, this moment that his

24:38

ideas broke through to the mainstream, it

24:41

was bittersweet. Because he

24:43

had come to the conclusion that by now, it

24:45

was too late. He thinks

24:48

we've nearly run out of time. He

24:51

says it's hard to say when AI

24:53

might reach a level, when it's as

24:55

smart as a human. I have

24:57

a sense that our current remaining timeline looks more like

24:59

five years than 50 years. But

25:01

once it does reach that point, he

25:04

says it won't stay there for long. Superintelligence,

25:09

at the very least, should be better than any

25:11

human at building the next

25:13

generation of AI. And

25:16

then that improved version

25:18

will improve itself again,

25:20

and again, and again, and again. An

25:25

intelligence explosion. Yudkowsky

25:30

says we can avoid this.

25:32

There is a way. And that

25:34

is shut it down. Pause

25:38

all AI development right

25:40

now. Sign treaties, pass

25:42

laws, drop bombs on rogue data centers

25:44

if you need to. Whatever

25:47

it takes. Do

25:51

you have any hope that we

25:53

will survive this, that

25:55

we will develop super

25:57

intelligent AI in a way that does not

26:00

result in the worst outcome

26:02

for humanity. Yeah, that we

26:04

back off and don't do it. And

26:07

the time to do that is running out

26:09

as we build more chip factories, as the

26:11

technology proliferates. But if

26:14

we woke up tomorrow and just like decided

26:16

we'd rather not die this way, we

26:18

could do it. You say nobody's allowed

26:20

to train something more powerful than GPT-4. Humanity

26:24

could decide not to die and it would not be that hard.

26:27

The difficulty is that people don't realize

26:29

they're about to die. If

26:31

people knew that, we could just

26:34

stop. It

26:36

wouldn't be trivial, but we could. Coming

26:51

up, don't panic. There are

26:53

other views of how this might all

26:55

play out. Maybe rather

26:57

than AI killing us all, it

27:00

could be really good. Finding

27:16

your perfect home was hard, but thanks

27:18

to Burrow, furnishing it has never been

27:20

easier. Burrow's easy to assemble modular sofas

27:23

and sectionals are made from premium durable

27:25

materials, including stain and scratch-resistant fabrics. So

27:27

they're not just comfortable and stylish, they're

27:29

built to last. Plus every single Burrow

27:32

order ships free right to your door.

27:35

Right now get 15% off

27:37

your first order at burrow.com/Acast.

27:39

That's 15% off at

27:42

burrow.com/Acast. Introducing

27:44

Wondersuite from bluehost.com. Website

27:47

creation is hard, but now with Bluehost,

27:49

you can answer a few simple questions

27:51

about your business and get a unique

27:54

WordPress website or store right away. From

27:56

there, you can customize your design, colors,

27:59

and content. And Bluehost

28:01

automatically helps you get found

28:03

in search engines like Google

28:05

and Bing. From step-by-step guidance

28:07

to suggested plugins, Bluehost makes

28:09

WordPress wonderful for everyone. Go

28:12

to bluehost.com/Wondersuite. After

28:19

speaking to Eliezer Yudkowsky,

28:22

we were terrified. And

28:26

throughout this series, whenever we've hit a brick

28:28

wall, come across some idea

28:31

we can't untangle. There's

28:33

one person we've gone to for

28:35

advice. The Guardian's

28:37

UK technology editor, Alex Hern.

28:41

For most of the time we've been working on this

28:43

series, he's been on paternity leave.

28:46

But we told him there's something we really

28:48

need to ask him. Are

28:51

we all going to die? No.

28:57

That's not really true. Yeah, he says,

28:59

well, I say not necessarily. It's

29:02

not the total dismissal that you

29:04

might hope. I

29:06

think his fears are worth taking

29:08

seriously. I don't

29:10

think it is imminent, and I don't think it's guaranteed.

29:14

So where do you

29:16

stop short of going all

29:19

the way with him? What is it that gives

29:21

you hope that maybe this scenario he's laying out

29:23

is not exactly how it's going to play out?

29:26

I think the stuff that stops

29:28

me short is just that ultimately,

29:30

for the everyone's going to die

29:32

end, for the proper doomerism argument,

29:35

there is still an element of, if

29:38

I'm being mean, I say, magical thinking. And

29:41

sometimes those things involve like it will

29:43

engineer a nanovirus and seize 3D printers

29:45

around the world and distribute

29:49

an AI engineered deathbot

29:52

that takes out the whole of

29:54

the human race. And that sort of thing

29:57

isn't possible. Based on what

30:00

we know about biology and

30:02

physics, there are, we

30:05

think, limits to what you can achieve

30:07

through pure reason alone. And

30:10

that is a very real

30:12

limit on what superintelligence can

30:14

do out of nowhere. I

30:17

think there are lesser versions of the

30:19

existential risks that I take more seriously.

30:22

The big one is this idea of

30:24

it being so good that

30:27

we enslave ourselves to its power

30:29

voluntarily, that we wake up in 10 or 20

30:31

years time and we realize

30:33

that all meaningful power in

30:36

the world has been effectively voluntarily devolved

30:38

to one or a number of

30:40

AI systems, because if you

30:43

as the prime minister of a small or

30:45

medium-sized country don't let the AI tell you

30:47

exactly what to do, then your country's outcompeted.

30:49

If you as the CEO of a business

30:51

don't hand over almost all your

30:53

decision-making power to an AI, then your business

30:56

is outcompeted. Like, that's

30:58

not an existential risk in

31:00

the classical sense of things, but it's

31:02

what I worry about if these systems

31:04

get too good. So

31:06

for you, in the short term, the

31:09

concern is less the end of humanity

31:11

in some extraordinary explosion

31:13

of intelligence. It's

31:15

the threats that seem more

31:17

foreseeable. AI taking control

31:19

of our workplaces, of our personal

31:22

relationships, dictating the way that we

31:24

live our lives. And

31:26

even taking is perhaps the

31:28

wrong word, right? Being handed control

31:30

of that, because if you create

31:33

a tool that is that good at

31:36

doing the jobs it's handed, then not

31:38

using that tool becomes ridiculous. Alex

31:41

told me that a lot of

31:43

this kind of conversation about whether AI

31:45

will be nice or mean, good

31:48

or bad, it's too simple.

31:51

It all hinges on another

31:53

question, and that's how

31:55

intelligent will this stuff end

31:57

up being? Whether...

32:00

AI will be super smart, all

32:02

powerful, the stuff of

32:05

Yudkowsky's nightmares. Or

32:07

maybe something less than that. What

32:10

we get are tools that

32:12

are tremendously useful, perhaps even

32:15

transformatively useful, but

32:17

not changing society

32:20

in a way that is

32:22

different from how the railroad,

32:24

internet and electricity change society.

32:28

Just noting here, even the

32:30

lower end scenario for Alex is

32:32

that AI is as transformative as

32:35

electricity. And that's bad enough that

32:38

we destroy our healthy media

32:40

ecosystem. That we destroy any

32:42

sense that you can trust anything

32:44

that you hear that isn't from the mouth of

32:46

someone you already trust. That's

32:49

one where we have mass

32:51

unemployment as huge swathes of

32:53

easily automatable jobs

32:55

get removed or cut down in importance. That's

32:59

a world that is the industrial

33:02

revolution, but more so

33:04

and faster. But lots

33:06

of people argue that as well as

33:08

causing upheaval, this kind of AI, powerful

33:11

but not super intelligent, could

33:14

be a really good thing for the world. Actually

33:18

that's where I

33:20

sit. That's the world where things

33:23

get slightly better, faster than they

33:25

used to, forever. And

33:28

it is because that's how the

33:30

world has worked. In

33:32

general, the path of

33:36

the last two, three hundred years of humanity

33:38

has been that while there

33:41

may be transition

33:44

problems, that life has gotten,

33:46

continues to get overall better

33:49

for lots and lots of people based in

33:51

large part on technological improvement. I

33:54

wouldn't want to live in the

33:56

Britain of 50 years ago. Just

34:00

differences in things like child mortality. My

34:03

daughter has Down Syndrome. The difference in

34:05

the quality of life for her now versus 50 years

34:07

ago is unspeakable.

34:11

There's just wild improvement.

34:15

And I think that has to be your default for

34:18

what technology can do. What

34:21

kind of world do you think one

34:23

that is dominated by this kind of

34:25

powerful but not super intelligent AI will

34:27

actually look like? I

34:30

think you can

34:33

start looking through almost every segment of

34:36

society and go, okay,

34:39

well, what happens if everyone who

34:43

works here is able

34:45

to think to the best of their ability for

34:51

eight hours a day with a full team of personal

34:53

assistants and

34:56

research assistants beside them, right? There's

34:58

a world where scientific research,

35:00

say, gets

35:02

supercharged. If you have

35:05

every biochemist, every geneticist,

35:09

every industrial chemist, all of them working

35:12

through the absolute top of their ability, then

35:16

suddenly perhaps science becomes easier or

35:20

faster or more efficient. And it's not just

35:23

science. It's every segment of society where you

35:25

could have that. The

35:27

world is full of people not

35:31

working to their full potential

35:33

because we are tired, we are unsupported, we

35:36

are fuzzy headed. Alex

35:38

reckons in this scenario, AI would touch just

35:41

about every aspect of the way we live

35:44

and just make it better. In

35:47

education, a personalized tutor for

35:50

every child. On the environment, finding ways to make

35:52

renewable energy even

35:54

more efficient, maybe even discovering new energy sources like nuclear

36:00

fusion, a source of unlimited,

36:02

clean energy. On

36:04

health, finding new antibiotics,

36:06

predicting the structure of

36:08

every known protein, both

36:11

of which, by the way, AI has

36:13

already done. And

36:15

then the thing Alex says we should

36:17

really keep an eye on this year,

36:19

which is the fusion of AI and

36:21

robotics. Just days before we

36:23

recorded this, a company called Figure published

36:26

a video of an open

36:29

AI powered humanoid robot that it's been

36:31

working on. Hey, Figure 01, what do you

36:33

see right now? I

36:36

see a red apple on a plate in

36:38

the center of the table, a drying rack

36:41

with cups and a plate, and you standing

36:43

nearby with your hand on the table. Great,

36:46

can I have something to eat? It

36:48

is a machine

36:50

that can be instructed

36:53

with normal words to do

36:56

everyday domestic tasks, and

36:58

explain what it's doing and why it's doing

37:00

it. The dishes on the table like that

37:02

plate and cup are likely to go into

37:05

the drying rack next. Great,

37:07

can you put them in? I mean, come

37:09

on, it is science

37:11

fiction. And it's

37:13

easy to see the ways that could lead to

37:15

a terrifying world, but I think, I

37:19

don't think it's wildly optimistic

37:23

to think that that

37:26

would be nice to have, wouldn't it? Alright,

37:28

so how do you think you did? I

37:31

think I did pretty well. The

37:33

apple found its new owner, the trash is

37:35

gone, and the tableware is right where it

37:37

belongs. If

37:39

you can start getting help

37:42

from robotic systems, then

37:46

there is an optimistic world there, one where

37:48

people like my great grandmother, who

37:51

don't need round-the-clock

37:53

care to survive,

37:56

but need round-the-clock care to thrive,

37:59

she's not going to be... stuck alone deprived

38:01

of her one human contact,

38:03

she's still going to be going to see

38:05

her friends, to play bridge, to go on

38:07

social calls, but she's also no longer going

38:09

to be at risk of forgetting to eat. She's no

38:12

longer going to be at risk

38:14

of falling over and having no

38:17

ability to summon help for a

38:19

day or two. And that

38:21

being opened up to everyone who needs it,

38:25

I'm really optimistic that that

38:28

would be the sort of thing that

38:31

can transformatively change lives for literally

38:33

millions of people. And

38:36

that's one use of one technology

38:39

that this revolution is

38:43

creating, right? I genuinely

38:45

think even with

38:48

the technology that exists today, it

38:50

is already getting hard to come

38:52

up with people

38:55

whose lives couldn't in some way be helped

38:57

by this. And that's

39:01

that's a staggering precipice to be standing

39:03

on the edge of. How

39:09

much power do you think we have as

39:11

people, as citizens, over

39:15

the way AI ends up shaping

39:18

our lives? How much of it is down to politics

39:21

and the way that we choose to

39:23

order our societies? I think we

39:25

have a lot of power. I think ultimately

39:27

we can order society and

39:29

go, you know what, actually, we're not going

39:31

to allow ubiquitous

39:35

facial recognition and behavioral monitoring. We're

39:37

not going to become an authoritarian

39:39

society the EU has

39:41

just passed laws that allow it

39:44

to poke and prod frontier AI

39:46

models independently of their developers.

39:48

That's huge. That means that there will be

39:50

an independent

39:52

safety check. You no longer have

39:54

to trust that OpenAI

39:57

or Google or Microsoft are doing

39:59

this stuff well. You

40:01

have a government putting a second pair of eyes on

40:04

things. Alex,

40:08

one of the things this series has been about is

40:10

the realization that we've crossed from a before

40:13

into an after, into this

40:16

world where machines can learn

40:18

on their own, and that

40:20

is an extraordinary and profound

40:22

transformation. For people who

40:24

are trying to understand what it all means

40:26

for them, where they might fit into it,

40:29

what would you tell them? I

40:33

would say if you

40:35

are dipping your toe into the world, sort

40:38

of persist. It's

40:41

an incredible thing that has been

40:43

created, and it's

40:45

going to get better. I

40:48

guess there is a bit of fatalism in that if you don't

40:50

embrace what it can do

40:52

for you, then you're just going to be affected

40:54

by what it's doing for others. The

40:58

only way you can seize agency in this is

41:00

by seizing it, and not

41:02

just burying your head in the sand and hoping

41:05

the world won't change. Because

41:08

I think it's going to. Thanks

41:30

very much to Alex Hern, the Guardian's

41:32

UK technology editor. You can read more

41:34

from him by signing up for his

41:36

newsletter called Techscape. Thanks

41:38

very much also to Tom Chivers. His

41:40

book is called The Rationalist's Guide to

41:42

the Galaxy, and it's available right now.

41:46

Black Box is produced by Alex

41:48

Atack. The executive producer is Judge

41:50

Kelly. The commissioning editor

41:52

is Nicole Jackson. All

41:55

music and sound design by Rudi Zygadlo.

41:57

The music supervisor is Max Sanderson.

42:00

This series took us six months to make and we're

42:02

so glad we can bring it to you for free.

42:05

If you'd like to support projects like

42:07

this one, consider becoming a supporter of

42:09

The Guardian, which you can do by

42:12

going to support.theguardian.com. And

42:14

otherwise, if you enjoyed this series, if you

42:17

thought it was worthwhile, please do leave us

42:19

a rating and a review. It'll help other

42:21

people find us. Black

42:23

Box will return sometime in the

42:26

future. This

42:42

is The Guardian. Tired

42:53

of ads barging into your favorite news

42:55

podcasts? Good news! Ad-free listening

42:58

is available on Amazon Music for all

43:00

the music plus top podcasts included with

43:02

your Prime membership. Stay up

43:04

to date on everything newsworthy by downloading the

43:06

Amazon Music app for free or

43:09

go to amazon.com/news ad free.

43:11

That's amazon.com/news ad free to catch

43:14

up on the latest episodes without

43:16

the ads. Hackers

43:19

and cyber criminals have always held

43:21

this kind of special fascination. Obviously,

43:23

I can't tell you too much about

43:25

what I do. It's a game. Who's

43:27

the best hacker? And I was like,

43:29

well, this is child's play. I'm

43:32

Dina Temple Rustin, and on the Click Here

43:34

podcast, you'll meet them and the people trying

43:36

to stop them. We're not afraid of

43:38

the attack. We're afraid of the creativity and

43:40

the
