Martin Rees: If Science is to Save Us, Part 1

Released Wednesday, 12th April 2023

Episode Transcript


0:07

Hi, and welcome to The Origins Podcast.

0:10

I'm your host, Lawrence Krauss. I was lucky

0:12

enough to have a conversation with my friend, the distinguished

0:15

astrophysicist Lord Martin Rees, a

0:17

few years ago on our podcast. But

0:20

he more recently came out with a very interesting book

0:22

about saving the world with science and

0:24

I thought it was a great opportunity

0:26

to have him back to talk about the

0:28

subjects in the book and to have a wide-ranging conversation

0:31

far beyond astrophysics and his own

0:33

background about the

0:34

areas where science can

0:38

impact on our lives and our future.

0:41

And it was, as always, a very

0:43

informative and lovely discussion.

0:45

He's a remarkable scholar, human being,

0:47

and a real pleasure to talk to. And

0:50

I hope you enjoy it as much as I did. The

0:53

conversation was so comprehensive that

0:55

we actually are dividing it into two

0:57

podcasts. and we're releasing

0:59

the first half now. So I hope

1:01

you'll enjoy this first half of the Origins

1:03

podcast with Martin Rees talking

1:06

about basically saving the world with science. And

1:09

you can watch it ad free on

1:11

our Substack site if you're a Substack

1:14

subscriber to Critical Mass and I hope you'll consider

1:16

doing that because those funds

1:18

support the Origins Project Foundation. If

1:21

you're not a subscriber you can watch

1:24

it on YouTube eventually if you're

1:26

a subscriber to our YouTube channel, or of course

1:29

listen to it on any podcast listening

1:31

site. No matter how you watch it or listen

1:33

to it, I really hope you'll be informed

1:35

and educated as much as I

1:38

am every time I talk to Martin

1:39

Rees. So enjoy this Origins

1:41

podcast with Martin Rees.

1:51

Well thank you Martin for

1:53

once again agreeing to do the podcast. It's always such

1:56

a pleasure to talk to you. Thank you for being with

1:58

me today. Great to be in touch with

2:00

you. It looks very cozy

2:02

where you are in your study in England. Is

2:04

the weather okay there? Or is it you haven't had any blizzards

2:07

or that sort of thing there, I assume? Nothing on the scale

2:09

they've had in North America, but it's

2:12

quite sunny today. Oh, excellent.

2:14

Well, it's snowy today here, but not a bad

2:16

day. Kind of nice, pleasant kind of snow. We have

2:19

had the,

2:20

I've

2:23

had the privilege of already having been

2:25

with you in England for one of the earliest

2:27

podcasts when we began these

2:29

podcasts. And so I don't have to go over

2:32

that territory. As you know, normally it's

2:34

an origins podcast and I like to talk to people

2:36

about their origins, but we talked about

2:38

that, your origins as a scientist

2:42

in the last time we had a podcast.

2:45

But since the purpose of this particular

2:48

podcast, I want to focus

2:50

on your new exciting book, If Science Is

2:52

To Save Us. You and I have had some discussions about

2:54

it, and it's a very important set of topics, so

2:56

I thought it'd be nice to come back. You

2:59

and I talked a lot about cosmology and to some

3:01

extent religion. I thought this is

3:03

a chance to talk about some of the important

3:05

ideas that you're raising when

3:08

it comes to science and public policy. And

3:10

I was thinking about it, and I think there have been

3:12

few scientists, you can correct me no

3:14

doubt, as you often do, if I'm wrong,

3:18

there have been few scientists in

3:20

the United Kingdom that have had your

3:23

level of

3:27

experience, as

3:31

well as acknowledged expertise,

3:34

across a wide area of

3:36

science and of science and public policy.

3:39

I don't know if there's anyone who's held

3:41

as many honorific and substantive

3:44

titles as you have, and

3:46

it really hit me when you talked in the

3:48

middle of the book about the Longitude prize. You

3:51

talked about, I

3:52

think that three of the, there are eight people

3:54

that are supposed to be involved in that prize. And

3:57

three of them include the Astronomer Royal, the President

3:59

of the Royal Society, and the

4:00

professor of

4:02

astronomy at Cambridge. And all three of

4:04

those people were you. And it really

4:06

hit me that, I

4:09

don't think there's any precedent for

4:11

that kind of experience

4:13

that you hold. You really have a unique.

4:15

Oh, I don't think so. And there are huge numbers of people

4:18

who are more sophisticated in

4:20

the politics and the popularization, but

4:22

may not have been so active

4:24

academically. So I think I sort

4:27

of try to straddle the academic

4:29

world and the popular world, but

4:32

we are very lucky in Britain. Just think

4:34

of the late lamented Colin Blakemore

4:37

and people like that. Yeah, yeah, that's

4:39

true. No, England's had its share

4:41

of

4:42

exceptional scientists and scientific

4:44

communicators, but people who are in a position

4:47

to be able

4:49

to

4:51

not just voice their views, but perhaps have

4:53

those views have an impact through their

4:55

substantive roles like President

4:57

of Royal Society and now Lord in

4:59

the UK House of, I

5:02

mean, not the House of Commons, but the House

5:04

of Lords. So

5:07

to have that combination of interest,

5:10

which you have, which a lot of exceptional

5:12

British scientists have had that, but also the

5:14

ability in principle to kind of implement that

5:16

interest, that seems to me almost, you

5:19

may not be unique, but I bet there's less than a handful

5:21

of people who've had that kind of opportunity. Well, my influence

5:24

is sadly limited, I'm afraid, but I do my

5:26

best. Well, you do your best, and I appreciate

5:28

that, and one of the many reasons I admire you, and

5:31

there are many.

5:33

But I thought in the context of

5:35

origins, I would at least talk about that aspect

5:38

of your career path as

5:40

a choice or opportunity. People

5:43

take advantage of opportunities

5:47

that they're given, but often it's because of their predilections

5:49

at the same time. You

5:52

took on these roles from President of the Royal

5:55

Society, Master of Trinity College, and

5:57

the other roles that you've taken on.

6:00

Why? Well,

6:05

I took them on in later life. I

6:08

had a very fortunate career,

6:11

starting in the

6:13

1960s, when the rapid changes

6:16

in astronomy and cosmology, the first

6:18

evidence of the Big Bang, black holes, etc. and

6:22

I was very fortunate to be in a

6:24

strong research group and to make many

6:26

international contacts and to be

6:29

able to spend much of my career at

6:31

Cambridge University, which was an outstanding

6:34

centre. So I was very lucky indeed,

6:37

and I

6:38

developed wide international contacts, and

6:40

I worked over a fairly wide

6:43

spread of topics

6:45

with a lot of collaborators and a lot of students.

6:48

and I think I made

6:50

a number of modest contributions. Nothing significant.

6:52

Oh, come on. Don't be too modest. Yeah, I agree.

6:55

But as I say in my

6:57

book, when I got to the age of 60, I

7:00

thought I should perhaps think about

7:02

whether I should do something of

7:05

more direct public relevance. And

7:07

also I was motivated by noting

7:10

the ways in which scientists grow old.

7:13

And there are three different ways.

7:16

One common way is they just

7:19

become

7:20

torpid and don't do very much, or

7:23

nothing very exciting. That's

7:25

one thing that can happen. And there are

7:28

many examples of that. And I had

7:30

some in my

7:31

university, in my department,

7:33

who were like that. I didn't want to follow their

7:35

example. I wanted to do something else. There

7:38

are some

7:39

who of course just go on doing

7:41

what they're good at and

7:44

have a career extending

7:47

into their 70s and 80s even. But

7:50

I think it's interesting that

7:52

most scientists do their

7:54

best work when they're young. It's

7:57

a platitude, people say

7:59

this, but...

8:00

There's a lot of truth in it. And the reason for that

8:02

is that as you get older,

8:05

you become less good

8:08

at adopting new ideas and

8:11

learning new techniques. And

8:14

therefore, if you are

8:16

going to go on making a contribution

8:20

in your later years, then

8:22

the best you can do is to be on a plateau, doing

8:24

what you're good at, etc. And

8:28

incidentally, this is a rather interesting

8:30

contrast with the arts,

8:32

because if you think of great

8:34

composers, most of them did their best

8:37

work in their last years. And

8:39

there aren't very many scientists who would say that. And

8:41

I think the difference is that if you are a composer,

8:45

you're influenced by the modes

8:48

and styles when you were young, but

8:50

thereafter, it's just internal development.

8:53

You don't need to absorb any external influences.

8:55

Whereas science is a more interactive

8:57

and social

8:58

activity, therefore to stay on the

9:00

frontiers, I'm sure you'd agree, you've got to

9:02

really be alert to what's going on and

9:04

understand new things. And that's what we get

9:07

less good at as we get older. Absolutely. Adopting

9:09

new techniques. You

9:12

know, graduate students are notably

9:14

adept. They're required to. I used

9:16

to have, I know a very distinguished colleague

9:18

who said, well, you know, do you read everything? No, he says, but

9:21

you know, I have graduate students who read everything

9:23

and then they can

9:26

educate me. Let me just, I know

9:28

before you get to the third thing,

9:30

what about the other aspect? And maybe this

9:32

doesn't, isn't true because composers are this way, but

9:35

the other thing that I wonder about older scientists is,

9:37

that science does require

9:40

generally intense

9:41

energy and periods of concentration

9:43

working intensely for

9:46

a long time, years perhaps. And

9:48

I'm wondering if that willingness

9:50

perhaps also subsides as you get

9:52

older to devote such

9:55

intensity to a single

9:57

problem.

9:59

Well, I think most people's academic careers tend

10:02

to gather a lot

10:04

of extraneous duties,

10:07

administration, etc. And so not

10:10

very many manage to have careers

10:12

where they can be as dedicated in

10:14

their later years as that. But of course,

10:17

it's not obvious that powers of concentration decline,

10:20

because think of composers, think of the concentration

10:22

that Wagner needed to write

10:25

the full score of Götterdämmerung. Yeah,

10:27

yeah, yeah, absolutely. In

10:29

any case, I just wonder because sometimes I think of

10:31

projects and I think, boy, do I have the energy to do that

10:33

project now? And earlier on, I would have had

10:35

the energy. But most of us do have

10:38

less energy. So we do have to conserve it, obviously. And

10:40

then the third one, the

10:42

third track. Well, the

10:45

third one is one which is

10:47

followed

10:47

by some of

10:49

the most outstanding scientists. Yeah, absolutely.

10:52

And these are people who

10:55

still think they're doing science. They want to

10:57

understand the world, but they get

10:59

bored with doing the same stuff as they did in

11:01

their early career. And they overreach

11:03

themselves by entering

11:05

fields in which they have no expertise and

11:09

often embarrass their admirers by doing

11:11

this. Can you quote an example? Well,

11:13

let me quote some.

11:16

Two of the previous

11:19

holders of my chair, actually, Arthur

11:21

Eddington and Fred Hoyle, two

11:24

really outstanding people with a greater

11:26

early career even than mine, but they

11:29

both became rather

11:31

eccentric in their old age.

11:33

Eddington had his fundamental theory,

11:36

a sort of numerology, where he thought he

11:38

could predict the exact number of particles in the universe,

11:40

etc. and was

11:42

really out of the mainstream in his last two years,

11:45

even though incidentally he was only 64 when

11:47

he died. He wasn't

11:50

really old by modern standards. And

11:53

Fred Hoyle, who again,

11:56

over a 25-year period, was

11:58

probably the most inventive, productive astrophysicist

12:00

in the world, in my opinion, lots

12:03

of ideas. He, in his later

12:05

years, became rather isolated

12:07

and took up rather

12:10

crazy ideas like thinking

12:13

that pandemics

12:15

came in on comets, etc., and

12:18

that some of the key fossils

12:21

in the Natural History Museum indicating

12:24

the origin of birds from dinosaurs were forgeries,

12:26

etc., and questioning Darwin.

12:29

And thereby, although he was

12:32

always inventive and worth listening to, he

12:34

rather diminished his reputation,

12:37

although he was

12:40

always lively to talk to. So

12:43

that's the third way. Certainly underappreciated. Perhaps

12:46

one of the most underappreciated great British

12:48

scientists of recent time in my opinion. But

12:51

anyway, Hoyle. Go on.

12:53

I mean, you know, it's interesting,

12:55

by the way, when you were talking about this, I was thinking

12:57

of the contrary. I was thinking of someone, an

13:01

example of someone who at least questioned

13:03

himself enough to know better

13:06

was Richard Feynman. You know, Richard Feynman,

13:08

you know, I wrote a book about it, but it's

13:10

fascinating because he often talked about how as

13:12

you became more famous, people would ask for your

13:14

opinions on things, and eventually you'd start to give them.

13:17

And then he realized that he had no idea what he was

13:19

talking about in giving those opinions. And

13:22

there was a while when he got bored. I remember there was a period

13:24

he went to try and learn some

13:27

genetics in a molecular

13:29

biology laboratory, he spent

13:31

a summer, and I'm sure he was an interesting graduate

13:33

student in that sense, a Nobel

13:36

Prize

13:36

winning graduate student, but nevertheless,

13:38

and then I think he just realized that he

13:40

couldn't make good contributions, you know, the

13:43

kind of contributions there that he could in physics, and

13:45

he stepped back. So it's rare, though,

13:48

that people are willing to self-analyze

13:50

enough to know that they're... Yes.

13:52

Well, of course, some do

13:55

make a switch in mid-career, don't

13:57

they? Yeah. Because someone I

13:59

know from the UK.

14:00

And

14:02

of course, let's take another

14:04

example, someone we both knew,

14:06

Freeman Dyson. Yeah. Great

14:10

mathematical physics in his 20s. And

14:13

he sort of consciously said that

14:17

young

14:17

people should write

14:20

papers and old people should write books.

14:22

Yeah. And he wrote his first book when he was

14:24

I think in his mid-to-late

14:26

50s. I remember he made that transition. Yeah, absolutely.

14:29

Yeah. Yeah. And he went on, of course,

14:31

and I mentioned

14:35

him in another memoir

14:38

that I recently wrote, my life story.

14:40

And

14:40

he remained lively

14:43

and interesting until his mid-90s.

14:45

Absolutely. And I used to communicate with him right until

14:47

about two weeks before his death, as a matter of fact. Yeah, he was

14:50

certainly

14:51

still the most interesting person to talk to at the Institute

14:53

for Advanced Study when I spent my time there. And he was, that

14:55

was when he was in his eighties. However, some people

14:57

would say that he then began to pontificate on issues

14:59

like climate change in areas where he,

15:02

perhaps, well, I mean, as

15:04

I've already talked about this in other contexts, Freeman

15:06

had the attitude that most people weren't

15:08

as smart as him, and

15:10

he did, and which was a true statement,

15:13

and if he didn't trust work

15:15

that other people had done, I think since he hadn't done the

15:17

climate change work, he naturally distrusted

15:20

it. But that unfortunately, it's

15:22

not true as, as I think

15:24

my late friend Sidney Coleman told Feynman

15:26

once, it's not that everyone else is an idiot. Just

15:29

that's a wrong assumption; actually,

15:31

other people do know what they're doing. Yes, and

15:34

having studied something for years should

15:36

give you a bit of an edge. Yeah, exactly. And

15:39

I think that was a disservice he did. The only disservice

15:41

I know was his attitude about climate change in that

15:43

sense. He raised interesting questions. He

15:46

always was a

15:47

contrarian. In any case, you decided

15:49

to choose none of those paths, I guess. Well,

15:52

what I did was I thought I should do

15:55

something else of

15:58

a wider nature.

16:00

And I rather overdid

16:02

it

16:02

because within four years, I

16:05

was Master of Trinity College,

16:07

which is the biggest college in Cambridge. And

16:10

I was a member of the House of Lords and

16:12

I was president of the Royal Society. And

16:14

so for a decade of my sixties,

16:18

I was quite heavily involved in

16:20

quite serious administration and public

16:23

outreach, et cetera. But fortunately,

16:26

that was all over

16:28

when I was 70. And I've

16:31

been lucky to have been able to go

16:33

on for another decade because

16:35

I'm just 80 now. And during

16:38

the recent decade, I've

16:40

worked just as hard, but pacing

16:44

myself as it were, because although

16:46

I've done a variety of things and helped to set

16:48

up new organizations

16:50

and written a lot, I've

16:54

not been responsible for any

16:56

major organization

16:58

or committee or structure. And

17:01

so I feel I don't

17:03

have to be quite so concerned

17:06

if things go wrong, because I'm the only one who will

17:08

suffer.

17:09

You've learned from that. You've learned from that experience.

17:12

I admit I understand it

17:14

too. It's really nice not to have

17:16

to run an organization. Let me

17:18

ask you, had you been

17:20

literally insulated from that? But I

17:22

mean, had you not had a tendency?

17:25

Was it really only when you turned 60 that you looked,

17:28

I mean, I'm sure you're such a responsible

17:30

individual, I'm sure you must have been part of committees and

17:32

you were head of, weren't you head

17:34

of the British, what was it called, British Association of

17:37

Science or something earlier on? I

17:39

was head of that, and of the Royal Astronomical Society I

17:41

was president and indeed I was chairman

17:44

of the European Space Agency Science Committee for

17:46

a few years. This was before you were 60

17:48

though, right? Oh yeah, yeah. Yeah,

17:50

so it's not as if this suddenly, you know,

17:53

this response, this sense of responsibility

17:55

to the scientific community or whatever, emerged

17:59

spontaneously. When

18:00

you were 60, you obviously felt the

18:02

need to.

18:03

Yes, the change

18:05

was at 60. Up to then, I was

18:07

involved in lots of committees, etc.,

18:10

but they were all in astronomy or space. Ah,

18:12

okay. And then beyond that,

18:15

I felt I would engage

18:17

in broader topics.

18:19

Okay. And

18:21

then after you stop that, you say

18:23

you

18:24

do things just for yourself, although you have been involved

18:27

and we'll talk about setting up perhaps setting

18:29

up a number of interesting organizations or being involved

18:31

in their neonatal

18:34

stages anyway.

18:37

But I will say that I think you've gone

18:40

in that other route a little bit like

18:42

Freeman's route and you've been more prolific

18:44

in your writing I think in the last 10 years

18:47

than before if I'm not mistaken.

18:49

Yes, I don't need to care what

18:51

people think. Yeah, that's excellent.

18:54

Good. Okay, good. Well, that's a perfect, perfect

18:57

segue. Well, I thought that, you know, it's interesting to,

19:00

let me ask you one of the questions in that regard. I'm

19:02

spending more time here than I planned, but I think these are important lessons

19:04

for young scientists anyway to

19:07

learn from and for others.

19:09

So was the example of others,

19:12

when you looked up at people you admired, did

19:17

your decision to become, in some sense, I don't want

19:19

to call it a statesman of science, but something

19:21

like that.

19:23

Were you influenced by looking at

19:25

the people you had admired earlier who had agreed to do

19:27

that? I

19:29

think I was, but also I was influenced by

19:31

those who I think made the wrong decision. I mean,

19:34

there was one person, well, a fellow called Ray

19:36

Lyttleton, who was a

19:39

professor in my department.

19:41

He worked with Hoyle, but

19:44

in his old age, he'd become rather sad

19:46

and embittered because he had espoused

19:49

various theories which had been

19:51

discredited, but he really went

19:54

on just

19:55

defending them when they were becoming

19:57

indefensible, etc. and I

20:00

just didn't want to end up like that. Okay,

20:02

so it was more to avoid the pitfalls than

20:05

to rise to the peaks of the... Well,

20:08

it was a bit of both, I suppose. No, no, I'm just

20:10

wondering, because I'm sure both of us have been

20:12

influenced in writing by the great writers, scientists

20:15

who've been wonderful writers. I'm just wondering

20:18

if there were any scientists who'd

20:20

taken the track of being involved

20:22

in public life. Well, I know of one, and

20:24

we'll talk about it, Joseph Rotblat, of course, But

20:28

other British scientists have played a role

20:30

in influencing the government, people

20:32

you knew, that you had personal

20:34

experience of? I know quite a few. I mean,

20:37

I would

20:39

say Bob May was one,

20:41

who I knew quite well, and

20:44

many of the

20:45

pioneers of molecular

20:47

biology.

20:48

Any physicists? Sorry?

20:51

Any physicists?

20:54

Well, of course, there was the earlier generation who

20:57

were involved in the war.

20:59

And of course, that

21:01

generation, of

21:04

course, got huge responsibility young. And

21:06

they continued. People like

21:09

Cockcroft and

21:10

Penney and people like that were

21:13

involved in making the bomb and

21:16

those who were involved in radar. And people

21:19

like Lovell

21:21

and Ryle, the pioneers of radio astronomy, they

21:24

did radar during the war. And

21:28

Lovell, who

21:28

built this huge telescope

21:31

at Jodrell Bank

21:33

in Manchester in the 1950s, he

21:36

was fairly young, but

21:39

he was very enterprising and ambitious

21:41

because he'd done a lot when he was in

21:43

his 20s, during the war. And

21:47

he was, in my opinion, a really great man

21:49

because he built this big dish

21:52

and it's been upgraded and it's 65

21:54

years old now and it's

21:57

still doing work,

21:58

which he couldn't have conceived

22:01

of. It's done some of the best

22:03

work on looking for the

22:05

evidence for

22:06

gravitational waves from binary

22:08

pulsars, now that it's been

22:11

resurfaced. But at the same time

22:13

as doing

22:15

work on projects

22:16

he couldn't have conceived of 65 years

22:19

ago when he built it, it's also become part

22:21

of British heritage,

22:24

a designated World Heritage Site,

22:26

rather like Stonehenge. Oh wow. Imagine

22:29

showing

22:30

Stonehenge and Jodrell Bank.

22:32

I wonder whether five thousand years from now whether

22:34

people will unearth it and wonder what its purpose

22:37

was. I guess so. Well

22:39

I hope not, in a way; that would imply something

22:41

about the end of our civilization but maybe and

22:43

we'll get there. Speaking of the end of our civilization,

22:46

the title of the book is If Science

22:48

is to Save Us. That

22:50

in some sense presumes that

22:52

we need saving. Do we

22:54

and why?

22:58

Well, I think as

23:01

I discussed in the first half of the book, we

23:04

are under threats

23:06

of various kinds, which are

23:09

at least indirect consequences of

23:11

the advance of science. We are

23:14

subject to climate

23:18

change and environmental

23:20

despoliation, etc., because

23:22

of a larger and more demanding population

23:25

using more energy, etc. And

23:27

that population would never have got so large,

23:30

had it not been

23:31

for the benign effect of biomedicine,

23:34

allowing people

23:36

to live for longer, etc. So

23:39

the

23:40

stakes are getting higher,

23:42

because science provides great benefits.

23:45

But also, along

23:46

with those, there are very severe

23:49

downsides. And so that's really

23:51

the theme. Of course, the first

23:54

example of this was the

23:57

nuclear bombs of the

24:01

1950s, depending on the

24:03

technology of the 20th century. But

24:06

the 21st century sciences

24:08

of bio and cyber,

24:10

they are going to have a similar effect

24:13

which needs great

24:16

prudence in order to be applied

24:19

safely and ethically. And

24:21

so that's really what

24:24

I meant. And there are these contexts

24:26

in which science could

24:28

destroy us in ways which are the

24:30

downsides of its benefits. So the

24:33

aim has to be to harness

24:35

the benefits and minimize the risk of

24:37

the downsides, which are getting very serious.

24:40

And so the first half of the book outlines

24:43

the topics. And

24:45

the second part of the book discusses

24:48

more the scientific community's ethical

24:50

responsibilities, education,

24:53

and understanding science

24:55

by the public.

24:56

Yeah, in fact, that's a wonderful

24:59

summary. I was going to go into that. I think you sort of summarized

25:01

it nicely at the beginning. You say, my focus will

25:03

be instead on

25:06

how the sciences impinge

25:08

on our lives and on the hopes and fears

25:10

for the future. I

25:11

shall offer thoughts on what distinguishes science

25:13

from other intellectual activities, how the entire

25:16

scientific enterprise is organized nationally and globally,

25:18

and how to ensure that scientists and their innovations

25:21

mesh into society so that applications

25:23

are channeled in accordance with citizens' preferences

25:26

and ethical judgments. And

25:29

you say right after that, I think the important

25:31

point that you've just made, but you say it beautifully: the

25:36

stakes have never been higher. The earth has existed

25:36

for 45 million centuries. I love

25:38

that. I'm going to use that again. The earth

25:40

has existed for 45 million

25:41

centuries, but this is the first century

25:44

in which one dominant species can determine for good

25:46

or ill the future of the entire biosphere.

25:49

And so yeah, the book is organized,

25:51

and I want to go into that, and I want to discuss each of

25:53

those things. But I think I want to jump in in

25:55

a way to one of the,

25:58

you know, I tend to be a little contrarian about the

26:01

examples. The one you use, a pandemic, at the very beginning. In fact, one of the first sentences,

26:08

in fact the first sentence of your book, is: in our response to COVID-19 we were told to quote "follow the science".

26:18

And there was never such a time, as you say, when experts achieved such prominence. In fact, the exact sentence is that there's never been a time when experts had such public prominence. That's true, but I guess the question I have is: in retrospect, has that helped?

26:37

It's now become almost a taunt that politicians use when they say "follow the science", because they keep pointing at other people's errors. When they criticize masks, they say: look, they claimed to follow the science, but they weren't; they were just part of a herd of sheep.

26:52

And then when the public learns that something that was claimed not to work might work, or something that was claimed to work didn't work, the question I have is: in the end, by achieving such prominence, did it ultimately produce a distrust of science among politicians and the public that some of them didn't have before?

27:12

For the precise reason that this was the first time the public saw how science really works, which is tentatively, where at the forefront things are always going wrong, but it's self-correcting and all that. It's a little too subtle for the headlines, and the net result is sometimes negative. So I wanted to ask you about that.

27:32

Well,

27:36

it doesn't trump negative tabloid headlines, there's no doubt about that. But I think it was an example where, as you say, the public did get a feel and an impression of how science is actually done,

27:49

and things were uncertain. Nobody knew what the virus was like and what the prospects were of dealing with it, and didn't know how it was spread, how to protect yourself, whether masks really were of much use to anyone, or whether we should wash surfaces or not at all. All those things were quite uncertain, and they got firmed up.

28:14

And certainly in England the top scientists appeared regularly on television along with the Prime Minister, et cetera, and I think they were respected because they did emphasize the uncertainty. But most important of all, vaccines were developed within a year, which is unprecedented.

28:35

Unfortunately, no such vaccine has been developed for HIV after forty years. It was remarkable that the programs to actually design and manufacture on a mass scale an appropriate vaccine succeeded within a year. So I think this indicated that scientists can do something for us.

28:56

Yeah, no. In fact, in some sense, though, I've written about this, way back

29:05

in The Physics of Star Trek book: one of the biggest scientific fallacies Star Trek produced was the notion that you have this huge problem and within two hours you can solve it.

29:14

And that's just not the way science normally works; it often takes decades to solve difficult problems. And I think that kind of TV science fiction mentality has given people both a faith in science and technology's ability to solve problems, and false expectations about how quickly or how well those problems can be solved.

29:34

And so: not expecting scientists to produce a vaccine right away, but expecting them to know whether masks worked or not, or whether some particular antiviral drug worked or not, and then being disappointed when they found out, first of all, that we didn't know, and secondly that the opinions varied over time.

29:53

And maybe in England there isn't, I don't know, but in the United States it's definitely produced a backlash. The governor

30:00

of Florida,

30:01

who was an educated person, he went to, I mean,

30:03

presumably educated, he went and did a degree

30:06

at Yale and then a law degree at Harvard. So

30:09

therefore, in principle has had some exposure

30:11

to thinking,

30:13

said, you know, all the experts

30:15

told us that vaccines would protect

30:17

us against

30:18

COVID. We

30:21

wouldn't get COVID when we took the vaccines, but look,

30:23

they're wrong; people get COVID who have taken the

30:25

vaccines. A complete misunderstanding of the

30:28

fact that increasing your level

30:30

of protection is not the same as

30:32

having 100% immunity.
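To make that distinction concrete, here is a toy calculation, with invented illustrative numbers rather than real COVID data: a vaccine that cuts individual risk by, say, 90% still leaves many breakthrough cases once most of the population is vaccinated, which is exactly what the science predicts rather than a failure of it.

```python
# Illustrative only: why breakthrough cases don't mean a vaccine "failed".
# All numbers below are made-up assumptions, not real COVID figures.

population = 1_000_000
uptake = 0.80         # assumed fraction of people vaccinated
attack_rate = 0.10    # assumed infection risk for an unvaccinated person
efficacy = 0.90       # assumed 90% reduction in risk -- protection, not immunity

vaccinated = population * uptake
unvaccinated = population - vaccinated

cases_unvaccinated = unvaccinated * attack_rate
cases_breakthrough = vaccinated * attack_rate * (1 - efficacy)

print(f"Cases among unvaccinated: {cases_unvaccinated:,.0f}")  # 20,000
print(f"Breakthrough cases:       {cases_breakthrough:,.0f}")  # 8,000
# Thousands of vaccinated people still get infected, yet each vaccinated
# person's risk is 10x lower: protection is relative, not 100% immunity.
```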

30:35

And they use that as of course a

30:37

political tool and the public then,

30:39

and people like Mr. Fauci you talk about

30:41

is as much a source of derision in

30:44

the US

30:45

as pride, I suppose. Yes,

30:48

well, I mean, I think it

30:50

is the case that, although America has

30:53

the world's best science, it has the

30:55

largest segment of anti-science

30:59

and denialist

31:01

people among its population. So I think

31:04

scientists have a harder time in the US than

31:07

in Europe. I think it's only recently

31:09

that more than 50% of the American public

31:12

have accepted Darwinism. So

31:14

it is way behind Europe and

31:16

there is stronger anti-science

31:19

sentiment. Yeah, yeah, I guess it's true. I wonder.

31:22

But to go back to it, I mean,

31:25

I don't believe everyone,

31:26

even in America, can believe that we can

31:29

have an instant answer because everyone

31:31

knows that Nixon

31:34

tried to get

31:36

a cure for cancer in the 1970s

31:39

by throwing money at it. And he

31:41

didn't realize it wasn't quite like the Apollo

31:43

program, where the principles were known

31:46

by throwing money at it, you could achieve a

31:48

marvelous success. It wasn't like that, because

31:50

people didn't know where to start or how

31:52

to spend the money. And I think

31:55

everyone is interested in cancer

31:57

and they must realize that progress

31:59

has been made,

32:00

but it's a very long haul indeed.

32:03

Yeah, well, I would like, yeah,

32:06

that's a great example. Use it in the book. And

32:09

it's just, I'm not sure how

32:10

much that has sunk in. But you're absolutely right

32:12

in terms of the challenge. And as a

32:14

personal thing, I think I talked to you at the time when

32:16

I was considering moving to England to take

32:18

a position at Oxford in the public understanding

32:20

of science. And

32:22

one of the reasons that

32:24

I didn't end up doing that is that I felt that

32:26

if you're interested in the public understanding of science, like me

32:29

and also an American as

32:31

well as a Canadian,

32:32

that I should spend, my proposal

32:34

at the time was to spend half the time in the US because

32:36

I felt if you'd talked about public understanding

32:38

science and you ignored the US, you were doing

32:40

a disservice. In the end, I didn't, you know. Great

32:43

to leave that, yes. But

32:46

let me just point out, I was just reading in the news

32:48

this morning that there are now

32:53

not epidemics but close to that in certain parts

32:55

of the United States regarding measles and

32:58

chickenpox in Ohio and other places,

33:01

because of this notion, this question,

33:03

the whole question of vaccination

33:06

as personal freedom versus public responsibility

33:09

has really now in the US

33:11

at least, and I see it the same in Canada,

33:14

I don't know if it's the same in England,

33:16

has become an issue where people feel that they, it

33:19

used to be that children were

33:21

forced to have certain vaccinations before they

33:23

could enter public school, as

33:25

a public safety measure against

33:28

childhood disease like measles and chickenpox and

33:30

things like that. And

33:32

now there are apparently huge

33:34

numbers of people who are refusing to do that. They say, look, we

33:36

have the freedom to not

33:39

vaccinate our children. And

33:42

in some sense, the whole public discussion

33:45

over vaccination associated with COVID

33:47

has led to that. I'm wondering if that's a

33:49

step backwards as a result of

33:51

the successful creation

33:52

of vaccines. I raise these questions

33:55

just because I don't know the answer actually.

33:58

Well, the public

34:04

is more aware of the issues now. That said,

34:08

I think they have balanced the risks incorrectly in that context. And I'd agree with that. Oh, yeah, of course.

34:13

And of course, incidentally, one of the main problems in conveying scientific issues to the public, with practical implications,

34:20

is to ensure that probability is properly understood, because it's very easy to misunderstand, to not realize

34:29

that often, if you do a test, the false positives can outnumber the real cases,

34:35

but nonetheless the test is a good thing to do. So it's not completely straightforward.
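Rees's point about false positives is worth a worked example; the numbers here are invented for illustration, not anything from the episode. Even a test that is right 95-97% of the time produces more false positives than true positives when the condition is rare, and yet a positive result is still informative.

```python
# Illustrative base-rate calculation: false positives can outnumber real
# cases when a disease is rare. All numbers are invented assumptions.

population = 100_000
prevalence = 0.005    # assumed: 0.5% of people actually infected
sensitivity = 0.95    # assumed: test catches 95% of true cases
specificity = 0.97    # assumed: 3% of healthy people test positive anyway

infected = population * prevalence
healthy = population - infected

true_positives = infected * sensitivity          # 475
false_positives = healthy * (1 - specificity)    # 2,985

p_sick_given_positive = true_positives / (true_positives + false_positives)

print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"P(infected | positive) = {p_sick_given_positive:.1%}")  # about 13.7%
# False positives outnumber real cases roughly 6 to 1, yet a positive test
# still raises the chance of infection from 0.5% to about 14%, so testing
# remains a good thing to do -- exactly the subtlety described here.
```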

34:40

But this is just one of the issues where

34:44

one does have to try and educate people. And this leads to

34:47

a separate question, which is science education of young people.

34:52

Yeah, which we'll get to. No, it's an incredibly interesting

34:56

and important issue, which you're aware of. I think I just recorded a podcast

35:00

appearance with your Oxford colleague Tim Palmer about the importance

35:04

of uncertainty and probabilities, and it was a fun and detailed

35:07

part of that discussion, I think.

35:09

By the way, do you agree?

35:12

Let me just ask you, as a question of public policy, and you've made the point, and I've tried

35:16

to make it too: when it comes to public policy, scientists are just citizens. They're not in any way special.

35:26

You know, we have expertise that's relevant for determining public policy, but all the factors, as you go into in great detail, that affect public policy involve issues well beyond science, and therefore our views in that regard are not necessarily special.

35:43

But having said that: do you think it should be a requirement for children to enter public schools that they are vaccinated against childhood diseases, thereby protecting their peers?

35:57

Well, maybe I'll equivocate a little bit, because as you said

36:00

there was a trade-off between freedom

36:03

and the safety of others. And I

36:05

think that is just the

36:07

kind of decision which politicians

36:10

and the public have to make.

36:12

But in making it

36:14

they've got to be aware of the genuine

36:16

scientific evidence

36:18

or at least the best

36:20

estimates we have of what the risks are.

36:23

And they've got to accept that the scientists

36:25

are genuine experts. I mean, if they

36:27

get ill, they can just discriminate

36:30

between the kind of medic who

36:32

can help them and someone who is just a quack. And

36:35

in the same way, one would hope they

36:37

can distinguish

36:39

the views of someone who

36:41

is a genuine expert from someone

36:43

who has no credentials. Yeah, you know,

36:47

yes, and my my editor, one of my editors

36:49

once said that, you know, when the aliens come, everyone will

36:51

turn to the scientists; there is an inherent

36:53

faith in the sciences. But it's

36:56

clear you've had a public role for a long time

36:58

because you managed to, you turned

37:00

my question around and gave a very relevant

37:03

answer but didn't give your own opinion. Which

37:05

was just wondering whether, I

37:07

think, just I'm

37:09

of the opinion for example that people,

37:13

I think it comes from having grown up in Canada that people

37:16

should be required to wear helmets when they drive

37:18

motorcycles, not because I care whether

37:20

they

37:21

kill themselves because ultimately

37:24

their impact, it's a social responsibility

37:27

in some sense. And so we are

37:29

born free,

37:31

but we do live forever in chains. And

37:35

so I think we have a social responsibility

37:38

to some extent to ensure that the children we send to

37:40

school basically are not threats to other children

37:42

in some ways. Yes, no, we do. But of course,

37:44

contrast those. I mean, no

37:47

one claims any downside to wearing a helmet.

37:50

Yeah. Oh, no. And I

37:52

don't know of a long-term downside to vaccination. Oh,

37:54

no. You haven't lived in the United States. I lived

37:56

in Arizona where

37:58

you don't have to wear a helmet.

38:00

And everyone

38:02

claims there's a downside to wearing a helmet. It

38:04

reduces the pleasure of riding a motorcycle,

38:06

the breeze in your face and all of

38:08

these things. Anyway, all of these things,

38:11

as you point out, there

38:13

may be... one of the great sentences we'll

38:15

get to is something like understanding risk

38:18

is different than deciding

38:20

how to address it or something like that. Because

38:22

in some sense that is personal, but it's also societal

38:25

and it's up to politicians and the public ultimately

38:28

to weigh those risks. And the

38:30

role of science, which I think you stress

38:32

over and over again, as I do, is to provide

38:34

the information to allow you to at least make a more

38:36

intelligent assessment of the risks. And,

38:39

but it does, but I did ask, but I did

38:41

leave a question to myself and I wanna move on, but, you

38:44

know, to this whole COVID experience, it's

38:46

caused me to think about this issue of, can

38:48

there be informed public debate

38:51

about scientific results when the very

38:53

nature of science is not understood? Can

38:56

we have an informed public debate before people know about

38:58

probabilities and

39:00

self-corrections and the fact that,

39:02

you know, there's never, we don't necessarily know everything 100%.

39:06

So can we have that kind of public

39:08

debate? Well, I think

39:10

we can. I mean, maybe some people will

39:13

be easily bamboozled, but

39:16

I think even though there

39:19

are some people who we call experts and

39:22

some people who are completely sort of lay, as it were,

39:25

I think

39:25

one could expect that

39:28

among opinion leaders and politicians,

39:31

there are some who are

39:34

fully attuned to what the risks are. They understand

39:36

the argument.

39:37

And that's why in all

39:41

these issues, it's important to

39:44

have politicians who

39:46

can explain the issues clearly. And

39:48

it's important also that scientists

39:50

should have their voices amplified

39:53

by charismatic individuals

39:55

who have wider traction with the public than

39:57

the scientists do themselves. I mean, I discuss

39:59

this in

40:00

the context of climate change. Yeah,

40:03

but it's, I guess, a danger is that the politicians are

40:05

charismatic individuals as well by

40:07

virtue of the fact that they've been elected. You

40:09

did have a prime minister recently who was well educated,

40:12

but nevertheless seemed to often promote

40:15

nonsense.

40:16

That's right.

40:21

And did it very charismatically, I would argue.

40:24

His education was in the classics. Ah,

40:26

there we go, okay. Well, that'll

40:29

produce a lot of letters now, Martin, that you'll have to answer.

40:32

Not me, I hope. But

40:35

let's go now to the substance

40:37

more in detail, the substance of the book. As you

40:40

point out, the first part of the book is really to

40:42

talk about threats. Then you talk about the organization

40:44

of science and the scientists themselves

40:46

and ultimately education. So I wanna divide things

40:49

in those areas and spend a

40:51

fair amount of time on the threats. But I don't want it to be a doom

40:54

and gloom discussion because

40:56

the latter part of your book is really, really important

40:58

about how science is organized. But

41:01

let's talk about them. As far as I can see, this is where

41:03

you mentioned three, really the three greatest, the

41:05

three

41:05

big sort of

41:08

technical threats that in some sense

41:10

science can save us from and in some

41:12

sense science is relevant for

41:15

are climate change, sort of pandemics and biomedicine

41:18

and terror as one item. So climate

41:20

change, sort of biomedicine and

41:22

then artificial intelligence as the three,

41:25

three sort of chief things that you discuss in

41:27

the book in any case.

41:33

Climate

41:36

change, in those regards, you

41:39

make a statement that I also want to parse because

41:44

it raises questions in my own mind,

41:47

which is really great. That's one of the wonderful things about

41:49

your book and our discussions, as you

41:51

often cause me to rethink things. You

41:53

make the statement, and it sounds good on

41:56

the surface, but I wonder whether. Anyway:

41:58

scientists have an obligation

42:00

to promote beneficial applications

42:02

of their work in meeting these global challenges. Well,

42:04

who could argue with that? Except

42:08

for the questions,

42:10

how is it clear

42:12

that we know what's beneficial? What are beneficial

42:14

applications of our work? Especially if

42:16

those applications may be 50 years down the road

42:19

and we have no idea at the beginning. But

42:21

also, what if we think applications

42:23

are beneficial but

42:26

until they're tested, we really don't know where they

42:28

are. For example, you raised this question, let

42:30

me give an example later on.

42:33

Genetic

42:36

engineering that basically engineers

42:39

mosquitoes, malaria-producing mosquitoes, out

42:41

of existence and makes them extinct. Something

42:44

that seems to me, since I hate mosquitoes,

42:46

seems like a lovely thing to

42:47

do. But

42:49

you do raise it under a different context. You say,

42:51

well, should we be doing that? But

42:53

on the face of it,

42:56

ending malaria for poor children and

42:58

people in what you would call the global

43:01

South, I'm trying to not use the word developing countries

43:03

anymore because I

43:05

read that you use global South and maybe we'll talk about

43:07

that. But I mean, on the surface,

43:10

it seems incredibly beneficial

43:11

or, you know, I mean, and just like

43:14

people who thought putting cane toads into

43:16

Australia might be incredibly beneficial.

43:19

And so

43:20

there's this question of how can we, do

43:22

we really have an obligation to promote beneficial applications

43:24

in advance of knowing what's beneficial?

43:27

And sometimes when we think something's beneficial,

43:29

it is in fact not beneficial? Yes.

43:32

Well, that's always good. It's

43:34

a trade off, isn't it? And one does have

43:36

to decide, is

43:39

the risk small enough to

43:41

go ahead nonetheless

43:42

because there's

43:44

an obvious benefit. I think this

43:47

is true in all the cases, it's

43:49

true of vaccines, but it's certainly

43:51

true in these cases. I think in

43:53

the case of the mosquito, I would

43:56

agree we should go ahead with Gene Drive.

43:59

But on the other hand... one is aware

44:01

that a radical change

44:03

to the ecology could

44:05

have a downside which

44:07

outweighs the benefit. So one

44:10

needs to be mindful of that, and one

44:12

considers and minimizes it as far as possible, and

44:15

presents the options to the politicians.

44:17

Okay. What about

44:19

solar geoengineering, which

44:22

again seems potentially beneficial,

44:27

but as you point out, we

44:29

don't really know the effects of

44:31

blocking visible sunlight as a

44:33

way of reducing

44:36

the infrared? Well,

44:38

that one really is more serious,

44:41

because the effects would then be

44:43

global rather than local. Although more serious in that sense, I

44:45

admit it's less serious in another sense:

44:48

if you make a genetic change, the population, as

44:50

you point out, I mean even a human population,

44:52

can persist not for eternity

44:54

but for a very long time, whereas solar

44:57

geoengineering aerosols you put in the atmosphere will be gone

44:59

within a year. So

45:01

they're global in impact, but

45:03

they're shorter term. Yes,

45:05

yes. Well,

45:07

on geoengineering

45:12

in the sense of putting

45:15

stuff in the upper atmosphere, I

45:17

think, as you say, it

45:20

would be very dangerous to start

45:22

doing this on a big scale until

45:24

we had much more detailed and

45:27

reliable climate models of

45:29

what it would do to, for instance, cloud

45:31

cover, et cetera, and we're far from

45:33

having that. Actually, I don't think

45:35

we're anywhere near being in

45:37

a position where it should be done. And of

45:40

course, incidentally, the worry then

45:43

is that it could be done by one nation.

45:43

It's the worry and the benefit, in a sense, because

45:45

to solve the rest of climate change we have

45:47

to have a global consensus, and I think

45:49

you come through in the book, as I

45:51

am, as somewhat pessimistic

45:52

about whether we will ever get

45:54

that global consensus. So

45:57

that's the positive of geoengineering:

45:59

you don't need a global consensus, but it's also the negative.

46:02

Because one country is doing it. Indeed, that's

46:04

true. That's the worry. And that's

46:07

why I think we should

46:10

try and avoid any

46:13

implementation. But nonetheless, I do

46:16

think it's worthwhile to explore

46:18

the technology of how you

46:20

can change the albedo of clouds

46:23

and how efficiently you can launch

46:25

these particles and how long they

46:28

do stay in the upper atmosphere, etc. And

46:30

I think it's a pity that there

46:33

are some people who object even to that. I know that in

46:35

Cambridge, in my Cambridge, there

46:38

was a very modest experiment being proposed.

46:40

And there was some Canadian

46:43

campaign group that

46:45

persuaded the funders

46:47

to take the money away from that, even

46:49

though all it was trying to do was to see what happened if you

46:51

had a balloon one mile

46:54

high. Yeah. Yeah,

46:56

no, I think one

46:58

ought to do the research. But of course, the

47:01

word geoengineering

47:03

is used in two different contexts, isn't

47:05

it? I mean, what we've been talking about just now

47:07

is

47:09

modifying the upper atmosphere,

47:11

like an artificial volcano, as it were.

47:14

And that's something which is dangerous. The kind

47:16

which

47:17

is in principle benign is

47:19

sucking CO2 out of the atmosphere.

47:24

It may never be very economic. It's

47:27

perhaps not practical, and it's very hard to incentivize,

47:30

but if that could be done in

47:32

a cheap and effective way, then

47:35

I think that could

47:37

achieve a global consensus that it was worth

47:39

doing.

47:40

Oh, absolutely. I mean, it would be hard to imagine

47:42

a global consensus arguing it wasn't worth doing. But

47:46

at this point the economics and the logistics

47:48

seem incredibly impractical. We visited

47:51

a remarkable facility in Iceland

47:53

that's

47:53

doing this run by astronomers

47:56

actually who I met when I was going

47:59

to give a lecture, a public lecture on astronomy

48:01

there, but they were actually had moved to become

48:03

involved in this incredible facility

48:06

near a geothermal facility, capturing carbon and

48:08

putting it in rocks. But of course it's very,

48:11

it's great but ineffective in the

48:13

global sense. But that has to

48:15

be done on a huge scale. The

48:18

other problem is that unlike

48:21

adaptation where a

48:24

country benefits from the money it spends,

48:26

in the case of

48:28

this kind of mitigation,

48:32

your country doesn't benefit from having

48:34

these things on your land. Yeah, well,

48:37

the whole world does, but yeah. But that's not the

48:39

same thing. That's why it's gonna be very hard to

48:41

incentivize. Yeah, unless, well,

48:44

I suppose there'd be money to be made by selling

48:46

these things. And companies, private companies

48:48

might therefore get benefit in the-

48:51

Well, but they're- Why are they going to use them?

48:54

They'll sell them to other countries. I don't know, yeah, but anyway,

48:56

yeah, no, it's true. It's a- It's

48:59

hard to incentivize the...

49:02

But let's, you know, I guess

49:04

I want, yeah, let's go

49:06

into this a little more detail also, cause I can't

49:09

resist, is that,

49:10

so solar geoengineering, the normal

49:12

kind of geoengineering, as we talked about putting

49:14

aerosols and artificial volcanoes,

49:17

you, it makes perfect sense

49:19

to say, we really need more research before we should

49:21

do it. I think that's a very

49:24

unimpeachable, you know. You can't argue with that in

49:26

my opinion. However, it's

49:28

risks and rewards. And at some point,

49:30

some people have argued that,

49:34

because even

49:37

if the world comes

49:39

together to reduce its carbon footprint

49:41

on a time scale,

49:43

even remotely approaching

49:45

what the governments claim to try and do by 2050, that

49:48

there'll be an overshoot

49:50

and that overshoot will be dangerous. So would

49:54

one, I guess, the point demonstrates that in

49:56

science, the

49:59

social issues are sometimes as important

50:01

as the scientific ones. Yes, research is

50:03

needed now, but if in 10 or 20 years,

50:06

the impacts

50:09

of climate change are much more severe, but

50:11

the research has not yet been done,

50:13

we would probably have to reassess whether we

50:15

should just go ahead without knowing exactly what's

50:18

going to happen, because the rewards might

50:20

be greater than the risks. Do

50:22

you agree?

50:24

Yes, I do, because I think the

50:27

bigger the temperature rise is, the

50:30

more worrying it is, because even

50:33

if we consider the benign

50:35

kind of geoengineering sucking the CO2

50:37

out, then if

50:40

the change has got beyond a certain threshold,

50:43

it's by no means obvious

50:45

that it will reverse and come down. Yeah.

50:48

If you cross the tipping point, then it could

50:50

be that once the temperature rise has got

50:53

above, say, two degrees, then

50:56

even if you suck out the carbon

50:59

dioxide down to the present level,

51:02

the atmosphere may find a different equilibrium

51:04

at four degrees, something like that. So

51:07

that is the reason for trying to minimize

51:09

the change to avoid that sort of irreversibility

51:12

coming in.

51:13

Also, I forget where in the book, because

51:16

I made a note, because it resonated with me,

51:19

that you can't know everything before you do anything.

51:21

And at some point, while it's, you

51:23

know, And that's something I try and instill

51:25

in my graduate students, because when I

51:28

was a graduate student I wanted to know everything before I started

51:30

a problem, you know, and then I realized you eventually

51:33

have to do something.

51:36

But it's true globally at some point

51:38

political

51:38

decisions are always going to be based

51:41

on

51:41

incomplete knowledge. And

51:44

we have to accept that fact that

51:47

we don't know, that we

51:49

can recommend that this may be

51:52

useful but we don't know for sure and at

51:54

some point someone has to make a decision and

51:56

politicians will almost never be able to

51:58

make it.

52:00

Politics would be too easy if you could always

52:02

make decisions where you knew what the results would

52:04

be. Yes, but

52:06

I think in the context of climate, there

52:08

are

52:10

courses of action which are

52:14

unambiguously positive. Yeah.

52:16

Okay. That is to move towards

52:19

carbon free energy generation and

52:22

storage and all that goes with it. But

52:24

then another point I emphasize in my book is that

52:26

it's not enough for the global north

52:29

to achieve a net zero by 2050,

52:33

which I think is feasible.

52:35

The point is that the Global South

52:37

by 2050 will have 4 billion people,

52:40

and they are now using

52:43

less energy per capita than

52:45

we are by a big factor, and they're going to need more

52:47

energy per capita if they are to develop

52:49

in the way we hope they will. And we've

52:52

got to make sure that they can leapfrog

52:54

directly from

52:56

smoky stoves to clean

52:59

energy, just as they've leapfrogged

53:01

directly to smartphones,

53:04

never had landlines. And

53:06

so the reason why we want to accelerate

53:08

R&D into

53:10

all kinds of clean energy is not only

53:13

for nations like yours and mine to

53:16

aim for net zero by 2050,

53:19

but to ensure that it is

53:21

going to be possible for the global south

53:24

to do the same thing because if we and

53:26

the global north do this then those

53:29

in the global south may well be producing at

53:32

least half as much CO2 as the whole world does

53:34

today and that will

53:37

not be enough to stop the continuing

53:40

rise. So the crucial thing is to ensure

53:42

that

53:44

the global south has

53:46

the resources and the technology

53:49

to do the same as the northern

53:51

countries can and develop

53:55

but using

53:56

carbon-free energy.
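A rough sketch of the arithmetic behind that claim, with all figures approximate assumptions rather than numbers from the book: if roughly 4 billion people in the global south reached a per-capita emission rate of about 4 tonnes of CO2 per year (below today's European average), the total would be

\[ 4\times 10^{9}\ \text{people} \times 4\ \mathrm{t\,CO_2/yr} \approx 16\ \mathrm{Gt\,CO_2/yr}, \]

which is indeed on the order of half of today's global fossil-fuel emissions of roughly 37 Gt CO2 per year. So net zero in the north alone would not stop the rise.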

53:57

I

54:02

agree. Let me again

54:04

ask you to parse that a little more

54:06

carefully. First

54:07

of all, just to make it clear, because I think global south

54:09

is a newer term, it's the word that's

54:11

being used now

54:14

for what we would have called developing or third world

54:16

countries. Is that more

54:18

or less right?

54:23

Southeast Asia

54:25

and Africa, where

54:27

anyway the population is rising fastest.

54:30

Yeah, yeah okay but now having said that

54:33

yes absolutely we have that obligation

54:35

but that obligation is probably not going to be met

54:38

and then some people would argue it's

54:40

nice to imagine, but the reason

54:42

we have smartphones

54:44

and we don't have

54:47

simple ways to leapfrog in climate

54:49

is that it's easier to make a smartphone. And

54:51

so, and you know, and so

54:54

the question is, can we expect the

54:56

global South to really do that? And

54:59

if we can't, do we have any right

55:01

to say, no, you can't build hydroelectric dams

55:03

or burn coal or whatever, to say

55:05

to them,

55:06

because it's too late, because we already screwed

55:09

things up, you don't have a right

55:11

anymore to take the old fashioned technologies

55:14

and try and improve your quality of life

55:16

or your standard of living in your country?

55:18

Well, I think that's too pessimistic because

55:21

we know that there are technologies

55:24

that can

55:26

provide net zero

55:29

for us. Yeah, well, look, I've had this argument

55:31

with... They may be, certainly,

55:35

Sun and wind plus lots and lots of storage

55:37

plus long-distance smart grids, etc.

55:40

And it's a technology which is feasible. And

55:43

there's only economic

55:45

limits on that being deployed

55:48

globally. And

55:49

so- Yeah, maybe, although again, I had this

55:51

debate with a guy named Michael Shellenberger, who

55:54

has said, and I think it's probably reasonable to say,

55:56

it's not just energy production, it's energy intensity.

56:00

And the land area

56:02

and

56:02

the energy intensiveness you get

56:05

from a hydroelectric dam or a nuclear

56:07

plant, for example,

56:10

which, you know, can produce much more power

56:12

in a much smaller area

56:14

than having a distributed wind farm

56:16

or solar farms. So we can't, if

56:18

we want to bring those people up quickly, you

56:20

need energy intensiveness as well as overall

56:22

energy, you know, production. And therefore we have

56:24

to... And therefore we have to-

56:26

That's not clear. It's not clear it's more expensive

56:29

to get energy in

56:31

Africa from solar energy

56:33

than from nuclear.

56:35

The question

56:37

is one of land use and land area. That's all I was

56:39

thinking about. I mean, you can, you know,

56:42

I'm quite sympathetic to what you just said. And I agree.

56:45

We need to try and look at ways to let them leapfrog. I

56:47

just don't know, it's not clear to me that we

56:49

have that technology yet to allow them

56:52

to leapfrog at a level that would bring their

56:54

populations up to be able to even

56:56

adapt to climate change, to have the

56:58

fresh water and energy access.

57:04

Well, they need resources, that

57:07

means they need economic development and

57:09

we've got to collaborate with them. But it's

57:11

hugely in our interests. Of course it's

57:14

hugely in our interests. But- So

57:17

even if it has to be heavily subsidized by the North,

57:20

as a mega-Marshall Plan as it were, then

57:23

we should still do it. Whereas otherwise,

57:25

there's going to be disaster for

57:28

all of us, and also incidentally,

57:32

massive migration on a scale we can't cope

57:34

with.

57:34

Exactly, I was gonna

57:37

bring up migration. All of these things

57:39

make ultimate sense, but doesn't mean that people, it's

57:42

in our own interest to be

57:44

benevolent, it's not altruism.

57:46

But I see no evidence that

57:49

that level of understanding that it's in

57:52

our interest is causing the

57:55

global North, as you call it, to take

57:57

the necessary actions.

57:59

I mean my. Migration

58:00

is a clear example. It's obvious

58:02

given that the,

58:03

what's just happened, with the Sudan or you

58:06

pick your favorite recent country, Syria, the

58:08

impact of a relatively

58:11

small number of migrants, namely only a few million

58:14

instead of a few hundred million,

58:16

it's caused a social

58:18

discord and instabilities.

58:22

If you have it on the level of a hundred million, then

58:24

it's a national security issue,

58:26

yet countries don't seem to

58:28

care.

58:29

Well, I mean, you can accommodate

58:32

that in Canada, certainly, and that's really

58:34

the prime destination because it

58:36

is less affected by global warming. So that's

58:39

the case. But I

58:41

think you're right. But there

58:44

are examples. I mean, the Marshall Plan after

58:46

World War II was an example

58:48

of, well, enlightened

58:50

altruism, let's say. Yeah,

58:53

do you have any, this again comes down,

58:55

I was gonna talk about Sputnik moments, but

58:57

let's leave that to later. But

58:59

do you have any, after the,

59:03

you know, after the destruction

59:05

and devastation of a world war, the need

59:08

to bring the world back is

59:11

clear. Do you have any suggestions for how,

59:13

what might, or any ideas about what

59:15

might motivate

59:17

or prompt or get the kind

59:19

of political will to produce a global Marshall

59:21

plan?

59:22

I know that it's, you know, it's a very difficult

59:24

question,

59:25

Well, I think we have to make

59:29

politicians

59:31

care about what may happen 30

59:34

or more years ahead. I mean, the main problem is short-termism.

59:37

Politicians think about the next election.

59:40

And the only

59:43

way in which they will

59:45

care more about what happens 30 years

59:48

ahead is if voters clamor

59:50

for this. And that's why I

59:53

said in my book, we should welcome

59:55

the demonstrations by young people who

59:57

will be alive at the end of the century. we

1:00:00

should welcome the influence

1:00:03

of charismatic figures who have an appeal

1:00:05

to large numbers. And I mentioned in my book

1:00:08

four very different people. Pope

1:00:10

Francis, who has a billion followers in

1:00:13

Latin America, Africa, and East Asia. And

1:00:16

his

1:00:18

encyclical got a standing ovation

1:00:20

at the UN and was a very important

1:00:22

development. So he's one. David

1:00:25

Attenborough, our secular pope, has influenced

1:00:28

people to take this seriously. Bill

1:00:30

Gates, I think, is a widely respected

1:00:33

figure who's talked a great deal of sense about

1:00:35

what the technology will allow us to do, and

1:00:37

Greta Thunberg, a symbol

1:00:40

of the younger generation. And we

1:00:42

want more people like that who

1:00:44

will influence the public.

1:00:46

And if the public cares

1:00:50

about what happens in the lifetime

1:00:52

of their children and grandchildren, then

1:00:55

they will vote for politicians who respond to that.

1:01:01

Yeah,

1:01:03

no, but it's also I want to reinforce that

1:01:05

another way. I don't think the vote is necessarily the key.

1:01:08

I think it applies even in dictatorships.

1:01:11

There's well-established

1:01:13

social science showing that when 3% of the population

1:01:16

becomes actively engaged

1:01:18

in an issue, then

1:01:19

it causes a societal change.

1:01:22

And that's

1:01:24

true whether you have a democracy or dictatorship.

1:01:27

Dictatorships have more control. And the

1:01:29

virtue of a dictatorship, especially in the enlightened one, is

1:01:31

the ability to think longer term.

1:01:33

Think Singapore, where they're

1:01:35

already planning what roads they'll need 20 years ahead. But

1:01:39

at the same time, dictatorships

1:01:40

can't function if

1:01:44

the public ultimately turns against them at

1:01:45

some point. And Iran is in the

1:01:47

process of maybe observing

1:01:50

that. Once a significant enough fraction

1:01:52

of the public say, no,

1:01:54

this is the line we won't cross,

1:01:57

it doesn't matter whether there's democracy or dictatorship. No.

1:02:00

So what I'm saying is that

1:02:02

public opinion has to

1:02:04

matter long term and people have to care about

1:02:06

their children and grandchildren. Yeah. If

1:02:08

that happens, then I

1:02:11

think there could be the political will, whatever

1:02:13

the government is, to do

1:02:16

these things and ensure that net

1:02:19

zero can be achieved by the world and not just the

1:02:21

prosperous world. I guess what I was

1:02:23

the question I was asking, and I've had this

1:02:25

discussion for 40 years with colleagues; the first

1:02:27

paper I got involved with on this

1:02:30

was in the 1970s, was this:

1:02:33

do you think it's likely to come from

1:02:35

a few charismatic individuals or

1:02:38

a

1:02:39

global impact? I mean, is it likely

1:02:41

that something will happen, something

1:02:47

in the physical world will happen that will cause

1:02:49

people to be afraid

1:02:51

enough?

1:02:52

One would have thought maybe when New York flooded

1:02:56

with the subways or

1:03:00

is it likely there'll be a natural phenomena that you

1:03:02

think that would cause people to be able to finally

1:03:04

say enough is enough or not?

1:03:07

I think that might happen. I think we've seen this in a

1:03:10

slightly more modest way with

1:03:14

pollution of the oceans.

1:03:17

This wasn't at all on the agenda, but I think this

1:03:19

is where David Attenborough's

1:03:21

programs I think they're

1:03:23

even seen in the US, but they're certainly seen very widely

1:03:26

in the world. They have made people

1:03:28

aware

1:03:29

of the effect on

1:03:31

marine life and all the rest of it of

1:03:33

plastic pollution. That has certainly

1:03:36

in England led

1:03:38

to some legislation which wouldn't have happened

1:03:41

had the politicians not realized

1:03:43

that the public was mindful of this issue.

1:03:46

That's an example. I think

1:03:48

if the public

1:03:50

is known to care, then

1:03:53

these issues will be prioritised

1:03:55

and this may need more

1:03:58

elaborate R&D in

1:04:00

order to bring down the cost of clean energy

1:04:02

or something fundamentally new, or perhaps

1:04:06

a system of smart grids

1:04:09

on the transcontinental scale. That's another question.

1:04:11

Yeah, you talk about the need to. That will be a need,

1:04:14

certainly when it comes to Global South. Energy

1:04:17

production may happen in one place, but be able to transport

1:04:19

it to places that need it. It's

1:04:21

a non-existent ability right now, but

1:04:24

something that would be a game changer in

1:04:26

in terms of global cooperation

1:04:28

and the global need to address carbon.

1:04:30

And the global South can make money sending

1:04:32

the energy to

1:04:35

places like Canada and

1:04:38

Britain. Yeah, yeah, yeah, absolutely.

1:04:40

And but even when it doesn't involve making

1:04:43

money, it involves self-interest, as you say, if

1:04:45

you can send that energy.

1:04:48

And yeah, but I think this desire,

1:04:50

of course, and we'll get to it, it's the last part

1:04:52

of your book to scientists

1:04:55

interacting with the government, with the public, I mean. And

1:04:57

it's one of the reasons both you and I do some

1:04:59

of the things that you and I do, including what we're

1:05:02

doing right at this instant.

1:05:05

Let me pick up

1:05:07

another statement

1:05:09

early on, and we will get through

1:05:11

this at some point. I have,

1:05:14

we're on page two of 12 pages, so you know,

1:05:17

but there's so much I wanna talk to you about. But

1:05:19

I do wanna ask this question. You say, when it comes to things

1:05:21

like AI, You say, we will need the insights

1:05:24

of social scientists to help us envisage

1:05:27

how human society can flourish in a networked

1:05:29

and AI-dominated world. We've been talking

1:05:31

about climate change, but this statement

1:05:33

was made early on in your book, and I, do you really

1:05:36

trust social scientists

1:05:38

to have those insights?

1:05:39

Do you really think that

1:05:41

they can provide those kinds of insights at the

1:05:43

current time?

1:05:46

Well, maybe somewhat better than the

1:05:49

layperson, But I think

1:05:51

if I was to talk a bit about AI,

1:05:54

I'm not one of those

1:05:56

people who believe that a

1:05:59

superintelligence will take over the world. But

1:06:03

I worry about two things. First, the

1:06:07

fact that's clearly already

1:06:10

not AI, but

1:06:12

automation and

1:06:16

similar things are changing very

1:06:18

much work patterns. And

1:06:21

this can be benign

1:06:23

if the

1:06:25

resources are redeployed. To

1:06:27

take an example, if

1:06:30

those who work in Amazon warehouses

1:06:33

and in

1:06:35

telephone call centres can

1:06:37

be replaced by machines, which is

1:06:39

quite feasible, then

1:06:43

that's a

1:06:45

plus-plus, provided that

1:06:47

jobs could be found for those displaced. And

1:06:50

the kind of jobs that are needed

1:06:53

where you need to be a human being, not a machine,

1:06:55

and where currently there are far too few people

1:06:57

who are underappreciated

1:07:00

and underpaid is in being

1:07:02

carers for young and old and

1:07:04

teachers assistants, custodians

1:07:07

in public parks and things like that. So if

1:07:09

the

1:07:10

mega companies that

1:07:13

make money from

1:07:14

AI

1:07:17

and all that are properly taxed,

1:07:19

and that's of course hard because they're multinational,

1:07:22

if that can be done, and if those resources

1:07:25

can be hypothecated for workers

1:07:28

in socially valuable

1:07:31

enterprises like the

1:07:34

caring profession, etc., that's a plus

1:07:36

plus. So that's an example

1:07:38

where one can actually

1:07:42

develop these things. So my view is

1:07:44

that we can

1:07:45

benefit from AI by

1:07:48

using it to supplement

1:07:50

human expertise in things like radiology,

1:07:53

et cetera, and to replace humans

1:07:57

in the mind-numbing jobs like...

1:08:00

in a warehouse. But

1:08:03

I think we've got to be careful because

1:08:07

I think as Rodney Brooks, the inventor

1:08:09

of the Baxter robot said, he's

1:08:12

not worried about AI taking

1:08:14

over, but he thinks for a long time

1:08:17

we have to worry more about

1:08:21

human stupidity than artificial intelligence.

1:08:26

But we also, we also, I think, have to worry about

1:08:29

just malfunctions and

1:08:30

bugs because the

1:08:32

worry is that people are using

1:08:35

AI

1:08:36

to replace human judgment

1:08:39

in medical diagnosis,

1:08:42

deciding whether you deserve parole if you're

1:08:44

in prison and things of that kind. And

1:08:47

this may be appropriate

1:08:50

in some senses, you can perhaps show that on

1:08:52

average, the

1:08:53

AI makes better decisions

1:08:55

than a human does. But there's

1:08:58

always a worry that there's some bugs in the

1:09:00

system which we don't know about. And

1:09:02

so one should keep a human in the

1:09:05

system. And so what is

1:09:07

worrying

1:09:08

is if a machine

1:09:11

has bugs

1:09:13

which aren't rooted

1:09:16

out soon enough and

1:09:18

therefore cause social damage, or

1:09:22

if there's a breakdown, which

1:09:23

is very hard to repair. I mean, suppose there

1:09:25

was some breakdown

1:09:29

which affected the

1:09:31

internet globally,

1:09:33

something like that. I think

1:09:35

how much worse would it have been if the internet

1:09:37

had failed during the COVID

1:09:39

lockdown? So I think to

1:09:42

be over-dependent

1:09:44

on interlinked

1:09:47

technology on a global scale

1:09:49

is very risky. And so those are the

1:09:51

sort of downsides I worry about, not the

1:09:53

machine becoming super- Not

1:09:57

terminator. Yeah, and of course we

1:09:59

did jump in.

1:10:00

I do want to go back, but

1:10:03

I couldn't resist that question of whether social scientists

1:10:05

really can help us. I'm

1:10:08

more dubious. But when it comes to

1:10:10

this question of... I don't

1:10:13

want to leave it. You're absolutely right. The

1:10:15

point is that...

1:10:17

And I think this goes back to maybe

1:10:19

even one of your old Cambridge predecessors, John Maynard Keynes,

1:10:24

who

1:10:24

argued that

1:10:26

in principle, capitalism, or at least industrialization

1:10:29

would be wonderful because it would take

1:10:31

all these boring factory jobs, you know,

1:10:33

and people would

1:10:36

have more free time

1:10:37

to, you know, to have leisure

1:10:39

and listen to music. And so,

1:10:42

and so in principle, it'd be a wonderful

1:10:44

thing. And of course, it hasn't necessarily

1:10:47

been directed that way. But

1:10:49

I think I would amplify what you're saying. And I think I

1:10:52

maybe it was Jeffrey Sachs, who I first

1:10:54

heard say this in a way, not

1:10:57

necessarily just taking mind-numbing jobs

1:11:00

and moving them

1:11:03

into other jobs that are maybe

1:11:05

more beneficially useful, but

1:11:08

no jobs at all. That if we

1:11:10

can produce more resources with fewer people,

1:11:13

and if everyone benefits from that,

1:11:15

we'll all be able to spend time at coffee shops and

1:11:17

listen to music, and we may just have lives

1:11:19

where we can also just enjoy

1:11:22

cultural things without necessarily

1:11:24

working, namely, take

1:11:28

the goal that Keynes talked about, which

1:11:30

is to more or less have technology

1:11:33

make the average human's life more

1:11:35

pleasant.

1:11:37

Yes, but of course a crucial

1:11:40

limitation on freedom comes from lack of money.

1:11:44

And that's the big problem now. Well,

1:11:47

that's, I mean, I think what's satisfying... Those who are working

1:11:49

are not getting enough money to enjoy the

1:11:51

kind of life you mentioned. Well, I think we agree

1:11:54

maybe on the danger and the necessity.

1:12:00

I'm probably pretty pessimistic,

1:12:02

but

1:12:04

AI and, you know, we've

1:12:06

already seen high technology has produced vast wealth.

1:12:09

And the question is, will that vast wealth

1:12:12

and AI will be another example of that, those

1:12:14

companies that control AI will have vaster

1:12:16

wealth, will that be progressively

1:12:19

funneled into fewer and fewer

1:12:21

individuals become thereby more

1:12:23

rich and more powerful? Or will that

1:12:25

vast wealth just make the

1:12:27

world better for everyone? And I think the example

1:12:30

thus far is that the former is more likely

1:12:32

than the latter. Well,

1:12:34

I mean, I think this

1:12:36

leads to general politics and

1:12:39

being in Britain,

1:12:44

we've got a deplorable government

1:12:46

at the moment. And one of the most deplorable

1:12:48

features of it is it wants

1:12:51

to learn more from the United States than from

1:12:53

Scandinavia. My view is that we

1:12:55

ought to learn more from Scandinavia, which

1:12:58

accepts high taxation in return

1:13:00

for greater equality and

1:13:02

a better welfare system.

1:13:04

And so, as you say, it's

1:13:06

possible to have this, but it

1:13:09

requires political attitudes rather different

1:13:11

from those which prevail in your

1:13:13

country and indeed in my

1:13:17

country. Although I hope not for much longer.

1:13:20

Well, I could say the same thing. Actually, by the way,

1:13:22

I'm in Canada now. So, yeah, so, so

1:13:25

maybe a little, little less extreme then, but

1:13:27

I, but I have, yeah, I know what you're saying.

1:13:30

Okay, look, I wanted to hit those and I

1:13:32

want to go back. We will talk about AI because again,

1:13:35

at some point as

1:13:37

well, but I wanted to go back. I don't want to leave. There's

1:13:40

several important issues you talk about. Once

1:13:42

again, climate change, sort of, and then

1:13:44

biomedicine and then AI. So going

1:13:46

back, we've, we talked about some general aspects

1:13:48

of climate change, but something you

1:13:50

point out, which is an issue

1:13:52

that really isn't discussed much, is population

1:13:54

growth and biodiversity loss.

1:13:56


1:14:00

But one

1:14:03

rarely sees, maybe because it's politically

1:14:06

incorrect, to see population

1:14:09

growth

1:14:10

tied into the problems

1:14:12

associated with climate change and

1:14:15

energy, as

1:14:16

you don't hear them discussed.

1:14:18

You certainly didn't hear them discussed by

1:14:21

one of your heroes, Pope Francis, for whom

1:14:24

he's not one of my heroes, who,

1:14:27

as we talked about in the last thing, in one hand

1:14:29

gave a wonderful encyclical about climate

1:14:31

change, but at the same time refused

1:14:34

to discuss the possibility or

1:14:36

encourage family planning in Africa,

1:14:39

which is an essential part of that. And so

1:14:41

it seemed to me to be hollow.

1:14:43

But population growth

1:14:45

is an issue, and it's an issue not just

1:14:49

for the drain on the world, on

1:14:51

what a world with 10 billion

1:14:54

people will be, but

1:14:55

But as you also point out, a drain on biodiversity.

1:14:57

So I wanted to talk about that a little bit,

1:15:00

since you talk about those in your book, and ask you to comment

1:15:03

on this issue of should, of

1:15:06

is,

1:15:08

there's this divergent attitude,

1:15:10

very divergent impact. In the global north,

1:15:12

population growth is decreasing. The

1:15:15

rate of population growth is decreasing. And

1:15:17

it's also becoming negative in certain places in

1:15:19

the global north. And the global south, it's

1:15:22

increasing.

1:15:23

So what do we do? Well,

1:15:27

it's

1:15:29

not just we in the North. I think it's

1:15:31

very important that it's

1:15:33

a matter for the countries themselves. Yeah.

1:15:36

I mean, what we don't, what

1:15:38

clearly is unacceptable is

1:15:41

people in the North

1:15:45

imposing their own thinking about what should

1:15:48

be done in these other nations. But I think

1:15:51

it's clear that many

1:15:54

nations which are impoverished,

1:15:56

as they are in

1:15:59

parts of India.

1:16:00

and rural parts of

1:16:02

Africa, they would be

1:16:04

able to develop more quickly if

1:16:07

the population stabilized.

1:16:10

And of course, the question is, will it stabilize?

1:16:13

We know that urbanization, women's

1:16:15

education, and things like that make

1:16:17

the population stabilize.

1:16:20

And that may

1:16:23

happen in Africa. It may not, because of

1:16:25

course, it could be that even if people have the choice,

1:16:27

they want to have big families. But then of

1:16:29

course that will lead

1:16:32

to a huge

1:16:33

conurbation in West Africa,

1:16:37

100 million people,

1:16:40

several hundred million people, etc. And

1:16:43

Nigeria

1:16:46

having a population equal to

1:16:48

that of Europe and North America combined.

1:16:52

The question is that a good thing for for

1:16:54

Africa. And if

1:16:57

the view in Africa is they don't want that, then

1:17:00

let's hope that they can stabilize

1:17:02

the population. And I think that

1:17:05

as you know, there's some UN projections

1:17:07

say that although

1:17:10

there'll be a continuing rise,

1:17:12

partly because of the lifespan extended

1:17:15

by

1:17:15

medical techniques

1:17:19

still

1:17:20

into the middle of the century. By 2080,

1:17:23

the world population may peak.

1:17:25

And that's maybe

1:17:27

a good thing. And of course, let's bear in mind

1:17:29

that the doom-mongers

1:17:32

like Paul Ehrlich 50 years

1:17:34

ago. The Club of Rome and Limits to

1:17:37

Growth, which I read as a kid and really impacted

1:17:39

me. Well, of course, the population

1:17:41

was less

1:17:44

than half what it is now, and

1:17:46

they predicted doom

1:17:49

and massive starvation in

1:17:51

the 70s and 80s, which didn't come about.

1:17:54

So, of course,

1:17:56

using

1:17:57

sustainable, intensive agriculture.

1:18:00

It's quite possible for this population to be

1:18:02

fed. So it's not necessarily

1:18:05

a disaster if the population

1:18:07

rises. And also,

1:18:10

as regards biodiversity, I think

1:18:13

we all do depend on the natural capital.

1:18:16

We don't want to deplete that. And there's

1:18:19

an ethical issue here.

1:18:21

the biologist E.O. Wilson, who says that if

1:18:25

this generation's actions lead

1:18:27

to mass extinctions,

1:18:30

It's the sin that future generations will need to forgive

1:18:32

us for, because it's irreversible destruction

1:18:34

of the beauty of nature, as it were. And

1:18:37

here, I think we can all agree with the Pope.

1:18:40

And that therefore means

1:18:42

that we want to ensure that

1:18:45

the food is provided in the same intensive

1:18:47

way, which may mean that we should encourage

1:18:51

artificial meat and things of that kind.

1:18:53

Well, okay. We shouldn't be sensationalist

1:18:55

about the problems of rising population at all.

1:18:58

I'm not saying so. And of course, it's not for

1:19:00

the Global North to pronounce

1:19:03

on these things anyway. Well, maybe

1:19:05

it's for the researchers in the Global North to

1:19:07

study what

1:19:09

the implications will be, which

1:19:11

will provide the necessary

1:19:15

perspective that in principle,

1:19:17

people could then use to make decisions on

1:19:19

their own. So along the lines.

1:19:21

So that's where we can

1:19:24

join in that research. Yeah exactly

1:19:26

and your point and we'll get to it is that we want to encourage

1:19:29

centers of excellence in Africa and other

1:19:31

places that'll be four or five hours

1:19:33

down the road here when we get to that point in our

1:19:35

conversation. But

1:19:38

But you do say, let me say

1:19:40

that, you know, maybe you're saying this is going to

1:19:42

be out of date too. You say there's a well-known estimate

1:19:45

from the World Wide Fund for Nature that

1:19:47

the world is already despoiling the planet by consuming

1:19:49

natural resources at about one point seven times

1:19:51

the sustainable

1:19:53

level. So that's

1:19:55

a statement that we already are past sustainability.

1:19:58

Do you think the technology will... just,

1:20:00

I mean, like, like has happened

1:20:02

at the Club of Rome or limits to growth or

1:20:04

whatever, that that may be true now,

1:20:06

but technology will allow us to have a sustainable

1:20:08

level at 10

1:20:09

billion

1:20:11

people. I just thought so. Yes, because

1:20:14

that World Wildlife Fund estimate

1:20:17

is based on knowledge of the rate

1:20:20

at which they're cutting down the Amazon forest and all

1:20:22

that. Yeah.

1:20:23

Yeah, yeah, absolutely. And but

1:20:26

but are there as you quote, Swedish environmentalist

1:20:28

Johan Rockström. There

1:20:31

are undoubtedly some, maybe there are no planetary

1:20:33

boundaries. Maybe it's always a moving target.

1:20:35

Maybe what seems like a planetary

1:20:38

boundary now won't be 20 or 30 years because

1:20:40

of technology.

1:20:43

Do

1:20:46

you have optimism in that sense that there

1:20:48

are, at some level, there are no irreversible,

1:20:53

that technology will keep pushing those boundaries

1:20:55

out as long as we need them to?

1:20:57

I think it could, but still, we can ask the question,

1:21:01

ideally, what should the world population be? I

1:21:04

mean, if everyone is to have a beachfront

1:21:06

property, then the world population

1:21:08

has to be cut to 1% of

1:21:10

its present size. Yeah, yeah. That's

1:21:12

obviously extreme. But the

1:21:14

question is, what is the

1:21:16

population we'd like the world to have? And

1:21:20

I would have thought most people

1:21:22

would say probably not much more than 10

1:21:25

billion. Well, I'd say now

1:21:27

because that's what's going to happen. If

1:21:30

you ask people maybe 50

1:21:32

years ago, they might say not much more

1:21:34

than 4 billion or 3 billion. That's

1:21:36

right. When there's 10 billion, the question is, will people say,

1:21:38

well, not really much more than 15 billion? Well,

1:21:41

they might. But I'm saying that

1:21:44

they might realize that

1:21:46

the quality of life would be greater

1:21:49

with a lower population density. Well,

1:21:52

but the question I guess I have is, isn't it obvious

1:21:54

that the quality of life in the world would already be better

1:21:57

if we had less than 8 billion people.

1:21:59

And

1:22:03

I'm not sure. Okay, interesting.

1:22:06

Well, I hope in your Pontifical Academy role, when you

1:22:09

go down to the Vatican that maybe you can talk,

1:22:12

since you do indirectly, at least talk to the Pope,

1:22:15

you might talk a little bit about population in the global

1:22:17

South and maybe help, at

1:22:19

least that conversation move forward with,

1:22:22

as you say, one of the people who probably

1:22:24

has a greater following in the global South than

1:22:27

anyone else might have. But

1:22:30

anyway, one of the things you talk about

1:22:32

is natural capital,

1:22:34

which comes to my question of

1:22:36

when I talk about social sciences sometimes

1:22:39

disparagingly, the science

1:22:42

I'm most disparaging of is economics.

1:22:47

And you make the point that

1:22:49

we don't,

1:22:51

we don't calculate

1:22:53

necessarily, or at least traditionally economics hasn't

1:22:55

calculated it. They calculate capital,

1:22:58

but not natural capital. It doesn't

1:23:00

feature in national budgets. For example, as you say,

1:23:03

when a forest is cut down, it's

1:23:05

counted as the economic

1:23:08

benefits that come from the sale of the products and stuff,

1:23:10

when it should be recorded as a negative

1:23:12

contribution to a national stock

1:23:14

of natural capital. It's been urged by

1:23:16

your colleague

1:23:18

Partha Dasgupta at Cambridge.

1:23:21

But currently in most countries that does not happen.

1:23:23

Isn't, is this an example

1:23:27

of really the failure of economics of us

1:23:30

to, I mean, economics has led us astray in this

1:23:32

regard that, that we, if, if

1:23:34

we continually think of capital in terms of monetary

1:23:37

resources alone, then,

1:23:39

then we miss.

1:23:42

When it comes to reaching the,

1:23:44

it didn't matter when we weren't at the global limits,

1:23:46

when we could move on, when we just spoiled an environment,

1:23:49

you could move on to the next one, or we didn't

1:23:51

care if you just spoiled the environment of some

1:23:53

poor country because you were England and you had a

1:23:55

big empire and you could move on.

1:23:58

But...

1:23:59

Doesn't

1:24:02

it mean really that economics has failed us in that regard?

1:24:05

Well, I think it's only

1:24:07

in recent decades that people have taken this seriously.

1:24:10

I mean, in their 1971 paper, Ehrlich

1:24:13

and Holdren did address this sort of issue.

1:24:16

And you mentioned my

1:24:18

colleague and old friend, Partha Dasgupta, wrote

1:24:20

a 500 page report, which

1:24:24

was input to the Montreal

1:24:26

conference, which took place in early

1:24:29

December this year. And this

1:24:32

is, I think, leading people to realize

1:24:34

that

1:24:35

natural capital is something which is

1:24:38

under threat with the greater pressure

1:24:40

from larger numbers of people,

1:24:43

and more demanding populations.

1:24:46

And so I think it's being taken

1:24:48

on board. And I think we

1:24:52

shouldn't be too despairing of economists.

1:24:54

Okay, good. Yes, you're always

1:24:57

more generous in this regard than me. It's

1:24:59

a very hard subject. Yeah, yeah,

1:25:02

yeah, it's a hard subject. I agree. It's

1:25:05

very hard subject and therefore hard

1:25:07

to know. Hard subjects, it's

1:25:09

hard to know when to trust the results from

1:25:11

hard subjects. I

1:25:13

guess that's the point I would say. And I'll

1:25:15

leave it at that. And when it's hard, that's why you and

1:25:18

I do the simple stuff. That's

1:25:20

why you and I do the simple stuff. We

1:25:23

do the astronomy and the cosmology, which is so much easier.

1:25:27

I agree.

1:25:29

The,

1:25:34

I guess, to leave

1:25:36

this area,

1:25:38

you talk about the IPCC

1:25:40

as a very important group. And

1:25:43

I think you say there are three major findings

1:25:45

ultimately that are sort of uncontestable.

1:25:48

humans are unequivocally responsible for global warming.

1:25:51

Some climate-induced

1:25:53

changes, such as continued sea level rise are irreversible,

1:25:56

at least for centuries. And it's very late,

1:25:58

but here's where you're more optimistic:

1:26:00

it's very late, but thankfully not

1:26:02

too late to avoid the worst impacts of climate

1:26:04

breakdown. And

1:26:07

when it comes to this,

1:26:11

the key question you raise, which is interesting (I

1:26:14

don't know, a metaphysical question maybe, certainly a philosophical

1:26:16

one in some sense), is risk:

1:26:19

this wonderful statement that

1:26:21

risk assessment is different from risk management,

1:26:24

and they're very different. And we

1:26:26

can make, and we can do the kind of calculations

1:26:29

that

1:26:30

insurance companies do, which is to multiply the probability of a

1:26:33

risk by the magnitude of its impact and

1:26:36

decide then whether to act or not.
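A minimal sketch of that insurance-style calculation, with hypothetical numbers chosen only for illustration: the expected loss from a risk is its probability times the damage if it occurs,

\[ \mathbb{E}[\text{loss}] = p \times D. \]

For a tipping-point scenario with, say, p = 0.01 over the coming century and damage D of order \$100 trillion, the expected loss is \$1 trillion, so even a very improbable threat can rationally justify large spending to reduce p or D.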

1:26:38

But, and we don't even do

1:26:41

that yet globally and probably we

1:26:43

should, especially given the uncertainty

1:26:45

of certain tipping points, such as the global

1:26:47

impact of potentially the ice sheet in

1:26:50

Greenland melting and raising sea levels by 21

1:26:52

feet, which would change

1:26:55

the world, it

1:26:59

make the world completely different than it is now.

1:27:02

That may have a small probability,

1:27:05

but a large impact. And maybe we should consider

1:27:07

that as an uncertainty, which

1:27:09

instead of leading

1:27:11

us to inaction because it's uncertain, should

1:27:14

lead us to action because we want

1:27:17

to, and there's another good phrase

1:27:19

in your book somewhere about how we want

1:27:21

to think not just in time, but just in case. That's

1:27:24

the phrase used.

1:27:26

But you talk about weighing

1:27:29

long-term because climate change is a long-term

1:27:32

issue

1:27:33

and it's weighing future generations

1:27:35

versus the present. And that's an interesting

1:27:38

conundrum and you discuss it. And

1:27:40

I thought maybe I'd ask you to elaborate on that a little

1:27:42

bit.

1:27:43

Yes. Well, of course,

1:27:46

climate change is something which

1:27:49

is getting more serious, but

1:27:51

the very serious issues like the melting of all Greenland

1:27:54

ice. That wouldn't happen

1:27:56

in less than a few centuries. And

1:27:58

so the question is to what extent, in

1:28:00

our calculus of risks,

1:28:04

we should discount

1:28:06

the far future. And of course,

1:28:08

most politicians are happy to discount

1:28:10

the future. Completely. Yeah,

1:28:13

for years. Yeah. And

1:28:17

I think, as we said earlier, we need

1:28:20

to

1:28:21

persuade

1:28:23

the public and politicians that they should

1:28:25

think about what will happen in the lives of their children

1:28:28

and grandchildren, who will be alive at the end

1:28:30

of the century. And so we ought

1:28:32

to worry about that.

1:28:34

And of course, that is the reason

1:28:36

why

1:28:37

it's very sensible to have the target

1:28:39

of keeping the

1:28:41

mean global temperature rise

1:28:43

below two degrees, 1.5

1:28:45

ideally, but certainly two, because

1:28:49

that will give

1:28:51

us less

1:28:53

chance of encountering some tipping

1:28:55

point that would make the changes irreversible,

1:28:58

and, as it were, buy time

1:29:00

for clever ideas to

1:29:03

make it easier for us to

1:29:05

depend on carbon-free

1:29:07

energy generation. So I think that's

1:29:10

very sensible that we should value

1:29:12

the

1:29:13

long term to that

1:29:15

extent.
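A worked example of how much the discount rate matters in this argument, with rates chosen purely for illustration: a damage D incurred t years from now has present value

\[ PV = \frac{D}{(1+r)^{t}}. \]

For D = \$1 trillion and t = 200 years, a market-style rate of r = 5% gives PV of about \$58 million, effectively nothing, while a near-zero ethical rate of r = 0.1% gives about \$820 billion. The choice of r is a value judgment, not a scientific fact, and it largely decides how much present sacrifice the far future can claim.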

1:29:17

But it

1:29:22

did cause me to think about this. The point is

1:29:24

that the other aspect is

1:29:26

the longer term out you go, the

1:29:28

more uncertain you are about what's going to happen.

1:29:31

And so therefore it's harder to weigh

1:29:33

them the same amount because you don't know all of the variables.

1:29:35

For example, you could have looked

1:29:38

long term 40 years ago at food

1:29:40

production and made the decision that people

1:29:42

will be starving today at a level they're not because

1:29:45

you didn't know about the Green Revolution.

1:29:47

Yes, no, but

1:29:50

that's precisely the case. And so you got

1:29:52

to ensure that we make realistic predictions.

1:29:55

And

1:29:57

I think where some of the

1:29:59

other predictions... in my book about the chance of some

1:30:02

bioweapon leading

1:30:05

to a pandemic, those are very

1:30:07

hard to estimate and

1:30:10

the chances are getting larger year by year

1:30:12

nonetheless. But I think in the case of climate

1:30:14

change, we do know enough

1:30:17

to know that we're heading for a temperature

1:30:19

rise of say

1:30:22

three degrees this century. At

1:30:24

least maybe four, which is hard to visualize, and

1:30:27

that could be dangerous for many parts

1:30:29

of the world and that's

1:30:31

not an improbable scenario, that's

1:30:33

the likely scenario. Most

1:30:35

likely, well yeah they most... If

1:30:38

we don't change course and so but

1:30:41

I think when we talk

1:30:44

about several centuries

1:30:46

into the future I fully agree with you that

1:30:48

we don't know enough to make

1:30:51

predictions and therefore it's

1:30:54

not

1:30:55

reasonable to make great sacrifices now

1:30:58

for people

1:31:00

in several centuries in the future, which

1:31:02

we have no idea what their preferences and tastes are.

1:31:05

And especially because as I discuss

1:31:07

later in the book, human beings themselves

1:31:10

may have changed in the next few centuries

1:31:13

in the ways they haven't over the last 50,000 years.

1:31:15

And so for all these reasons, we

1:31:18

don't know what

1:31:19

the

1:31:21

preferences will be that far

1:31:23

ahead. And so I would say that to plan

1:31:27

for eventualities which

1:31:30

are plausible within the lifetime

1:31:32

of some people already living, which

1:31:34

means by the end of the century, is

1:31:37

prudent and would have public support.

1:31:39

But beyond that, I completely agree

1:31:42

that we have to be

1:31:44

cautious in how much we

1:31:46

weigh, uh,

1:31:48

weigh the arguments in favor of the far-distant

1:31:50

future. Yeah, yeah, and that's

1:31:53

right. Although of course there's

1:31:55

always a wrench in the works, and there are people who say that

1:31:57

even at 1.5 degrees we have irreversible

1:32:00

tipping points, but as

1:32:02

you say, we don't know. Well, okay, so we

1:32:04

have a few centuries to deal with it.

1:32:06

And

1:32:09

there's mitigation, there's adaptation, there's, you know,

1:32:12

and so it is true that, you know, even if Greenland

1:32:15

ice melting is inevitable, we

1:32:17

have three or four centuries at least

1:32:19

to deal with the worst parts of that. There's a

1:32:21

lot that can

1:32:23

be done in a way, you know, humanity

1:32:25

can respond in principle. And there is, we

1:32:27

both you and I have great optimism in science and technology.

1:32:30

There's not so much optimism in politics.

1:32:33

But to leave that, I think I'll leave that.

1:32:36

But I will read a quote

1:32:39

from you, which I wanted, which we've already

1:32:41

said. And it's where you use this wonderful

1:32:43

sentence, but it's still crucial, however,

1:32:45

to keep clear water between the science on the

1:32:47

one hand and the policy response on

1:32:50

the other. Risk assessment should be separate

1:32:52

from risk management. And I think what

1:32:55

you don't say, but it's implicit, is risk

1:32:57

assessment

1:32:58

is in some sense the province of scientists

1:33:00

and researchers. Risk management is

1:33:02

the province of the public and politicians.

1:33:05

And I think that's very important. We'll come back to that

1:33:07

over and over again. But

1:33:10

you and I would argue it's the role of scientists

1:33:12

to at least provide

1:33:14

that input

1:33:15

of

1:33:17

risk assessment. Because

1:33:19

without that, risk management

1:33:22

is silly.

1:33:23

And it comes back to a debate you and I

1:33:25

had earlier, where really I think we're

1:33:27

on the same side here, but this

1:33:30

question of whether you can get ought from is, which

1:33:33

we both agree you can't, I think.

1:33:35

But

1:33:35

my point was that without is, you really

1:33:38

can't get ought, it seems to me. And

1:33:40

I think that's really the, science

1:33:42

gives us the is and the rest

1:33:44

is the ought. And

1:33:47

but without it. It gives the maybe

1:33:49

anyway, if not the. Yeah, the maybe. But without it, then

1:33:52

the ought is just unrealistic. And that

1:33:55

was part, I guess, that's part of my problem

1:33:57

with religion. But anyway, we won't get there right

1:33:59

now.

1:34:00

But I do want to go

1:34:02

to bio one or two things. You

1:34:04

make a point that people are, actually

1:34:10

one of the areas where England and

1:34:13

certainly Europe has unfortunately

1:34:14

gone in the wrong direction compared to the United

1:34:16

States is genetic

1:34:18

modification of foods and things like that. Where

1:34:23

really decisions are

1:34:25

made, political decisions are made that

1:34:27

really don't make sense from my

1:34:29

scientific perspective.

1:34:31

But you agree? When

1:34:34

it comes to genetic, GMOs,

1:34:37

the fact that- Yes, yes, yes. Well,

1:34:40

I mean, I would agree that probably

1:34:43

Europe has been too cautious.

1:34:46

That's of course what's

1:34:49

happening now is that the limited

1:34:52

kind of genetic modification involving

1:34:55

CRISPR,

1:34:56

where it doesn't involve trans species

1:35:00

changes, is less

1:35:03

risky. And certainly

1:35:06

one of the

1:35:07

only things which is a benefit

1:35:10

of Brexit for the UK

1:35:13

is that we are now legalizing that

1:35:15

kind of genetic

1:35:18

modification, where it's not anything

1:35:20

trans species. And

1:35:23

I think that's probably reasonable. I

1:35:25

think we're right to be cautious about trans

1:35:28

species.

1:35:29

Yeah, it's, well,

1:35:31

as you point out, the gulf between what medical

1:35:33

science may enable us to do and what is prudent

1:35:36

or ethically

1:35:37

acceptable to do will shift and

1:35:40

widen in many cases in ways that'll be

1:35:42

difficult to cope with. Because biotechnology

1:35:44

really is the area

1:35:46

of greatest and most rapid growth

1:35:49

in terms of science right now.

1:35:53

And you point out something interesting to me that hadn't

1:35:55

really hit me, which is that people are much more

1:35:57

hesitant to deal

1:36:00

with genetic

1:36:01

modifications that suggest

1:36:04

enhancements

1:36:05

than they are

1:36:07

to deal with

1:36:09

genetic modifications

1:36:12

that address problems.

1:36:14

Could

1:36:17

you discuss that a little bit? Because that's an important point, I think. Well,

1:36:20

I think it's true, isn't it, that the

1:36:23

obvious case where there's just one gene that gives

1:36:25

you Huntington's disease and that. And if

1:36:27

by CRISPR you can eliminate that gene, I

1:36:29

think everyone would say that was a good thing. But

1:36:34

human enhancement, making people better

1:36:36

looking or more intelligent, everyone

1:36:38

knows that that

1:36:41

would involve understanding the interaction

1:36:43

of many thousand genes. And

1:36:45

so you couldn't even start until

1:36:48

you've had an AI to analyze

1:36:51

millions of genomes to find out which was the

1:36:53

optimum combination. And then you've

1:36:55

got to

1:36:56

have the ability to synthesize

1:36:58

the genome with that optimum

1:37:00

combination. And even then, you

1:37:02

won't know if you haven't introduced a lot

1:37:05

of small negative effects that will

1:37:07

outweigh the benefit. And so

1:37:10

the idea of human enhancement

1:37:12

in a serious way

1:37:13

does look

1:37:15

very, very far in the future. And

1:37:18

then of course, if it

1:37:20

were realistic, then you have to ask, would

1:37:23

it be something which we should

1:37:25

encourage? And one,

1:37:28

I would say, if it's something that everyone could have,

1:37:30

that's great. But if it's going to lead

1:37:32

to some sort of elite, then

1:37:35

I think one would be slightly

1:37:37

worried about it. And of course, this has come

1:37:39

up in a sort of semi serious

1:37:42

way now, with these three

1:37:44

labs, two in California, and one

1:37:46

here in Cambridge, called Altos

1:37:49

Labs, bankrolled by billionaires,

1:37:51

which are going to focus on

1:37:55

aging, and extending the healthy lifespan. I'd be

1:37:58

rather pessimistic.

1:38:00

about the prospects

1:38:02

of any drastic success. But

1:38:05

of course, if there were to be drastic

1:38:07

success and there could be some small

1:38:11

elite that could live twice as long as the rest

1:38:13

of us, then we'd have to ask, is that

1:38:15

something we should want to happen? I don't know,

1:38:17

but certainly there are these

1:38:19

labs and the way I put it in my book

1:38:22

is that

1:38:24

these billionaires, when they were young, they

1:38:26

wanted to be rich. Now they're rich, they want

1:38:29

to be young. It's not quite that easy. Yeah.

1:38:32

And should we encourage them? I don't think we should. Well,

1:38:35

I have no, well, it's their

1:38:37

money, but, and at some level, you

1:38:39

know, yeah. I mean, I don't mind

1:38:41

them wasting their money, primarily

1:38:43

because

1:38:45

of something you say somewhere in the book. I think you

1:38:47

say it, you do. When we,

1:38:50

even when we're both disparaging about human

1:38:52

space exploration, but the one thing that often

1:38:54

happens is when you throw money at technology that you

1:38:56

often find useful things on the side.

1:38:59

And so maybe there'll be something useful that will come of this

1:39:03

aging research, that may be useful for everyone,

1:39:05

the unexpected results. So whenever

1:39:07

they spend a lot of money, I have less worries

1:39:09

about billionaires spending money on new technologies, because

1:39:12

often this, it'll result in something that

1:39:14

might actually be useful for others. No,

1:39:17

I agree, because they can't target their work.

1:39:19

It's rather like cancer

1:39:20

research in the 90s, where

1:39:23

they didn't know what to do directly, but

1:39:25

they indirectly understood cell biology much

1:39:27

better. And this will lead to understanding

1:39:29

the way in

1:39:32

which chromosomes

1:39:34

change with age and all that. So it's a good thing.

1:39:37

Yeah, to get back to that intelligence

1:39:40

thing, I would argue it's even, you

1:39:42

presented all of the concerns and issues

1:39:45

that make it both

1:39:46

logistically

1:39:49

difficult to imagine and also

1:39:51

ethically questionable.

1:39:55

You talked about basically the disparity

1:39:57

of access to whatever enhanced

1:39:59

resources are available. I would argue

1:40:01

that in the current world it's even another thing, you'd

1:40:04

have people arguing about what intelligence

1:40:07

is and whether it's really fair to

1:40:09

argue that more intelligent

1:40:11

really has any absolute meaning because

1:40:13

people... Yeah,

1:40:15

that's the thing, yes. Yeah,

1:40:18

so it'd be a lot of that because you'd see that right now and say,

1:40:20

well, people have a right to be emotionally intelligent

1:40:22

and not in whatever. So you'd

1:40:25

have that huge social issue. But

1:40:27

the

1:40:27

The other aspect of biomedicine that

1:40:30

is a worry, you point out, is the bioterrorist aspect.

1:40:36

Now,

1:40:39

both you and I have been involved

1:40:41

at various levels at

1:40:43

times in the Bulletin of the Atomic Scientists. And

1:40:45

as you know, I was chair of the board of sponsors

1:40:47

for a long time, interacting with you as one of the sponsor

1:40:50

members.

1:40:52

And

1:40:55

I used to be more worried about bioterrorism.

1:40:57

And we had several meetings with biological

1:41:00

experts who argued to us to

1:41:02

be not as concerned

1:41:04

globally as much as locally. I mean, that you could

1:41:06

create local disasters, but

1:41:09

that the robustness of life would be very

1:41:11

difficult to create a new virus.

1:41:14

You know, our bodies have had 4 billion years of

1:41:17

opportunities to fight viruses.

1:41:19

And so it would be difficult

1:41:22

from scratch to create a totally new virus that

1:41:24

would be able to totally defeat the body's

1:41:26

defense mechanisms globally.

1:41:31

And that it isn't as, while

1:41:34

it is true that you can get, that hacking

1:41:37

is now a tool

1:41:38

for almost anyone who wants to in

1:41:41

their garage or MIT undergraduates,

1:41:45

that actually to really

1:41:47

do sophisticated

1:41:50

genetic manipulations is still

1:41:52

a rather difficult art,

1:41:55


1:41:58

and requires a great deal of scientific infrastructure and

1:42:00

therefore it isn't as much of a worry as some

1:42:02

people would suggest. What do you think about

1:42:04

that? Well,

1:42:05

I agree that it needs

1:42:08

sophisticated expertise and

1:42:11

it may be done in

1:42:13

some lab which specializes in this. There are 60

1:42:18

labs around the world which are sort of rated for

1:42:20

security and it's

1:42:22

not clear how good the security is in

1:42:24

all of those. But of

1:42:27

course, it has been possible

1:42:29

for the last 10 years to

1:42:31

make the influenza virus more

1:42:33

virulent or more transmissible, or vice versa. The

1:42:37

same could be done for the coronavirus now.

1:42:40

And so I think it's by

1:42:42

no means

1:42:43

implausible that

1:42:47

viruses

1:42:48

like

1:42:50

the Zika virus or others could

1:42:52

be made more virulent or more transmissible

1:42:54

or have a long latency period, and many other

1:42:56

things that make them more dangerous to the

1:42:59

world, by

1:43:01

the application of

1:43:03

techniques. And

1:43:07

this is such a catastrophic

1:43:10

threat potentially that

1:43:11

one

1:43:13

has to be very, very concerned about

1:43:16

the security of labs allowed to do this sort of thing.

1:43:19

And you made a bet on that. You, in fact, in 2003, way

1:43:21

in advance of the pandemic, you made

1:43:23

this bet that,

1:43:26

and you quote, bio-terror or

1:43:28

bio-error will lead to 1 million

1:43:30

casualties in a single event within a six month

1:43:33

period starting no later than December 31st, 2020.

1:43:36

And the interesting point is,

1:43:38

I actually think you probably won the bet based

1:43:40

on what I know, but we don't know that

1:43:42

it's quite possible that coronavirus was

1:43:45

a bio-error due to these gain of function

1:43:48

activities and other...

1:43:50

In Wuhan, yes. Well, of course, Steven

1:43:54

Pinker took me up on this bet and he

1:43:56

wrote an article, which I summarized in my

1:43:58

book,

1:44:00

saying that we weren't going to settle the bet.

1:44:03

But as you mentioned, I

1:44:06

said that I

1:44:09

would win if the pandemic was

1:44:11

caused by bio-error or bio-terror. And

1:44:15

if it was a lab leakage, I would win. But of

1:44:18

course, as you know, the balance of

1:44:20

opinion is that it wasn't a leakage,

1:44:22

but it could have been. It's not a crazy hypothesis.

1:44:25

And so, in fact,

1:44:27

Steven and I wrote in the New Statesman

1:44:30

nearly 18 months ago, saying

1:44:33

we weren't going to settle the bet because of that uncertainty.

1:44:35

And we went on to say that

1:44:37

if it turned out that it

1:44:41

had been a leakage from the lab,

1:44:44

then it's better if we never know definitively,

1:44:47

because then the tragedy would

1:44:49

have a villain. And

1:44:50

if it

1:44:52

could be blamed

1:44:53

on the Chinese,

1:44:56

that would aggravate the already

1:44:58

disastrously bad relations between

1:45:01

China and some Western countries.

1:45:03

And so it would be better if we never knew. Wow,

1:45:06

that's interesting, to hear a

1:45:08

scientist say it's better that we never

1:45:10

know. I understand politically.

1:45:12

So it's true, as a political issue, it's

1:45:15

important, but one would also argue, and

1:45:17

I think Matt Ridley did in the book he

1:45:19

wrote on this subject, and I talked about it, that

1:45:22

the benefit

1:45:23

of knowing is so we don't repeat it.

1:45:25

And the question is, which is better: to know

1:45:28

what went wrong

1:45:30

and therefore not repeat it, or

1:45:32

to not know so we don't exacerbate fools

1:45:35

who like to

1:45:36

foment

1:45:37

hatred? So let me say, I

1:45:40

don't see that argument at all, because there's no reason,

1:45:42

I mean, why we shouldn't tighten up security anyway.

1:45:45

And as I say, I worry very much about there

1:45:47

being 60 labs that could do this sort

1:45:49

of thing, which are supposed to be graded for

1:45:51

security. And I think it's

1:45:53

very, very important to ramp

1:45:56

up the security and I also

1:45:58

think that we are going to have

1:46:00

to have fairly intrusive

1:46:02

surveillance of people with this expertise

1:46:04

because one person

1:46:06

doing this sort of thing

1:46:08

is too many.

1:46:09

I think whether

1:46:11

or not the Wuhan outbreak

1:46:14

was caused by some leakage

1:46:17

rather than being natural, it's

1:46:19

a wake-up call.

1:46:20

It's sort of a wake-up call. Yeah,

1:46:23

it'd be neat to know which kind of

1:46:25

techniques are most dangerous. I mean, I'd like

1:46:27

to know that. That's why I guess I'd like to know

1:46:30

which, because there could be some techniques

1:46:32

which may appear to be dangerous but are not, and are easily

1:46:35

controlled, that may be beneficial. Once again,

1:46:37

this question of what's beneficial. And

1:46:39

so that's why I guess I fall on the side

1:46:41

of knowing and hoping we can deal with the

1:46:43

hatred.

1:46:45

I don't think it would

1:46:48

make any difference to what we ought to be doing,

1:46:50

yeah, whether or not it happened in Wuhan.

1:46:53

Okay, whether we can target, but anyway,

1:46:55

that's a question.

1:46:57

That's a detailed question. Given

1:47:00

limited resources, you know, what

1:47:02

we should target is an interesting question,

1:47:04

but you're absolutely right. We

1:47:07

need to be more prudent,

1:47:08

and this has been a wake-up call in that regard. Speaking

1:47:11

of wake-up calls, the last thing is this question

1:47:13

of a demented loner that you point out. I

1:47:16

don't know if you're as

1:47:19

into movies as I am. I don't think I've

1:47:21

ever talked to you about movies. But

1:47:24

I kind of really am into popular culture

1:47:26

movies and they influence me a lot. So

1:47:29

do you know, did you ever see the movie 12 Monkeys?

1:47:32

By Terry Gilliam, actually, who's... you know, anyway,

1:47:38

there are a number, but it's

1:47:41

not new at all. It's a

1:47:43

common theme in science fiction and

1:47:45

it was also in the Kingsman, I think,

1:47:47

and then even in the most recent James Bond movie,

1:47:49

which you may or may not have seen. Have you seen any

1:47:52

James Bond movies? Yes. Oh, good. I

1:47:54

think the recent one was an example,

1:47:56

where this is a theme of some people saying, look,

1:47:58

we need to get rid of a fair fraction of

1:48:00

the world's population. And the 12 Monkeys

1:48:02

was exactly that, a bioterror that

1:48:04

actually got out of control. But it was around

1:48:06

someone's idea that, hey, we should just introduce

1:48:09

a new virus that will solve the problem

1:48:11

for us. I think this

1:48:14

wouldn't be done by a terrorist group

1:48:16

with limited aims, nor

1:48:18

by governments in warfare,

1:48:20

who would fear the consequences. But it would be some crazy

1:48:23

person who thinks there are too many people in the world and

1:48:27

doesn't care who they kill.

1:48:28

Yeah, and so you are worried about that? You

1:48:30

think that's...

1:48:31

Yeah, well, I mean, I think it's

1:48:34

something that's improbable, but

1:48:37

it could be so catastrophic,

1:48:40

especially if the techniques of

1:48:43

gain of function become more widely disseminated

1:48:46

or more efficient. It's

1:48:49

certainly my number one worry of all these things.

1:48:52

Okay, interesting. Again, one comes

1:48:54

back to this question, and I'm not a biologist, but I come

1:48:56

back to this question of whether even a gain-of-

1:48:59

function thing would cause a global disaster.

1:49:01

There's no doubt it would cause

1:49:05

local disasters and global economic issues,

1:49:07

but it's hard to imagine anything that really

1:49:09

is going to efficiently wipe out a

1:49:11

fair fraction of the world's population, that could be

1:49:13

done

1:49:14

just because of the way

1:49:16

you know, even in the pandemic, you know, because

1:49:19

things evolve to become

1:49:21

generally less virulent, even if they are more virulent

1:49:23

initially. And,

1:49:25

anyway, you can imagine disasters

1:49:28

that are global or that are national or even

1:49:30

international, but

1:49:32

I guess I'm less worried about that; I'm

1:49:34

more worried about creating a global

1:49:36

catastrophe in terms of its

1:49:40

national, international geopolitical

1:49:42

repercussions and economic repercussions than

1:49:45

I am

1:49:46

maybe of getting rid of a third of the world's population

1:49:48

or something like that. Well, I think you

1:49:50

ought to worry more because... Okay, good. Any

1:49:53

time I learn that I'm supposed to worry more, it's better. COVID-19

1:49:57

had a fatality rate of less

1:49:59

than one percent. There

1:50:03

are other viruses

1:50:05

that have a fatality rate of 70 percent. If one

1:50:10

could modify one of those to

1:50:13

be as transmissible

1:50:14

as COVID-19, it would

1:50:17

be a mega global disaster. Oh yeah, it would.

1:50:19

I agree. I guess the

1:50:21

question is that it's a

1:50:23

big if, and the question is, could you

1:50:26

do that? Well, I mean, even

1:50:28

releasing a natural one, you know,

1:50:30

we know the Zika virus or Ebola

1:50:33

virus, Ebola

1:50:35

isn't transmissible except by

1:50:37

touch. But if you could,

1:50:40

even without tinkering with these things,

1:50:42

a release could be... Oh, then you produce a

1:50:44

disaster, but it's always local. The big problem

1:50:47

is also disseminating it globally... No, no, no. Well,

1:50:49

well... You'd have to have a much longer latency

1:50:51

period, as you point out. You'd have to design it to.

1:50:54

Well,

1:50:54

no, but why

1:50:57

shouldn't what happened with COVID

1:51:00

happen with this other one? And so it's

1:51:04

already... I mean, there's been... I

1:51:06

guess, because it's a... well, look,

1:51:08

I'm not one

1:51:11

to say it can't happen. I guess the point I'm

1:51:13

saying is that with viruses like Ebola, there have

1:51:15

been leaks of those. And generally, because

1:51:18

they're not so transmissible, they've been controlled.

1:51:24

And so I guess I view it as a danger but not

1:51:26

a global danger yet. But if

1:51:28

one could... Yeah,

1:51:30

it

1:51:30

is something. Once again, it's best

1:51:32

to think of the... I just watched a movie

1:51:35

where someone said that it's best to think of the worst

1:51:37

in advance because

1:51:41

someone should be thinking of the worst in advance in any

1:51:43

case because if you don't,

1:51:45

then you might be

1:51:47

surprised. I would say it's not that implausible and

1:51:49

the probability is going up year by year because

1:51:53

the techniques are becoming more

1:51:55

understood and

1:51:58

more widely disseminated.

1:52:00

Now, here's the thing, Martin. I

1:52:03

actually, I do want to, I wanted

1:52:05

to go a little bit more into,

1:52:13

well, I want to, we're

1:52:15

almost at the end of the first part of your book, okay,

1:52:17

and what I'm going to suggest to you, if it's okay,

1:52:19

I'm enjoying the discussion. I think people

1:52:21

will enjoy it a lot. I think

1:52:24

what I'd like, maybe, is to go over

1:52:26

AI with just one more question, and then, if we

1:52:28

could,

1:52:29

we can take a break and then, another

1:52:31

day continue this for the second half

1:52:34

of your book? Would you be willing to do that?

1:52:37

Yes, or, I mean, it's easy for me to

1:52:40

take a break for just

1:52:42

half an hour and then continue

1:52:44

I

1:52:46

can do it. I just didn't want to tire you out if

1:52:48

you're willing to do it. No, it's just that this week

1:52:50

is particularly sort of free of other... Because it's

1:52:53

the holiday week, I agree. Because

1:52:55

I think we... I will... There's enough substantive

1:52:57

issues, I think, to go for at least another hour. And

1:53:00

if in my opinion... No, that's fine

1:53:02

by me.

1:53:11

I hope

1:53:11

you enjoyed today's conversation. This

1:53:14

podcast is produced by the Origins Project

1:53:16

Foundation, a non-profit organization

1:53:19

whose goal is to enrich your perspective

1:53:21

of your place in the cosmos by providing

1:53:24

access to the people who are driving

1:53:26

the future of society in the 21st century

1:53:29

and to the ideas that are changing

1:53:31

our understanding of ourselves and

1:53:33

our world. To learn more, please

1:53:36

visit originsprojectfoundation.org.
