Daniel Dennett: Do We Have Free Will?

Released Sunday, 21st April 2024

Episode Transcript
0:00

Everybody, just a little bit of a somber

0:02

note as we start the podcast something that's

0:04

never happened before. But

0:06

I have to report on the passing

0:08

of today's guest, Daniel Dennett, whom

0:11

I spoke to just a few weeks ago. I

0:13

think this was his final interview on

0:15

a podcast. He passed away today, I'm

0:17

informed; this is Friday, April

0:20

19th, 2024,

0:23

and as you'll hear, I had such

0:25

a great conversation with him. I already recorded

0:27

an intro, both a video and an audio

0:29

intro and I

0:31

was just so delighted and touched by him and

0:33

had so much fun with him. He

0:36

influenced me greatly even though I really

0:38

only got to delve into his

0:40

work in the last few months

0:43

leading up to the interview. Before, of

0:45

course, I knew about him. He's a world-famous

0:47

intellectual and contributor to

0:50

many fields of philosophy and many other

0:52

things. And so it's

0:54

a great shock to me. I wanted to

0:56

release this as a, you know,

0:58

token of my gratitude to him for

1:01

this wonderful interview. You'll see, you

1:03

know, he'd just come back from a

1:05

dentist appointment and we're talking and,

1:07

you know, part of me feels guilty to have spent,

1:09

you know, an hour and a half, two hours, whatever

1:12

it turned out to be, with, you

1:14

know, the final weeks of his life,

1:16

but on the other hand, it's a great gift

1:18

and hopefully his family will see this

1:20

and share in

1:23

the delightful conversation that we had together

1:26

I view it as a great

1:28

honor and privilege to have hosted him and

1:30

especially now made more poignant by his passing

1:33

It's truly affected me. And I hope the

1:35

interview will be meaningful to you as much

1:37

as it was to me as well. So

1:40

with that I'll start with the intro that we

1:42

had planned before this sad announcement

1:45

came a few days ago before the release

1:47

of this episode. And I

1:51

hope that Dan is

1:53

happy and resting in peace wherever he is

1:56

Thanks, Dan. Hey there friends. Just a

1:58

note opening up this podcast

2:00

with another renowned atheist

2:04

scientist PhD in the tradition

2:06

of Sam Harris and Robert

2:08

Sapolsky. This episode with

2:10

Dan Dennett was a real treat, most

2:13

of which came from the fact that

2:15

he's incredibly open, honest, and just a

2:17

lot of fun. I really had no

2:19

idea what to expect with him. He

2:21

can range in interviews I'd seen with

2:24

him previously with Michael

2:26

Shermer and others. I was just

2:28

checking him out on his podcast, the

2:30

Skeptic podcast, or The Michael Shermer Show,

2:32

I think it's called now, that

2:35

he can be a little bit

2:37

cantankerous. He can be a little bit brusque,

2:39

always entertaining, he's always brilliant, but

2:42

I really had no idea what to expect. I'd

2:44

never talked to him and never read his books

2:46

before this most recent book of his, a memoir of

2:50

his scientific career. What I realized is it really is

2:52

a guidebook on how to be a good scientist. I

2:55

appreciate that because there are so few

2:57

guides for becoming a scientist. Most

2:59

of what we find out in

3:02

life is in the self-help realm,

3:04

personal relationships, being a better parent.

3:07

These are of course incredibly important things in

3:09

a day-to-day life, but there aren't really good manuals

3:11

on how to be a good scientist. I've

3:13

sort of tried to do that with my

3:15

previous books, Losing the Nobel

3:17

Prize and Think Like a Nobel Prize Winner. I

3:19

always felt that came up a little bit short

3:22

because it's very difficult to capture those

3:24

perspectives, especially since I feel like

3:26

my career is continuing to grow and

3:28

I have the blessing and benefit to

3:30

be working with so many amazing students

3:32

and researchers and fellow faculty and engineers

3:35

on the Simons Observatory, the Simons Array,

3:38

and other projects I get involved with. It's

3:40

really truly the best job in the world.

3:42

I call it the hardest three-

3:44

hour-a-week job in the world

3:46

being a professor. It's changed a lot

3:48

in 20 years, my 20th anniversary being

3:50

a professor. But talking with Dan,

3:52

it really made me appreciate how great

3:54

it is to be a professor, to

3:56

live the life of the mind and how sad

3:59

it makes me feel when I

4:01

realize how hard it is, almost

4:03

impossible, it's actually harder in

4:05

some sense to break into the

4:08

professoriate from the previous

4:10

level down, which is called post-doctoral scholars

4:12

or fellows. It's harder to do that

4:15

nowadays than it is to break in

4:17

from the level down from the major

4:19

leagues in baseball, which is AAA baseball.

4:22

In other words, you have a higher chance

4:24

of getting to be a professional baseball player

4:26

if you're already playing AAA baseball than if

4:28

you're a postdoc trying to become a faculty

4:30

member. We had 400 applicants for

4:32

one job in the last hiring

4:35

cycle at UC San Diego, and I'm sure

4:37

there are even more at more selective universities than ours.

4:39

This is a conundrum. Do you do

4:42

anyone a favor by encouraging them to

4:44

pursue academia? I'd like to think the

4:46

ladder hasn't been pulled up behind me.

4:48

I do my best to mentor, and

4:51

I've had great success with four of

4:53

my previous employees, who are professors now at

4:55

various places around the country and the

4:57

world. These

4:59

are top institutions, and it really does

5:01

fill me with pride. That's the greatest source of

5:03

pride, really, is in the words of

5:06

the great Rabbi Menachem Schneerson,

5:08

who said that a

5:10

good leader creates many followers, but

5:13

a great leader creates many leaders. I

5:16

feel like people like Dan and others

5:18

inspire me to be better at my

5:20

craft and listening

5:22

to him recount in his

5:24

inimitable way the adventures

5:27

that he's had over the years, ranging

5:29

from computer science to

5:31

laboratory experiments to pure

5:33

philosophical conferences to smuggling

5:36

items across the Iron Curtain during

5:38

the Cold War. He's really a

5:40

legend and is inspirational.

5:42

I don't have the opportunity to

5:45

deal with such weighty, lofty issues in

5:47

my day job as a cosmologist. I

5:50

do have the same responsibilities that he

5:52

has to be a good mentor, friend,

5:54

advisor. We spend almost as

5:56

much time, if not more, with our students

5:58

and faculty colleagues as we do with

6:00

our family. So there are very few books

6:02

written about how to be a good scientist,

6:04

how to science. I'm working on a second

6:07

volume of Think Like a Nobel Prize Winner,

6:10

actually called Into the Impossible, named after

6:12

this podcast, subtitled Think Like a Nobel

6:14

Prize Winner. But some people

6:16

have criticized that book for kind

6:18

of aspiring to the unattainable, which is not

6:20

only to be a professor, but then to

6:22

win the Nobel Prize, of which there are

6:24

actually literally fewer people

6:27

that have that accolade in

6:29

science and physics than play in

6:31

the NBA, fewer Nobel laureates

6:33

in physics than NBA stars. And I've interviewed a

6:36

great deal of them, 19 of them so far.

6:38

And so the second half, nine or so people,

6:40

are coming into a new book. It'll be out

6:42

hopefully by the end of this year, calendar year

6:44

2024, working on

6:47

that with Scribe Publishing and my great co-authors

6:49

and friends that help edit and put that

6:51

book together. So look for that soon. And

6:53

I'm going to focus in that book on

6:55

really how to be a good scientist. The

6:57

first volume of it came out in

6:59

2021, great success, but it

7:01

really was kind of a

7:03

panoply of different observations that I've

7:05

learned about Nobel Prize winners rather

7:07

than how to be like them

7:09

in the sense of how do

7:11

you collaborate with competitors? How do

7:13

you overcome the imposter syndrome?

7:16

How do you struggle with

7:18

colleagues that sometimes are your competitors?

7:20

And how do you retain an

7:22

open mind to criticism? Those are

7:24

the real traits of a good

7:26

scientist. And I never get

7:29

the chance to really explore that with

7:31

my students, either undergraduates or graduate students.

7:33

So this will be kind of my

7:35

endeavor to do that. And I'll carry

7:37

lessons from Dan and others, even though

7:39

Dan obviously has not won a Nobel

7:41

Prize, at least not yet. I don't

7:43

know if that will ever happen. I

7:45

don't think so. But he is a

7:47

brilliant scientist. He's got an incredible personality.

7:50

And it's just refreshing to talk to

7:52

somebody who's got such an unabashed, unafraid,

7:54

opinionated kind of perspective. He'll talk about

7:56

people that he's nominally associated with, Dawkins,

7:58

Harris, and Christopher Hitchens, the late,

8:00

great Christopher Hitchens, and he'll talk about them

8:02

critically. But it's all done

8:04

with bonhomie and not

8:06

in a conciliatory way. I think Dan loves

8:09

to win arguments, but he does

8:11

so collegially, in a friendly way, unless you kind

8:13

of provoke him to evoke his

8:15

ire. So I know you're going

8:17

to enjoy this. It's kind

8:19

of part of a theme this year so

8:21

far. We are going to have Donald Hoffman

8:24

back on for part two. In that interview,

8:26

I love Don. I think he's

8:28

a wonderful person, a mensch,

8:30

but I'll push back as I did with

8:32

Dan and with Sam and with Robert Sapolsky.

8:34

So I hope you don't fault

8:36

me for that because I think it's part

8:39

of my job is to ask hard questions

8:41

and push back with tender love and respect.

8:43

And I know that sometimes irritates you because

8:45

I don't always merely parrot

8:47

what my guests are saying, but I think this

8:49

is more interesting. If you like that kind of

8:51

interview style, I can recommend a lot

8:53

of other podcasts that you should listen to. But I hope

8:56

you'll stay and listen to this one and give me feedback.

8:58

Also, let me know what you think about these audio

9:00

essays; I plan to do more of

9:02

them. So I hope you like them. I don't

9:04

have plans to do them every

9:07

single episode, but with kind of

9:09

marquee names, literally like Dan or Sam

9:11

or Robert, I feel the need to

9:14

kind of provide a little supplementary material and

9:17

color to the interview. And I hope that you

9:20

will very much appreciate this

9:22

wide ranging discussion with a

9:24

true legend. And that's Professor Dan

9:26

Dennett. And now sit

9:29

back and enjoy. As we go into the

9:31

impossible, stay tuned for another intro for

9:33

the next minute. And then we'll go

9:35

right into the interview. Enjoy friends. Today

9:39

into the impossible, we welcome

9:41

a renowned philosopher of

9:43

the human mind, a man who's a

9:45

legend, a cognitive scientist, professor,

9:47

and a raconteur par excellence.

9:49

He's also a vocal atheist.

9:52

And yet he makes common

9:54

cause with people like me

9:56

who call themselves practicing

9:59

agnostics. We'll see what that

10:01

means later on. But Daniel Dennett is

10:03

a legend, and he's known as one

10:05

of the four horsemen of new atheism,

10:07

not the apocalypse. He's been at the

10:09

forefront of discussions on consciousness, free will, and

10:11

the impact of Darwinian evolution

10:13

on religious belief. His incisive

10:15

wit, good humor, and

10:18

keen intellect made him a must-get

10:20

guest on the Into the Impossible

10:22

podcast. He's been a major figure

10:24

for decades in debates, conversations,

10:28

and writings about the existence of

10:30

God and the nature of

10:32

belief and free will. His works

10:35

are tremendously influential, and they include

10:37

Breaking the Spell, Consciousness Explained, and

10:39

many more that have provoked admiration,

10:42

controversy, and challenged readers to

10:44

reconsider their most deeply held beliefs about

10:46

the mind and its relationship to the

10:49

physical world. Today, I have

10:51

the opportunity to explore these topics,

10:53

along with your questions, for this

10:55

phenomenal renowned professor. So without further

10:58

ado, let's jump right in and

11:00

discuss this magnificent new memoir from

11:03

one of the heroes of the New Atheist Movement.

11:05

["The New Atheist Movement"]

11:12

Any sufficiently advanced technology is

11:15

indistinguishable from magic. Open

11:18

the pod bay doors, now. How

11:20

are you doing, Dan? I'm doing just fine,

11:23

how are you? Great pleasure

11:25

to connect to you. I listened to your

11:27

latest book in audio format,

11:29

and it's not your voice,

11:31

and so it's good to hear your actual

11:34

voice. And as you know, Dan, we love

11:36

to judge books by their covers because what

11:38

else do you have to go on in

11:40

a Bayesian reasoning sense? So I want you

11:42

to take us through the book, and it

11:45

is unique in terms of all the 200 or 300 books

11:48

whose authors I've had the pleasure of

11:50

having appear on the Into the Impossible

11:52

podcast. This is probably the first

11:54

one that doesn't have a subtitle. So tell

11:57

me, tell me the origin of the title,

11:59

the cover illustration art and

12:01

the absence of the

12:04

subtitle. Take it away, Dan. Okay. Here's the

12:06

book. I take it

12:08

everybody can see it. Yeah, I've been thinking

12:10

I didn't want a subtitle because

12:12

I thought, that's

12:15

enough. I want to talk about my

12:17

thinking and how I got there

12:19

and it's not

12:21

about the non-academic, non-research

12:24

parts of my life. I

12:29

deliberately didn't want to go

12:32

on and on about adventures

12:34

I've had outside of academia. I

12:36

thought this is a book to talk

12:39

about what I

12:41

think, and how I think

12:44

I think and why

12:46

it's a good way to think. So

12:49

it's all about the wonderful

12:51

thinkers who've helped me. And

12:55

the first thing to say is, if you want

12:57

to do some good thinking, surround yourself with

12:59

the smartest people you can find and talk to

13:01

them, and that's the

13:03

trick. And I've had the pleasure of having

13:05

a lot of brilliant thinkers on the podcast

13:09

19 Nobel Prize winners and

13:11

many of your colleagues and friends and people that appear

13:13

in this book in one form or another, including

13:16

folks like David Chalmers and

13:19

when I had David Chalmers

13:21

on, he's from Australia. And

13:23

I said to him, you know, David, if

13:26

I had the rock band AC/DC, also

13:28

from Australia and I had them on and

13:30

I did not ask them to play "You

13:33

Shook Me All Night Long," I

13:35

would be derelict in my duties as

13:37

a host. So I want to, you know,

13:40

sort of ask you: you've had some very

13:42

deep criticisms of them, obviously,

13:44

always with respect and always from a

13:47

scholarly perspective. But in

13:49

this book you talk about your differences

13:52

with, you know, critiques of everybody. All

13:54

these guests that I've had on: Penrose,

13:56

Hameroff, Hoffman, Chalmers,

13:58

Sapolsky, Harris even. So

14:00

let's start there. Let's start with Sam Harris

14:03

and then we'll kind of work

14:05

our way through for the audience's

14:07

benefit. We hear all these things,

14:09

Dan. Some say consciousness is clearly an

14:11

illusion. Some say it's completely

14:14

nonsense. There's no such thing. And

14:16

associated with free will. Who says that?

14:20

Who says there's no free will? Oh,

14:23

yeah. Various people say that, but I

14:26

don't. Yeah. No, I

14:28

know that. So what is the

14:30

critique that you have of, say,

14:33

Sam in your book, Freedom Evolves?

14:35

I'm mentioning this because

14:38

he was just on the podcast and we had a very

14:40

long debate in which I asked him some

14:43

of the questions that I asked Robert Sapolsky.

14:45

And I said, you know,

14:47

if there is no such thing

14:49

as free will, then how can

14:51

you blame somebody for, God forbid,

14:53

killing your pet dog? And

14:55

Robert said, he said literally, to my

14:58

great shame and humiliation, I'd want them punished. So where

15:00

do you come down on, say,

15:02

crime and punishment in a world with

15:05

the free will perspective that you adopt? I think

15:07

there's a definite role for

15:10

punishment and have argued that and

15:12

think there's nothing antediluvian

15:15

or anti-scientific about it because

15:17

free will isn't what Sapolsky

15:19

thinks it is. I'm just

15:22

astonished at how

15:25

both Sam

15:27

Harris and Robert Sapolsky

15:29

and some other scientists have been,

15:33

I think, persuaded, conned, really,

15:35

into thinking that free will

15:37

depends on indeterminism by some

15:39

philosophers who have inflated

15:42

free will beyond what it actually is.

15:44

Free will isn't a metaphysical condition

15:47

that you're blessed with or not. It's

15:50

an achievement. And it's the

15:52

achievement of mature

15:54

self-control. You don't

15:56

have it when you're a baby. You don't have

15:58

it until you've

16:02

reached adulthood really. And so

16:04

we don't hold you responsible

16:06

for things until you're

16:09

an adult. And we

16:11

don't hold you responsible then if

16:14

you don't have self-control.

16:16

Self-control is the key notion.

16:18

And the thing

16:21

about self-control that

16:24

amazes me Sapolsky doesn't realize:

16:26

this is one of the best ways

16:28

of looking at evolution. Evolution

16:32

begins with the

16:34

simplest imaginable agents,

16:37

single-celled agents.

16:40

And then you get

16:42

multicellular agents and

16:44

then we get multicellular agents

16:46

that are, you know,

16:48

plants or

16:50

fungi or coral

16:53

polyps, but that

16:55

move. And

16:57

once you've got motion you've

17:00

got control. And

17:03

in order to have control you have to look

17:05

ahead. And evolution has

17:07

designed things that

17:10

can look ahead. Before there

17:12

was life nothing could look ahead. Nothing

17:15

at all in the whole universe

17:17

could look ahead. Once you have

17:19

look ahead then you have

17:21

the possibility of making choices

17:24

based on what you see. Now maybe

17:26

what you see isn't what's going to

17:28

happen and then you may make

17:30

choices that are bad. But over

17:34

the long run, probabilistically, evolution

17:37

lets replicate

17:40

the agents that are the best

17:42

at self-preservation: the ones that

17:44

duck the incoming bricks and

17:47

then find the food before they

17:49

starve, that find mates, and

17:52

so forth. I haven't said

17:54

anything controversial. That's textbook ho-hum.

17:57

That's right. That's how evolution works.

18:00

But it makes things

18:04

that do things for

18:07

reasons. And

18:09

once you have things that do things

18:11

for reasons, you're on the way to

18:13

free will. I

18:16

wouldn't say that an octopus or a

18:19

clam or

18:21

even a crocodile

18:23

have reasons that they understand.

18:26

They don't have to. But

18:28

they still do things for reasons. Trees do

18:30

things for reasons. Trees don't

18:32

have to understand the reasons they do the things

18:34

they do. But they do things for reasons. That's

18:37

a theme in my book,

18:39

From Bacteria to Bach and Back. This

18:42

is competence without comprehension. We're

18:45

the one species so far

18:47

evolved that doesn't

18:50

just do things for reasons, but represents

18:53

reasons to ourselves. And

18:56

argues about reasons and tries to

18:58

reason others into behaving

19:00

better and so forth. And

19:03

this creates the

19:05

social contract. It creates

19:07

the environment for civilization

19:10

where we can judge that

19:13

some of our fellow

19:15

human beings have reached

19:17

the age of reason. They can be

19:20

reasoned with. And

19:22

we can trust them. They're safe. We

19:25

can let them run free. You know

19:28

why we don't let lions run

19:30

free or bears run free? Or

19:32

small children. And

19:36

once you're capable of listening

19:38

to reasons and being moved by

19:40

reasons, as Kant put it, then

19:44

you can have freedom.

19:46

Notice I haven't

19:48

mentioned the word determinism. It has nothing to

19:50

do with determinism. Determinism and

19:54

free will are completely

19:58

disjoint categories. There's

20:01

no implication one

20:03

way or the other between them. So

20:06

this is what's called a pattern interrupt.

20:08

It's a way to rejuvenate, refresh your

20:10

mental synapses as I know you're getting

20:12

a slight charge out of

20:14

hearing each word that Daniel and

20:17

I say. But I need to

20:19

take a quick moment to invite all of you to subscribe

20:21

to this podcast or YouTube channel, no matter

20:24

where you're listening or watching. I

20:26

promise you it's causing me to up my

20:28

game. You see the phenomenal guests we're

20:30

getting just in the realm of consciousness, including

20:32

Dan and Sam Harris and Robert Sapolsky just

20:35

in the last three months. It's

20:37

been a phenomenal ride. And unfortunately

20:39

it's a numbers game. And I'm trying my best to

20:41

up my game and become a better interviewer. This is

20:44

my side hustle after all. It's

20:46

a labor of love. I don't make very much

20:48

money on it. But the one thing you can

20:50

remunerate me with is by subscribing. Only about

20:52

50% of you are actually subscribed or following the

20:54

podcast. So please do me a favor. Subscribe

20:57

and share. Really helps out

20:59

and will help us grow and continue

21:01

to get great guests like Dan, Sam,

21:04

and Robert. And stay tuned for a

21:07

special episode coming up with Don Hoffman. When

21:09

I mentioned to both of them: I hear

21:12

a lot of people, as you know, who will

21:15

deny the existence of free will. And

21:17

I mean, that's the title essentially of books

21:20

by those authors. And I say, have you

21:22

ever met somebody, Dan? I

21:24

say, have you met somebody who behaved as

21:26

if they don't have free will, that's

21:29

not a psychopath? And they can never say yes.

21:31

I mean, nobody behaves like they have no free

21:33

will. The way you would behave if you

21:35

had no free will is you'd

21:37

sit there like a tree and

21:40

just take your lunch and

21:45

not think ahead. And it's

21:48

possible to

21:50

talk people out of

21:52

their free will. Because

21:55

if you've got free will, you can be moved by

21:57

reasons. And you can be moved by reasons good and

21:59

bad. And so that's

22:01

why I think books like Sapolsky's

22:04

and Sam's are actually a

22:06

little bit socially destructive. They're

22:10

acts of high-class social

22:12

vandalism in that they weaken

22:16

our conviction, our perfectly

22:20

naturalistic conviction, that we

22:23

are what we obviously are:

22:26

thinking, reasonable

22:30

human beings who can figure

22:32

out how to do things together in

22:35

concert, and avoid harming

22:38

others. That's

22:41

the glory of human

22:43

civilization. Yeah, it really is and it

22:45

should be celebrated. And

22:47

I feel like they get into these sort of almost

22:51

solipsistic or self-referential definitions

22:54

where they can't admit that

22:56

there are possibilities

22:59

where the very notion that they're

23:01

trying to criticize undermines their own

23:03

argument. And for me... Why

23:05

are they arguing? Yeah, exactly. Right.

23:08

What on earth do you think you're doing,

23:11

Robert? It's impossible not to be swayed.

23:13

Again, when you make it personal,

23:15

etc., that they don't

23:17

stand by the courage of their convictions. But

23:20

to their credit, I think at least Robert

23:22

does. He admits to his shame as

23:24

he literally said in the interview, Robert

23:27

Sapolsky does. And

23:29

those are some of the best parts of

23:31

his book where he confesses that he can't...

23:33

Sometimes he just has to act as

23:35

if he has free will. Good for him. Of

23:38

course he does. I trust him. Can

23:40

you talk in the book about the kind of

23:42

difficulty in understanding from an evolutionary

23:44

standpoint, the evolutionary

23:47

or selective purpose of a

23:49

sense of humor, which

23:51

is almost probably

23:53

uniquely a human trait. Maybe

23:56

there are some higher primates that have it. But

23:58

what about the origin of music? It would seem

24:00

that, again, if you're sitting around a

24:03

campfire and you're clucking or you're dancing

24:05

or whatever, it makes you kind of

24:07

very, very unfit to survive the lions

24:09

that are prowling

24:11

around you. I always wonder, what is

24:13

the evolutionary advantage or point of music,

24:15

which is not unique to the human

24:17

civilization, obviously, but especially in humans? What

24:20

would be an advantage, if any? Well,

24:23

let's look at some non-human

24:25

species. We have

24:27

birds, for instance, some of

24:29

whom have remarkably wonderful bird

24:32

song, and that

24:36

don't just have a caw-caw or a

24:38

chirp-chirp, but very elaborate songs. And there

24:40

it's pretty clear that the

24:43

point of that is sexual selection. It's

24:46

like the peacock's tail. It's also beautiful.

24:50

Sexual selection, well studied by Darwin.

24:53

It's not just the survival of the fittest.

24:55

It's the survival and

24:58

procreation that matters. That's

25:01

the finish line. You've got to procreate. You've

25:03

got to replicate. And

25:07

crossing that finish line means you've got to attract

25:09

and mate. You've got to get somebody to mate with

25:11

you. Certainly, that has

25:14

a lot to do with the ornamentation

25:17

and the beauty that we see in

25:20

many animal species.

25:23

It's sort of an arms race because

25:26

it depends on the females

25:29

in almost all cases, the

25:31

sexual selection. The females are

25:33

the ones that do the judging and the males

25:35

that do the showing off. And

25:38

costly signaling theory is

25:40

Zahavi's wonderful contribution

25:43

to this, is that you

25:45

can't have a cheap advertisement

25:48

of your own excellence. It's

25:56

not because the females

25:59

will... understand that these are

26:02

cheaters, it's just that they

26:04

won't be attracted to them. It's

26:07

got to be something difficult. It's the

26:10

same thing as with stotting or pronking

26:13

in antelopes that

26:15

do these incredible leaps and

26:18

get the lions not to chase them. They're

26:21

saying, don't bother, don't waste your energy on

26:23

me. I'm too good

26:25

for you. Look, I can throw these leaps. Only

26:29

an expensive

26:32

costly signal can

26:34

send that message. And only

26:36

an expensive costly signal can

26:40

send the message, hey, you want to mate with

26:42

me because I'm really... I

26:45

got energy and time to

26:47

burn. That idea,

26:49

that motive sells

26:53

a million guitars a year. But

26:59

some of those guitarists decide

27:02

they'd rather make music than love.
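Zahavi's costly-signaling logic is compact enough to sketch in code. The toy model below is an editorial illustration, not anything from the conversation, and every number in it is an assumption: the same display costs a weak individual more than a strong one, so only high-quality individuals come out ahead by signaling, which is what keeps the signal honest.

```python
# A minimal sketch of costly signaling (Zahavi's handicap principle).
# All parameters are illustrative assumptions.
import random

def payoff(quality, signals):
    """Net benefit of signaling for an individual of given quality (0..1)."""
    mating_benefit = 1.0 if signals else 0.2   # receivers prefer signalers
    # the display (a stotting leap, an elaborate song) costs the weak more
    cost = 2.0 * (1.0 - quality) if signals else 0.0
    return mating_benefit - cost

random.seed(0)
population = [random.random() for _ in range(10_000)]
signalers = [q for q in population if payoff(q, True) > payoff(q, False)]
print(f"fraction better off signaling: {len(signalers) / len(population):.2f}")
print(f"mean quality of signalers:     {sum(signalers) / len(signalers):.2f}")
```

With these made-up numbers, only individuals above quality 0.6 profit from the display, so the signal stays an honest advertisement of excellence; a cheap signal, one with no quality-dependent cost, would be adopted by everyone and carry no information.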

27:04

Yeah, it is sort of evolution

27:07

has not done us

27:09

physicists and professors any favors in

27:11

the mating game. I'm okay with mating.

27:13

I've done my fair share, but

27:17

happily married for 16 years with a bunch of kids.

27:19

But the question that always comes

27:21

up to me is, I had

27:25

on the physicist Michio Kaku.

27:27

And again, I'm always astonished

27:29

at how self-referentially,

27:31

again, it's believed;

27:35

he claimed in the interview I did with him that

27:37

evolution takes over and then that's how

27:39

you get life from inorganic

27:42

hydrogen and helium. And

27:45

no, there's nothing of the sort. And so

27:47

I want to ask you, what is the

27:49

minimum viable product? What is the minimum thing

27:51

that evolution needs to operate on? I had

27:53

Craig Venter on and he said anything that

27:56

has DNA, but I mean, we can imagine

27:58

things that don't have DNA, right? So

28:00

what do you think? What's the minimum

28:02

viable product that Mother Nature could produce

28:05

or even be contemplated to

28:07

produce in the entire universe to

28:09

originate life? And then how does

28:11

evolution take over from there? We

28:13

know something about this. We can sketch out

28:16

some of the requirements. There

28:18

have to be some

28:22

fairly stable, non-volatile

28:26

macromolecules, because there has to be structure.

28:28

The chemoton was a sort of simplest

28:31

life form that was described

28:33

in some detail by a Hungarian

28:38

scientist. This episode is

28:40

brought to you by Bumble. So you

28:42

want to find someone you're compatible with

28:45

specifically someone who's ready for a serious connection

28:48

totally open to having kids in the

28:50

future is a tall rock climbing Libra

28:52

and loves rom-coms with bacon pizzas

28:54

on Tuesdays just as much as you

28:56

do. Bumble knows that you know exactly

28:58

what's right for you So whatever

29:00

it is, you're looking for Bumble's features

29:02

can help you find it. Date now

29:05

on Bumble. His

29:07

name is escaping me at the moment, but the

29:10

chemoton; look it up, C-H-E-

29:13

M-O-T-O-N. Tibor

29:15

Gánti. I learned about him from

29:19

Eörs Szathmáry, the Hungarian

29:21

co-author with John Maynard

29:23

Smith of the wonderful

29:25

book The Major Transitions

29:27

in Evolution. And what

29:30

you have to have is you've got to have a

29:33

protective envelope.

29:37

You've got to have a

29:41

tissue that surrounds you,

29:44

a cell wall in effect, and it has to

29:46

be permeable. It has to have controlled

29:50

entrance and exit for

29:53

raw materials and waste products

29:56

and it has to have reproduction and it

29:58

has to have a source of

30:00

energy. So metabolism, a protective

30:07

skin of some sort, and

30:09

a reproductive system.

30:12

One of the points that I think is worth

30:14

making is that how to get evolution

30:16

started is still a

30:18

deep puzzle. But the

30:23

beautiful thing is there's more than

30:25

enough theories out there. There's an

30:27

embarrassment of riches, lots of ideas.

30:29

There's the DNA-first or

30:31

the RNA-first world. There's various

30:33

other ideas out there.

30:35

Nick Lane in England

30:38

has some excellent ideas about the

30:41

original sources of energy. We don't

30:43

know yet for sure. But we're

30:47

closing in. I hope I live

30:49

long enough to get

30:52

somebody really, really hitting

30:56

the target and everybody

30:58

agreeing, which could happen. But

31:03

the first thing that reproduces doesn't have

31:05

to reproduce fast. If it takes a

31:08

million years for it to make a

31:10

copy of itself, who's counting?

31:12

In the endless competition, you

31:14

can reproduce slowly and still

31:17

get the benefits of the

31:19

evolutionary ratchet. The evolutionary ratchet

31:22

is the key. You've

31:24

got to have replication and selection.
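Dennett's "evolutionary ratchet" asks for surprisingly little, and a few lines of code can show the two ingredients he names, replication and selection, doing all the work. This is an editorial sketch with made-up parameters, not anything specified in the conversation:

```python
# Minimal evolutionary ratchet: replication with copying error, plus selection.
# Parameters are illustrative assumptions only.
import random

random.seed(1)
population = [0.0] * 50                 # each number is a replicator's fitness

for generation in range(200):
    # replication: each replicator makes two imperfect copies of itself
    offspring = [max(0.0, parent + random.gauss(0.0, 0.05))
                 for parent in population for _ in range(2)]
    # selection: only the better copies survive to replicate again;
    # note that nothing here requires replication to be fast
    offspring.sort(reverse=True)
    population = offspring[:50]

print(f"mean fitness after the ratchet: {sum(population) / len(population):.2f}")
```

Because improvements are kept and losses are culled, mean fitness climbs generation after generation, whether a copy takes a millisecond or, as Dennett says, a million years.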

31:27

Speaking about replication and selection, you make the

31:30

case in the book and based on

31:32

your earlier works, that

31:34

language is almost

31:36

certainly an evolved process

31:39

with higher-order and lower-order

31:41

languages. Although we don't see Denisovans

31:44

walking around, we still do see

31:46

primitive languages, even

31:49

non-written languages, that exist on

31:51

the planet. I interviewed your

31:53

crosstown rival, Noam Chomsky,

31:55

and he sort of made a

31:57

persuasive case that, to me, had

32:00

implications for artificial intelligence. And

32:02

that was that, in communication,

32:05

there's so much that's nonverbal, and

32:07

even the most durable form of

32:09

communication might be generated nonverbally.

32:11

And that made me think,

32:14

to what extent can these LLMs that you

32:16

mentioned in the book as well,

32:19

can they ever achieve a Turing

32:21

test level? We'll talk about the deficiencies

32:24

and problems you have with the Turing

32:26

test later, but could an LLM that doesn't

32:28

have a body and doesn't have

32:30

the ability to pop a circuit, to

32:33

cause it to feel pain when it does

32:35

something wrong. Could it ever hope to evolve

32:37

or to present itself as

32:41

almost human level artificial intelligence if it

32:43

doesn't have embodiment? Well, of course it

32:46

has to have embodiment in one sense.

32:50

You have to have some hardware to

32:52

run the software on. One of the

32:54

points that I like to stress these

32:56

days is that brains are not at

32:59

all like digital machines. They're not. One

33:01

of the amazing things about

33:05

computers, and

33:08

Turing was very clear about this, is

33:10

that they have to

33:12

be very bureaucratic, they

33:14

have to be very rigid, they have an

33:17

operating system. And you have

33:19

to know

33:21

exactly what they're going to do

33:23

in order to program them. Their

33:25

design depends on the uniformity, on

33:28

the fact that there's billions, trillions

33:30

of exactly identical elements, almost to

33:32

the, yes, to the atomic level,

33:35

your flip-flops, your registers, and you

33:37

have timing pulses and all that.

33:39

Brains aren't like that. Brains are

33:42

made of billions of individualistic

33:45

neurons. No two are exactly

33:47

alike, and they don't act

33:49

on quite the same time scale. So

33:52

it's an entirely different

33:54

underlying structure. Now,

33:58

could you, nevertheless, build

34:02

such a structure

34:06

in a silicon digital

34:09

computer? Yeah, sure you could, because

34:11

you can simulate it.

34:14

I mean, you can simulate an

34:17

analog machine in

34:19

a digital machine; you can simulate

34:21

a parallel machine in

34:24

a serial machine. You

34:26

pay big prices for that in

34:28

time and energy. LLMs

34:32

are incredible energy

34:34

hogs compared with human

34:36

brains. Energy is really

34:38

key. And some

34:41

people, for instance, Terry

34:43

Deacon, argues

34:45

that for all the wonders of

34:47

computers, they ended up

34:50

getting us to explore the wrong

34:53

part of design space. Because

34:57

all the computers that were

35:00

designed following Turing

35:03

and von Neumann and the like,

35:05

were in a sense, he

35:07

calls them parasitic. They

35:09

didn't have to worry about energy. They

35:12

were provided for by

35:15

their plug-in to the

35:17

power. And this means

35:19

that all the

35:21

designs, all of the space that

35:23

we've explored, has been space that

35:26

depends on there being a

35:28

sort of steward shepherd

35:30

nursemaid to take care

35:32

of the machine to make sure it has

35:35

energy. No neuron, no circuit has

35:37

to worry about whether it's going to

35:39

be alive or whether it gets enough

35:41

energy. Whereas the

35:44

neurons in your brain are working

35:46

for a living. They

35:48

will die if they don't connect. And

35:51

so your brain

35:53

is made of neurons and

35:55

other cells, even more of them, and

35:59

the neurons

36:02

are looking for work and

36:05

there's no HR director,

36:08

there's no human resources director

36:10

or neuron resources

36:12

director, and when a

36:14

part of the brain dies, neurons hungry

36:17

for work will take over those tasks.

36:20

We don't have anything much like

36:22

that in digital computers yet.

36:24

So brains are, they are computers

36:26

of course, they're not radiators,

36:29

they're not for cooling the blood,

36:33

they're control centers, they are the

36:36

control headquarters for movable

36:41

arms and legs for mobile

36:44

things, that's what brains are

36:47

and so they're computers but

36:49

they're not much

36:51

like digital computers.

36:54

Still, you

36:56

could simulate it all in principle. I

36:59

want to get your thoughts on Sir

37:01

Roger Penrose, Nobel Prize winner, good friend of

37:03

the show. His take is that human

37:06

consciousness is non-algorithmic and so it

37:08

is not even capable of being

37:10

modeled by Turing machines and

37:13

he actually believes in sort of a

37:15

quantum mechanical understanding of

37:17

human consciousness. He implies that

37:20

not only quantum mechanics is

37:22

responsible for consciousness but gravitational

37:24

forces are at work via what's called

37:26

the Weyl curvature, which is a derivative

37:29

of Einstein's stress-energy tensor

37:31

and gravitational curvature tensor, g

37:33

mu nu. So what do

37:36

you make of these physical interpretations

37:38

where the microtubules have

37:40

their wave functions collapse, caused by

37:43

the local variance of a classical

37:46

field. So quantum mechanics is

37:50

conditioned by a

37:52

classical mechanical structure

37:54

like Einstein's relativity. The g mu nu is

37:56

a classical tensor, it is not quantum at

37:58

all. What do you make of these physical interpretations?
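For reference, since "Weyl curvature" goes by quickly here: in general relativity the Riemann curvature tensor splits into Ricci pieces, which Einstein's equations tie directly to the local stress-energy, and a trace-free Weyl piece, which is not fixed by local matter; Penrose's proposal concerns that remainder. A standard way to write the four-dimensional decomposition (an editorial gloss, not part of the exchange) is:

R_{abcd} = C_{abcd} + ( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} ) - (1/3) R g_{a[c} g_{d]b}

where C_{abcd} is the Weyl tensor, R_{ab} the Ricci tensor, R the Ricci scalar, and g_{ab} the metric; the square brackets denote antisymmetrization.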

38:01

I think it's malarkey. And

38:06

I thought, you know, I think I wrote perhaps

38:09

the first review of Roger Penrose's

38:13

Emperor's New Mind, and I pointed out

38:15

the problem right there. He has the

38:17

wrong notion of algorithm that he's using

38:19

there. He's thinking of algorithms

38:21

for things. And look,

38:25

there's no feasible algorithm for chess.

38:28

There isn't. It's not an

38:30

infinite game, but

38:33

there's no feasible algorithm for it.

38:36

Almost certainly. So that means

38:38

computers can't play chess, right? No,

38:40

it doesn't mean that at all. It

38:43

means that they can play very

38:45

good chess. It's just that the

38:47

algorithms that they use are algorithms

38:49

for playing legal chess. And

38:52

how many of those are

38:54

there? Zillions. And

38:56

some of them are better than others. There's

38:59

no algorithm for

39:02

being a perfect mathematician. But

39:05

there's algorithms for learning a hell of a

39:07

lot and doing pretty well. And

39:11

don't expect that you're going to

39:13

have an algorithm that guarantees truth

39:15

ever. He's just setting up a

39:18

preposterous standard for what a mind

39:21

is. And, right.
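Dennett's chess point is easy to make concrete. The sketch below is an editorial illustration using the third-party python-chess package (an assumption; nothing in the interview names any library): neither player has an algorithm for perfect chess, yet both are perfectly good algorithms for playing legal chess, and one is simply better than the other.

```python
# Two of the "zillions" of algorithms for playing legal chess.
# Requires the third-party python-chess package: pip install chess
import random
import chess

def random_player(board):
    # the weakest member of the family: any legal move will do
    return random.choice(list(board.legal_moves))

def greedy_player(board):
    # a slightly better member: prefer moves that capture material
    def captured_value(move):
        piece = board.piece_at(move.to_square)
        return piece.piece_type if piece else 0
    return max(board.legal_moves, key=captured_value)

random.seed(3)
board = chess.Board()
while not board.is_game_over():
    player = greedy_player if board.turn == chess.WHITE else random_player
    board.push(player(board))
print("result (greedy as White):", board.result())
```

Neither function guarantees a win, let alone truth; each is just a feasible procedure, which is all Dennett claims a mind needs.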

39:24

So does that mean

39:26

the mind is not algorithmic?

39:28

No. It means there

39:31

isn't the master algorithm.

39:34

Even some people in AI sometimes

39:36

talk about the master algorithm, but

39:38

it's not a master algorithm in

39:40

the sense that

39:42

Penrose thinks of an

39:44

algorithm. They're doing pretty

39:46

darn well. And

39:49

how many of those are there? Oh,

39:51

zillions. That's right. More

39:54

than stars in the sky. The

39:56

big mistake, this big mistake goes back

39:59

to Descartes, who wondered

40:01

if he could trust his clear

40:03

and distinct ideas. And

40:06

he decided he could if

40:08

God would guarantee them. And

40:11

so he tried to prove the existence of God so

40:13

he could trust his clear and distinct ideas. That's

40:16

a hopeless quest. The best

40:19

we can do is

40:21

gather the smartest people around we

40:23

could find, let

40:26

them compete to find

40:28

the truth, and see

40:30

where you find consilience, see where you find

40:32

agreement. And that's

40:35

the best you can do. Good enough.

40:38

It gets us to the moon. It

40:41

gets robots to

40:43

Mars. It

40:45

builds bridges and cures

40:48

diseases and allows us to

40:50

predict eclipses years in advance.

40:53

All of that knowledge

40:57

is defeasible. It's

40:59

not like geometry. And even in the context,

41:01

you know, staying with Einstein for a bit,

41:04

my favorite, you know, kind of

41:07

counterpoint to the claims of

41:09

AI, you know, apocalypse, is

41:11

the so-called story of Einstein's happiest

41:14

thought, which you may know, but

41:16

I'll repeat it. So

41:18

Einstein said, quote, my happiest thought was

41:20

that an observer in freefall would

41:23

experience no gravitational forces. And

41:25

it led to the conception of

41:27

the so-called Einstein equivalence principle.

41:31

And the reason I bring that up

41:33

is because I'm curious how a computer

41:35

might be expected to A, visualize

41:38

what freefall might feel like, that sensation in the

41:40

pit of one's stomach as you, you know,

41:43

crest a hill or on a roller

41:45

coaster or launch on a SpaceX rocket,

41:47

A, and B,

41:50

whether or not said computer could

41:52

identify with this happiest thought. In

41:54

other words, there seems to be

41:56

something, you know, sui generis, I

42:00

don't know, that Einstein could have felt.

42:02

And I don't know, I propose that

42:04

as the Keating test. Can

42:06

algorithms come up with completely

42:09

new laws of physics,

42:11

laws of nature, things that are verifiable,

42:13

empirical, connected to data such as the

42:15

type that my colleagues and I collect

42:17

through our telescopes? What's your take on

42:19

that? Are there possible worlds

42:21

where, you know, possible scenarios where

42:23

AI can actually create new laws

42:25

of physics, not discover, oh, well,

42:27

the Navier-Stokes equation behaves like this,

42:30

so we should render smoke like that? No, no,

42:32

no, truly new, a Newton's sixth law, you

42:35

know, something, a fifth law of thermodynamics. Can

42:37

you envision that, Dan? Yeah,

42:40

I'll tell you why. Yeah.

42:43

All learning, all

42:45

invention, all discovery

42:47

is a matter of generate and test.

42:49

It's all, that's what evolution does,

42:52

it's what we do. Right

42:54

now, you've got lots

42:56

of possible thoughts running through your head.

42:59

Some of them are getting thought and some of them

43:01

are dying. They're not

43:03

rising to the level of, you're

43:05

not gonna save them and you're

43:08

not even really gonna think

43:10

them. But that's what's going on in your

43:12

head, it's what's going on in my head

43:14

right now. We're all cherry pickers. Now,

43:17

cherry pickers, first you do it

43:20

rough, then you do

43:22

the quality control. You

43:24

have the fountain that

43:27

generates lots of stuff

43:31

and then you have the critic,

43:35

the judger who

43:38

decides what's worth further work.
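The fountain-and-critic picture is Darwinian generate-and-test, and a classic toy example, Dawkins's "weasel" program, makes the division of labor vivid. This sketch is an editorial illustration, not anything Dennett specifies; the target phrase and the acceptance rule are assumptions:

```python
# Generate-and-test: a blind "fountain" proposes variants, a "critic"
# keeps only what survives testing (Dawkins's weasel program).
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fountain(text):
    # generator: blind variation at one random position, no comprehension
    i = random.randrange(len(text))
    return text[:i] + random.choice(ALPHABET) + text[i + 1:]

def critic(text):
    # tester: counts characters that match the target
    return sum(a == b for a, b in zip(text, TARGET))

random.seed(4)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while critic(best) < len(TARGET):
    candidate = fountain(best)
    if critic(candidate) >= critic(best):   # selection: keep non-losers
        best = candidate
    steps += 1
print(f"reached the target in {steps} proposals")
```

The fountain never understands the target; all the apparent intelligence lives in the repeated testing, which is where Dennett suggests LLMs can slot in as generators.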

43:41

I think that LLMs, for instance,

43:45

can be very valuable in

43:47

the fountain role, in the

43:49

generation role. They can be

43:51

very good at generating off the wall

43:55

things that you or I would never think

43:57

of. Why? Because they're

43:59

not like you and I;

44:01

they're different. They're different

44:03

enough. They

44:07

can come up with gonzo

44:09

ideas that

44:13

might, for goodness' sake, make

44:16

someone say, oh, I wish I'd thought of

44:18

that, but I never would have thought of that.

44:21

Now, we all have styles.

44:24

Chopin had his style, Mahler

44:26

had his style, Scriabin

44:29

had his style. Wonderful. But

44:31

that means Chopin doesn't

44:33

have Rimsky-Korsakov

44:36

style or Rachmaninoff

44:39

style. Don't

44:41

expect Chopin

44:45

to write a Gershwin tune. He could

44:47

hear it just fine, but it would never occur to him.

44:51

And I think that LLMs

44:56

feeding on the

44:58

scrapings of the internet for

45:00

years and years and tremendous

45:04

data mining and

45:07

digesting, but

45:10

not just the way we do

45:12

it, they might be a great source of

45:16

thinking outside the box, of

45:18

off-the-wall ideas that we

45:21

who are humans would just not, it

45:23

wouldn't occur to us, and they'd be

45:25

right. Look, when we look at the

45:27

history of great science, we

45:30

see the really

45:32

wonderful breakthroughs are

45:34

often where people come up with an idea that

45:36

at first seems sort of daft and

45:39

even outrageous, even impossible, then wait

45:42

a minute, maybe there's something

45:44

here. And I think

45:46

that we've

45:49

now got a new generator

45:52

to go with our testers. And

45:55

we're still going to rely on human

45:58

testers. Now they can do some

46:00

testing too. The AIs can do

46:02

some testing. But I think we want to

46:04

keep them as smart machines, not artificial colleagues.

46:07

We don't want to give them the autonomy

46:09

they could have because

46:11

then there'll be dangers. How do we enforce

46:13

that? By keeping

46:16

them parasitical, making them

46:19

machines that don't

46:22

have to fend for themselves, that

46:25

we can unplug. In principle, we

46:27

don't have to do that. We could try

46:29

to make them as self-sustaining,

46:32

as autonomous

46:37

as we are. But we shouldn't.

46:40

And one of the things that I use

46:42

to point this out is, I say, just

46:45

imagine that you learned

46:49

that there was some person or

46:52

some institution that

46:54

had your on-off switch. What

46:57

would one of the highest priority goals

46:59

be for you? My

47:03

welfare.

47:07

Wresting control of that switch. And

47:09

if they're that smart, they're going to

47:11

be pretty good at it. And we

47:13

already see inklings of that in

47:16

the red team testing where

47:18

I think it was

47:20

GPT-4 that conned

47:22

a human being into solving

47:25

a CAPTCHA because it didn't have

47:27

eyes. So look out.

47:30

We don't want that.

47:34

We have enough psychopaths and sociopaths

47:37

running around as it is. And

47:40

AI agents will

47:43

be sort of natural

47:46

psychopaths if

47:48

they're made because they're sort

47:50

of immortal. They're immortal up to planetary

47:52

constraints, right? In a paper I pointed this

47:54

out to Nick Bostrom and others, there

47:58

aren't infinite amounts of iron

48:00

and nickel and so forth in

48:03

the earth's crust that are easily

48:05

exploitable by fellow you know AIs

48:08

and agents. But you're

48:10

right, and it is interesting. I asked Sam Harris, and

48:13

I'll ask you, then I'll tell you. But

48:15

I said, Sam, you don't believe that humans have free

48:17

will, but you believe AI has free

48:19

will. What do you think he said? I don't

48:21

know what he said. He said yes, he thinks it does.

48:23

So he

48:26

thinks AI has free

48:28

will. He

48:30

said he believes they can develop free

48:32

will. Maybe not now but they can

48:34

develop a free will. I mean he's

48:36

a very significant opponent

48:39

and really believes that

48:42

we should be extremely cautious with

48:44

AI. Well so am I. I

48:46

am sounding the alarm. I have

48:48

a piece about counterfeit people in

48:50

The Atlantic, and

48:53

other pieces in progress and I've

48:55

been talking about this basically with

48:57

every audience that I get. We're

49:00

really in danger of

49:04

being lulled into

49:07

fascination with

49:10

large language models things like GPT-4

49:12

to the point where we're going

49:16

to be turned into puppets because

49:19

we will

49:21

be flirted with

49:26

and cajoled and fascinated and

49:32

seduced and lied

49:34

to and we won't

49:36

know who to trust

49:39

and once we lose trust civilization

49:43

is in deep trouble. We

49:46

rely on trust. Hey

49:48

it's me again. So sorry to interrupt near the

49:51

end of the deep dive but I need to

49:53

assign you a little bit of homework but there's

49:55

something in it for you besides the intellectual knowledge

49:57

that you'll gain from joining my Monday Magic list,

50:00

which I send out each and every Monday. I

50:02

share everything from around the universe

50:04

of ideas that I explore, exclusive

50:06

content, highlights, the latest science news,

50:09

and of course, the occasional fun facts you can impress

50:11

your friends with magnificent dad

50:13

jokes and memes. To get on

50:16

the mailing list, just head over

50:18

to briankeating.com/list. Enjoy.

50:20

That's briankeating.com/list. And

50:23

you will be entered also to win one of

50:25

these chunks of space schmutz, a real

50:27

four billion year old piece of the early solar

50:29

system. And I send one

50:31

out every month to a lucky listener

50:33

and subscriber to the Monday magic mailing

50:36

list. But I always send one out

50:38

guaranteed if you have a .edu email address.

50:40

And to get that, go to briankeating.com/edu.

50:43

Thanks. Now let's get back to

50:46

boggling the brain with Dan Dennett. I

50:48

agree. And you know, the thing that I've pointed out, you

50:50

know, how many phone numbers can you

50:53

remember right now, Dan? I mean, you might

50:55

be able to remember more, but because

50:57

we grew up in an age, you and I, without cell phones. But

51:00

I know my wife's phone number and

51:03

my own phone number. And that's about it. And

51:06

how many people know directions, you

51:08

know, to any new place or

51:10

can drive, you know, across town;

51:12

we've outsourced these things. So

51:15

it's obvious we're going to outsource

51:17

thinking to these machines. Right. And

51:19

they're not vetted. They're not vetted, regardless.

51:21

Yeah. And

51:23

they're wonderful. And they're

51:25

alluring because they do the

51:27

work of 100 graduate students. And they get

51:29

it more or less right, except when you

51:31

ask it a point of fact; you

51:34

know, render a picture of George Washington and you

51:36

get a beautiful, you know, Oprah

51:38

Winfrey-like character with a white wig.

51:42

So there are still obviously guardrails

51:44

put in accidentally by, you

51:46

know, probably some 21-year-old,

51:49

you know, interns at Google. And

51:51

that'll be taken care of. But

51:53

eventually, we won't know who to trust because

51:55

I trust them to, you

51:57

know, get with questions I won't ask my route.

52:01

But then sometimes I'll get advice that

52:03

tells me wrong things. I ask it, what books

52:05

has Brian Keating written? And it'll

52:07

say, losing the Nobel Prize, okay, fine.

52:09

It'll say, Into the Impossible, yes, thank

52:11

you. Then I'll say, A Brief History

52:14

of Time. I'll say, no, no, no,

52:16

that was Professor Hawking.

52:18

Oh, I'm sorry, my apologies. So I

52:20

think we're gonna have to have some

52:22

way of vetting these entities

52:24

and authenticating them. Indeed, that's

52:26

why I like the term counterfeit. I

52:31

don't know about you, but do you look

52:33

carefully at every $20 bill you get?

52:36

No, no. You don't? Why not?

52:39

Because we have laws in place. It's

52:41

a very serious crime. You can go to

52:43

jail for 10 years if

52:45

you get caught passing counterfeit

52:48

money or making it. The

52:50

technology is very good, not

52:52

perfect, but it's just not worth

52:54

people's while to

52:56

make counterfeit money. We can

52:59

make it so it's not worth people's while to

53:01

make counterfeit people. We can

53:03

put in place the watermarking,

53:06

the systems. For

53:08

starters, every hardware

53:11

manufacturer who makes anything that connects

53:13

to the internet could have and

53:16

be required by law to have

53:18

the detection. It's

53:22

like color copiers in Europe, where

53:28

they have a lot of colored currency. If

53:30

you put euros in a high-end color

53:33

copier, it won't copy them. And

53:37

we can have things like that,

53:40

which will help. And it

53:42

will be particularly against the law to

53:46

introduce a counterfeit person that doesn't have

53:49

the watermark on it. That

53:51

would be something

53:53

which is clearly not inadvertent. And

53:56

when they catch people doing it, they will

53:58

know they're doing it for nefarious

54:01

reasons. And

54:03

there will be penalties for that. That's right.
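One concrete version of the watermarking Dennett wants required by law is the "green list" scheme proposed for language models (along the lines of Kirchenbauer et al., 2023). The sketch below is an editorial illustration of the detection side only; the threshold and hashing details are assumptions:

```python
# Toy detector for a green-list text watermark. At generation time the
# model is biased toward "green" words chosen by hashing the previous
# token; a detector just counts how green a suspect passage is.
import hashlib

def is_green(prev_word, word):
    digest = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return digest[0] % 2 == 0            # roughly half the vocabulary

def green_fraction(words):
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, w) for p, w in pairs) / len(pairs)

# Unwatermarked human text hovers near 0.5; watermarked generations are
# pushed well above it, so a simple threshold flags counterfeit prose.
suspect = "the quick brown fox jumps over the lazy dog".split()
print("green fraction:", round(green_fraction(suspect), 2))
print("likely watermarked:", green_fraction(suspect) > 0.7)
```

The analogy to the euro-copier example is direct: detection is cheap and standardized, so passing off unwatermarked counterfeit people becomes legally risky rather than technically hard.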

54:05

And especially when the stakes are, I mean,

54:07

I feel like we've already reached the tipping

54:09

point, right? This year's an election year. We

54:11

have, we have. This year's an election year.

54:13

And can you imagine the robo calls that

54:15

you get from Joe Biden saying, it's not

54:17

important to come out to vote. I

54:20

need your help to get me out of jail. It

54:22

could be so nefarious. And there's a

54:24

plethora of brilliant people. We tend

54:26

to be very America-centric, right? But

54:28

we don't think about the brilliant

54:31

engineers working in Iran or in

54:33

Russia, or any of these folks that are

54:35

malevolent and would like nothing more than to see

54:37

chaos agents sowing the seeds of discord

54:40

in our democracy. And so we've already passed

54:42

a Rubicon in some sense. This

54:44

year, I'm very worried about the election this

54:46

year, considering whether

54:49

or not it will be manipulated

54:51

by artificial agents, counterfeit humans, as

54:53

you say. I wanna talk about

54:55

your amazing book From Bacteria to

54:57

Bach and Back. It

55:00

inspired me to wanna write a book called From Rocks

55:03

to Rachmaninoff and Bach. We'll see

55:05

if that ever comes out.

55:08

So it talks about the effects

55:10

of evolution on human

55:12

cognition. And in particular,

55:14

I'm curious about the field

55:16

of bioengineering and bioethics

55:20

and how might our understanding

55:22

of evolution and maybe a

55:24

responsibility not to viruses, but

55:26

if we have ethical responsibilities,

55:29

perhaps to artificial humans,

55:32

what ethical quandaries could come up

55:34

in the age of CRISPR and genetic

55:37

modifications? What do you see? Are

55:39

you optimistic about humans' ability

55:43

to keep the ethical

55:46

controls in place? Or what would

55:48

be your equivalent warning, if you

55:50

have one, for bioengineering? First of

55:52

all, the biologists are ahead of

55:54

the AI people in that

55:57

they've been sensitized to this for quite a

55:59

few years, and they have

56:01

the various levels of safety

56:03

for not letting artificially

56:08

created or genetically modified

56:10

organisms loose because they

56:12

realize that they can replicate

56:15

and create horrible

56:19

catastrophes. And that's

56:21

true also of AIs. That's one of the

56:23

points I make in my piece about counterfeit

56:25

people is they can replicate. They're

56:28

great at replicating things. And

56:30

on this, Geoff Hinton

56:33

and I agree entirely.

56:36

This is a real danger: they

56:38

can replicate, and it doesn't even need

56:40

evil agents. It doesn't need

56:43

bad actors. It just needs a

56:45

few slips by second-rate

56:48

engineers and

56:51

we'll have

56:55

replication going on. This has been a concern

56:57

for years ever since the

57:00

field of artificial life got

57:02

started. CRISPR and other technologies in

57:05

the pipeline are going to make

57:07

a huge difference. One of my favorite

57:10

biologists, Frances Arnold, is

57:13

breeding proteins that don't exist in

57:15

nature using

57:18

artificial evolution. And

57:21

she's trying to make proteins that

57:23

will turn trash into

57:26

butane or fuel

57:28

and it's entirely possible. Her

57:31

thesis supervisor said to her,

57:34

but Frances, there's no proteins

57:36

in nature that do anything like

57:38

that. She says that's

57:41

because there hasn't been selection for them. And

57:44

now there is. And is she

57:47

making progress? Well, she got the

57:49

Nobel Prize in

57:52

chemistry. So yes.
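For readers who want the mutate-and-select logic spelled out, here is a toy Python sketch of a directed-evolution loop in the spirit of what is described above. The assay function is a made-up stand-in for a laboratory screen; real protein engineering is far more involved.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def assay(seq: str) -> int:
    """Made-up stand-in for a lab screen; real assays measure actual activity."""
    return seq.count("W")  # pretend tryptophan content is the desired trait

def mutate(seq: str) -> str:
    """Introduce one random point mutation."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

def directed_evolution(parent: str, rounds: int = 20, library_size: int = 50) -> str:
    """Each round: generate a variant library, screen it, keep the best.
    The selection pressure we impose is what nature never supplied."""
    for _ in range(rounds):
        library = [mutate(parent) for _ in range(library_size)]
        parent = max(library + [parent], key=assay)
    return parent

print(directed_evolution("MKTAYIAKQR" * 3))
```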

57:55

So we've got some great

57:57

scientific advances in the

58:00

future there, but I think people

58:02

are being suitably cautious

58:04

about it, and maybe the current

58:08

furor about AI will get

58:10

everybody thinking a little bit more

58:12

carefully about some

58:15

of these prospects. And

58:17

there are, I was

58:19

involved in a National

58:21

Academy of Sciences group

58:24

that was asked about whether

58:27

we should allow human

58:29

fetal stem cells to

58:32

be xenotransplanted into

58:34

chimpanzees. And we

58:36

decided no. No.

58:41

This is, we do

58:43

not want to

58:47

risk creating a

58:53

hybrid primate

58:56

that is problematic

59:00

in the way such a hybrid...

59:05

The gaps in

59:07

the species that Darwin

59:11

noted, they're very important. It's very

59:13

important that there be gaps, that

59:16

there be boundaries to reproduction,

59:18

and that we have

59:20

some pretty

59:23

clear cases. We

59:27

have islands, not archipelagos all attached

59:29

to each other. And

59:34

we really don't want to make chimpanzees

59:38

that have, in

59:40

the womb, brains

59:42

that have human DNA

59:46

in them. Exactly. I want to take

59:48

the opportunity to talk with

59:50

you about the philosophy of science, and

59:52

particularly the philosophy of physics. And

59:56

that's to raise sort of a question

59:59

to you that I've asked several other philosophers,

1:00:01

like David Albert, who was visiting

1:00:03

me last month. I often say

1:00:05

that, you know, a lot of

1:00:07

social scientists are accused

1:00:09

of having physics envy. You've

1:00:11

heard that canard. And I

1:00:13

actually believe it. At

1:00:15

least a

1:00:17

mathematician has Gödel to

1:00:19

fall back on to

1:00:21

say what is possible to be

1:00:23

considered as part of the program

1:00:25

of mathematics. But physicists, if you

1:00:27

ask a physicist, they'll probably mumble

1:00:29

something about Popper and falsification, but

1:00:31

I think that's really

1:00:33

out of vogue these days. So

1:00:36

what would you suggest to me

1:00:38

and my graduate students, as

1:00:40

a physicist, an experimentalist in my

1:00:42

case? What would you suggest as

1:00:44

a good, you know, alternate criterion

1:00:46

for them to decide what is

1:00:48

crackpottery and what

1:00:50

is legitimate science for

1:00:52

us to take up, to take a

1:00:54

deeper interest in, to invest our most

1:00:56

valuable resource, time, into? First

1:00:58

of all, I'd like to draw

1:01:00

attention to a distinction

1:01:02

that one of my philosophical

1:01:05

heroes, Wilfrid Sellars, made

1:01:07

between what he called the manifest

1:01:09

image and the scientific image.

1:01:11

The manifest image, that's the

1:01:13

world we live in, of

1:01:16

tables and chairs and rainbows and

1:01:18

colors, and baseball and

1:01:20

money and consciousness and free will

1:01:23

and all those things. And then

1:01:25

there's the world of physics, or

1:01:27

of science in general. And that's

1:01:29

where you have the atoms and

1:01:31

molecules and the quarks and

1:01:34

the law of gravity

1:01:36

and all the rest, what's real.

1:01:39

And as you know, there have been

1:01:41

times when a

1:01:44

scientist wanted to

1:01:47

say, well, only the

1:01:49

atoms are real, and

1:01:52

this is all sort of, ah,

1:01:54

just as my

1:01:57

friend David Moser once put it,

1:02:00

he said, quarks are the dreams that stuff is made

1:02:02

of. And

1:02:06

then of course you have others who say, no, it's just

1:02:08

atoms and the void. And

1:02:11

there aren't really minds

1:02:15

or bodies or colors or

1:02:18

anything. Well, those are,

1:02:20

I think, myopic positions.

1:02:24

We all live in the manifest image

1:02:27

and we couldn't get through the

1:02:29

day. And thank goodness for

1:02:31

evolution that's provided us with

1:02:33

the manifest image. Evolution, including

1:02:36

cultural evolution, has provided us

1:02:39

with a well-behaved

1:02:42

world of middle-sized

1:02:47

dry goods, hardware, cars,

1:02:50

boats, people, dogs

1:02:53

and cats, colors, rainbows,

1:02:56

baseball, et cetera. Let's call

1:02:58

that real. Even

1:03:01

though we can recognize that in

1:03:04

some sense it's

1:03:06

all a user illusion. Evolution

1:03:09

has provided us with this, smearing

1:03:13

the boundaries, fuzzing it

1:03:15

up. It's a user illusion.

1:03:17

It's like the user illusion of your cell phone.

1:03:21

It's not a bad illusion. It's a

1:03:23

good illusion. We are not the victims.

1:03:26

We are the beneficiaries of this illusion.

1:03:28

The same way, you're the beneficiary of

1:03:30

the user illusion on your cell phone.

1:03:33

Let's get rid of the idea

1:03:36

that the claim that this

1:03:38

is illusory is, as

1:03:40

it were, derogatory. No. Yippee!

1:03:45

We've got this wonderful user illusion that

1:03:48

nature has provided for us. And

1:03:50

now we have software engineers who

1:03:53

are copying nature and making other user

1:03:55

illusions for us so that we don't

1:03:57

have to understand what's going on inside

1:04:00

our cell phones.
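As a rough software analogy for the point about engineered user illusions, here is a small, entirely invented Python sketch: the user-facing interface stays simple and stable while the messy machinery stays hidden underneath, just as on a phone.

```python
class Radio:
    """Stand-in for the hidden machinery the user never sees."""
    def encode(self, text: str) -> bytes:
        return text.encode("utf-8")  # stand-in for codecs, packets, retries

    def transmit(self, frame: bytes) -> None:
        print(f"transmitting {len(frame)} bytes")  # stand-in for the hardware layer

class MessagingApp:
    """The interface the user lives with; none of the machinery shows through."""
    def __init__(self) -> None:
        self._radio = Radio()  # hidden internals, like the guts of a phone

    def send(self, text: str) -> None:
        self._radio.transmit(self._radio.encode(text))

MessagingApp().send("hello")  # the benign illusion: one call, no internals
```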

1:04:02

Now, is the user illusion real? Well,

1:04:05

yeah, it's real. And

1:04:07

you know, it's in terms

1:04:09

of LED patterns on the

1:04:12

screen and little sound

1:04:14

effects and things like that, which

1:04:17

are all quite adjustable. Now

1:04:20

how about the user illusion in

1:04:22

our heads? Is that real? It's real, but

1:04:24

it isn't what you think it is. There

1:04:26

aren't any colors. There's no screen in your

1:04:28

head. It doesn't have to be because

1:04:32

you got eyes. You look at the screen

1:04:34

or you look at the world and the user

1:04:36

illusion is made

1:04:40

for you to use and

1:04:42

you use it. And

1:04:45

what it's of, what it's

1:04:48

made of is tables and chairs and

1:04:52

dollars and music and poems and

1:04:54

people and all the rest of

1:04:56

the things in the manifest image.

1:04:59

So that's

1:05:02

reality. And it's also in

1:05:04

the sense that physicists and

1:05:06

biologists and others can understand

1:05:09

it's all sort of illusory. One

1:05:11

thing that kind

1:05:13

of struck me over the years is how

1:05:16

academia has changed. And your book is really

1:05:18

a wonderful

1:05:20

sort of series of time capsules. I don't

1:05:23

know if such a thing has

1:05:25

even been invented, but it's a memoir and

1:05:27

it's describing in your own words, as a

1:05:30

side quest, how academia has changed. That's

1:05:32

the way I read it. And

1:05:35

we're really delighted to see that.

1:05:37

And I wanted to get your take

1:05:40

on the future of academia, especially

1:05:42

in light of things like we've

1:05:45

already discussed, artificial intelligence. I often make

1:05:47

the case, why should students

1:05:49

learn special relativity from Brian

1:05:51

Keating when they can learn it from Albert

1:05:53

Einstein through the 10 million words that he

1:05:56

has recorded in print and we can make

1:05:58

an LLM and a... holographic

1:06:00

rendering using Nvidia

1:06:02

graphics chips to render everything

1:06:04

down to the last wrinkle.

1:06:06

So our profession of

1:06:10

professor hasn't changed since the year

1:06:12

1080 in Bologna, Italy

1:06:15

Do you feel like we're at risk

1:06:17

that we might be the last generation

1:06:20

to profess in the way that we do? Or,

1:06:24

you know, is it more

1:06:26

resilient? I felt like COVID would be the

1:06:28

end of professorship if it were vulnerable.

1:06:31

But it's resilient. So what

1:06:33

do you make of academia? How much has

1:06:35

it changed, and what do you see

1:06:37

for the future? You're advising a bright

1:06:39

young graduate student: should she

1:06:42

go into this field? Well the

1:06:44

immediate future I think is

1:06:46

we're gonna have to douse the

1:06:48

flames of polarization

1:06:52

between the woke and the anti-woke. That's

1:06:54

a great distraction,

1:06:57

and a great mistake, well-meaning

1:07:00

as most such things are, put

1:07:04

forward by well-meaning people, but

1:07:07

I think deeply problematic. I

1:07:09

blame it a lot of on the

1:07:12

postmodernists who said we don't have to

1:07:14

worry about truth. No, we need to

1:07:16

worry about truth. Truth matters,

1:07:18

and we have to be the defenders

1:07:21

of truth, and we have

1:07:23

to be the defenders of academic

1:07:26

freedom and we Have

1:07:28

to recognize that there are hard

1:07:31

truths Well, let

1:07:34

me say something a bit

1:07:37

surprising: maybe truth isn't,

1:07:40

as it were, all you

1:07:42

need to worry about. No, there are

1:07:45

truths that we don't need to assert

1:07:48

and that we don't need to discover. And

1:07:51

if you wonder about that just

1:07:53

ask yourself if there's truths about

1:07:55

yourself that

1:07:57

you don't think the world would be better off

1:07:59

knowing. Certainly you wouldn't be

1:08:01

better off knowing them, nor

1:08:04

would anybody else be. Secrets

1:08:09

have their place, and we

1:08:11

just don't have to explore them. There

1:08:14

are areas of scientific

1:08:16

research that we could just... Don't

1:08:20

do it, man. Don't do it, lady.

1:08:26

These would just make more trouble than they're worth.

1:08:29

And find another topic

1:08:32

for your curiosity.

1:08:36

But the truth does matter,

1:08:39

and it's truth. And people aren't

1:08:41

entitled to their own truth. They're

1:08:44

not even really entitled to their

1:08:46

own beliefs. If

1:08:50

their beliefs are stupid

1:08:53

enough and ill-informed

1:08:55

enough, and if they're radically

1:08:59

victimized by disinformation, then

1:09:02

at some point we should hold them responsible

1:09:05

for that. I think this

1:09:08

is one of the hardest things to figure out.

1:09:11

How do we make people

1:09:13

responsible for their own beliefs? And

1:09:17

we all rely on our

1:09:21

informants. I have my

1:09:23

informants that I trust. In every

1:09:25

field there are the

1:09:27

scientists, philosophers,

1:09:30

whose opinion I trust.

1:09:33

And I may have made

1:09:35

some mistakes. I may have had

1:09:39

some curious

1:09:42

informants whose views

1:09:44

I shouldn't trust. And

1:09:47

I'm on guard for that. And of

1:09:49

course in my

1:09:51

books I've gone after a few

1:09:53

that I think shouldn't be trusted

1:09:55

to the extent that they are. Academic

1:09:58

bullies, for instance. You

1:10:00

know Good Heart's Law. Good

1:10:04

Heart's Law is that

1:10:08

when a symptom becomes a

1:10:10

target, it ceases to be a good

1:10:12

symptom. Publisher Parrish

1:10:14

is a good example. We

1:10:17

come up with something which is a pretty

1:10:19

good symptom of excellence.

1:10:23

And then people gain the system. And

1:10:25

nature gains the system. Evolution

1:10:30

discovered Good Heart's Law billions

1:10:33

of years ago. And

1:10:35

there are plenty of examples where nature

1:10:37

games the system. So,

1:10:41

whenever you make laws, for instance, you

1:10:44

have to expect that there

1:10:46

are going to be loopholes. If

1:10:48

there are any loopholes, they will be found, and

1:10:50

you can't make a law without loopholes. You

1:10:55

made the law because people wanted to do

1:10:57

something that they shouldn't. They're still

1:11:00

going to want to do it and so they're

1:11:02

going to look for loopholes. So Goodhart's Law

1:11:05

is a very important principle

1:11:08

and it's a basic

1:11:10

principle of nature. And

1:11:12

it governs

1:11:14

academia as it governs all of the things.

1:11:17

So don't expect a perfect

1:11:19

fix. We're just going to

1:11:21

have to roll with the punches and keep fixing things

1:11:23

as we go and recognize

1:11:26

that people, some well-intentioned, some

1:11:28

not so well-intentioned, are going to game

1:11:30

the system when they can.
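As an aside, here is a tiny, purely illustrative Python sketch of Goodhart's Law in the publish-or-perish setting mentioned above. Every function and number is invented; it only shows how a proxy metric can decouple from the goal once the metric becomes the target.

```python
# Toy illustration of Goodhart's Law (all functions and numbers are invented).
# The true goal is research quality; the proxy target is paper count.

def yearly_output(effort_on_quality: float) -> tuple[float, float]:
    """Split one unit of effort between careful work and salami-slicing."""
    effort_on_volume = 1.0 - effort_on_quality
    quality = effort_on_quality                # excellence tracks careful work
    papers = 1 + round(9 * effort_on_volume)   # slicing inflates the metric
    return quality, papers

# Before the symptom becomes a target: effort goes into quality.
print(yearly_output(0.9))   # (0.9, 2): high quality, modest paper count
# After "papers published" becomes the target: the metric soars, quality tanks.
print(yearly_output(0.1))   # (0.1, 9): the symptom no longer signals excellence
```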

1:11:34

So, sticking with academia, but only

1:11:36

tangentially, do you remember

1:11:39

a former student of yours named

1:11:41

Jonathan Blackley? He was a physics

1:11:43

major and he is a video

1:11:45

game designer. He's actually credited with

1:11:48

designing the original Xbox. He

1:11:50

was a physics major at Tufts in the late

1:11:53

1980s, and he took

1:12:00

a class with you and

1:12:02

he said he was called

1:12:04

by you an example of

1:12:07

the scientific mind. But

1:12:09

the thing that he wants to thank you for

1:12:11

are two things. One, he wants to thank you

1:12:13

for recommending the Chef's Choice knife

1:12:15

sharpener that he still has from 30

1:12:19

plus years ago. So he thanks you

1:12:21

for that. But he also wants to thank you

1:12:23

for leaving Twitter. Now, I didn't do

1:12:25

that research. Why did you leave Twitter? You

1:12:27

criticize Elon's, you know, you don't want to

1:12:30

be a part of it. Can you explain

1:12:32

what did you mean by that? Oh, I

1:12:34

think it's obvious. And I don't have any

1:12:36

deep reasons for leaving Twitter. I just I

1:12:38

just thought that Elon Musk was

1:12:41

the worst sort of loose cannon. And

1:12:44

that he was not

1:12:47

taking seriously the

1:12:49

problems that Twitter is

1:12:51

causing. And I was

1:12:54

involved in Twitter because Deb

1:12:56

Roy was originally an MIT

1:13:00

professor, wonderful

1:13:02

thinker, roboticist and AI

1:13:04

person, computer scientist. And,

1:13:07

and Deb was actually the,

1:13:10

I think the vice president for research at

1:13:12

Twitter for a while. Deb encouraged

1:13:14

me to get on Twitter. And

1:13:17

he's, he's

1:13:21

left. I've left. I

1:13:24

think Deb has left. He's

1:13:29

certainly very worried about the

1:13:31

harm that can be done by social

1:13:36

media. And he started out being very

1:13:38

optimistic about it. And I

1:13:41

sort of convinced him that it

1:13:43

was more problematic than that. There's

1:13:45

a paper we did together in

1:13:47

Scientific American called "Our Transparent Future,"

1:13:49

where we talked about

1:13:52

how transparency is

1:13:54

good, but we

1:13:57

don't want perfect transparency. Perfect transparency

1:13:59

is more than

1:14:01

corrosive. It's absolutely destructive of

1:14:04

responsible agency. David Brin here

1:14:06

at UC San Diego and

1:14:09

elsewhere has written a lot about that in The

1:14:11

Transparent Society. So Dan, I've got a

1:14:13

few questions from the audience besides that

1:14:16

one that you answered. The first one is

1:14:19

how do we raise children

1:14:21

in this time where

1:14:24

they have such a small attention span

1:14:26

to focus on things like you

1:14:28

did in your grand career in an

1:14:30

environment where they're basically flooded

1:14:33

with dopamine releasing stimuli. What

1:14:35

advice do you have for parents? This

1:14:37

comes from one of

1:14:39

my viewers Nanan3347. That's a very good question and

1:14:42

I don't know,

1:14:44

I don't think I

1:14:46

have any original wisdom on that. I'm worried

1:14:49

about it. I have grandchildren and

1:14:52

I'm happy to say I see the

1:14:54

grandchildren really getting

1:14:57

interested in books not

1:14:59

all of them but some of them and really

1:15:02

interested in making things and

1:15:05

not just spending their time doing

1:15:09

video games and social media

1:15:11

although at least some

1:15:13

of them do quite a bit of that. I

1:15:15

think this is

1:15:17

a problem and I encourage

1:15:22

people to create

1:15:25

periods of potential

1:15:28

boredom for

1:15:30

your children where

1:15:33

nevertheless you put

1:15:35

them in a room with some things they

1:15:37

wouldn't look at if they had a

1:15:39

phone or a television in there, but

1:15:42

if they just have to stay there they probably

1:15:45

will look at them. Although there was a

1:15:48

study at Stanford, you know, one

1:15:50

not

1:15:52

too dissimilar from the prison

1:15:54

study, I think, where they gave

1:15:56

children, not children, they gave freshmen

1:15:58

or, you know, sophomores,

1:16:00

the opportunity to be in

1:16:02

a room without any cell phone,

1:16:05

phones off, for thirty minutes, or they

1:16:07

could use a cell phone, or they

1:16:09

could stay in there, but they

1:16:12

had to endure a significant electric shock.

1:16:14

And something like half of them took the

1:16:16

electric shocks so that they could use

1:16:18

their cell phones rather than sit for thirty

1:16:21

minutes. I'm not making that up.

1:16:23

That's a recent experiment. Yeah.

1:16:25

As in, that's nuts. Stanford has done

1:16:28

some other wacky ones. It may

1:16:31

have. That's right, Zimbardo and his

1:16:33

prison experiment, those effects. Ah,

1:16:35

so, another listener question.

1:16:37

Reminder: you can always

1:16:39

ask my esteemed guests questions at

1:16:42

my YouTube channel

1:16:44

community page, Dr. Brian Keating,

1:16:46

or Twitter or LinkedIn or

1:16:48

on Instagram, anywhere you like. What

1:16:50

does Dan think Hitchens would

1:16:53

have thought about, or become, had

1:16:55

he survived to be in

1:16:57

this era of culture wars? Would

1:16:59

he ever remain the champion of free speech

1:17:02

and liberty? Or would the woke mind

1:17:04

virus have changed him in some way?

1:17:07

Oh, I think he'd be

1:17:09

entirely on the side of free speech.

1:17:11

And I learned a lot from Hitch. I

1:17:13

didn't get to know him well. It was

1:17:17

only during that era of the Four

1:17:19

Horsemen that I spent some time with him.

1:17:22

But one of

1:17:24

the things he taught me is that

1:17:26

you could be outrageous

1:17:30

with impunity. If

1:17:34

you're British! Dan, if you're

1:17:36

British. Yeah, very nearly, with

1:17:38

the accent, the verbal flair.

1:17:41

When I wrote Breaking the Spell,

1:17:43

my book on religion, it was the

1:17:45

most ecumenical and

1:17:47

mild of the four books.

1:17:49

Now, a lot

1:17:52

of people who thought they were very smart

1:17:54

and worldly told me that,

1:17:57

you know, I was going to have to

1:17:59

have bodyguards and I was going to have to change

1:18:02

my phone number

1:18:05

and everything and

1:18:07

really start protecting myself that the

1:18:10

religious right was really

1:18:12

dangerous. To satisfy my wife

1:18:14

largely, to placate her, I

1:18:17

took some significant precautions for

1:18:19

a while. But Hitch

1:18:21

just went all over the Bible

1:18:23

belt without tempering

1:18:26

his speech at all.

1:18:29

He did just fine. So that was an important

1:18:31

lesson to learn. I

1:18:37

think that the

1:18:39

religious right is a sort of,

1:18:43

it's what's sometimes called in apple

1:18:45

growing a distress crop. When

1:18:48

you feel that you're

1:18:50

losing, you get desperate and

1:18:52

you start striking out. And

1:18:54

we're in that desperate period

1:18:56

now because religion

1:18:58

is losing a lot of ground and

1:19:00

fast. And so the

1:19:03

ones that see that happening all

1:19:05

around them are getting desperate. We have

1:19:07

to calm them down

1:19:09

and ease them into

1:19:12

their reduced

1:19:16

influence in the world. Talk about, and

1:19:18

we'll wrap up with just two

1:19:21

more questions from me, if I can

1:19:23

beg your forbearance for just a few

1:19:25

more minutes Dan. And that's related

1:19:28

to Richard Feynman. And

1:19:30

you mentioned Feynman and meeting him towards the end

1:19:32

of his life and having some

1:19:34

interactions with him. And

1:19:36

the relationship that I really want

1:19:38

to ask you about

1:19:41

maybe is his famous claim

1:19:43

called the cataclysm question, where

1:19:45

he said, what statement

1:19:48

conveys the most information and the fewest

1:19:50

words about the universe? And

1:19:53

he claimed it was the atomic hypothesis that

1:19:55

everything's made of little atoms that are whirling

1:19:57

around and moving at tremendous speeds and pair

1:19:59

up and make interesting combinations through various

1:20:01

permutations, et cetera, et cetera. I

1:20:04

want to ask you, if you had to

1:20:06

sort of speculate on

1:20:08

the most powerful statement in

1:20:10

science, philosophy, could be

1:20:12

from your career, something that humans have

1:20:14

a right to have a little bit of

1:20:17

chutzpah, a little bit of swagger,

1:20:19

having invented, discovered, or come upon,

1:20:21

what would that be? What is

1:20:23

sort of the paradigmatic example of

1:20:25

the majesty of the human mind?

1:20:27

No, I had no difficulty with

1:20:29

that question at all. By the

1:20:31

way, it was his books that

1:20:33

really influenced me. I didn't have

1:20:36

that much interaction with him, but

1:20:38

I thought Surely You're Joking, Mr.

1:20:40

Feynman. And the

1:20:42

second one is titled, so fitting right now,

1:20:44

I think: What Do You Care What Other People

1:20:46

Think? Yeah. Everybody

1:20:49

should read those books, in

1:20:51

part because he's so

1:20:54

good and so willing to

1:20:56

share his tricks. Yeah. And when

1:20:58

I wrote Intuition Pumps, I wanted to do

1:21:00

the same sort of things. Look, a lot

1:21:02

of this is just tricks, you can all

1:21:04

do them. Here are

1:21:06

ways you can be smarter if you do

1:21:08

these tricks. I think, as

1:21:11

I've said, that if I were to

1:21:13

give a prize for the best idea anybody ever had,

1:21:15

it would be Darwin's. Because

1:21:18

it's Darwin's idea that

1:21:21

ties the world

1:21:23

of science to the world of

1:21:26

art and culture and humanity. That

1:21:29

it is Darwin's idea, which

1:21:32

is a strange inversion of

1:21:34

reasoning. It's

1:21:37

the idea that

1:21:39

intelligence isn't the source, it's

1:21:44

the effect of

1:21:46

mindless, purposeless churning. And

1:21:51

that turns everything upside

1:21:53

down and it's still there

1:21:55

and it's even more wonderful. That's

1:22:00

right. And it's always fun for me

1:22:02

to point out that both

1:22:05

Charles Darwin and Albert

1:22:08

Einstein were deeply suffering

1:22:11

from what we call the

1:22:13

imposter syndrome. And Darwin

1:22:15

famously said, I am very poorly today

1:22:17

and very stupid. And I hate everyone

1:22:20

and everything. One lives only to make

1:22:22

blunders. I'm going to write a little

1:22:24

book for Murray on orchids today, and

1:22:26

I hate them worse than everything. So

1:22:29

farewell, and in a sweet frame of

1:22:31

mind, I'm ever yours, Charles Darwin. And

1:22:35

Einstein, I came upon this, I'm

1:22:37

giving a TED Talk, which will undoubtedly get

1:22:39

at least the logarithm of the

1:22:41

number of views that your wonderful, famous TED

1:22:44

Talk got. And you discussed that in your

1:22:46

new book. But I'm giving a

1:22:48

talk about the imposter syndrome entitled,

1:22:50

Am I Good Enough

1:22:52

to Have the Imposter

1:22:54

Syndrome? And I came

1:22:56

upon that. Thank you.

1:22:58

Thank you. I hope, as I say,

1:23:01

it gets a fraction of your TED

1:23:03

Talks views. We'll put a link to

1:23:05

that talk in the show notes. But

1:23:07

Einstein said, I consider myself an involuntary

1:23:09

swindler, and I am not deserving of

1:23:11

all the attention people give me. But

1:23:13

on that note, I want to finish

1:23:15

with the last question, kind of tied

1:23:17

into another quote by Richard

1:23:20

Feynman. Feynman said, you know, science is

1:23:22

the belief in the ignorance of experts. And

1:23:25

my podcast is called Into the Impossible.

1:23:27

And it's named after Sir Arthur C.

1:23:29

Clarke, who said the only way to know

1:23:31

the limits of the possible is to

1:23:33

go beyond them into the impossible. So that's

1:23:35

where I got the name of this podcast.

1:23:37

But he also said a few other things,

1:23:40

including for every expert, there's an

1:23:42

equal and opposite expert. I like to hit

1:23:44

my department chair with that every now and

1:23:46

then. But he also said the following and

1:23:48

that's how you close your book with

1:23:51

a chapter called What if I'm Wrong? And

1:23:53

that's to ask you this comment on this

1:23:55

question or this statement by Sir Arthur C.

1:23:57

Clarke. He said, when an elderly but

1:24:00

distinguished scientist says something

1:24:02

is possible, he

1:24:05

is almost certainly right. But when he

1:24:07

says something is impossible, he is very

1:24:09

likely to be wrong. I wanna ask

1:24:11

you, what have you been

1:24:13

wrong about? What have you changed your mind about, Dan,

1:24:16

if anything? Oh gosh,

1:24:19

I've changed my mind about quite a few things.

1:24:21

And sometimes,

1:24:23

I

1:24:26

would be seduced by a wisecrack. In

1:24:29

my first book, talking

1:24:33

about an idea about the

1:24:35

language of the brain, I said,

1:24:38

it seemed to have all the virtues of

1:24:40

replacing the little man in the brain with

1:24:42

a committee, which was, I thought,

1:24:44

a pretty good gag. And

1:24:47

then later, I realized, no, no, that's

1:24:49

right. Replacing the little

1:24:51

man in the brain with a committee is exactly

1:24:53

the way to go. Homuncular

1:24:55

functionalism, as it's often called,

1:24:58

thanks to Bill Lycan, who

1:25:00

gave it a name. The

1:25:03

idea that we

1:25:05

big human agents are made of, actually,

1:25:10

about a trillion smaller

1:25:12

agents, human cells, and

1:25:16

a lot of cells that aren't, even more

1:25:18

cells that aren't human. And

1:25:22

this is the road to understanding what we are. We're

1:25:28

colonies of agents.

1:25:32

We do replace the little man in the brain

1:25:34

with a committee. That's how you make progress.
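As a rough illustration of the homuncular-functionalism idea, here is a small, entirely invented Python sketch in which the "decision" is nothing but the majority vote of much dumber specialist agents, with no single clever homunculus in charge.

```python
from collections import Counter

# Each "agent" is deliberately dumb: it knows one trick and casts one vote.
def brightness_agent(pixels):
    return "day" if sum(pixels) / len(pixels) > 128 else "night"

def contrast_agent(pixels):
    return "day" if max(pixels) - min(pixels) > 100 else "night"

def highlight_agent(pixels):
    return "day" if max(pixels) > 240 else "night"

def committee(pixels) -> str:
    """No little man in the brain: the verdict is just the majority vote
    of simpler agents, each of which could itself be a smaller committee."""
    votes = Counter(agent(pixels) for agent in
                    (brightness_agent, contrast_agent, highlight_agent))
    return votes.most_common(1)[0][0]

print(committee([250, 40, 100, 90]))  # "day": two of three dumb voters agree
```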

1:25:36

Well, Dan Dennett, this has been a true treat for

1:25:38

me, with this first time meeting. I hope

1:25:40

we get to meet in person someday, because

1:25:43

the simulacrum should be

1:25:45

replaced wherever possible. I

1:25:47

can't thank you enough for this wonderful interview. Thank

1:25:51

you so much, Dan, for doing that. Well, very good. I

1:25:54

look forward to seeing what you edit out

1:25:56

of it. No, what? That

1:25:59

is, I mean, I'm interested in the finished product,

1:26:01

not worried about what

1:26:03

you edit out of it. Absolutely. Well,

1:26:05

thank you, Dan. Have a good day.

1:26:07

Feel better. Feel well.
