Solo: The Coming Transition in How Humanity Lives


Released Monday, 25th March 2024

Episode Transcript


0:00

Hello everyone, welcome to the Mindscape Podcast. I'm

0:02

your host Sean Carroll. And as

0:04

I'm recording this in March of 2024, a few days

0:06

ago, Vernor Vinge passed away. You

0:10

might know Vernor Vinge was quite a

0:12

well-known science fiction author, the author of

0:14

A Fire Upon the Deep and

0:17

other novels. Basically his favorite

0:20

thing to do was to take technology and

0:23

to extrapolate it, to imagine technological innovations

0:25

far beyond what we have in the

0:27

present day, and then to think about

0:29

the implications of those technological innovations for

0:31

humanity, human behavior, and society, and life,

0:33

and so forth. Yes, something that science

0:35

fiction has always been very good at.

0:38

In fact, even if you've never read any

0:40

of his books, you might be aware of

0:42

the impact of Vernor Vinge because he was

0:44

the one who popularized the idea of

0:47

the technological singularity, a

0:50

moment when advances in technology would

0:52

become so big that a fundamental

0:55

change would happen in the nature

0:57

of human existence. He

0:59

did not coin the term singularity, not

1:02

quite, not in this sense. It goes

1:04

back to John von Neumann, of all

1:06

people, maybe not surprising actually, in retrospect.

1:08

von Neumann was one of the leading

1:11

mathematicians and physicists and thinkers of the

1:13

20th century. And if you

1:15

look up the Wikipedia page for the

1:18

technological singularity, you will find something I

1:20

did not know: that it was first

1:22

mentioned in a kind of off

1:24

hand remark by John von

1:26

Neumann talking to Ulam

1:28

and mentioning that humanity

1:31

was approaching an essential singularity

1:33

in technological progress. Now,

1:35

since then, this idea has been

1:37

borrowed by others most famously by

1:39

Ray Kurzweil, and it has

1:42

gained a little bit of, well,

1:44

there's enthusiasm for it in some

1:46

quarters, there's skepticism about it in

1:48

other quarters. The specific version of

1:51

the technological singularity that Vinge and Kurzweil

1:53

were talking about, we don't know exactly

1:55

what von Neumann was talking about, but

1:58

Vinge and then Kurzweil were talking about

2:00

a technological

2:02

singularity driven by AI

2:06

superintelligence. Okay, so

2:08

the basic idea is at some

2:10

point artificial intelligence becomes so

2:12

smart that it will be

2:15

able to design even smarter

2:17

artificial intelligences and then you get

2:19

a positive feedback loop and runaway

2:22

growth and eventually you hit a singularity

2:24

where the growth is sort of effectively

2:26

infinitely big. Many

2:28

people, like I said, have been a little skeptical

2:30

of this for a couple reasons. Number

2:34

one, I will mention down

2:36

the road in this podcast that

2:39

this might be a slightly overly

2:41

anthropocentric view of what artificial intelligence

2:43

is and what kind of intelligence

2:46

it has. But also number

2:48

two, because the actual data,

2:50

the actual evidence in favor of this idea

2:53

was always a little dodgy. And the

2:55

Kurzweil in particular was very fond of just plotting things, and it was not clear how objective what he was plotting was, you know, the

3:03

number of technological breakthroughs over time. Number

3:07

one, it's not clear why that would matter if

3:09

it's eventually AI that is going to do the

3:11

transitioning. Number two, it's

3:13

not clear how to count what is

3:15

a technological innovation. You know, is every

3:18

new iPhone model a technological innovation? It

3:20

was just not very well defined. It

3:23

got a lot of hype. People always react

3:25

against hype. And so it wasn't necessarily

3:27

taken too seriously in a lot of

3:29

quarters, including in this quarter

3:31

here at Mindscape World International Headquarters. I

3:33

never really worried too much about the

3:36

technological singularity. That was not my cup

3:38

of tea. But recently

3:40

we had a lecture at Johns

3:42

Hopkins by Geoffrey West. Geoffrey West, you will all know, was one of the first guests on Mindscape; I just presume that every listener has listened to every back episode. Geoffrey was formerly the president of the Santa Fe Institute, and he's one of the leading figures in complexity science, a former particle physicist who decided to switch to complexity when the Superconducting Super Collider project was cancelled. Geoffrey has studied scaling laws and networks in biology but also in human systems, and he wrote a wonderful book called Scale that you can read, or you can hear about the scaling-law stuff in the podcast episode we did. So this lecture he gave was part of the Natural Philosophy Forum that we now have at Johns Hopkins, and one of the things the Natural Philosophy Forum does every year is host a distinguished lecture. Last year it was Daniel Dennett, another former Mindscape guest; this year it was Geoffrey West. And he talked about a lot of his usual stuff, but then he talked about something that I think I've probably heard him talk about before, but it didn't really sink in. The thing is that, you know, you can hear things and

4:49

you can understand them in the moment

4:51

and they don't really make an impact

4:53

on your deeper thoughts until the time

4:55

is right and that's what happened with

4:57

me. The thing he was talking about was, essentially, the technological singularity. He used that term, he mentioned the history of it, et cetera, but he had much better data than I have ever seen before. And he was urging us to take seriously this idea of a technological singularity. But there are two things that, for me, made it much more persuasive than anything I'd heard before. Number one, like I said, he had better data. He wasn't just plotting how many

5:25

technological innovations there had been per

5:27

unit time, but rather the pace

5:29

at which innovations are adopted. You may have heard that when ChatGPT, the large language model from OpenAI, became public, it was adopted faster than any other similar technology in human history, and Geoffrey showed data showing that this is a trend: not only are we innovating, which we have been doing for a long time, but we are taking up those innovations and adopting them faster and faster. So that was one thing

6:01

that I thought was quantitatively a lot more objective

6:04

and believable than what

6:06

I had seen before. The other is

6:08

that he wasn't talking about artificial

6:11

intelligence that much at all. His

6:13

story did not rely on any

6:15

particular understanding of what it means

6:17

to have artificial intelligence or what

6:19

it might do. It

6:22

was just the pace of innovation is

6:24

increasing for plenty of reasons. The underlying

6:26

causality is almost irrelevant. His point was

6:28

that the data are pointing to

6:31

something like a singularity. And

6:34

Geoffrey, of course, is a well-trained physicist.

6:36

He knows the math and this idea

6:38

of a curve, you plot something versus

6:40

time or versus some other variable, and

6:43

the curve blows up at a finite

6:45

point. At some moment

6:47

in time, again, where other variables are changing with

6:49

time, the curve seems to go to infinity. So

6:52

it's of the form 1 over

6:54

x as x approaches 0.

6:57

That is called a singularity

6:59

in mathematics and physics. And

7:02

where those things show up in

7:04

physics, you might think

7:06

of in quantum field theory, there are

7:08

infinities from Feynman diagrams or in general

7:10

relativity, there's a singularity at the center

7:13

of the black hole. And indeed, those

7:15

are examples of physical quantities becoming infinitely

7:18

big, but then there's ways to get

7:20

around them. A much

7:22

more relevant example is in phase

7:24

transitions. So a phase transition happens

7:27

when you have some underlying stuff,

7:29

water molecules or whatever, and

7:31

you change some external parameter,

7:33

density or pressure or again,

7:35

whatever, and you measure different

7:37

physical quantities in this substance

7:40

as you're changing some overall

7:42

parameter. And sometimes, well,

7:44

there can be a phase transition, ice

7:47

turning into liquid water or whatever,

7:49

things evaporating, solids, liquids, gases are

7:51

the traditional examples, but there

7:53

are others. And if you

7:55

measure the right quantity, then at a phase

7:57

transition, you can find that this quantity goes

8:00

to infinity. And

8:02

that's not crazy or ill-behaved, actually. Of

8:04

course, it never actually reaches infinity because

8:07

your measurements are not infinitely precise, and

8:09

there's only literally one point of time

8:11

or temperature or what have you where

8:13

that would happen. So the

8:16

real world always smooths things out a

8:18

little bit. But the point

8:21

is that you can still continue

8:23

the behavior past the singularity

8:26

in these phase transitions, right?

8:29

Ice doesn't cease to exist when it

8:31

melts, et cetera. What Geoffrey actually showed

8:33

was propane and its heat capacity

8:35

as a function of temperature. So

8:37

if you go to my

8:39

website, preposterousuniverse.com slash podcast,

8:42

I will reproduce that graph

8:44

that Geoffrey showed for the propane phase transition. Singularities in physically observable quantities are characteristic of phase transitions.
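(Schematically, and this is textbook critical-phenomena notation rather than anything taken from Geoffrey's slides, the divergence at a critical temperature has the power-law form below: the measured quantity grows without bound on both sides of T_c even though the substance itself exists perfectly happily on both sides.)

\[
C(T) \;\sim\; A_{\pm}\,\left|T - T_c\right|^{-\alpha} \qquad (T \to T_c).
\]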

8:56

So in other words, thinking like a physicist,

8:59

two things are suggested and

9:01

only suggested, right? Not proved or derived or

9:03

anything like that, but suggested. Number one, we

9:06

should take the possibility of a

9:08

singularity very seriously. They happen in

9:10

real down-to-earth physical systems. And

9:13

number two, they might be

9:16

harbingers of phase transitions. It's

9:18

not that the system ceases to exist

9:21

or blows up or self-immolates or anything

9:23

like that. It's just that it changes

9:25

in a dramatic way to a different

9:27

kind of thing. You can

9:29

think of it as there being a sort of equilibrium

9:31

configuration of the stuff on one

9:34

side of the phase transition and

9:36

a different kind of equilibrium configuration

9:39

on the other side. So

9:42

this discussion that Geoffrey had really

9:44

made me think like, oh my

9:46

goodness, maybe this is actually worth

9:48

taking seriously. That's what we're going

9:50

to do today. This led me to do this

9:53

podcast. So I had already had the idea of

9:55

doing the podcast after Geoffrey's talk before I knew

9:57

that Vernor Vinge had passed away, but it's now even

9:59

more appropriate. This

10:02

thing is very hard to think about. What we want

10:04

to think about is future technological

10:06

innovations and changes, right? We've talked

10:08

about such possibilities in the podcast

10:10

many times in various different modes,

10:13

but it's hard to be comprehensive. It's

10:15

hard to put them together, see how

10:17

different kinds of changes and innovations can

10:19

affect each other and so forth. It's

10:21

also both too easy

10:23

to be extremist, to

10:25

wildly over extrapolate what's going to

10:27

happen and lose your sense of

10:29

accuracy and proportion, and also far

10:31

too easy to be sanguine, to

10:34

say, you know, there's always been alarmists and

10:36

people saying the sky is falling and it

10:38

doesn't happen so I can just ignore this.

10:41

You know, I think we have to be

10:43

responsible. You know, maybe this is all wrong.

10:45

Maybe there's no phase transition coming. Maybe the

10:47

rate of innovation will appropriately slow down or

10:49

it will continue but we'll handle it in

10:51

some not very dramatic way. But if

10:54

you take the numbers seriously, then at some point in

10:56

the future, 50 or less than a hundred

10:59

years from now, we are

11:02

in for a shift. We're in for a

11:04

different way of living here

11:06

on earth as human beings. So

11:09

I'm not an expert on this.

11:11

I've been doing the podcast

11:13

for a long time. I've talked to a lot of experts

11:15

on different things. So this is

11:17

going to be my untutored

11:19

semi-educated reflections and musings on

11:22

this possibility. Think of it

11:24

more as an invitation for you to

11:26

think than as anything

11:28

like a true high credence

11:30

set of predictions, okay? If you don't believe

11:32

me, that's a hundred percent fine. I want

11:35

us all to be contemplating

11:38

these possibilities. They seem to be important.

11:41

They seem to be things that we haven't thought

11:43

about. I'm not going to say we haven't thought

11:45

about them a lot because plenty of people have

11:47

thought about them. I don't think we

11:49

thought about them seriously and responsibly

11:51

enough. So this is an

11:53

invitation to do exactly that.

11:56

Let's go. I

12:14

thought it would be good to kind of

12:16

get our bearings by remembering

12:18

the story of human

12:20

history, as it were. I

12:23

am not an expert in this, as you

12:25

know, but it's important to recall

12:27

the very basic parts of this story that

12:29

we probably are all familiar with. At

12:32

some point in the development

12:34

of Homo sapiens, we developed

12:37

language and symbolic thinking maybe

12:39

a hundred thousand years ago, something like that,

12:41

of that order. The

12:44

way of sharing information with our fellow

12:46

Homo sapiens in a way that gave

12:48

us the ability to do things like

12:50

cooperate, to build on previous

12:53

knowledge, to learn and pass down

12:55

culturally what we have learned. Over

12:58

the course of time, that led to

13:00

the innovation of agriculture. We

13:02

went from a set of

13:04

hunter-gatherer societies to mostly agricultural

13:06

ones that opened up possibilities

13:08

for specialization. Not everyone had

13:10

to do the same job. And

13:13

that opened up the possibility of social

13:16

structure for better or for worse, different

13:18

people having different roles in the community.

13:21

Note that people are not necessarily

13:23

happier in agricultural societies than in primitive hunter-gatherer

13:29

societies. This is something that

13:32

anthropologists and historians debate about.

13:35

Arguably you have more free time

13:38

in a hunter-gatherer society. Almost

13:41

certainly there is more inequality once you

13:43

go to the agricultural model, but maybe

13:46

you also, on average, have a higher

13:48

standard of living, maybe a little bit

13:50

more reliability in your food supply, things

13:52

like that. But we're not here to

13:55

judge. That's not the goal. The

13:57

point is that these agricultural

14:00

societies with more specialization

14:03

open the door to different

14:05

kinds of innovation. And

14:07

innovation as a word is usually

14:09

attached to scientific or technological engineering,

14:12

invention kinds of things, but there

14:14

are also innovations in philosophy, in politics,

14:17

in art and so forth. And these

14:19

kinds of things can begin to flower

14:21

about 10,000 years ago once

14:24

we invented agriculture. There's a positive

14:26

feedback loop, as we mentioned before.

14:28

The population grows because

14:31

you have more agriculture,

14:33

more food and things like that, and then

14:36

you get more innovation because there are more

14:38

people. One of the things that Geoffrey West

14:40

and his collaborators have shown is that

14:44

of course there's more innovation in

14:46

cities than in rural environments

14:48

just because there are more people, right?

14:50

But in fact, the amount

14:52

of innovation scales super

14:55

linearly with population density.

14:57

So in other words, not only do

14:59

you get more innovation in cities because

15:01

there are more people, but there's more

15:04

innovation per person, presumably

15:06

because the people are interacting with

15:08

each other, sharing ideas and things

15:10

like that. So the rate of

15:12

innovation speeds up as these transitions

15:14

begin taking place. But it's

15:17

also super important to note

15:20

that the whole thing takes a lot of time.
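(As an aside, to make "superlinear" concrete: the standard way to write the city-scaling result is a power law in population. The exponent of roughly 1.15 is the value commonly quoted in the published scaling-law literature, used here purely as an illustrative assumption, not a number taken from this lecture.)

\[
Y(N) \approx Y_0\, N^{\beta}, \quad \beta \approx 1.15 > 1,
\qquad
\frac{Y(2N)/(2N)}{Y(N)/N} = 2^{\beta-1} \approx 2^{0.15} \approx 1.11 ,
\]

so doubling a city's population gives you roughly ten percent more innovation-type output per person, not just more in total.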

15:22

When you're thinking about social

15:24

structures, innovation, things like that,

15:27

the space of possibilities, the

15:30

space of possible inventions or

15:32

philosophical ideas or whatever, artistic

15:34

forms, is hugely large.

15:37

So even if things look like they've

15:39

been more or less the same for

15:42

a hundred year period, there can actually

15:44

still be very, very important changes going

15:46

on. And we see this

15:48

because of course eventually we hit the

15:51

Scientific Revolution, Industrial Revolution, Renaissance

15:53

Enlightenment kind of era where

15:56

things change once again pretty

15:59

dramatically. So, you

16:01

know, you know that population has been going up

16:03

on Earth for a long time. There's details

16:06

about what it's doing right now,

16:08

but historically population has been rising

16:11

since we've had this agricultural shift.

16:14

But the rate at which population

16:16

is growing has not been constant.

16:19

You know, we're all familiar with

16:22

exponential growth. If there's some time

16:24

constant over which a quantity gets bigger

16:27

by a certain multiplicative factor, so if

16:29

you multiply something by 2 every so

16:32

often, then you will grow exponentially,

16:34

and that might look like

16:36

what population is doing. But

16:38

the rate of population growth has not been constant.

16:41

Between the birth of agriculture and

16:43

the scientific revolution, it grew, but

16:45

it's been growing much faster since

16:48

the scientific revolution. This is a

16:50

sign that something is going on

16:53

more than simply a constant rate

16:55

of growth. So

16:59

hand in hand with

17:01

the scientific revolution, industrial

17:03

revolution, etc., we get

17:05

democracy, open societies, cities,

17:07

all feeding into this

17:09

culture of innovation. One

17:12

thing to note as we're just getting

17:14

things on the table to remember as

17:16

we go through this journey is that

17:18

it's very, very hard to start with

17:20

some observation of what is happening and

17:23

to naively extrapolate. Or rather, sorry,

17:25

I should have said the opposite of that.

17:28

It's easy to extrapolate, but it's almost useless.

17:30

It is incredibly dangerous to

17:32

extrapolate. We famously have

17:34

Moore's law, for example. Moore's

17:37

law says that the number of components in

17:39

a computer chip or the equivalent doubles

17:42

every 18 months, something like

17:44

that. And that's

17:46

exponential growth right there. Exponential

17:49

growth happens for various kinds of processes.
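(Just to make "doubling every 18 months" concrete, this is my own arithmetic using the figure quoted above, not a number from the episode:)

\[
N(t) = N_0\, 2^{\,t/T}, \quad T \approx 18\ \text{months}
\quad\Longrightarrow\quad
\frac{N(10\ \text{years})}{N_0} = 2^{120/18} \approx 2^{6.7} \approx 100 ,
\]

about a factor of a hundred per decade, for as long as the doubling actually holds.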

17:51

You can extrapolate into the future on

17:54

the assumption that the exponential growth will

17:56

continue. Crucially important is

17:58

that exponential growth never

18:00

truly continues. There's nothing in

18:02

nature that grows exponentially forever

18:04

with the possible exception of

18:06

the universe itself. Because

18:09

here on Earth, there is a finite

18:11

amount of resources that we can use.

18:14

Or if you think, well, we'll go into space someday,

18:16

that's fine, maybe we will. In

18:18

the observable universe, there is a finite amount of

18:21

resources. When I did my little chat about immortality

18:23

at the end of last year, I pointed out that

18:26

low entropy is a finite resource. And

18:28

there's no way to just get that

18:30

to go infinitely. I'm not really talking about

18:32

these cosmic timescales right here. I'm just pointing

18:34

out that something can

18:37

temporarily be exponentially increasing,

18:39

but have a very different future

18:41

history. If you look back at when we had the COVID pandemic, the discussion about the rate of growth, where we wanted to get the rate of growth down low enough that we could handle the pandemic, that rate is calculated assuming that, at

18:58

this instant of time, things

19:00

are growing exponentially. But if

19:02

you actually look at the number of cases over

19:04

time, we've all seen these peaks and valleys

19:06

and so forth. It's a curve,

19:08

but it is not an exponential growth curve,

19:11

because many other factors kick in. So

19:14

extrapolating on the basis of a

19:17

current rate of growth is always

19:19

incredibly dangerous. That's not

19:21

to say growth will always slow

19:23

down. It could. I mean, you might have

19:25

something that looks very much like exponential growth

19:27

in some quantity right now, but

19:30

really it is what is called a logistic curve,

19:32

or a sigmoid. It's going to exponentially grow for

19:34

a while, but it's going to turn over and

19:36

flatten. In other words,

19:38

it's extrapolating or interpolating between one

19:41

almost constant value and a different

19:43

almost constant value. That kind of

19:46

behavior can look perfectly exponential. But

19:49

there's also the possibility of growth

19:51

that is faster than exponential.

19:54

So a singularity is mathematically described by, like I said, something like 1 over x. If x equals 0 is in the future, if we're in the negative-x regime right now, then that rate of growth is faster than exponential. It's not just a constant rate of growth; the rate of growth itself is increasing. That's another kind of thing that could be happening, and it can be very, very difficult to tell, just on the basis of some finite piece of noisy data, whether you're seeing a sort of pole-singularity growth (these are called poles in physics and also in math) or you're seeing exponential growth or something like that.
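(To make that distinction concrete, here is a minimal sketch, my own illustration rather than anything from Geoffrey's talk, comparing an exponential curve, a logistic curve, and a finite-time "pole" curve. All three start near 1 and can look alike in noisy data, but the pole's relative growth rate blows up as the critical time approaches.)

```python
# Minimal illustration: three growth laws that look similar early on.
import math

def exponential(t, r=0.05):
    # constant relative growth rate r: x(t) = e^{r t}
    return math.exp(r * t)

def logistic(t, r=0.05, K=100.0):
    # grows exponentially at first, then saturates at the carrying capacity K
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

def pole(t, t_c=120.0):
    # finite-time ("pole") singularity: x(t) = 1 / (1 - t/t_c) blows up at t = t_c,
    # and its relative growth rate 1/(t_c - t) itself increases without bound
    return 1.0 / (1.0 - t / t_c)

for t in [0, 60, 100, 115, 119, 119.9, 119.99]:
    print(f"t={t:7.2f}  exp={exponential(t):9.1f}  "
          f"logistic={logistic(t):6.1f}  pole={pole(t):10.1f}")
```

With data only from, say, t < 100, any of the three could plausibly be fit to the same noisy points; which regime you are actually in only becomes clear close to the end.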

20:35

So it's interesting. Well, it's possibly interesting, it might be completely trivial, but it's interesting to note that when John von Neumann made his offhand remark about the coming singularity in human development, he used the phrase essential singularity. He might have just been speaking casually, or he might not even have been speaking in English, I'm honestly not sure, but he was a very good mathematician, and the phrase essential singularity has a precise technical meaning in mathematics. It means the singularity is essential in the sense that it's sort of uncontrollable: it is faster than 1 over x or 1 over x squared or anything like that as x goes to 0. An example of an essential singularity would be an exponential of 1 over x; e to the 1 over x grows faster than any power of 1 over x. That's an example of an essential singularity.
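(For the mathematically inclined, this is standard complex-analysis material rather than anything from the episode: the expansion of e^(1/x) around x = 0 contains infinitely many negative powers of x, which is the defining feature of an essential singularity, and the function beats any fixed power of 1/x.)

\[
e^{1/x} = \sum_{n=0}^{\infty} \frac{1}{n!\,x^{n}},
\qquad
\lim_{x \to 0^{+}} x^{p}\, e^{1/x} = \infty \ \text{ for every fixed } p .
\]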

21:31

I don't know. One of the things that Geoffrey points out is that this little offhand remark by von Neumann was never elaborated upon; we don't really know what he was thinking. It's uncharacteristic of him.

21:42

He was very careful to write things down

21:44

and to expand upon them. Geoffrey says that, you know, in some sense the work he's doing right now can be thought of as filling in the mathematical details there. But none of that is really super important for the current discussion. What matters is that it is completely possible for these mathematical characterizations of various curves of growth and so forth to apparently reach an infinite value in a finite time, and that is the sign of a phase transition or something like that. So I'm going to depart a little bit from what Geoffrey actually said in his talk, and I will link to the talk, if you haven't already seen it, in the blog post for the podcast on the podcast web page. Geoffrey's argument is that we can kind of avoid the singularity by continually innovating. What we will need is faster and faster innovation, and of course also the ability to deal with those kinds of innovations. And he can see, in previous data, times when human behavior has shifted in one way or another, you know, from one kind of mathematical extrapolation to a different kind, and he says we can do that again. And so it's not actually going to be the end of anything when we hit this singularity. He shows us the plot of propane and its heat capacity, the heat capacity going to infinity, going through a phase transition and then finding a new equilibrium, but he is suggesting that that's not actually what will happen. And he does that because you can't just extrapolate. What you need is some kind of theory; you need a mechanistic understanding of why these various quantities are growing at whatever rate they're growing at.

23:30

And he bases a lot of his discussion on work by Will Steffen, who coined the term the Great Acceleration. It's not just one quantity that is growing very fast; Steffen points out that there are a lot of them, and I'll give a link to that also on the web page. So what Geoffrey is saying is that even though he shows a phase transition plot, he thinks that we're not actually necessarily headed toward that; even if we are headed toward a singularity, there are other ways of dealing with it. I don't know about that. I don't claim to understand Geoffrey's underlying mechanistic theory. A lot of it he hasn't published yet, et cetera. So

24:11

I actually am quite open to

24:13

the possibility that it is a phase

24:16

transition. I'm a big believer in phase

24:18

transitions, by which I mean, you know, the

24:22

social or political or societal or

24:24

economic equivalent of the

24:27

atoms or molecules in a

24:29

substance having different macroscopic properties,

24:32

right? Different emergent properties

24:34

in the macroscopic realm because

24:37

of slightly different conditions, slightly different

24:39

overall parameters governing how these microscopic

24:41

pieces come together. So

24:43

I think that's very plausibly what we're

24:46

seeing. We're still human beings, right? The

24:48

actual physiology and

24:51

genetic makeup of human beings hasn't

24:53

changed that much in the last

24:56

10,000 years. It's changed a little

24:58

bit. It's not going

25:00

to change that much by natural causes

25:02

over the next couple hundred years, but

25:05

we can absolutely interact

25:07

differently, and it's pretty clear that we're

25:10

beginning to interact differently than we used

25:12

to do. So the

25:14

kind of background idea I

25:17

have is that if there

25:19

is going to be a singularity, let's imagine

25:21

that there is a different kind of equilibrium

25:24

on the other side. The

25:26

phase transition singularity that we're approaching

25:28

will not be the end of

25:30

the world, necessarily, I mean, that's

25:32

one possibility. I'm not going to

25:34

really worry about existential risks, and

25:36

those are real, right? Nuclear war,

25:38

biological warfare, pandemics. There's a

25:40

whole bunch of actually real worries to have, but

25:42

that's not my intention to

25:44

think about right here. I'm

25:46

thinking about, given all

25:48

these technological changes, can

25:51

we settle into some quasi-static

25:53

new mode of living? It

25:57

might be worse. It might be better.

26:00

But we should at least think about that possibility. You

26:02

know again, none of this that

26:04

I'm talking about in this solo podcast is

26:07

highly rigorous super research

26:09

or anything like that. I'm trying to make you

26:11

think about it I'm trying to get my own

26:13

thoughts in a slightly more systematic

26:16

fashion and inspire you to carry it on

26:18

from there. So again, this thinking about a new equilibrium: the word equilibrium is not accidental. An equilibrium doesn't just mean that you've settled into some particular mode; it means that there is some stability in that mode that you settled into. You know, the word equilibrium started in physics and thermodynamics. You have thermodynamic equilibrium: two objects that are at different temperatures, when you bring them together, will settle down to a common temperature and come to equilibrium.
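(A minimal illustration of that settling-down, my own aside rather than part of the talk, assuming constant heat capacities C_1 and C_2 and no phase change: conservation of energy fixes the common final temperature.)

\[
C_1\,(T_f - T_1) + C_2\,(T_f - T_2) = 0
\quad\Longrightarrow\quad
T_f = \frac{C_1 T_1 + C_2 T_2}{C_1 + C_2} .
\]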

26:50

It also appears elsewhere, like in game theory: you have Nash equilibria, where the different players of the game all have a strategy and they can't individually change those strategies to get better results. They're

27:03

in equilibrium. So that's the important thing: there's nothing that any individual or any part of the system can do to make things better for themselves, whatever is meant by better.
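(Here is a minimal sketch, my own illustration rather than something from the episode, of that "no player can do better by changing only their own strategy" condition, checked by brute force for the classic prisoner's dilemma.)

```python
# Brute-force Nash equilibrium check for a two-player game.
# Payoffs are (row player, column player); higher is better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(row, col):
    """True if neither player gains by changing only their own strategy."""
    row_payoff, col_payoff = payoffs[(row, col)]
    best_row = max(payoffs[(r, col)][0] for r in strategies)
    best_col = max(payoffs[(row, c)][1] for c in strategies)
    return row_payoff == best_row and col_payoff == best_col

print([(r, c) for r in strategies for c in strategies if is_nash(r, c)])
# -> [('defect', 'defect')]: the only profile where no unilateral change helps.
```

The same logic is what makes a social equilibrium sticky: unilateral deviations don't pay, so it takes coordinated changes in the large-scale constraints to move the whole system somewhere else.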

27:15

So, and I'm revealing a personal opinion that I have here, one that other people might well disagree with, which, you know, roughly translates into saying that the values of individuals all by themselves don't matter that much.

27:30

So in other words, I'm saying that encouraging individual people to behave in a certain way is not really going to drive the overall shape of society. If you, you know, tell people to eat less meat or use fewer grocery bags or whatever, these are largely symbolic gestures; if you feel better by doing them, that's great. I do

27:52

think values can matter but only

27:55

when they get implemented as large-scale

27:57

social constraints, whether those

27:59

are literally laws, you know, you can't do

28:01

this or you get arrested, but maybe they're the

28:04

tax policy, you know, certain

28:06

behaviors, you have to pay

28:08

more money, maybe it's institutions

28:10

or whatever, but you

28:13

need, the way that I think about

28:15

it, which I think is pretty robust,

28:17

is that given the large scale constraints,

28:20

individuals are going to largely pursue

28:23

their self-interest, okay? I'm

28:27

not characterizing what I mean by constraints perfectly

28:30

because it's not all, you know, laws

28:32

and regulations, you can have broad scale

28:34

social understandings that are not formally written

28:37

down, but you need some

28:39

agreement, you need some consensus, otherwise

28:41

these understandings have no oomph, they're

28:43

not true constraints, they're just, again,

28:45

making individuals feel good. As

28:48

an aside, this makes me very

28:50

sad that in current discourse between

28:53

and among people who agree and

28:55

disagree with each other, you don't

28:57

see much attempt to persuade other

28:59

people to your side, you know, most

29:01

of the people who I see are just

29:03

making fun of or disagreeing with people, arguing

29:05

with them, it's

29:07

hard to make large scale changes that way, you

29:10

need everyone to agree, or at least a lot

29:12

of people to agree, to really

29:15

agree to change the social system as

29:17

a whole in a way that would

29:19

lead us to a better equilibrium,

29:23

you can't just take the people on your side

29:25

and fight, you need to actually

29:28

change the minds of people on other sides, and

29:30

that's something that doesn't happen a

29:32

lot these days, and maybe that's part

29:34

of the technological world

29:36

in which we live, that certain things

29:38

are incentivized and certain things are not.

29:41

Okay. That's the background,

29:44

that's the throat clearing, telling

29:46

you what my particular perspective on these

29:48

things are. So now let's talk about

29:50

technology and the changes that we are

29:53

facing, and there are many of them,

29:55

and I'm not going to go through

29:57

all of them, again, super non-systematic here,

30:00

talk about three aspects and again

30:02

very quickly superficially. One

30:04

aspect, the environment, energy

30:07

consumption, climate change, things like

30:09

that. Another one, the sort

30:11

of biological ways that technology

30:13

is changing our lives, whether

30:15

it's synthetic biology or gene

30:17

editing or whatever. And then

30:19

finally computers, artificial intelligence,

30:22

those kinds of information electronic technology

30:24

things that we're also very fond

30:26

of. I think all of these matter, so this is why it is not just a recapitulation of the Vinge and Kurzweil kind of AI-superintelligence-driven technological singularity. I don't think that's the whole point; lots of things are happening. So that's the place that I've come to temporarily. Probably I'll change

30:47

my mind about all these things before too long but here's

30:49

where I am right now. Let's think about the

30:52

environment, sustainability,

30:55

energy sources, things like that. This

30:57

is a little different than

30:59

the other ones because

31:01

it's more a story

31:03

of gloom, right? The

31:07

environment is something that changes. We

31:09

shouldn't get into the mindset that

31:11

there is a right way for the environment

31:14

or ecology to be the

31:16

biosphere for that matter. Change

31:19

is very natural but

31:22

we the human race are

31:24

causing changes in a highly non

31:27

reflective, non-optimal way, right? We

31:30

are making things worse. Change

31:33

is not the problem. The problem is that

31:35

we are clearly hurting the

31:38

environment in very tangible quantifiable ways.

31:42

So climate change is clearly getting

31:44

worse and

31:47

it's getting worse faster. The

31:49

recent news is relevant here. The people who want to deny the reality of climate change

31:57

have long pointed to the difficulty of modeling

31:59

the climate. They say, these climate

32:01

models are not reliable, blah, blah, blah, blah. I

32:05

get that. It is very,

32:07

very difficult. Again, much more difficult than

32:09

theoretical physics. The climate is a paradigmatic

32:11

complex system. There's a lot going on,

32:13

a lot of different forces

32:15

at work. But the

32:18

empirical fact seems to be that

32:21

if the climate models that we've been trying

32:24

to work on for the last several decades

32:26

are wrong, they're wrong because the reality is worse than what the models have predicted. Especially in this year, 2024, all the global

32:32

temperature indicators

32:35

are higher than we expected them

32:37

to be. On

32:40

the other hand, there are small signs of hope. So

32:45

we've talked about these issues on the

32:47

podcast before. We talked about actual climate

32:49

change with Michael Mann and the problems

32:52

there. But we also

32:54

talked with Hannah Ritchie relatively recently

32:56

about hopeful prospects. I mean, mostly

32:58

for cleaning up the environment

33:01

rather than combating climate change. But

33:03

Hannah's point was, you can't just

33:06

become passive and full of doom.

33:08

You have to keep hope alive.

33:10

You have to say, okay, but

33:12

what can we do? And

33:14

you have to remember that there is evidence that

33:16

things can be done, progress can be made. Specifically

33:21

when it comes to energy and renewables,

33:23

we had a podcast quite a while

33:25

ago with Ramez Naam, where he talked

33:27

about the absolutely

33:29

true fact that

33:31

progress in renewable energy has been

33:33

moving faster than we expected it

33:36

to do. As

33:38

dependent as we currently are on fossil

33:40

fuels of all various sorts, there

33:42

are alternatives that are becoming very

33:45

realistic and are being implemented. And

33:48

something that is always true in

33:50

these discussions of rapid change is

33:52

that there can be competing influences,

33:54

both of which are rapid. And

33:57

there can be a race. So,

34:00

I forget who mentioned this. I

34:02

always like to try to give credit to people.

34:04

Someone pointed out to me recently, you know,

34:06

it might have been Chris Moore at SFI. But

34:10

we are getting better at things

34:12

like solar and wind power and

34:15

things like that. But maybe not

34:17

so fast that people are ready

34:19

to wait until we

34:21

are completely converted to

34:23

those kinds of energy generation. And

34:26

if they are not, they might say, well, let's

34:28

build some more infrastructure to burn some more fossil

34:31

fuels, either natural gas, fracking, whatever it is. And

34:33

then once that infrastructure is there, we are going

34:35

to be using it for the next 40 years.

34:38

So, there is a race that is on

34:41

to see whether or not we can resist

34:43

the temptation to just burn through more fossil

34:45

fuels and make the climate even worse. But

34:48

there is the possibility of

34:50

doing better. There has certainly been

34:52

a relatively legitimate

34:55

worry that the only way

34:57

to cut greenhouse gas

34:59

emissions would also be to

35:01

slow economic growth. There,

35:04

the evidence is quite

35:06

optimistic, namely, in many countries

35:09

around the world, the rate of

35:11

economic growth has become decoupled from

35:14

the rate of greenhouse gas emissions.

35:16

In other words, there are many

35:18

countries out there that have been

35:20

lowering their CO2 and other greenhouse

35:22

gas emissions while nevertheless growing economically.

35:25

So, it can be done. That is a little sliver of

35:27

hope. It doesn't say we will all

35:29

choose to do it. It's not necessarily the biggest, the

35:32

worst perpetrators that are lowering their

35:34

CO2. And I

35:36

always like to bring up one of my favorite podcasts

35:39

that we've ever done was with Joe Walston, who

35:42

is a conservation scientist who

35:44

tries to preserve various species.

35:46

And he gives a sales pitch and

35:49

also a sort of prognostication for

35:51

urbanization as a phase

35:53

transition. He notes, again,

35:55

something that the data are pretty clear

35:58

about, that living in cities is

36:00

better for the environment than living

36:02

in, than scattering the human race around rural or suburban places

36:08

to live. You might visualize cities

36:10

as having factories and having

36:12

pollution and things like that. But per person,

36:14

it is way more

36:16

energy efficient to have people live in cities.

36:19

We don't use as much land. We don't

36:21

use as much fuel to heat your houses,

36:23

because you're living in group buildings and things

36:25

like that. You don't need to drive as

36:28

far. So there's many reasons

36:30

why cities are better for the environment if you

36:32

have the same number of people. And the good

36:34

news is the world is

36:36

urbanizing. So Joe Walston

36:39

suggests another glimmer of hope,

36:41

that we're entering a new

36:43

kind of distribution of humanity,

36:45

where the vast majority of humans live in cities.

36:47

There are some who still are

36:49

out there on the farms, living in the country. That's

36:52

fine. And this is not driven by rules. This

36:55

is not a communist dictatorship telling you where

36:57

to live. This is that people are

36:59

choosing to live in cities at

37:02

unprecedented rates. And if that

37:04

comes true, then we can envision,

37:07

at least, a future equilibrium

37:09

where we live sustainably on

37:12

the land, where we don't ruin

37:14

the rainforest for beef or things

37:16

like that. But we have other

37:19

ways of getting our food supply and

37:21

so forth. So

37:23

I don't, throughout all of this discussion,

37:26

I have no agenda, really.

37:28

I'm not trying to convince you one

37:30

way or the other. I'm exploring the

37:32

possibilities. And I think one of the

37:34

future optimistic possibilities comes from urbanization.

37:36

I think that for a lot of reasons,

37:38

cities are good if we do them right.

37:41

And something else we discussed on that

37:44

same podcast was population growth. The

37:46

population of the Earth is pretty

37:48

big, almost 8 billion people. And

37:51

it's still growing. It is going

37:53

up. But the rate of growth

37:55

has noticeably decreased. So it's growing,

37:57

but it's growing more and more slowly. The first

38:01

derivative is positive, the second derivative is

38:03

negative. For some reason

38:05

there are people who are worried about

38:07

this. I am completely not worried about

38:09

this. I don't think that it

38:12

would be better to have 20 billion people on

38:14

Earth than just 10 billion people. I

38:16

think 10 billion people is fine. And indeed, if

38:18

you want to imagine some sustainable way

38:21

of living here on Earth, I think

38:23

there's probably some maximum number for which

38:25

that would be a comfortable situation. Obviously

38:29

we can think of the fact

38:31

that the rate of population growth has

38:33

decreased as a slight precursor to the

38:35

coming phase transition. This is

38:37

something Joe mentioned. The new way of

38:40

living that is more urban and

38:42

also coupled with better healthcare and

38:45

higher education rates and things like

38:47

that is a new way

38:49

of living that doesn't require quite as many

38:51

babies as it used to. This

38:53

whole idea of the singularity is a little

38:56

bit fuzzy because different technological changes are happening

38:58

at different rates. Maybe

39:00

the change in population growth

39:02

rate is a harbinger

39:05

of a better, more stable new equilibrium

39:07

to come. Having

39:10

said that, I think again the data

39:13

are speaking very clearly that at the moment

39:15

we are destroying the Earth. The

39:18

climate is getting worse. There are

39:20

positive feedback mechanisms that are making it

39:22

get worse faster. The

39:25

upshot of that is that I don't

39:27

think it's an existential risk. Existential

39:30

risks are defined as those that

39:32

literally speak to the end of

39:34

humanity as we know it. I don't think it's

39:36

like that. What I think

39:38

is that it will lead

39:42

to enormous suffering as well

39:44

as enormous economic costs, climate

39:46

change. That's bad. I

39:49

don't think that it's going to lead

39:52

to the extinction of the human race,

39:54

but it will absolutely lead to the

39:56

extinction of other species. It

39:58

will change. the biosphere

40:01

in very, very important,

40:04

somewhat unpredictable ways. It

40:07

will prevent much of the

40:09

land that a lot of people

40:11

live on now from being livable.

40:13

It will completely change habits of

40:15

farming and food production. Guess

40:18

what? Poor people will

40:20

be hurt disproportionately compared to

40:22

rich people. Even

40:24

the rich people will suffer because it will

40:27

just cost enormous amounts of money. We will

40:29

lose enormous amounts of human wealth. We

40:31

are going to lose enormous amounts of

40:34

human wealth because of climate change. That's

40:37

bad. It doesn't have to be an existential risk,

40:39

to be bad. I

40:41

think we can recognize that it's bad and we should be

40:44

very, very motivated to do what we can do to prevent

40:46

it. But there

40:48

is, like I said, there is still hope

40:51

for stabilizing things in the future, not

40:53

even counting clever scientific

40:56

possible solutions. Can we terraform

40:59

our own planet? Can we do

41:01

things to the atmosphere that will

41:04

undo the effects of dumping fossil fuel emissions into it forever? I

41:08

don't know. I know people get very emotional talking

41:10

about these things, but I think that

41:14

medium term, things are going to get

41:16

noticeably worse for the climate

41:18

than they are right now. Long

41:21

term, we will survive possibly at

41:23

a different equilibrium and

41:26

our job is to make the

41:28

transition, like give us a soft

41:30

landing, right, to make the whole

41:32

thing as painless as it possibly can be. So

41:37

good. That's all I have to say about

41:39

the environmental climate change and things like that.

41:41

Nothing profound. I know that. I

41:43

wanted to get it out of the way first because on the one hand

41:45

it's super important. On the other hand,

41:48

you've heard this message before. So there's

41:50

my version of it. Let's

41:52

move on to biology because

41:55

here's where I think we

41:57

should, as a society, be paying more attention than we have

42:03

to what advances

42:05

in our

42:08

knowledge of biology and our

42:10

technical abilities to manipulate biology

42:13

are going to do, okay? Going

42:15

to do for what it means to be

42:17

a human being. And we've

42:19

talked a little bit about this set of

42:21

things in the podcast, but maybe not as

42:23

much as we could have. So I'll just

42:25

mention a few things to keep in mind

42:27

when we ask ourselves these questions. One

42:31

is longevity. We

42:33

did have an early podcast with Coleen Murphy,

42:35

who is one of the world's experts on

42:37

this, and she has subsequently come out with

42:39

a book that you can buy on longevity.

42:42

And I think that there are mixed messages.

42:44

On the one hand, when you look

42:47

at little tiny organisms, not just microorganisms,

42:49

but little tiny worms and things like

42:51

that, there are remarkable

42:54

things you can do by playing

42:56

with the DNA of these little

42:58

organisms. You can make them live

43:00

much longer than they ordinarily would.

43:02

But those particular kinds of changes

43:05

don't obviously scale up to mammals

43:07

or to human beings. And

43:11

it's an interesting situation because there's

43:13

no rule out there in the

43:15

laws of nature that

43:17

says you can't stop or reverse aging. It's

43:20

an engineering problem, as we theoretical physicists

43:22

like to say, but it's

43:24

a very, very hard engineering problem.

43:27

So for example, if

43:29

you track average lifespan

43:32

of civilizations or societies as

43:34

they become more technologically

43:37

developed, the average lifespan

43:39

tends to go up. So

43:41

you tend to think we're living longer and

43:43

longer, and that's a trend that will continue.

43:46

But if you dig into the data a little bit, the

43:49

maximum lifespan of human beings hasn't

43:51

actually changed that much, whether you

43:53

think about it as 120 years

43:55

or something like that. The

44:00

people who live the longest have

44:03

been living that long for a long time, regardless

44:05

of what kind of society they're in. The

44:08

reason why our average life expectancy

44:10

is going up is because people

44:12

aren't dying young nearly as much.

44:15

We are living, on average, closer and

44:18

closer to that upper

44:20

limit. But changes in diet

44:22

and exercise and medical knowledge

44:24

haven't really increased the sort

44:27

of envelope, the cutoff for

44:29

how long human beings can

44:31

live. So in

44:34

the spirit of taking changes

44:36

that are going on and imagining that they

44:39

are indicating that we are heading towards some

44:41

kind of major transition, I'm going

44:43

to boldly predict that we are not headed

44:46

toward a major transition in longevity. As

44:49

I said, we could at some point do

44:51

that, but I don't think that we're currently

44:53

on that trajectory in the medium or short

44:55

term to do that. I'm

44:58

hoping that we will live healthier lives and more

45:00

of us will live to be 100 or whatever,

45:03

but I don't foresee a lot of people living to

45:05

be 200 in, let's say, the

45:07

next 100 years. I

45:10

could be wrong, of course, very happy to be

45:12

wrong about that, but I don't think that's where

45:14

I'm going to bet my money for a major

45:16

transition. There are other places to put your money

45:19

for major transitions. One, of course, is

45:21

gene editing. We did have a discussion

45:23

of gene editing with

45:26

Fyodor Urnov, one of the pioneers

45:28

of this. There's

45:30

sort of a hype cycle in

45:32

these kinds of discussions. When

45:34

CRISPR first came out, and for

45:37

that matter, when we first mapped the

45:39

human genome, people started having panicked discussions.

45:41

Oh, actually, yeah, we talked to Alta

45:43

Charo way back, very, very early discussion

45:47

in the history of the podcast. We talked about the

45:49

legal side of bioethics and gene

45:51

editing. So people had these

45:53

discussions about, you know, should we, are

45:56

we worried that people are going to make

45:58

designer babies and are going to sort of

46:00

be mucking with our own human genome, and that's going to lead to some dramatic change where everyone is going to be, I don't know,

46:06

blonde and blue eyed or something like that

46:09

or there will be like all boys

46:11

and no girls or vice versa there's

46:13

a lot of reasons to worry and

46:16

some of those worries are just

46:18

kind of stodgy conservatism

46:21

right the human race has always

46:23

been like this therefore we should

46:25

not mess with it I

46:27

don't buy that kind of at all you know I

46:30

think that if we gather the

46:32

ability to look into

46:34

the genetic information

46:37

inside a zygote

46:40

or embryo and realize that it's

46:42

headed towards some terrible disease that

46:45

we're imagining we have the ability

46:47

to prevent then I think we

46:49

should go ahead and prevent it

46:52

but more than that it

46:54

doesn't matter what I think what

46:57

I think is that is going to happen

47:00

so you can talk all you want about responsible

47:03

limitations on what scientists

47:05

can do and what

47:07

doctors can do whether

47:10

or not couples can choose different features

47:12

of their babies and so forth I

47:15

don't think that there's much prospect

47:17

for any of those hoped

47:20

for restrictions working

47:23

because we don't have a world government that can

47:25

make those restrictions if nothing else right if one

47:27

country says we're not going to do it another

47:29

country's going to do it and

47:31

then the first country's going to say well wait a minute they're

47:34

doing it we better start doing it also so

47:37

I think we have to face up to

47:39

the designer babies I think that they are

47:42

coming I don't think that that can be

47:44

stopped and it's

47:46

not just designer babies I think that this sort

47:48

of panic over you know worrying

47:50

that people are going to choose

47:52

a certain kind of child and we'll

47:54

all become homogeneous and boring etc has

47:57

again led us to not think very

47:59

carefully or systematically about what the possibilities

48:01

are. I think we should have

48:03

more discussion of what

48:06

the world could be like and

48:09

how the world could be good if,

48:12

when parents decided to have a

48:15

baby, they could also choose its

48:17

characteristics. Again, I'm not

48:19

saying that this is what should happen. I'm just saying

48:21

I think it's what will happen. I don't think that

48:24

we have that much choice. Because

48:26

the incentive structure does not give me an

48:28

easy route to imagine that the whole world

48:31

is going to prevent this. And as Fyodor

48:33

Urnov said, it's not going to be hard.

48:35

You're not going to need a multimillion dollar

48:37

laboratory to do this. You'll be able to

48:40

do this in your garage. So

48:42

I think the responsible

48:44

thing to do is to think carefully about

48:46

what we want those changes

48:49

to be like. Even

48:51

if we can't stop it, maybe we

48:53

can stop abuses of it in

48:55

some effective way. I don't know, but I

48:57

do think it's going to be a huge deal. And I think we

48:59

should be talking about it more. A

49:02

related issue, which I think is going to

49:04

be a huge deal, is synthetic biology. And

49:06

we really haven't talked about that very much.

49:08

It's appeared a couple of times in passing.

49:10

But synthetic biology is not just mucking with

49:13

the human genome or the genome of a

49:15

sheep or anything like that, but mostly

49:18

for tiny microorganisms

49:20

designing new organisms,

49:23

synthetic biology. So going in

49:25

there and making a genetic

49:28

code that creates the kind of organism that

49:30

you want. There's

49:33

related kinds of biological

49:35

exploration. Since I'm not a biologist, I just

49:37

mix them all together in my mind, even

49:39

though the experts think these are very different.

49:42

But DNA computers and

49:45

DNA robots. DNA

49:48

is obviously very useful to

49:50

us. It carries our genetic information, et

49:53

cetera. But there's a reason why that

49:55

particular molecule is the one that works

49:57

to carry information in living things: it's

50:00

because it's extremely flexible. Forgetting

50:03

about the actual use of DNA

50:05

as the carrier of genetic information,

50:08

DNA is a great way to build

50:10

things, microscopic, very tiny

50:12

scale objects that do things you

50:14

want them to do. You

50:17

can very easily imagine building

50:20

little DNA robots that

50:22

will go into a person's body

50:24

and remove their allergies, or

50:27

prevent them from getting cancer, or

50:29

solve other health problems

50:31

that could pop up. Synthetic

50:34

biology could design organisms that

50:36

could, again, help us with

50:38

our health problems, but also

50:40

maybe help eat the carbon

50:42

dioxide excess that is in

50:44

the atmosphere, or dramatically change

50:46

how we do food production,

50:49

both good old agriculture making it

50:51

more effective, and also synthetic meats,

50:53

other kinds of food sources and

50:55

things like that. These are

50:58

going to be huge deals. If you're

51:00

talking about a technological singularity

51:02

coming that is going to change human

51:04

life, I think that editing

51:06

our genes and synthesizing new kinds of

51:09

organisms had better be right there near

51:11

the top of your list. We

51:14

could imagine, we talked to Leah Goentoro here on

51:16

the podcast, a

51:19

Caltech scientist who has, not human

51:21

beings, but for

51:24

much tinier organisms, has regrown

51:26

limbs. We still

51:28

are in this world where a lot

51:30

of people could use these dramatic improvements

51:34

in our ability to

51:36

control and shape biological

51:38

function in ways that

51:40

we could help them, amputees or people

51:42

who are suffering in various

51:44

ways. This is really going to change what

51:46

it is like to be a human being.

51:49

I don't think that

51:51

we will be uploading ourselves into The Matrix. The

51:55

Matrix movie is going to appear a couple times in

51:57

this podcast, but I recently...

51:59

read of course there was a little panic on

52:02

Twitter because people realized that their

52:04

first-year college students; professors were panicking because

52:06

their first-year students had not seen the

52:08

Matrix. They didn't know what it

52:11

was about and the Matrix

52:13

for people of a certain age was a very

52:15

formative movie and so I encourage you to go

52:17

see it if you haven't seen it already but

52:19

you've heard the basic idea that people

52:22

are uploaded and into this computer

52:25

simulation and they think that it's real life

52:27

that's the Matrix right so there's both the

52:29

real physical world and then there's the Matrix

52:32

the simulation they're in and it's all controlled

52:34

by evil people and robots and things like

52:36

that so it's a fascinating philosophy set of

52:38

questions as well as a good movie

52:42

for various reasons that is not the

52:45

change in human biology that

52:47

I'm actually thinking about. I'm

52:50

not worried about, nor am

52:52

I gleefully anticipating, that

52:55

people will upload their consciousnesses

52:57

into computers and the

52:59

reason why is because I'm

53:01

not a non-physicalist about

53:03

consciousness I think that you

53:06

can make conscious creatures out of

53:08

silicon and chips just as well

53:10

as you can out of neurons

53:12

and blood and tissue but

53:15

they will be profoundly different.

53:18

If you take the information that is in your

53:20

brain and encode it in

53:22

some computer chip you have

53:24

removed its connection to your body and

53:27

what we think about as human

53:29

beings is inextricably

53:31

intertwined with our bodies. We

53:34

are embodied cognitions as we

53:36

have talked about many times

53:38

on the podcast Andy Clark,

53:40

Lisa Aziz-Zadeh, and so forth:

53:43

our bodies are what make us human just as much

53:46

as our brains. We get hungry,

53:48

we get thirsty, we get tired

53:51

eventually we die. There's all sorts

53:53

of things. Antonio Damasio, another person we

53:55

talked to, talked about homeostasis

53:57

and feelings that we have fundamentally

54:00

physiological things that

54:03

profoundly shape who

54:06

we are mentally. And

54:08

so it's not that we can't upload the information

54:10

into a computer, it's just that it wouldn't be

54:12

a person anymore. It might be something, but it'll

54:15

be different and that's okay. It's okay for

54:17

it to be different. So there might very

54:19

well be creature-like things

54:22

that we recognize as conscious who live

54:24

in computers, but they won't be the

54:26

same as human beings.

54:28

They'll be something different and that's okay.

54:31

So I'm not suggesting

54:33

that that's the big phase

54:35

transition that we are going to see in

54:37

the future, but there

54:40

will be brain-computer interfaces. This

54:43

has been a hot topic lately in the news. Neuralink

54:46

is Elon Musk's company, but there's

54:48

actually lots of other companies that

54:50

are further along in this search

54:52

for ways to make human brains

54:54

interface directly with computers.

54:57

And in fact, that's part of a

55:00

broader thing, making human bodies interface

55:02

directly with machines. These

55:04

are cyborgs or some version of that,

55:06

depending on how science fiction-y you want

55:09

to sound. This is another

55:11

technology that I absolutely think is coming and is

55:13

going to be important. This is going to be

55:15

a big deal. Think of it this way. Cell

55:19

phones, smartphones or whatever,

55:21

even personal computers, whatever you want

55:24

to call mobile information technologies connected

55:26

to the internet. These

55:29

have already had a very big

55:31

impact on human life. They've had

55:33

an impact because poor farmers in

55:35

Africa can keep track of weather

55:37

conditions in ways they never could

55:39

before because the cell phones are

55:41

pretty cheap. But

55:44

also they're changing us socially.

55:46

There's been enough data by

55:48

now that I

55:51

think it's accurate to conclude that cell

55:53

phones have had a number of negative

55:55

effects on the lives of young people.

55:58

And of course, it's not the technology itself that

56:00

does it, but the uses of the technology,

56:03

Whether it's because they don't go out

56:05

anymore and they're just texting, or whether

56:07

they're seeing unrealistic depictions of beauty or

56:09

whatever. I don't know. And this

56:12

is the conclusion that

56:14

I was always reluctant to buy into

56:16

because it sounds a bit alarmist and

56:18

Luddite, et cetera. But again, I think

56:21

the data show that cell phones have

56:23

made young people, on average, less

56:25

happy than they used to be,

56:27

and that might be; that's

56:29

not a necessary connection, obviously, right? This

56:32

is a fixable thing. We can.

56:34

We are not yet at equilibrium,

56:36

right? We're in a in a

56:38

moment of change of dynamism. We

56:40

haven't yet figured out how to

56:42

do these things correctly, how to use

56:44

these technologies in the best possible

56:46

way. But my point is: whatever

56:49

you think the cell phone has

56:51

done, I think it is easily imaginable

56:53

that brain-computer interfaces are going

56:56

to be a hundred times more

56:58

influential than that.

57:01

We are embodied. Remember when we

57:03

talked with Michael Muthukrishna

57:05

about, you know, various

57:07

things? One thing

57:10

was the fact that human beings

57:12

tend to offload some of their

57:14

cognition, right? Chimpanzees think for

57:16

themselves more than young human beings

57:18

do because human beings have been

57:20

trained to trust other human beings.

57:22

Because we're not just our brains

57:24

and our bodies, we can, right? We

57:26

can learn, We can teach, We

57:28

can store information and then go

57:31

access it. So we have not

57:33

only cell phones, but we have

57:35

watches. and we have calculators and

57:37

computers, and things like that. We

57:39

have writing and books. With all this

57:41

stuff, our cognition, our

57:44

thinking, happens in

57:46

ways that extend beyond our

57:48

brains and even our bodies.

57:50

Again, that's gonna

57:53

explode. To whatever extent we're doing

57:55

that. Now we're going to do it

57:57

much, much more in the future, for

57:59

better and for worse. You know, it's

58:01

not all good, not all bad. As

58:04

sort of

58:06

slightly extrapolative or

58:08

speculative as I'm trying to be

58:11

here, I'm reluctant to

58:13

predict exactly what those changes

58:15

are going to be like. But you

58:17

know, look, you've all seen quiz shows:

58:20

Jeopardy, Who Wants to

58:22

Be a Millionaire, where you're asking people

58:24

about various trivia questions and things

58:26

like that. You could imagine that

58:28

goes away, right? Once

58:31

everyone has instant access to the internet

58:33

you could be googling something

58:35

right away in your brain without

58:37

touching anything. Okay and it's much more

58:39

profound than that. Of course you can

58:41

call up all sorts of pieces of

58:44

information, not just Wikipedia, right?

58:46

You can record things. Maybe,

58:48

rather than a camera

58:50

in your cell phone, you

58:52

just blink and now you have a

58:54

recorded image of whatever you're looking at

58:57

right now and you can store it

58:59

and play it back. make videos, you

59:01

know, record conversations. How does this change

59:03

learning? How does this change

59:06

performance in all sorts of fields,

59:08

when we have much more immediate access

59:10

to all sorts of information. Of

59:14

course there's much more down

59:16

to earth and obvious impacts

59:18

of these technologies because again,

59:20

some people are you know,

59:22

paraplegic, or have locked-in

59:24

syndrome of various kinds, where

59:26

brain computer interfaces can help

59:28

them lead much more

59:30

rich, interactive lives with everyone

59:32

else. So I am

59:35

reluctant to predict what will happen, but

59:37

again, there's no barrier to these technologies coming

59:39

in. They are coming; there are startups doing

59:42

them right now, so we should be thinking

59:44

about them. We can't just say, oh, that would

59:46

be terrible. I don't like it. I

59:48

want to live like we've lived for the last

59:50

ten thousand years. I think we have to

59:53

take seriously how those technologies are going

59:55

to change things. It's gonna happen whether we

59:57

like it or not. Okay.

59:59

So I know that leaked into the

1:00:02

sort of computer tech kind of thing,

1:00:04

but basically that was my biology discussion.

1:00:07

I think that there are

1:00:09

arguably profound changes in biology

1:00:12

that we have so far done not a

1:00:14

great job of taking seriously in terms of

1:00:16

how they will shape our

1:00:18

notion of what it means to be a human

1:00:20

being over the next hundred years. But

1:00:24

now, the moment we're all waiting for here, what about

1:00:26

AI? Or even

1:00:28

more broadly, what about computers and

1:00:30

information technology of all sorts?

1:00:32

How will that... That was

1:00:34

the original motivation of Vinge and Kurzweil, etc.

1:00:37

That AI and

1:00:40

the idea of AGI, artificial

1:00:42

general intelligence, will be a

1:00:44

complete game changer. I

1:00:48

think that's just a little bit wrong. I'm sorry,

1:00:50

I still think it's a little bit wrong. I

1:00:52

said this in my AI Solo podcast, and

1:00:55

some people, including, by the way,

1:00:57

all of the

1:00:59

AIs out there, like GPT-4,

1:01:02

agreed with me, while many other people

1:01:04

disagreed with me profoundly when I said

1:01:06

that AI... It's

1:01:09

crucially important to recognize that

1:01:11

artificial intelligences, as we currently

1:01:13

have them implemented, have a

1:01:15

very different way of thinking

1:01:18

than human beings do. And what that

1:01:20

means is, when you

1:01:22

toss around ideas, like general

1:01:25

intelligence, you're kind

1:01:27

of being hopelessly anthropomorphic.

1:01:30

You're looking at what AI does.

1:01:32

If Dan Dennett were here, he

1:01:34

would explain that you have fallen

1:01:36

victim to an overzealous

1:01:39

implementation of the intentional stance. By

1:01:42

the intentional stance, he means attributing

1:01:46

intentionality and agency

1:01:48

to things that behave in a

1:01:51

certain way that we are trained

1:01:53

to recognize as intentional and agential,

1:01:55

conscious, cognitive, thinking. In our experience,

1:02:00

we meet human beings and

1:02:02

other animals and things like that and we know the

1:02:04

difference between a cat and a rock and one is

1:02:06

thinking and one is not. And

1:02:09

so there are characteristics that we associate

1:02:11

with thinking well and being intelligent and

1:02:13

it's a rough correlation and kind of

1:02:16

all makes sense to us and we

1:02:18

can argue over the worth of IQ

1:02:20

tests or standardized tests or whatever but

1:02:22

roughly speaking some people seem smarter than

1:02:25

others. So when we

1:02:27

come across these programs which are currently

1:02:29

the leading ones are large language models

1:02:31

but there's no restriction that that has

1:02:33

to be the kind of technology used

1:02:35

going forward. The point is there's a

1:02:38

computer that is trained on

1:02:40

human text.
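To make "trained on human text" slightly more concrete, here is the standard pretraining objective in rough form; this is a generic sketch of how current large language models are trained, not a claim about any particular company's system. The model's parameters theta are tuned to minimize the cross-entropy of predicting each token given the tokens before it:

    \mathcal{L}(\theta) = -\sum_{t} \log p_\theta\!\left(x_t \mid x_{<t}\right)

The human-sounding fluency and the strange failure modes discussed below both come out of optimizing that one next-token objective over an enormous pile of human-written text.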

1:02:42

It is trained to sound human to

1:02:45

the greatest extent it possibly can and

1:02:48

it succeeds. That's the thing that has happened in the

1:02:50

last couple years that these large

1:02:52

language model algorithms really

1:02:54

really can sound very very human

1:02:56

and so since all of our

1:02:59

upbringing has taught us to associate

1:03:01

this kind of speech

1:03:03

even if it's just text with

1:03:06

intelligence we go oh my goodness

1:03:08

these are becoming intelligent and

1:03:11

if it's becoming intelligent it's a whole

1:03:13

new kind of intelligent then it can

1:03:15

become more intelligent than us and then

1:03:18

the worry is that if it's more intelligent than

1:03:20

us it will either be a superhero

1:03:22

or a supervillain. So

1:03:25

our very pressing duty is to guide

1:03:27

AI toward

1:03:31

becoming a superhero rather than a supervillain

1:03:34

and I don't think it's going to be either

1:03:36

one not in the current way that we're doing

1:03:38

AI anyway again in principle

1:03:41

one could imagine things along those lines

1:03:43

but I don't think that's where we're

1:03:45

going right now. So

1:03:47

I know that people are worried

1:03:49

about artificial super intelligence with

1:03:52

the idea that once the computer becomes smarter

1:03:54

than us then we can't control

1:03:56

it anymore because if we tried to control

1:03:58

it it would resist And

1:04:00

it would trick us because it's smarter than we

1:04:03

are. What can we do in the face of

1:04:05

such overwhelming intelligence? And again,

1:04:07

I think this is hopelessly anthropomorphic in

1:04:09

the sense that it is

1:04:11

attributing not only the ability to sound

1:04:13

human to these models, but

1:04:16

the kinds of motivations and

1:04:18

desires and values that human

1:04:20

beings have. The

1:04:23

origin of our motivations and

1:04:25

desires and values is just

1:04:27

completely disconnected from the

1:04:29

way that these AI programs work.

1:04:31

It is a category error. It

1:04:34

is thinking about them incorrectly. They

1:04:38

might very well develop very,

1:04:40

very good reasoning skills of

1:04:43

various sorts. After all, my cell phone

1:04:45

is much better at multiplication than

1:04:47

I am. I do not attribute

1:04:49

general intelligence to it. My point is

1:04:52

that even if they become better at

1:04:54

abstract cognitive tasks, they won't be

1:04:57

just like humans except smarter. That's

1:04:59

not what they're going to be.

1:05:03

So there are different kinds of things, and I

1:05:05

think that we have to be clear-eyed about what

1:05:07

their effects would be. None of

1:05:09

this is to say that

1:05:11

the effects will not be enormous. And

1:05:15

so I want to emphasize that. That's what I'm here to

1:05:17

do. I'm not worried about some

1:05:20

kind of artificial intelligence becoming a dictator.

1:05:23

I'm not worried about Skynet. I'm

1:05:25

not worried about existential risks. I'm worried

1:05:27

about the real influence that AI is

1:05:29

going to have. I'm not worried, but

1:05:32

thinking about the real ways

1:05:34

in which real AIs

1:05:36

are going to change how we live. I

1:05:39

think those changes could be

1:05:41

enormously big, even if the

1:05:43

way to think about those changes is not as super

1:05:45

intelligent. I hope that that

1:05:47

distinction is a little bit clearer. Look,

1:05:52

AI is going to do many things. Many

1:05:55

things that are now the job of

1:05:57

human beings are going to be done

1:05:59

by AI. It's always

1:06:01

amusing to take the current

1:06:03

generation of AIs and see

1:06:06

them making mistakes, right? Because they make

1:06:08

mistakes. Of course they do. The

1:06:11

mistakes they make are mildly amusing, but it's

1:06:13

kind of not the point. It's

1:06:16

only amusing when they make

1:06:18

mistakes because they are clearly

1:06:20

super-duper good at not making

1:06:22

mistakes. That's sounding actually really

1:06:24

human, right? That's much more

1:06:26

notable to me than the fact that they

1:06:29

still do continue to make mistakes. So

1:06:31

things like writing computer

1:06:34

programs, writing books, writing

1:06:36

articles, designing buildings or

1:06:39

inventions or chemistry

1:06:42

processes, creating things, creating

1:06:44

art, creating life,

1:06:48

living spaces or whatever, doing

1:06:50

architecture. All of these things

1:06:52

in my mind, it is very natural to

1:06:54

imagine that AIs are going to play

1:06:56

a huge role doing that. Either literally

1:06:59

doing it or helping human beings

1:07:01

do it. Just

1:07:04

to mention one very obvious thing, AI

1:07:06

will be able to help human

1:07:09

beings learn things that they didn't

1:07:11

know, right? Not

1:07:13

in any sort of simple-minded, let's just

1:07:15

replace all professors with AIs or anything like that,

1:07:17

but why would you want to do that?

1:07:19

That's not the model you would choose. You

1:07:22

personally and individually can learn things

1:07:24

with the help of AIs in

1:07:27

ways that, once we clean up

1:07:29

the obvious mistakes that they keep

1:07:31

making, which is an ongoing

1:07:33

project that might improve

1:07:35

very rapidly for all I know; it

1:07:37

will be enormously helpful. Think about

1:07:40

that. I don't know how to...

1:07:44

Well, again, it's slightly too

1:07:46

easy to dwell on the mistakes because

1:07:48

there's a thing that's been going around

1:07:50

the Internet recently of a cookbook That

1:07:53

comes... I don't know, you buy some

1:07:55

oven or something like that and this

1:07:57

cookbook comes along with it and it's

1:08:00

clearly AI-generated and is just

1:08:02

full of nonsense and we absolutely need

1:08:04

to be worried that

1:08:06

some AI-produced thing is gonna

1:08:08

kill people because it's not actually thinking

1:08:11

in the same way we do, and

1:08:13

it produces nonsense and someone follows it

1:08:15

a little bit too literally. I'm very

1:08:17

much in favor of worrying about that.

1:08:20

Okay, but it will also more often

1:08:22

than not help you learn how to

1:08:24

cook or how to speak French, or

1:08:26

how to ski or whatever. Or how

1:08:29

to do theoretical physics. There's no reason

1:08:31

to think that AI won't be

1:08:33

enormously helpful in that. It will be enormously

1:08:35

helpful in accelerating the rate of

1:08:38

other kinds of innovations. So even

1:08:40

if the traditional singularity spiel (which

1:08:42

says AI becomes super smart

1:08:45

and it designs other AIs

1:08:47

that become even smarter)

1:08:49

is not the right way of

1:08:52

thinking about it because the word

1:08:54

'smart' is being misused in that

1:08:57

context, AI will absolutely help

1:08:59

accelerate the rate of innovation. You

1:09:02

know, when you're a chemist or a

1:09:04

biologist or whatever, very often the systems

1:09:06

you're thinking about are just so complicated

1:09:09

that you have to take stabs

1:09:11

in the dark, or some educated guesses,

1:09:13

and then run trials, right? Drug trials.

1:09:15

This is something that we do all

1:09:17

the time. If it's possible to simulate

1:09:20

those kinds of trials, you could in

1:09:22

principle enormously speed up the process. All

1:09:26

of the things in this discussion we

1:09:28

just had, about brain-computer

1:09:30

interfaces, genetic engineering, synthetic biology:

1:09:33

the rate of progress on

1:09:35

those fronts can very possibly

1:09:38

be enormously improved, sped

1:09:40

up, using help from

1:09:43

AI. Okay, so that is

1:09:45

a kind of bootstrapping,

1:09:47

positive feedback, acceleration of progress.

1:09:50

That is characteristic, that is kind

1:09:52

of singularity behavior, whether or

1:09:55

not you believe in

1:09:57

AGI in the traditional sense, and there is

1:10:00

reason to be skeptical about that kind

1:10:02

of thing. So what is that going

1:10:04

to mean? How will the world be

1:10:06

different when AI gets good at

1:10:08

these things? Even right now, if

1:10:10

you're a basketball fan like I am and you look

1:10:12

up a little recap of

1:10:15

last night's games, chances are

1:10:17

pretty good that that recap was written by an

1:10:19

AI and sometimes they're terrible. There's

1:10:23

still the ability to find real

1:10:25

human beings, so most of what

1:10:27

I read is by human beings,

1:10:29

but the simple-minded daily story from

1:10:31

Associated Press or whatever is often

1:10:33

going to be artificially created. So

1:10:36

how far is that going to go? So I asked

1:10:38

this for my own thought

1:10:40

experiment purposes. I wondered, could

1:10:43

AI replace me in

1:10:45

the sense of writing my books? I've

1:10:48

written several books. I mean, maybe you could do

1:10:50

the podcast too for that matter, but

1:10:53

could AI do a good job

1:10:55

of writing books in the mode

1:10:58

or in the style of Sean Carroll?

1:11:01

So well that I don't need to

1:11:03

write them anymore, right? That

1:11:06

is a crucially

1:11:08

important, difficult, interesting,

1:11:12

very near-term question, I

1:11:15

think. That is not a silly question. I

1:11:18

did look. I looked on Amazon. Are

1:11:21

there any books currently being sold that purport

1:11:23

to be by me but are actually written

1:11:25

by AIs? I couldn't find any. I

1:11:29

guess that's good. I did find

1:11:31

books that are written by AIs

1:11:33

that summarize my books. So

1:11:36

it's very possible that there are books that are

1:11:38

trying to be written by me that just don't

1:11:40

attach my name to them, right? That are sort

1:11:42

of a little more subtle than that. But if

1:11:45

you search my name on Amazon, you find my

1:11:47

books, you find books by former Mindscape guest Sean

1:11:49

B. Carroll, the biologist, who's written a lot of

1:11:51

great books, but you also find books with titles

1:11:53

like Summary of the Big Picture. And

1:11:56

sometimes these are written by human beings, but

1:11:58

sometimes, very, very clearly, they are written

1:12:00

by AIs, and you can tell.

1:12:02

One way of telling is: just click

1:12:05

on the amazon reviews and a review

1:12:07

says, ah, this is clearly computer generated

1:12:09

and it kind of sucks. But again, the

1:12:12

day is young, right? You know, the

1:12:14

progress is still happening, so could

1:12:16

you feed a

1:12:19

model, a large language model or some

1:12:21

improvement thereof, everything I've ever written,

1:12:23

and have it write a new book? Maybe

1:12:25

give it a topic, right? Maybe say,

1:12:28

write a book about... And

1:12:30

so Katie Mack, former Mindscape guest wrote a

1:12:33

great book about the ways the universe can end, and

1:12:35

I've never written a book about that so

1:12:37

you could ask the AI: what would a

1:12:39

book by Sean Carroll about the ways the

1:12:41

universe could end be like? And

1:12:46

it could write a book, you could

1:12:48

really do it right now and it would suck.

1:12:50

It would not be very very good at all.

1:12:52

But imagine that it gets

1:12:54

better. So,

1:12:57

again, I think that this is

1:12:59

going to depend on technologies

1:13:01

we don't quite have yet.

1:13:03

There is, beyond the sort

1:13:05

of obvious actual mistakes that

1:13:08

AIs are still making

1:13:10

right now, there is kind

1:13:12

of this difference between interpolation

1:13:14

and extrapolation, right? AIs

1:13:17

are good at seeing everything that's been

1:13:19

written and sort of going

1:13:21

in between. In things like

1:13:23

art, this is very, very provocative, because

1:13:26

you can go in between two different kinds

1:13:28

of art to get something that is

1:13:30

kind of new. But when it comes

1:13:32

to sentences, that's less true, right?

1:13:34

If you have different sentences and you

1:13:36

sort of go in between them, which

1:13:38

is again not the only thing AI

1:13:41

can do but a natural strength of

1:13:43

large language models, you get sort of

1:13:45

something less interesting, something not

1:13:47

as provocative and creative as what

1:13:50

you're looking for in a book.

1:13:52

Extrapolating, to say, well, you know,

1:13:54

here's a sentence this person uses, and

1:13:57

here's what the next sentence

1:13:59

in a completely different area by

1:14:01

the same person should look like

1:14:03

this, that's much harder. It's harder to

1:14:06

create that, given

1:14:08

the current ways that

1:14:10

large language models and other AIs are

1:14:13

constructed, because they're constructed to sound as

1:14:15

human as possible; they're predicting what comes next,

1:14:17

usually, right? And the fun part in

1:14:19

a good book is to have what

1:14:22

comes next not be that predictable. So

1:14:25

that's a clear tension between what large language

1:14:27

models right now are good at and what

1:14:29

you want. But I don't

1:14:31

think that's a tension that is

1:14:33

impossible to resolve. Here's one way to

1:14:35

do it. Throw in some random

1:14:37

numbers. Have these

1:14:40

like imagine that we have enough computing power,

1:14:42

just write a thousand books. And

1:14:45

then search through and find the one that

1:14:47

is most interesting and creative, right? That's something

1:14:49

you could imagine doing and that could extrapolate

1:14:52

in very interesting ways.
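As a hedged illustration of the "random numbers plus search" idea, here is a minimal best-of-N sketch in Python. It is not a description of any real product; sample_text and interestingness_score are toy stand-ins for a text generator with a temperature knob and for whatever judgment picks the winner.

    import random

    def sample_text(words, length, temperature, rng):
        # Toy stand-in for a generator: low temperature strongly favors the
        # "likeliest" words, high temperature picks more uniformly at random.
        weights = [(rank + 1) ** (-1.0 / max(temperature, 1e-6))
                   for rank in range(len(words))]
        return " ".join(rng.choices(words, weights=weights, k=length))

    def interestingness_score(text):
        # Toy stand-in for the judge: reward drafts with more distinct words.
        return len(set(text.split()))

    def best_of_n(words, n=1000, length=12, temperature=1.5, seed=0):
        """Generate n noisy drafts and keep the one the judge scores highest."""
        rng = random.Random(seed)
        drafts = [sample_text(words, length, temperature, rng) for _ in range(n)]
        return max(drafts, key=interestingness_score)

The generation step is cheap; the judging step is where all the difficulty hides, which is exactly the Library of Babel problem that comes up a little later in this discussion.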

1:14:55

Now, footnote: I should have said this earlier in the podcast,

1:14:57

but one of the

1:14:59

challenges back up there when we were talking about the

1:15:01

environment, you know, one of the things you might have

1:15:03

thought back if you

1:15:05

were thinking 20 years ago about climate

1:15:08

change and fuel use and so forth is, well,

1:15:11

maybe we'll reach a saturation point

1:15:13

where we have a constant amount

1:15:15

of fuel we need to burn, right?

1:15:17

You know, maybe once everyone is flying and

1:15:19

everyone has their car, we're not going to

1:15:22

need to continue to increase the

1:15:24

amount of fossil fuel consumption. Recent

1:15:27

years have given a lie to

1:15:29

that anticipation, even if anyone had

1:15:32

anticipated that, for the simple reason

1:15:34

that we continually invent new ways

1:15:36

to burn fuel, to

1:15:39

use energy and computing

1:15:41

is it right now. Somewhere

1:15:45

I read that what

1:15:48

we call the cloud, right? Like

1:15:50

when you store your files, your

1:15:52

photos or whatever in the cloud. So that's,

1:15:54

you know, the cloud is not very fluffy

1:15:57

and intangible. It's a set of

1:15:59

physical servers. sitting in various rooms

1:16:01

in different places. So

1:16:03

the energy consumption, you

1:16:06

see the, I didn't exactly write this

1:16:08

down when I read it, but either

1:16:10

the energy consumption or the fossil fuel

1:16:13

emission from just keeping the cloud going

1:16:15

is larger than that of the entire

1:16:17

transportation industry. We're putting

1:16:19

an enormous amount of energy

1:16:22

into running computers of

1:16:24

various sorts and large

1:16:26

language models are some of the

1:16:28

worst offenders of this. It's

1:16:30

an enormous computational problem and

1:16:33

we would like to do more computation and

1:16:35

that's gonna take more energy. That's

1:16:38

a problem. If we think that we're

1:16:41

just at the beginning of the AI revolution

1:16:43

and other various kinds of ways in which

1:16:46

computers are going to be used, just

1:16:48

finding the energy to run them is going to

1:16:50

be difficult. I just did the thought experiment of

1:16:52

imagine writing a thousand versions of a new book

1:16:55

by me and then searching through and looking for

1:16:57

the good one, that's gonna

1:16:59

cost a lot if that becomes common

1:17:01

to do. Now there's

1:17:03

another problem, which is that at

1:17:06

some point you're in Borges' Library

1:17:08

of Babel. Remember Jorge

1:17:11

Luis Borges wrote this story,

1:17:13

The Library of Babel, which imagine that

1:17:16

there's a library that contained every

1:17:18

book you could possibly write. And

1:17:20

the problem there is you can't find the book,

1:17:23

right? Yes, it's true that War

1:17:26

and Peace by Tolstoy is there

1:17:28

somewhere, but there's many, many, many, many

1:17:30

other books that are exactly like War and Peace,

1:17:32

but a few letters are different. So

1:17:34

at some point that's going to be the

1:17:37

problem that you face if you think you

1:17:39

can create new knowledge by

1:17:41

throwing some random numbers at an AI.
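To put a rough number on that, using the parameters from Borges' story (25 symbols, and 410 pages of 40 lines with 80 characters each), the library holds

    25^{410 \times 40 \times 80} = 25^{1{,}312{,}000} \approx 10^{1{,}834{,}097}

distinct volumes, so "generate everything and search" only works if the searching, meaning the judgment, is doing essentially all of the work.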

1:17:44

Finding what the knowledge is versus what

1:17:46

the nonsense is, is going to eventually

1:17:48

require some judgment of some

1:17:51

kind. And so all

1:17:53

of which is to say, maybe I

1:17:56

can be replaced by AI's writing my

1:17:58

books, but there are obstacles. to

1:18:00

it happening that I don't think make it

1:18:03

imminent. I think

1:18:05

a much bigger problem than that is

1:18:07

the more sort of news social

1:18:11

media kind of effects and

1:18:13

here I'm not saying anything

1:18:15

at all different than what

1:18:17

many other people have said.

1:18:20

It's already happening right? If

1:18:22

you go on social media or just

1:18:24

go on the internet more broadly it's

1:18:27

becoming harder and harder to tell number

1:18:29

one what was written by a human

1:18:31

being versus what was AI generated. Number

1:18:34

two whether images are actually photographs

1:18:37

of real things that happened or

1:18:39

were AI generated and even video

1:18:41

and voice and things like that.

1:18:44

It's very easy now to make

1:18:46

fake, so-called evidence

1:18:48

for claims that you have

1:18:52

and this is going to lead to

1:18:54

two huge problems. One

1:18:56

of course is that

1:18:58

you can manufacture evidence for

1:19:00

whatever claim you like. Oh you think

1:19:03

that this person did this bad thing?

1:19:05

Make a video that shows them doing

1:19:07

that bad thing okay and so it

1:19:09

it becomes hard to know whether evidence

1:19:12

is reliable that way. But

1:19:14

the other problem which I

1:19:16

think is underappreciated is that

1:19:18

real evidence becomes less trustworthy.

1:19:21

Donald Trump has already used this defense you

1:19:24

know he says some crazy things people get

1:19:26

him on tape for saying the

1:19:28

crazy things he says ah that's just AI

1:19:30

generated you can't believe that I actually said

1:19:32

those things and whether it's true

1:19:35

or not the doubt is

1:19:37

there right? There is a

1:19:39

loss of reliability there's the loss

1:19:42

of the ability to validate the

1:19:44

claims that we make in the

1:19:46

social sphere and we've

1:19:48

already seen this happening in other ways but

1:19:50

we know what the outcome is it is

1:19:53

kind of an epistemic fracturing. We

1:19:57

divide into tribes

1:19:59

into bubbles. The problem

1:20:01

of a bubble is not that an

1:20:04

epistemic bubble, an information bubble, where

1:20:06

you get, you're mostly talking to

1:20:08

people you agree with. Who

1:20:10

was it? Brendan Nyhan,

1:20:12

who talked about this, or

1:20:15

Hugo Mercier, I'm not sure. But the

1:20:18

problem is not that you're only – I

1:20:21

think it was Brendan Nyhan – that you're

1:20:23

only exposed to information you want

1:20:25

to hear and already agree with. The

1:20:28

problem is that you are

1:20:30

exposed to contrary information and you just don't pay

1:20:32

any attention to it. You just don't listen to

1:20:34

it. You don't give it any credence. You don't

1:20:37

take it seriously. We human

1:20:39

beings – this was Hugo's point – we human

1:20:41

beings are really, really good at ignoring

1:20:43

the information we want to ignore. And

1:20:47

this ability to artificially generate

1:20:49

fake information in all sorts

1:20:51

of ways is going to

1:20:53

tremendously exacerbate that problem. We

1:20:56

can plausibly imagine that it

1:20:58

becomes hard to trust anything,

1:21:01

and we descend into a

1:21:03

kind of fantastical miasma of

1:21:05

entertainment and wish

1:21:07

fulfillment or bias fulfillment. So

1:21:10

we don't know what to believe, so we believe what we

1:21:12

want to believe, and that's it. The

1:21:15

reality-based community ceases to exist

1:21:17

because everyone chooses to believe

1:21:19

what they want, or chooses

1:21:21

to distrust what they want,

1:21:23

and maybe rightfully so. There's just as

1:21:26

much crap

1:21:28

out there as there is real

1:21:31

stuff. So I don't

1:21:33

know what the equilibrium

1:21:36

will be there. I

1:21:39

don't know once it becomes so

1:21:41

easy to generate evidence-looking

1:21:44

things as

1:21:46

it is to generate real evidence. I don't know

1:21:49

where we land. I don't know how we change,

1:21:51

how we evaluate the world.

1:21:53

I mean, it's already true, when

1:21:56

we think about politics or

1:21:59

international affairs and things like

1:22:01

that, that we hear claims on the

1:22:03

internet that we like and we spread

1:22:05

those claims, and then someone says, actually,

1:22:08

that was wrong, and then it's much

1:22:10

harder to bring them back and undo

1:22:12

the damage. Again, I think we're at the

1:22:14

beginning of this change; we're

1:22:16

not near the end of it.

1:22:18

For whatever various reasons, foreseen or

1:22:20

unforeseen, it came to be: journalism

1:22:24

and newspapers have

1:22:26

declined, have imploded. It

1:22:28

was actually, as many of you know,

1:22:31

if you want to point a

1:22:33

finger at one event that

1:22:35

led to the collapse of

1:22:37

journalism, it was Craigslist. Craigslist,

1:22:39

the online classified service.

1:22:41

Because many, many newspapers actually

1:22:43

got most of their revenue

1:22:45

from their classified sections. And,

1:22:48

going back up to

1:22:50

the discussion of how people are

1:22:52

going to follow their self

1:22:54

interest if they're

1:22:56

allowed to do so, it

1:22:58

is better to have classifieds online,

1:23:01

widely available to

1:23:03

everyone, than to have them individually

1:23:05

printed in physical newspapers. It's just

1:23:07

easier. So the model of newspapers

1:23:10

and their revenue streams sort of

1:23:12

went away, and you wouldn't have

1:23:14

predicted that very dramatic transition

1:23:16

very easily. And this is a

1:23:18

new thing; the shift

1:23:21

to distrusting pieces of information is

1:23:23

a different kind of thing, but it

1:23:25

will be equally important if we

1:23:27

don't have things that we can

1:23:29

rely on. So that's going to be a big

1:23:32

deal. Okay. So,

1:23:35

given all that, again,

1:23:37

all this is a sort of slightly

1:23:39

meandering exploration of what I

1:23:41

think are technologies that will

1:23:43

really lead to huge, important

1:23:45

changes, what do we

1:23:48

think is going to be the

1:23:50

end story? If it's true that

1:23:52

we're approaching a singular moment after

1:23:55

which human life in society will

1:23:57

look different? What will

1:23:59

it look like? Okay, and you know,

1:24:01

look, I'm going to be brutally

1:24:03

honest here. I'm going to disappoint you if

1:24:06

you want to get the answer,

1:24:08

the correct answer from me because I don't know. I

1:24:10

think it's a very hard question to answer. I think

1:24:13

it's very worthwhile to ask. I think that,

1:24:16

I guess I've said this already, but when

1:24:18

people talk about it, I just

1:24:20

don't think they're being serious in

1:24:22

the sense that they

1:24:25

are too,

1:24:27

not eager, but susceptible to

1:24:30

either wildly over-exaggerating effects or

1:24:32

under-appreciating the possible effects. I

1:24:34

think that the balance, and

1:24:36

I don't blame people, I'm

1:24:38

a person, it's

1:24:40

very, very hard to strike the balance between

1:24:44

carefully thinking through all of the

1:24:46

possible things that can happen and

1:24:49

yet sort of soberly imagining which

1:24:51

ones are more likely than others,

1:24:53

right? So that's what

1:24:55

I'm trying to encourage people to

1:24:57

do. I'm not successfully completing that program,

1:24:59

but I hope that I

1:25:01

can give some food for thought for people who

1:25:04

want to think it through. So to

1:25:07

acknowledge that I don't know what the answer is, I

1:25:09

will sketch out two sort of edge case

1:25:12

scenarios, a pessimistic scenario and

1:25:14

an optimistic scenario. And

1:25:18

originally I thought of doing the optimistic one first

1:25:20

and then the warning of the pessimistic scenario, but

1:25:23

that's depressing. So let me

1:25:25

do the pessimistic one first and close

1:25:27

with the optimistic one, even though you'll

1:25:30

have to judge for yourself, which you think

1:25:32

is more plausible given the things that are

1:25:34

happening to us. So the pessimistic

1:25:36

scenario, a good

1:25:38

analogy, a good metaphor, once again

1:25:40

comes from The Matrix, the movie,

1:25:43

but not from what

1:25:45

most people take to be the central

1:25:47

theme of The Matrix, the possibility that

1:25:50

we're living in a computer simulation or

1:25:52

something like that. Many

1:25:55

people and myself included

1:25:57

have pointed to one

1:25:59

aspect of the Matrix movie

1:26:02

as the silliest and

1:26:04

the one that we really wish had not

1:26:06

been part of it and that is the

1:26:08

following of course there is still in the

1:26:11

world of the Matrix a physical world so

1:26:13

people have physical bodies then but

1:26:15

their experiences their thoughts etc are

1:26:17

all in the Matrix they're all

1:26:19

in the simulation so

1:26:22

what are most and you know

1:26:24

our plucky heroes are you know

1:26:26

pirate rebels who are navigating the

1:26:28

real physical space but most people

1:26:31

who are living their lives in the

1:26:33

matrix what are their physical bodies doing

1:26:36

and in the world of the movie

1:26:38

they are batteries basically the

1:26:41

technology of the computer simulation is

1:26:43

powered by human bodies right so

1:26:45

all the human bodies are put

1:26:48

in these pods and hooked up

1:26:50

to tubes and wires and whatever

1:26:52

okay it makes for great visuals in

1:26:54

the movie but completely hilariously

1:26:57

nonsensical in terms of thermodynamics and

1:26:59

physics right I mean human bodies

1:27:01

don't create energy they use

1:27:03

up energy; it's the opposite of what you

1:27:05

want. We're terrible batteries or power-generating sources

1:27:08

or whatever you might want to be so

1:27:10

I and others have

1:27:12

made fun of the Matrix movies for

1:27:14

that particular conceit but finally

1:27:16

I don't know I honestly don't know

1:27:19

whether this is in the intention of

1:27:21

the Wachowskis when they made the movie

1:27:24

or whether it's just a good way

1:27:26

of thinking about it finally it occurred

1:27:29

to me there's a much better

1:27:31

way of thinking about that image

1:27:34

of the people powering the Matrix

1:27:36

which is not take it literally

1:27:39

but to take it metaphorically

1:27:42

okay in other

1:27:44

words to to imagine that what

1:27:46

is being imagined is

1:27:49

not that the literal ergs and

1:27:51

joules we human beings create are

1:27:53

powering the Matrix but that

1:27:55

our human capacities are

1:27:58

powering this particular

1:28:00

fake reality, right?

1:28:04

That's the metaphor that is actually kind

1:28:06

of useful. So the pessimistic scenario that

1:28:08

I want to sketch out is

1:28:11

one where human capacities, for

1:28:14

the most part, mostly become

1:28:17

fuel for a rather

1:28:19

unpleasant kind of society that we

1:28:21

can live in. That

1:28:24

might sound a little vague and abstract, and conceptual,

1:28:26

let's try to put some meat on the bones.

1:28:29

Part of this inspiration for thinking about

1:28:31

things this way, for me personally,

1:28:33

came from a conversation I had

1:28:36

with a physicist, Victor Yakovenko, at

1:28:38

the University of Maryland. Victor

1:28:41

is a condensed

1:28:43

matter statistical mechanics physicist. So he thinks

1:28:45

about, originally from Russia, but he moved

1:28:47

to the US a while ago. So

1:28:50

he thinks about thermodynamics, statistical mechanics, things

1:28:52

like that, entropy, and so

1:28:54

forth. You've heard the words, right? But

1:28:56

at some point, he became interested in

1:28:59

economics, like many physicists do. Physicists like

1:29:01

to colonize all the other fields of

1:29:03

human intellectual effort. Economics is

1:29:06

a good one because there are equations in

1:29:08

it, right? So there's a whole burgeoning field

1:29:10

of econophysics. So Victor had

1:29:12

the following idea, and he sort of worked this

1:29:14

out before he talked to any

1:29:17

actual economists. He said, you know, if

1:29:19

I have a box of gas, and I

1:29:21

have some molecules in the box, and I put them

1:29:23

in some initial configuration, and I let

1:29:26

them bump into each other, we know

1:29:28

what will happen. You will equilibrate, right?

1:29:30

You will go to a maximum entropy

1:29:32

configuration, basically because all

1:29:34

the molecules bumping into each other

1:29:36

will exchange energies. And

1:29:38

after many, many such exchanges, you

1:29:41

will reach a known distribution of

1:29:43

energies that was derived back in

1:29:45

the 19th century by Maxwell and

1:29:48

Boltzmann, the Maxwell-Boltzmann distribution. And

1:29:51

this is experimentally verified, as well

1:29:53

as theoretically derived. So

1:29:55

Victor says, you know, that's kind

1:29:57

of like money in a society.

1:30:01

Energy in a box of gas is kind of

1:30:03

like money in a country. Now,

1:30:05

money supply is not completely constant,

1:30:07

right? We know the Federal Reserve

1:30:09

increases or decreases the money supply

1:30:11

in response to economic conditions, but

1:30:13

that's a tiny effect. Let's imagine

1:30:15

that for the most part, there's

1:30:17

a fixed amount of money in

1:30:19

society, and the money gets

1:30:21

exchanged, right? People buy goods, and they sell

1:30:24

goods, and the money moves around. So

1:30:26

Victor says, he was not

1:30:28

too serious about this, but he said, let's imagine

1:30:31

that it's kind of the same thing, and that

1:30:33

money reaches a... that

1:30:35

wealth, if you like, reaches a maximum

1:30:37

entropy distribution, and he derives

1:30:39

that it should look like the Maxwell-Boltzmann distribution,

1:30:41

just like energy is in a box of

1:30:43

gas. So then he

1:30:45

goes to some real economist, and he says,

1:30:47

here, look, I have a theory for how

1:30:50

wealth is distributed in society. And

1:30:52

they laugh, and they roll their eyes, because, of course,

1:30:54

they know much better than this, and they say, look,

1:30:56

just at one very simple level, there

1:30:59

is a feature of this distribution you've

1:31:02

written down, which is that as

1:31:04

the wealth gets more and more, the

1:31:07

number of people who have that

1:31:10

much wealth decays exponentially. So

1:31:12

we were talking about exponential growth before, here's exponential

1:31:14

decay. The point is, in either case, it's fast.
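For readers who want the toy version of this, here is a minimal sketch in Python of a standard random-exchange model of the kind being described (not necessarily the exact model Victor used): agents start with equal money, random pairs pool their money and split it at a random point, and the histogram settles into the exponential, Boltzmann-Gibbs-like shape with exactly this fast-decaying tail.

    import random

    def random_exchange(num_agents=10_000, steps=500_000, start=100.0, seed=0):
        """Conserved-money toy economy: each step, a random pair pools its
        money and splits the pot at a random point. The equilibrium
        distribution of money is very nearly exponential, exp(-m / T),
        with T equal to the average money per agent (here, `start`)."""
        rng = random.Random(seed)
        money = [float(start)] * num_agents
        for _ in range(steps):
            i = rng.randrange(num_agents)
            j = rng.randrange(num_agents)
            if i == j:
                continue
            pot = money[i] + money[j]
            split = rng.random()
            money[i], money[j] = split * pot, (1.0 - split) * pot
        return money

Total money is conserved the whole time, just like total energy in the box of gas, which is what makes the analogy tick.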

1:31:17

So it is a feature of the Maxwell-Boltzmann distribution,

1:31:21

of energies, of molecules in a box

1:31:23

of gas, that there will be occasionally,

1:31:26

rarely, some high-energy molecules,

1:31:29

but there are exponentially fewer of

1:31:31

those than molecules moving with the

1:31:33

average energy. And Victor's model

1:31:35

said the same thing about wealth, that

1:31:37

there should be exponentially fewer wealthy

1:31:39

people than average median earners.

1:31:43

That is not true. We've known for a very

1:31:45

long time that that is not true in any

1:31:47

society that we've ever met. At

1:31:50

the mathematical level, there is a power law

1:31:52

that describes the distribution

1:31:55

of wealth at the high end

1:31:57

of wealth. And what that means

1:31:59

in practical terms is that... there are a lot

1:32:01

more wealthy people than you

1:32:03

would expect if you were just exponential

1:32:05

fall off. It falls off much more

1:32:07

slowly than that. There is a

1:32:09

fat tail. There are more black swans than you would

1:32:11

expect if you want to put it in those languages.
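In symbols, and as an editorial summary of the contrast rather than anything derived in the episode: the gas-like prediction for the fraction of people above a wealth (or income) level w falls off exponentially, while the observed high end follows a Pareto-style power law, which falls off far more slowly:

    P(>w) \sim e^{-w/T} \quad \text{(exponential, Boltzmann-Gibbs)}
    \qquad \text{versus} \qquad
    P(>w) \sim (w/w_0)^{-\alpha} \quad \text{(power law, fat tail)}

Doubling w crushes the exponential tail by a huge factor but only divides the power-law tail by 2 to the alpha, which is why there are so many more very wealthy outliers than the box-of-gas picture alone would predict.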

1:32:15

So Victor was appropriately

1:32:18

chastened and he went back and he said, well, let me

1:32:20

just check this data. And

1:32:22

it turns out to be very hard

1:32:25

to get the data about

1:32:27

the wealth distribution in a country

1:32:29

because especially at the wealthiest

1:32:31

edges, people hide their wealth. They don't want

1:32:33

to tell you how much they have.

1:32:36

You can do it for income though.

1:32:38

So okay. So he plotted that and

1:32:41

what you see is actually like

1:32:43

it's pretty remarkable. I got to say, you

1:32:45

know, when you're doing economics

1:32:48

or any other social science, it's

1:32:50

rare to get a curve of

1:32:52

data that you can fit

1:32:55

so easily and cleanly with a theoretical model.

1:32:57

And what Victor found for the distribution

1:32:59

of wealth, or for the distribution of

1:33:01

income, rather, is that indeed, for

1:33:05

high earners, there is a power

1:33:07

law decay, not the Maxwell Boltzmann

1:33:09

distribution, but for lower earners,

1:33:12

there is more or less exactly the

1:33:14

Maxwell Boltzmann distribution. And there

1:33:16

is indeed a very clear,

1:33:18

crisp changeover point. It's at

1:33:20

about three times the median

1:33:23

income level: below three times the

1:33:25

median income, it's Maxwell-Boltzmann; above

1:33:28

three times the median income, it

1:33:30

is a power law decay. What

1:33:33

is going on there? So it'd be nicer if you had

1:33:35

the theory first and made the prediction, but okay, sometimes we

1:33:38

get the data and then we fit the theory. And none

1:33:40

of this is surprising to economists, by the way, I'm not

1:33:42

trying to say that. I'm just telling a fun story to

1:33:45

motivate how I think about it. The

1:33:47

physicists here are the late comers, not

1:33:49

the pioneers. The

1:33:53

theory is the following and it's pretty

1:33:55

close to reality, I think. There are

1:33:57

two ways to earn money. There are

1:33:59

two classes of earners in

1:34:01

the world. One class

1:34:03

of earners are basically additive. In

1:34:06

other words, you have goods. Your

1:34:09

goods might be, you know, your time and your

1:34:11

effort. If you're a factory worker, you get a

1:34:13

salary, or maybe you have like a hot

1:34:15

dog stand, you're selling hot dogs or whatever. And

1:34:18

by additive, I mean that you sell these

1:34:20

goods that are consumed once and

1:34:23

you make money from it. So that's

1:34:25

pretty analogous to the molecules bumping into

1:34:27

each other and exchanging energies, right? There's

1:34:29

some fixed amount of wealth that is

1:34:32

being passed around, you know,

1:34:34

one at a time and there's kind of an

1:34:36

upper limit on how much money you can earn,

1:34:38

which is how many goods you have times the

1:34:40

amount of sales that you can make, okay?

1:34:44

But there's a whole other way that

1:34:46

you can earn which is more multiplicative. That's

1:34:49

when you can sell the same

1:34:51

service, the same good, many, many,

1:34:53

many times. And there

1:34:55

are obvious examples of that like book

1:34:57

authors. I write a book

1:35:00

once and then I try to sell as many copies as I

1:35:02

can. But

1:35:04

also athletes, entertainers, etc.

1:35:08

Their services are infinitely

1:35:11

multipliable so they can sell them many

1:35:13

times. And of course,

1:35:15

the classic example are not

1:35:18

writers or entertainers, but capitalists,

1:35:21

owners, investors, because

1:35:23

they can just increase the size

1:35:25

of their factories or whatever, or they can

1:35:27

invest in more and more stocks and earn

1:35:30

more and more money. And that's again, positive

1:35:32

feedback. So they're

1:35:34

earning multiplicatively rather than merely

1:35:36

additively. And there, there's no

1:35:38

limit on how much you can

1:35:40

earn except for like the size of the earth and things

1:35:43

like that, right? So there's no realistic

1:35:45

hourly wage that ever gets you to be a

1:35:47

billionaire, but there are billionaires. And

1:35:50

that's because there are different ways to earn

1:35:52

than just selling your services one hour at

1:35:54

a time.
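A minimal sketch of that contrast, with made-up numbers rather than anything calibrated to a real economy: additive earners add a bounded amount each period, multiplicative earners compound a random growth factor, and only the second mechanism produces the stretched-out upper tail of very large fortunes.

    import random

    def additive_vs_multiplicative(num_people=10_000, periods=40, seed=0):
        """Compare bounded per-period pay with compounding random returns."""
        rng = random.Random(seed)
        additive, multiplicative = [], []
        for _ in range(num_people):
            # Additive: sell your hours, bounded gain each period.
            wages = sum(rng.uniform(20_000, 80_000) for _ in range(periods))
            # Multiplicative: returns compound on the whole stake each period.
            stake = 100_000.0
            for _ in range(periods):
                stake *= rng.uniform(0.8, 1.4)
            additive.append(wages)
            multiplicative.append(stake)
        return additive, multiplicative

The additive column stays within a factor of a few of its average; the multiplicative column (strictly log-normal in this toy version, but heavy-tailed in the same spirit as the power law above) spans orders of magnitude, with a handful of enormous outcomes.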

1:35:57

And no judgments here, right? I'm not trying to say

1:36:00

this is somehow unfair, whatever. You can have

1:36:02

debates about what is the just economic system.

1:36:04

Good, go for it, love it, but that's

1:36:07

not why I'm here right now. The

1:36:09

point is that there

1:36:12

are efficiency questions raised

1:36:15

about this distribution of

1:36:19

wealth or income or whatever. In

1:36:21

order for there to be the

1:36:24

multiplicative earners, they have to

1:36:26

try, you know, their goal, if you're a hot

1:36:28

dog vendor, you have two goals.

1:36:30

One, make a really good hot dog. Two,

1:36:32

find a customer who will want to buy

1:36:34

the hot dog. Pretty straightforward. But

1:36:36

in this multiplicative regime, you want more

1:36:39

and more. You want to find more

1:36:41

and more customers. And you want to,

1:36:43

if you can, get them to give

1:36:45

you more and more money, right? So

1:36:48

you're aiming for efficiency in the sense

1:36:50

of extracting profits from the largest number

1:36:52

of people. And

1:36:54

there is, there can be, in

1:36:56

principle, and there clearly is, in

1:36:58

practice, very often, a

1:37:01

tension between efficiency and

1:37:03

human happiness. I

1:37:05

don't mean that as a general statement of efficiency,

1:37:08

but this particular kind of

1:37:10

efficiency, whereas an efficiency in extracting profits

1:37:12

from a very, very large number of

1:37:14

people that can

1:37:17

help with human happiness in some ways,

1:37:19

but it's not necessarily correlated, that they

1:37:22

can get in the way of each

1:37:24

other. They can destructively interfere. So

1:37:27

think of it this way. You know,

1:37:29

in a market, you don't pay more

1:37:31

than you choose to, right? If someone says, I

1:37:33

have a good hot dog, it costs two bucks.

1:37:35

You might say, okay, good. Give me the hot

1:37:38

dog. If it's the same hot dog, you can

1:37:40

say it costs 200 bucks. Most people

1:37:42

are gonna say, you know, no, I'm not

1:37:44

gonna buy it. I have chosen not to

1:37:46

participate in that exchange, right? And

1:37:49

there is, therefore, some value. There's some cost

1:37:51

of the hot dog that you would pay

1:37:53

for it. And above that cost, you would

1:37:55

not pay; below that cost, you would pay.

1:37:57

Okay? That's how markets work. And

1:38:00

by efficiency what I mean is really

1:38:03

homing in on what that

1:38:05

maximum amount that you would pay could be.
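One compact way to phrase this, as an editorial gloss rather than the episode's own notation: if v is the most you would willingly pay and p is the price, your surplus from the purchase is

    s = v - p

and the kind of efficiency being described here is sellers using data to push p as close to v as they can, so the sale still happens but s shrinks toward zero. That is the grudgingly purchased expensive hot dog in the next few lines.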

1:38:08

And at that point where

1:38:10

if it were a penny more you wouldn't pay

1:38:13

and it were less you

1:38:15

would pay, maybe you

1:38:17

would pay at that point but you're not going to

1:38:19

be happy about it. You're going to

1:38:21

grumble a little bit. You're like, yeah, that's expensive hot

1:38:23

dog. I wouldn't pay any more than this but I

1:38:25

guess I will pay exactly that much. The

1:38:29

efficiency goal that a corporation

1:38:31

wants to get or anyone who's trying

1:38:33

to extract wealth from a large number

1:38:35

of people, even a book author, right?

1:38:38

How much can I charge for the

1:38:40

book? Perfectly reasonable question to ask. No

1:38:43

value judgments here. No statements about

1:38:45

evil or anything like that. This

1:38:47

is just natural incentives. This

1:38:49

is just every individual trying

1:38:52

to work to their self-interests. If

1:38:55

you go back to the conversation we had with Sam

1:38:57

Bowles, he was very clear

1:38:59

Adam Smith said something really brilliant

1:39:02

and insightful and true about how

1:39:04

good market outcomes can come from

1:39:06

every individual just trying to work

1:39:09

for their self-interest. But

1:39:11

I think an underappreciated point, I shouldn't say that

1:39:13

because I don't know what economists appreciate and don't.

1:39:16

A point that I hadn't sufficiently

1:39:19

appreciated is that one,

1:39:22

the reason this is going to come back to what

1:39:24

we're actually talking about in the podcast is one

1:39:27

crucially important aspect of

1:39:29

the technological innovations and

1:39:31

improvements that we are undergoing

1:39:34

is that it makes it

1:39:36

easier for markets

1:39:38

to reach that perfect

1:39:41

point of efficiency where things are

1:39:43

sold but nobody is really happy

1:39:45

about it. This

1:39:47

does not guarantee the best

1:39:50

outcomes. You

1:39:52

can see this in many, many different

1:39:54

examples. When I

1:39:57

say this, I mean the fact that

1:39:59

technology is sort of... of helping us

1:40:01

reach that efficient equilibrium, which might be

1:40:03

efficient, but doesn't necessarily make us happy.

1:40:06

Think about Google Maps or

1:40:09

other mapping GPS services on your cell

1:40:11

phone. Back in the day, when

1:40:13

I was your age, we would have a route that

1:40:15

we would go from point A to point B. If

1:40:18

we knew where we were going, we would take the

1:40:20

obvious route, and we would go there. Sometimes there'd be

1:40:22

a lot of traffic. These

1:40:24

days, we have a

1:40:26

computer with information in it that

1:40:28

will tell us, yeah,

1:40:31

usually you would take that route, but there's

1:40:33

traffic on there. So here is a different

1:40:36

way to go that naively,

1:40:38

you might think, takes longer. But today, it

1:40:40

takes less time. And so when things

1:40:43

get clogged, suddenly traffic, because everyone has

1:40:45

their GPSes out there, right, or enough

1:40:47

people do, suddenly traffic

1:40:49

spreads out to take many different

1:40:51

routes. And that is overall

1:40:53

more efficient.
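As a cartoon of what the mapping app is doing (the route names and minute counts below are invented, and real services use far more elaborate traffic models), the core move is just to re-estimate travel times from live data and send you down whichever route is currently fastest:

```python
# Toy illustration (invented numbers): each driver is routed onto whichever
# option is currently fastest, so traffic spreads out as the obvious route clogs.

def pick_route(travel_times):
    """Return the name of the route with the smallest current time estimate."""
    return min(travel_times, key=travel_times.get)

no_traffic = {"main road": 20, "side streets": 30}
rush_hour  = {"main road": 45, "side streets": 30}

print(pick_route(no_traffic))  # -> 'main road'     (the obvious route)
print(pick_route(rush_hour))   # -> 'side streets'  (the naive detour now wins)
```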

1:40:56

But not everyone is happy about it, because maybe

1:40:58

the people who live on those local roads are

1:41:00

now seeing three times the amount of traffic they

1:41:03

used to see. Literally, where

1:41:05

I used to live in Los Angeles,

1:41:08

while we were living there, a whole

1:41:10

bunch of local streets were converted

1:41:13

from two-way streets to one-way streets,

1:41:15

precisely to prevent people from

1:41:17

taking shortcuts suggested to them by

1:41:19

Google Maps. So more

1:41:21

efficiency, not necessarily more happiness.

1:41:24

You know, there was a recent discussion

1:41:28

about dynamic pricing. Dynamic

1:41:30

pricing is something that ride-sharing

1:41:33

services like Uber and Lyft have used for

1:41:35

quite a while. The price of a certain

1:41:38

ride from point A to point B is

1:41:41

lower when there's not that much demand, and

1:41:43

higher when there is a lot of demand.

1:41:45

Supply and demand, but now in the time

1:41:47

domain, OK? This is

1:41:50

something that without computers, without massive data

1:41:52

sets, you would have a difficult time

1:41:54

figuring out. Maybe you could crudely approximate

1:41:56

it, but now you

1:41:58

can pinpoint exactly how much,

1:42:01

if you're a ride-sharing service you can

1:42:03

reasonably charge people at different

1:42:05

times of day. You're coming

1:42:07

closer to extracting as much wealth from

1:42:09

these people as you possibly can and

1:42:12

still have a profitable company.
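As a toy illustration of the dynamic-pricing idea, here is a minimal Python sketch; the surge formula, the cap, and the numbers are invented for the example and are not how Uber or Lyft actually compute fares.

```python
# Toy surge pricing (illustrative only): scale a base fare by the current
# demand-to-supply ratio, never below 1x and capped at 3x.

def dynamic_price(base_fare, riders_requesting, drivers_available):
    ratio = riders_requesting / max(drivers_available, 1)
    surge = min(max(ratio, 1.0), 3.0)
    return round(base_fare * surge, 2)

print(dynamic_price(10.00, 40, 50))   # low demand  -> 10.0 (no surge)
print(dynamic_price(10.00, 120, 50))  # high demand -> 24.0 (2.4x surge)
```

The design point is simply that with live counts of riders and drivers, a multiplier like this can be recomputed continuously, which is the kind of fine-grained price adjustment that was impractical before massive data sets and cheap computation.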

1:42:14

And maybe that won't work long term, because there's

1:42:16

lots of specific messy aspects

1:42:18

of being a ride-sharing service that's still

1:42:20

very much in flux. But

1:42:23

really, that's an older story. The recent

1:42:25

thing is that Wendy's tried

1:42:27

to do exactly this. They tried to say, you know, we'll

1:42:29

make it cheaper at 10 a.m.,

1:42:31

more expensive at 12:30, because people are

1:42:34

coming for lunch at 12:30, right? Outrage.

1:42:38

People did not like this because

1:42:40

of course people are not thinking of it as being

1:42:42

cheaper at 10 a.m.; they're thinking of

1:42:44

it as being more expensive when they actually want

1:42:46

lunch, right? And that kind of

1:42:48

gets people upset. So I believe that

1:42:51

Wendy's backed down. But you

1:42:53

can see this, you know, being more and

1:42:55

more clever about how to make a few

1:42:57

bucks. We've seen this, again, in so many ways.

1:42:59

I'm gonna have to like stop myself from

1:43:01

giving examples. But separate

1:43:04

fees to check a bag on an

1:43:06

airplane, right? We used to just get that for

1:43:08

free. Now they figured, oh, if we charge that,

1:43:10

people will not mentally include it in the price

1:43:13

of their ticket and we'll make more money. Resort

1:43:15

fees in hotels. I still have no idea

1:43:18

what a resort fee is. You have the,

1:43:20

you know, you buy your hotel online, there's

1:43:22

a certain price, and then you show up,

1:43:24

there's an extra resort fee. And

1:43:26

then you pay it because you're there, but I really

1:43:28

don't know what it means. My

1:43:30

favorite example is actually student loans,

1:43:32

right? There's a student loan crisis

1:43:35

here in the United States, and

1:43:37

you see where it comes from because

1:43:40

college students typically, or

1:43:42

very often don't have a lot of money, but

1:43:45

they might have a lot of future earning

1:43:48

power. So basically, colleges figured

1:43:50

out that they can raise tuition

1:43:52

to the point where many

1:43:54

students wouldn't actually be able to pay it,

1:43:57

but they can give them a loan on the thought that they

1:43:59

will... be able to pay it over the next

1:44:01

couple of decades because their earning potential will be higher.

1:44:04

Which all sounds good, opening up college

1:44:06

to people who otherwise couldn't afford it,

1:44:09

but it doesn't make people happy because

1:44:11

it makes it very hard to start

1:44:14

your post-college life. You are burdened

1:44:16

with enormous amounts of debt. The

1:44:18

system has gone right to the point

1:44:20

where you will go along with it, but it will

1:44:23

not make you happy to go along with it. There's

1:44:26

a famous article, another one that got

1:44:28

a lot of attention recently by Cory Doctorow,

1:44:31

Mindscape guest, on the

1:44:33

enshittification of the internet. What

1:44:36

he means is that the services that we've been

1:44:38

used to having on the internet, whether it's buying

1:44:41

from Amazon or searching on Google, they've

1:44:43

all gotten worse. Why

1:44:45

do they all get worse? Part of

1:44:47

his explanation is you're first offered the

1:44:49

service for free, streaming

1:44:51

services are now increasingly giving you

1:44:53

ads. You're given a

1:44:56

service at a low cost for

1:44:58

relative ease of

1:45:00

transaction. Once you're hooked,

1:45:03

new costs come in because you don't want to

1:45:05

change because it's annoying, etc.

1:45:07

Enshittification. The world is

1:45:09

getting slightly worse. Anyway, I

1:45:11

went on too long about this because this

1:45:13

is just a feature I think of economics

1:45:16

very generally. Again, it's nothing new. I'm not

1:45:19

claiming any new insights here. What

1:45:21

I want to get at is

1:45:23

that one very obvious

1:45:27

ramification of technological change is

1:45:30

more efficient extraction.

1:45:34

I think this goes beyond economics. It's

1:45:36

not just extraction of wealth. It's extraction

1:45:38

of everything. This is the metaphor of

1:45:41

the human beings in the pods powering

1:45:43

the Matrix. Technology is

1:45:45

both able to analyze

1:45:47

a whole bunch of very complicated

1:45:50

problems and also to bring people together.

1:45:52

Bringing people together sounds good, but

1:45:55

it also means increasingly efficient ways to

1:45:57

transfer information, etc. Let me

1:46:02

pause and tell you what I'm thinking about. You

1:46:05

know that every website you

1:46:07

visit collects data about you. You

1:46:10

get personalized ads, right? I

1:46:13

think Google Chrome just

1:46:15

recently tried to convince me to send a

1:46:17

whole bunch of information that would really make

1:46:19

my experience more pleasant because the ads that

1:46:21

I would see would be more tailored to

1:46:23

my interests. They're

1:46:26

doing this efficiency thing, right? Why

1:46:28

give one ad to everybody if

1:46:31

not everyone is interested in this

1:46:34

thing when we can instead target

1:46:36

ads to each individual person? That's

1:46:38

something that technology is allowing us

1:46:40

to do. In

1:46:43

some sense it is more efficient. If

1:46:46

I'm going to see ads, maybe

1:46:48

it's better for me to see ads where I

1:46:50

might actually be interested in the product, right? I'm

1:46:54

not making value judgments about this, but

1:46:57

there is definitely also a part of me

1:46:59

that just doesn't want to give the information

1:47:01

about what I'm doing willy-nilly to a bunch

1:47:03

of companies. There

1:47:06

is this sort of sense that if you're too efficient economically,

1:47:11

you're not happy about your transaction.

1:47:13

An ideal transaction would make both

1:47:15

parties happier, right? If

1:47:18

you're at that perfect equilibrium point,

1:47:22

both parties are just mildly satisfied

1:47:24

or even slightly disgruntled rather than

1:47:26

actually happy. That's the tension between

1:47:29

efficiency and happiness. Perhaps

1:47:32

more profoundly there is a political

1:47:35

version of this, not just an

1:47:37

economic version of this. The

1:47:39

world is big. Population

1:47:42

has been growing. But

1:47:45

we're also more interconnected, right? Not

1:47:49

just in the sense that I can see

1:47:51

videos about what's happening in Sri Lanka

1:47:53

or something like that almost

1:47:55

instantaneously or I can send emails across

1:47:58

the world, but in the sense that

1:48:00

our institutions are getting bigger

1:48:02

because technology is allowing them

1:48:05

to get bigger. Back in the

1:48:07

day I would imagine that

1:48:09

going to a coffee shop would

1:48:11

probably put me in a

1:48:13

coffee shop that was locally owned by the

1:48:16

people running the coffee shop, right? That was

1:48:18

a traditional thing. This is a complicated story

1:48:20

because there were fewer coffee shops back in

1:48:22

the past than you might have imagined, but

1:48:24

they were there, okay? Today,

1:48:27

increasingly, the coffee shops that you're likely

1:48:29

to run into are part of international

1:48:31

chains. They're very, very big, and there's,

1:48:33

again, pluses and minuses about that. There

1:48:36

are economies of scale that make things

1:48:38

better, etc. But one

1:48:40

very definite implication of this

1:48:42

is if you're in a store that is

1:48:44

run by the people who own the store

1:48:47

and there's a small number of people involved

1:48:49

in the entire thing, you can

1:48:51

complain. You have a voice. You

1:48:54

can make a suggestion. You can say, well, how about

1:48:56

carrying this product instead of that product? And people

1:48:58

will listen to you. If

1:49:01

you go into Starbucks and say, I think you should

1:49:03

carry this different kind of coffee and you tell that

1:49:05

to the barista, what are you

1:49:07

doing? You're wasting your time. Your voice

1:49:09

is not that big. And this is

1:49:12

kind of a silly, trivial example of

1:49:14

a much bigger issue, which

1:49:16

is that whether it's politics

1:49:19

or shopping or being employed,

1:49:22

in all these various ways,

1:49:24

we are interacting, in

1:49:26

very intimate ways, with hugely

1:49:29

large-scale institutions that

1:49:31

we ourselves have no real effect over.

1:49:34

This leads to a feeling of

1:49:37

powerlessness, right? Because technology has made

1:49:39

us so much more connected, it

1:49:41

has made the things that influence

1:49:43

our lives so much larger and

1:49:46

therefore harder for us to

1:49:48

really deal with on

1:49:50

an equal basis. But put

1:49:53

it this way, the world is

1:49:55

growing, institutions are growing,

1:49:57

so relatively speaking, individuals

1:50:00

are shrinking. They're shrinking in

1:50:02

their ability to affect the world around

1:50:04

them. And the

1:50:06

efficiency stuff we just talked about means,

1:50:10

in some cases anyway,

1:50:13

that it is harder and

1:50:15

harder for future generations to expect

1:50:17

a higher standard of living, more

1:50:20

wealth, right? The wealth is being

1:50:22

extracted at an incredibly efficient rate

1:50:24

because of these technological advances. And

1:50:27

this makes people depressed

1:50:29

and skeptical and less

1:50:32

enthusiastic about the prospects of their

1:50:34

individual lives and the society they

1:50:36

live in. And that

1:50:39

puts a real strain on

1:50:41

democracy and liberal society more

1:50:43

generally because people are being

1:50:45

governed by powers

1:50:47

and systems that they

1:50:50

cannot substantially affect back.

1:50:53

And guess what? In some

1:50:55

cases they will respond to that sort

1:50:57

of loss of power by

1:50:59

seeking a strongman to rescue

1:51:01

them or by taking refuge in

1:51:03

conspiracy theories where they can imagine

1:51:05

something a little more vivid. You

1:51:07

know, again, there

1:51:09

are no value judgments here and

1:51:12

maybe these impersonal forces that are

1:51:14

running our lives have

1:51:16

no ill intent whatsoever but

1:51:18

nevertheless make us feel bad.

1:51:20

At some psychological level

1:51:22

it would almost make us happier if

1:51:25

there were ill intent, right? That we

1:51:27

can blame somebody who is evil and

1:51:29

bad and that's one of the reasons

1:51:31

why conspiracy theories, etc., are so tempting.

1:51:35

So I

1:51:38

don't know whether this quite adds up to anything,

1:51:40

but my point is that the

1:51:43

pessimistic scenario is kind of

1:51:45

the Matrix equilibrium where

1:51:48

your physical body is powering the system

1:51:51

and not really anything else.

1:51:53

That there is no individuality, just

1:51:55

existence and survival. Again,

1:51:58

this is supposed to be the pessimistic scenario. This is not necessarily

1:52:00

the scenario I think is going to be true, but

1:52:03

you can imagine that AI, gene

1:52:06

editing, brain computer interfaces, all

1:52:08

of these work to squeeze individual

1:52:13

human beings for all the system

1:52:15

can get out of them in

1:52:17

various ways. And not

1:52:20

because there are evil overlords

1:52:22

or supervillains trying to do

1:52:24

it, but because individuals responding

1:52:26

to their own personal self-interest

1:52:28

and the incentive structures of

1:52:30

the system they're embedded in

1:52:32

lead to that kind of

1:52:35

configuration. Can we prevent

1:52:37

it? You know, maybe, but

1:52:39

we'll have to try. It's not obvious

1:52:42

that we will prevent it. The

1:52:44

coming technological revolution could lead things

1:52:46

to be pretty bad if

1:52:48

we don't prevent it. So let's think

1:52:50

about the optimistic solution, shall

1:52:52

we? Because the optimistic scenario, again I'm

1:52:55

not going to tell you it's going

1:52:57

to happen, but the optimistic

1:52:59

scenario is kind of obvious. I don't know

1:53:01

if you remember the podcast with John Danaher

1:53:03

where he talked about

1:53:06

our coming automated utopia.

1:53:09

The optimistic scenario is that

1:53:11

all these technological

1:53:14

innovations leave

1:53:16

human beings free

1:53:19

to do whatever they want, right? At

1:53:22

the most basic level, the most sort

1:53:24

of obvious kind of hopeful scenario is

1:53:27

that computers and robots do all the

1:53:29

boring things, all the things we don't

1:53:31

want to do, all the jobs, all

1:53:34

the tasks that are not fulfilling to

1:53:36

human beings. Those are the ones that

1:53:38

we automate, that we give to

1:53:40

the AIs and the cyborgs and the

1:53:43

robots. Whereas we get

1:53:45

to enjoy life and create

1:53:48

things. So in the optimistic

1:53:50

scenario we somehow stabilize climate

1:53:52

change and the environment more

1:53:54

broadly. We invent sustainable methods

1:53:57

of energy and food production.

1:54:00

We make a specific effort not just to

1:54:02

produce food and sell it and make profit,

1:54:04

but to do so in sustainable ways that

1:54:06

leave the environment unscathed over

1:54:08

very long time periods. We

1:54:11

lower the demands of work. As I'm

1:54:13

recording this, it was just a very

1:54:16

few days ago that Bernie Sanders proposed

1:54:18

in Congress legislation that would

1:54:20

mandate a four-day work week. This

1:54:23

is something that has been bouncing around before. He's not

1:54:25

even the first person to propose legislation about it, but

1:54:28

the idea would be literally

1:54:31

less work per week. So still eight hours

1:54:34

of work per day, but only four days

1:54:36

a week. And the motivation behind this is

1:54:39

workers have become more productive. So they

1:54:41

can produce in four days what a

1:54:43

few decades ago they were producing in

1:54:46

five days' worth of work.
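The arithmetic behind that productivity claim is easy to check; here is a tiny illustrative Python calculation (the five-day and four-day figures are the ones just mentioned, everything else is generic, not data from any study):

```python
# Illustrative arithmetic for the four-day-week claim (not from any study).
old_days, new_days = 5, 4
required_boost = old_days / new_days - 1  # per-day productivity gain needed
print(f"Per-day productivity must rise by {required_boost:.0%}")  # -> 25%

# If output = days * per-day productivity, a 25% per-day gain exactly
# offsets the lost fifth day:
assert new_days * (1 + required_boost) == old_days
```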

1:54:48

And this is one of those schemes. It sounds

1:54:50

maybe a little utopian, a little overly utopian,

1:54:52

but I did look it up. I

1:54:56

Googled four-day work week and what is the

1:54:58

status of the empirical data about this. And

1:55:00

I was a little surprised

1:55:02

at how positive the

1:55:04

data are about the

1:55:06

four-day work week scheme.

1:55:10

Individual companies have tried it, and

1:55:12

it makes everything better, roughly speaking.

1:55:15

So the companies do not suffer loss,

1:55:17

at least in the data sets

1:55:19

that I was able to see.

1:55:21

They don't suffer loss of productivity

1:55:23

overall because people are more

1:55:25

energized to get their work done and

1:55:27

be more productive in those four days.

1:55:31

And they've stuck with it. They do pilot programs,

1:55:33

and they seem to take off. Everyone is happier

1:55:35

with a four-day work week. And then you can,

1:55:38

if you do things right, you can

1:55:41

enjoy yourself for a three-day weekend. Now,

1:55:44

of course, there are exceptions. It

1:55:46

depends on what kind of job you want to have.

1:55:49

My job would not be affected

1:55:51

very much by Bernie Sanders' proposal.

1:55:55

I work more than a five-day work week already,

1:55:57

but I like what I do. The

1:56:00

utopian vision? What if everyone was able

1:56:02

to like what they do for a living

1:56:04

as much as I like what I

1:56:06

do for a living? That's my version

1:56:09

of a utopian vision. So, technology

1:56:11

is going to create excess

1:56:13

value,

1:56:15

make us more productive, make

1:56:18

us increase wealth faster. I

1:56:20

think that's very, very plausible.

1:56:22

What are we gonna do with that

1:56:24

wealth? One option is to make people's

1:56:26

lives better by making them work

1:56:28

less. The four-day work

1:56:30

week, which, you know, it's

1:56:33

not going to pass, okay? Bernie

1:56:35

Sanders is very good at

1:56:37

symbolic actions. He's less good at

1:56:39

getting legislation passed. But I do

1:56:41

think that people are talking about

1:56:44

things like a four day work

1:56:46

week more now than they were

1:56:48

a few decades ago. Arguably we're

1:56:50

taking the possibility more seriously, and

1:56:52

some number of years down the

1:56:55

road, we will take it very

1:56:57

seriously. Another aspect

1:56:59

of utopian pictures is that we

1:57:01

use biotechnology to make us

1:57:04

healthier and happier. So we

1:57:06

don't create monsters; we create

1:57:08

happy, healthy human beings who all

1:57:10

live four hundred and ten years

1:57:13

and then painlessly die. Now

1:57:15

that's in tension, of course,

1:57:17

with this extractive business, because

1:57:20

one of the most obvious

1:57:22

success stories of attempts of

1:57:24

the large impersonal system to

1:57:26

extract wealth from individuals is

1:57:29

the healthcare system here in

1:57:31

the United States. A shocking

1:57:33

number of people live pretty

1:57:37

financially successful lives and die broke, because

1:57:39

in the last moments of their lives,

1:57:41

they spend a huge amount of money

1:57:43

on healthcare and then they die. And

1:57:46

the healthcare system kind of is in

1:57:48

favor of this. And when I say healthcare,

1:57:50

I don't mean doctors and nurses. Mostly

1:57:52

I mean insurance companies and hospitals and

1:57:54

whatever. This is a very, very complicated

1:57:56

story. I don't mean to be oversimplifying. The point

1:57:58

being that one way

1:58:01

to transfer wealth from individuals to

1:58:03

bigger conglomerations of people, corporations, or

1:58:05

what have you, is at their

1:58:07

weakest moments when they're not healthy,

1:58:10

when they're approaching death, and we

1:58:12

have to decide whether that's gonna

1:58:14

be something that we live with

1:58:17

or try to fix. But in

1:58:19

this utopian, optimistic scenario that I'm

1:58:21

giving you now, we

1:58:24

have vastly improved ways of dealing

1:58:26

with death and dealing with serious illness.

1:58:28

A very early podcast episode

1:58:30

I did was with Megan Rosenbloom,

1:58:33

who is one of the people who

1:58:35

works in the better death movement, or

1:58:37

whatever it's called, I forget, but the movement

1:58:40

is about facing up to the reality

1:58:42

that we're all going to die, and the

1:58:44

fact that here in the US,

1:58:46

and I think many other places in the

1:58:49

world, we're so scared and reluctant to

1:58:51

accept the fact that we're going to

1:58:53

die that we do

1:58:55

so badly, right? We do so

1:58:58

in very demonizing ways, emotionally, as

1:59:00

well as extracting our wealth and

1:59:02

things like that. So part

1:59:05

of the utopian scenario is

1:59:07

that we wisely choose to

1:59:09

use advances in biology and

1:59:11

medicine to make our lives

1:59:13

healthier while we're here and

1:59:15

make the transition from life

1:59:17

to death a little bit

1:59:19

more pleasant and bearable. Another

1:59:22

aspect of the utopian, optimistic scenario

1:59:25

is to not just give

1:59:27

us more free time, but

1:59:30

to take advantage of information

1:59:32

technology to find communities of

1:59:34

mutual creativity and support, right?

1:59:37

And this has been part of

1:59:39

the sort of internet utopian vision

1:59:41

for a long time. As much

1:59:43

as we worry about people falling

1:59:46

into epistemic bubbles and fracturing

1:59:48

their communities and so forth, you

1:59:50

have to admit that social

1:59:53

media and other related technologies make

1:59:55

it much easier for people

1:59:57

with quirky little individual

2:00:00

interests to find like-minded

2:00:02

people. For whatever

2:00:04

reason, I don't

2:00:06

think that we've taken advantage of

2:00:08

this capacity nearly as much as

2:00:11

we could. There is some of

2:00:13

it. There are online communities.

2:00:17

people who are interested in playing poker or people

2:00:19

who are interested in basket weaving or whatever can

2:00:21

find their peeps online. There's

2:00:25

individual success stories. Jennifer, my wife, and

2:00:27

I found each other by reading each

2:00:29

other's blogs, something that would not have

2:00:31

been able to happen before there were

2:00:34

blogs and social media. But

2:00:36

also we all know that this

2:00:39

sort of ability

2:00:42

to find micro communities

2:00:44

also leads people to

2:00:46

malevolent situations, cults and

2:00:48

conspiracies and whatever. So

2:00:51

how do

2:00:53

we ensure, how do we allow,

2:00:55

how do we give space for

2:00:58

this technology to give us

2:01:00

the optimistic, the good aspects

2:01:02

and prevent the

2:01:04

bad aspects? Yeah,

2:01:07

I don't know. I mean right

2:01:10

now I think that these

2:01:13

information technologies, again the data

2:01:15

suggests that they are more

2:01:17

alienating than uplifting. It's completely

2:01:19

possible in my mind that that's just

2:01:21

because we haven't yet adapted, right? The

2:01:23

pace of technological change is much faster

2:01:26

than the pace of psychological change or

2:01:28

the pace of our changes

2:01:30

in habits and how

2:01:32

we meet people, talk to people,

2:01:34

interact and so forth. So maybe

2:01:36

we're just slow. That's absolutely possible

2:01:38

in my mind. But maybe we're

2:01:41

just, we've opened

2:01:43

up a can of worms and the worms

2:01:46

are going to eat our brains in some

2:01:48

metaphorical way. So look,

2:01:50

I don't know. I told you

2:01:53

at the beginning this was not going to be

2:01:56

systematic. I was not going to tell you the

2:01:58

final answer to anything. I hope

2:02:00

that people have the conversation. I

2:02:02

hope that we take these issues

2:02:05

seriously. Technology can

2:02:07

increase value and productivity. We

2:02:10

don't know what we're gonna do with

2:02:12

that value and productivity. We don't know

2:02:15

whether the wealth will be distributed equitably

2:02:17

to lots of people. I'm

2:02:20

in favor of a universal basic income, for

2:02:22

example, but you know, that's expensive. I don't

2:02:24

imagine it's happening soon, but I can imagine

2:02:27

that it's gonna happen as part

2:02:29

of this big upcoming transition to a

2:02:31

different mode of human life. The

2:02:33

utopian vision is one in which so

2:02:36

much of the stuff that we've had

2:02:39

to reluctantly do as part of human

2:02:41

life is handed off to technology,

2:02:46

computers, semi-agential

2:02:51

programs and apps that don't mind

2:02:53

doing the dirty work, leaving

2:02:56

us human beings to live more

2:02:58

fulfilling lives. And by

2:03:00

fulfilling lives, I don't mean writing

2:03:03

poetry or composing symphonies. Maybe you can do

2:03:05

that if you want to do that. I'm

2:03:08

a huge believer that there are

2:03:10

much more straightforward

2:03:12

everyday ways to lead fulfilling

2:03:15

lives. Maybe your way of leading a fulfilling

2:03:17

life is to be a good

2:03:19

person to your family or to help others.

2:03:21

Maybe you really just want to play video

2:03:24

games or watch other people play sports or

2:03:26

something like that. Maybe you just want to

2:03:28

barbecue on weekends and watch movies at night.

2:03:31

All that is completely fine. And

2:03:34

I think that there will always

2:03:36

be heterogeneity in what people choose

2:03:38

to do as individual human beings.

2:03:41

The point is you can allow for it. Whatever

2:03:44

individuals' versions of their best

2:03:46

lives can be, in principle,

2:03:49

technology can give us the space to

2:03:51

let people do that without being worn

2:03:53

down by the

2:03:56

need to work, by worries about

2:03:58

their jobs. I haven't

2:04:00

said this out loud, sorry, because it's just so implicit

2:04:02

in my mind, but of

2:04:05

course technological change

2:04:07

is going to get rid

2:04:09

of many very common

2:04:11

jobs. That's always

2:04:14

been the case with technological

2:04:16

change. People used to be

2:04:18

horseback riders, you know, horse

2:04:21

and buggies and things like that. There were a

2:04:23

lot more candle makers back in the day. Jobs

2:04:26

change over time. That's going to happen.

2:04:30

So if your plan to adapt to

2:04:32

the future is to

2:04:34

invent artificial ways of keeping

2:04:36

the same old occupations in the same

2:04:38

numbers that we used to have, I

2:04:41

think you are doomed to failure. But

2:04:45

it's the fact that things are

2:04:47

changing that is really problematic. If

2:04:50

there is a new equilibrium on the

2:04:52

other side of the singularity, then we

2:04:54

can settle into a set of either

2:04:56

jobs or no jobs, lack of jobs,

2:04:58

if we're completely supported by society

2:05:01

as a whole, and that will be

2:05:03

better. That's part of the optimistic scenario.

2:05:06

What's so alienating right now

2:05:08

is the uncertainty. Like, you don't know if

2:05:10

you're going to have your job 10 years,

2:05:12

maybe an entire industry is being disrupted. That

2:05:15

is legitimately difficult to deal

2:05:17

with. An economist can tell you,

2:05:19

well, there'll be other jobs, but

2:05:22

as an actual human being, changing

2:05:24

jobs or finding that the career

2:05:27

that you had prepared yourself for

2:05:29

over the course of decades now

2:05:31

is no longer viable, that is

2:05:33

really difficult. And that is a

2:05:36

real human cost of technological change.

2:05:39

And we're facing it right now,

2:05:41

absolutely and undoubtedly. So the question

2:05:43

is, does that continue forever? Or

2:05:47

can we adapt these technological changes

2:05:49

to get rid of that uncertainty,

2:05:51

that lack of ability to plan

2:05:53

more than a few years ahead?

2:05:57

I don't know whether we have... communally

2:06:02

the wit and willpower to

2:06:04

invent the equitable system, the

2:06:07

optimistic scenario. We're

2:06:11

not trained for this. This is

2:06:13

humanity facing a situation it has

2:06:15

never faced before, right? The scale

2:06:17

of the problem is completely

2:06:20

unprecedented and people don't

2:06:22

always make wise choices. So I'm

2:06:24

not optimistic about the optimistic scenario.

2:06:26

The optimistic scenario is there for

2:06:28

the taking, I think. If

2:06:30

we choose to, there

2:06:32

will be hiccups along the way, no doubt. We'll make

2:06:35

some bad choices and need to fix them, but we

2:06:39

need to collectively decide

2:06:42

to avoid the pessimistic possible

2:06:44

outcomes and work for the

2:06:46

more optimistic ones. Okay,

2:06:49

so there's a whole bunch of things that I didn't mention. You

2:06:52

know, I promised to talk about technological changes

2:06:54

and there's other kinds

2:06:56

of changes too, right? There's political changes.

2:06:58

I've talked about democracy too much already

2:07:00

on the podcast. I don't need to

2:07:03

remind you of my worries about that. I

2:07:06

did not talk that much about the possibility of being

2:07:08

uploaded to a computer because I don't think that's

2:07:11

very interesting. Moving

2:07:13

human beings into space, you

2:07:15

know, expanding humanity off

2:07:18

of the planet, I think, is

2:07:20

potentially a big one. I think that's

2:07:22

potentially a very big transition. I

2:07:24

just don't quite see it happening

2:07:27

realistically on the same time scales

2:07:29

as these other technological changes that

2:07:31

we're facing right now.

2:07:34

So for the moment, it

2:07:36

seems, and I could be wrong about this, but at

2:07:38

the moment it seems sensible to me to

2:07:40

focus on the changing life here on Earth.

2:07:42

So that's what I tried to do. Anyway,

2:07:46

I hope you enjoyed and were given some

2:07:48

thoughtful moments in this little exploration of the

2:07:50

possibilities. The only thing we can be absolutely

2:07:53

sure of is the future is going to

2:07:55

be different, which by the way is new by

2:07:58

itself, right? I mean the

2:08:00

future was always different throughout human history, but only

2:08:02

by a little bit. From generation

2:08:04

to generation, you could imagine that life

2:08:07

was more or less the same. You

2:08:10

and I right now live in a

2:08:12

world where that's not true anymore.

2:08:14

We can absolutely not imagine that the world

2:08:16

100 years from now is going to be

2:08:18

more or less the same as the world

2:08:20

now. We are not

2:08:22

equipped. We are not trained. We

2:08:25

are not educated or practiced to

2:08:27

think about this very real possibility.

2:08:29

I'm sure that my own thoughts are

2:08:32

sort of hopelessly scattered and naive and

2:08:34

incomplete. I'm sure that 24 hours

2:08:37

from now, much less a year from now, I'm going

2:08:39

to be thinking, oh, why didn't I say that? Or,

2:08:41

oh, it was so silly that I said the other

2:08:43

thing. That's okay. This

2:08:45

is absolutely meant explicitly as

2:08:47

a tentative exploration. I

2:08:50

hope it's giving you some food

2:08:52

for thought, and I hope that

2:08:54

we collectively choose the wise optimistic

2:08:56

path. Thanks.
