167 - Eliezer is Wrong. We’re NOT Going to Die with Robin Hanson

Released Monday, 17th April 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

One of the most striking features of our world are the mechanisms

0:03

we use to keep the peace and to coordinate

0:05

among all these divergent conflicting things.

0:08

And one of the moves that often AI people

0:10

make to spin scenarios is just to assume

0:12

that AIs have none of that problem. AIs

0:15

do not need to coordinate. They do not have conflicts

0:17

between them. They don't have internal conflicts. They

0:20

do not have any issues in how to organize

0:22

and how to keep the peace between

0:24

them. None of that's a problem for AIs

0:27

by assumption. They're just these other

0:29

things that have no such problems. And

0:30

then, of course, that leads to scenarios

0:32

like Then They Kill Us All. Welcome

0:37

to Bankless, where we explore the frontier of internet

0:39

money and internet finance and also

0:41

AI. This is how to get started, how to get

0:43

better, how to front run the opportunity. This

0:45

is Ryan Sean Adams. I'm here with David Hoffman, and

0:48

we're here to help you become more bankless.

0:50

Guys, we promised another AI episode

0:53

after episode with Eliezer. Well, here

0:55

it is. Here's the sequel. The last episode

0:57

with Eliezer Yudkowsky we titled, correctly,

1:00

We're All Gonna Die, because that's basically what

1:03

he said. I left that episode with

1:05

a lot of misgivings. Existential

1:08

dread. Yeah, existential dread. It

1:10

was not good news in that episode, and I

1:12

was having difficulty processing it. But

1:15

David and I talked, and we knew we had to have

1:17

some follow-up episodes to tell the full story,

1:19

bankless style, and go on the journey of

1:22

AI, its intersection with our lives,

1:24

with the world, and with crypto. So here it is. This

1:26

is the answer to that. This is Robin Hanson

1:29

on the podcast today. Let me go over a few takeaways.

1:31

Number one, we talk about why Robin thinks

1:33

Eliezer is wrong. We're not all gonna die

1:36

from artificial intelligence, but we might

1:38

become their pets. Number two, why

1:40

we're more likely to have a civil war with AI

1:42

rather than being eaten by one single artificial

1:45

intelligence. Number three, why Robin

1:47

is more worried about regulation of AI than

1:50

actual AI. Very interesting. Number

1:52

four, why alien civilizations spread

1:54

like cancer. This is also related

1:56

to AI and super interesting. Number

1:59

five, finally.

1:59

we get to what in the world does

2:02

Robin Hanson think about crypto?

2:04

David, why was this episode significant

2:06

for you? Robin Hanson is such a

2:08

great thinker. He's absolutely a polymath

2:11

and really like Eliezer progresses

2:14

in his thoughts in a very linear,

2:16

logical fashion. So he's easy

2:19

to follow along with. And so

2:21

the first half of this episode, maybe the 45 minutes, 50

2:23

minutes is all about just

2:26

the AI alignment debate and Eliezer

2:28

versus Hanson, which is

2:29

a debate that has actually been going on

2:32

for many, many years now. This is not the... Well, over

2:34

a decade. Yeah, you're right. This is not the first time that

2:36

Eliezer has heard about Robin Hanson

2:39

or Robin Hanson has debated Eliezer. This is an ongoing

2:41

saga. And so this is just course

2:43

material for Robin Hanson. And so we

2:46

really focus on this AI alignment problem and

2:49

how these thinkers think

2:51

that AI will develop and progress here

2:53

on planet earth and how they

2:56

will in friendly or unfriendly ways

2:58

ultimately collide with humanity. So that's the

2:59

first half of this episode. The second half

3:02

of this episode, I think is when this gets really, really

3:04

interesting. If you just listen to the first half

3:06

of this episode, you would just think like, oh, this is the

3:08

other half of the conversation to the AI debate,

3:11

which it is. The second half connects

3:13

this to so many more rabbit holes

3:15

and so many more topics of conversation

3:18

that are actually, I would say deeply

3:20

ingrained to bankless content themes,

3:23

the themes of competition versus

3:25

coercion, the themes of exploring frontiers,

3:28

the theme of Moloch and the prisoner's dilemma

3:30

and how

3:31

things coordinate across species. And

3:34

so we connect

3:35

AI alignment to Robin Hanson's famous

3:38

idea that he calls grabby aliens.

3:40

If you haven't heard about grabby aliens,

3:42

you're in for a treat. So this goes from

3:45

what is a simple counter argument to

3:47

a debate that we've had to a

3:49

multifaceted exploration

3:52

that is just so cursory of many, many deep

3:54

subjects that I hope to explore further

3:56

on bankless. Yeah. And honestly, David, I'm dying

3:58

to record the debrief for this AI episode,

3:59

because I want to get your take on

4:02

this episode and contrast it.

4:04

You can see how giddy I was in the second half of that. I know.

4:06

And I want to contrast it with our Eliezer

4:08

episode and how these two thinkers think

4:10

and who do you think has the stronger case. The

4:13

debrief episode is the episode David and I

4:15

record after the episode where we just talk about what

4:17

just happened, give our raw, unfiltered thoughts.

4:19

So we're about to record that now. If you are

4:21

a bankless citizen, then you

4:24

have access to that right now. If you'd like

4:26

to become a citizen, click the link in the show notes and

4:28

you'll get access to our premium RSS

4:31

feed where you'll have access to that. Also,

4:33

this episode will become a collectible next

4:36

Monday, I believe. I'm collecting this episode

4:38

so hard. Me too. I've got

4:40

the Eliezer episode in my collections. I'm

4:43

also collecting this. We release episode

4:45

collections for our key episode of

4:47

the week every Monday. The mint time is 3 p.m.

4:50

Eastern and whatever time zone you're in, you

4:52

have to convert that. That's it. We're

4:54

going to get right to the episode with Robin Hanson.

4:56

But before we do, we want to thank the sponsors

4:59

that made this possible, including our

5:02

favorite crypto exchange, Kraken,

5:04

our recommended exchange for 2023. Go

5:07

set up an account.

5:08

Kraken has been a leader in the crypto industry

5:10

for the last 12 years. Dedicated to

5:12

accelerating the global adoption of crypto,

5:15

Kraken puts an emphasis on security, transparency,

5:17

and client support, which is why over 9

5:20

million clients have come to love Kraken's

5:22

products. Whether you're a beginner or a pro,

5:24

the Kraken UX is simple, intuitive, and

5:26

frictionless, making the Kraken app a great

5:28

place for all to get involved and learn

5:30

about crypto. For those with experience, the redesigned

5:33

Kraken Pro app and web experience is completely

5:35

customizable to your trading needs, integrating

5:38

key trading features into one seamless

5:40

interface. Kraken has a 24-7, 365

5:42

client support team

5:44

that is globally recognized. Kraken support is

5:46

available wherever, whenever you need them,

5:49

by phone, chat, or email. And for all of

5:51

you NFTers out there, the brand new Kraken

5:53

NFT Beta platform gives you the best

5:56

NFT trading experience possible. Rarity rankings,

5:58

no gas fees, and the ability to buy an

6:00

NFT straight with cash. Does your crypto

6:03

exchange prioritize its customers the way that

6:05

Kraken does? And if not, sign up with Kraken

6:07

at kraken.com slash bankless. Hey,

6:09

Bankless Nation. If you're listening to this, it's

6:12

because you're on the free Bankless RSS

6:14

feed. Did you know that there's an ad-free version

6:16

of Bankless that comes with the Bankless Premium subscription?

6:19

No ads, just straight to the content. But

6:21

that's just one of many things that a premium

6:23

subscription gets you. There's also the token

6:26

report, a monthly bullish, bearish, neutral

6:28

report on the hottest tokens of the

6:29

month. And the regular updates from the token

6:32

report go into the token Bible, your

6:34

first stop shop for every token worth

6:36

investigating in crypto. Bankless Premium also

6:38

gets you a 30% discount to the permissionless

6:41

conference, which means it basically just pays for itself.

6:43

There's also the airdrop guide to make sure

6:45

you don't miss a drop in 2023. But

6:48

really, the best part about Bankless Premium

6:50

is hanging out with me, Ryan, and the rest of

6:52

the Bankless team in the Inner Circle Discord

6:55

only for premium members. Want the alpha?

6:57

Check out Ben the Analyst's DGen Pit, where

6:59

you can ask him questions about the token report. Got a

7:01

question? I've got my own Q&A room

7:03

for any questions that you might have. At Bankless,

7:06

we have huge things planned for 2023, including

7:09

a new website with login with your Ethereum

7:11

address capabilities, and we're super excited to ship

7:13

what we are calling Bankless 2.0 soon

7:16

TM. So if you want extra help exploring

7:18

the frontier, subscribe to Bankless Premium.

7:20

It's under 50 cents a day and provides a wealth

7:23

of knowledge and support on your journey west.

7:25

I'll see you in the Discord. The Phantom Wallet

7:27

is coming to Ethereum. The number one wallet

7:29

on

7:29

Solana is bringing its millions of users

7:32

and beloved UX to Ethereum and Polygon.

7:34

If you haven't used Phantom before, you've been missing

7:36

out. Phantom was one of the first wallets to pioneer

7:39

Solana staking inside the wallet, and will

7:41

be offering similar staking features for Ethereum

7:43

and Polygon. But that's just staking. Phantom

7:45

is also the best home for your NFTs.

7:47

Phantom has a complete set of features to optimize

7:50

your NFT experience. Pin your favorites,

7:52

hide your uglies, burn the spam, and

7:55

also manage your NFTs sale listings

7:57

from inside the wallet.

7:59

Phantom is of course a... multi-chain wallet, but it makes

8:01

chain management easy, displaying your

8:02

transactions in a human readable format

8:04

with automatic warnings for malicious transactions

8:07

or phishing websites. Phantom has already saved over 20,000

8:09

users from getting scammed or hacked. So

8:12

get on the Phantom waitlist and be one of the first

8:14

to access the multi-chain beta. There's

8:16

a link in the show notes or you can go to phantom.app.waitlist

8:20

to get access in late February.

8:22

Bankless Nation, we are excited to introduce you to

8:24

Robin Hanson. He is a professor of economics

8:26

at George Mason University and

8:29

a research associate at the Future of Humanity

8:31

Institute at Oxford.

8:32

This is an interdisciplinary research

8:35

center that investigates big-picture

8:37

questions about humanity and its prospects. And

8:40

I think explaining exactly who Robin

8:43

is and what he's doing is not a trivial task because

8:45

he is a polymath and certainly spans many things.

8:47

He's provided many different mental models

8:50

across various disciplines, but

8:52

I would not call him conventional by any

8:54

means. And I'm sure, Bankless listener,

8:56

you will see what we mean here today. Robin,

8:59

welcome to Bankless. Glad to be here. I think I

9:01

can try to explain the kind of weird that

9:02

I am. Oh, oh please. Yeah, go

9:05

ahead. What's your kind of weird? I can't explain

9:07

the kind of weird I am. Tell us how are you weird. So

9:09

I think I'm conventional on methods

9:12

and weird on topics. So

9:17

I tend to look for neglected

9:19

important topics where

9:21

I can find some sort of angle, but I'm usually looking for

9:23

a pretty conventional angle that is some

9:25

sort of usual tools that just haven't been

9:27

applied

9:29

to an interesting important topic. So

9:31

I'm not a radical about theories

9:33

or methods, but I am about

9:36

topics. So you use things like science

9:38

and math and statistics and all

9:41

of those normal non-radical things. Right.

9:43

I've spent a lifetime collecting all these usual

9:46

tools, all these systems really, and

9:48

I'm more of a polymath in that I'm trying to combine

9:51

them on neglected important topics.

9:54

So if you go to a talk where everybody's arguing and

9:56

you pick a side, I mean the chances you're

9:58

right are kind of small in the sense that there's all

10:00

these other positions. Maybe

10:03

you'll be right, but probably you'll be wrong because you're picking

10:06

one of these many positions. If you go pick a

10:08

topic where nobody's talking about it, you just

10:10

say anything sensible,

10:12

you can probably be right.

10:14

We've, I think, recently run into

10:17

somebody who follows that

10:19

path of sorts, somebody who thinks

10:21

very logically and rationally, but is applying

10:24

it to more unique frontiers of

10:26

the place that humanity is. And that

10:28

is our recent episode with Eliezer, who

10:31

followed a decently logical path that

10:33

was relatively easy to follow, that

10:36

unfortunately led us into a dead

10:38

end for humanity. And

10:42

so it was something that me and Ryan, as

10:44

co-hosts

10:44

of this podcast, but then also

10:46

many of the listeners, felt troubled by,

10:48

because Eliezer was able to guide

10:50

us in a very simple and logical path

10:53

onto the brink. And so we're

10:55

hoping to continue that conversation with you, Robin,

10:58

as well as being able to explore some new frontiers. Yeah,

11:00

Robin, I'm just wondering if we could just wade right into the deep

11:02

end of the pool here, because what happened

11:05

is basically Eliezer came on our podcast,

11:07

we thought we were going to talk about AI and safety

11:09

and alignment, all of these things. We know he talks about

11:12

that a lot, and we thought we were going to tie that onto

11:14

crypto.

11:14

What ended up happening midway through

11:16

that podcast, Robin, is I got an existential

11:19

crisis. So did David. The rest

11:21

of the agenda seemed meaningless and

11:23

unimportant, because here is Eliezer telling

11:26

us basically, that the AI was

11:28

imminent. He didn't know whether it would happen in two

11:30

years, in five years, in 10 years, in 20 years, but

11:33

he knew the final destination,

11:35

which is that AIs would kill all

11:37

of humanity and that we didn't have a chance. And

11:39

basically, and I'm not being hyperbolic here,

11:42

Robin. I know you haven't had a chance to go through that episode, but

11:44

he basically says, you know, spend time with

11:46

your loved ones, because you do

11:48

not know how much time you actually have. And

11:51

so this left like

11:53

me and I think many bankless listeners

11:55

on kind of a cliffhanger of like, oh my God,

11:57

are we all going to die?

11:59

David tried to talk to me after that episode. He's like, Ryan, it's okay.

12:02

Like, you know, but we knew we also

12:04

had to like find someone who

12:07

could give us another interpretation of

12:09

what is going on with AI. And Robin, we

12:11

have ChatGPT-4. It

12:14

looks incredibly sophisticated. It

12:16

looks like it's advancing at breakneck speed

12:19

and we're worried about the scenario. So

12:21

when Eliezer Yudkowsky says, we're

12:23

all going to die. What do you make of that?

12:26

Do you think we're all going to die? So

12:28

AI inspires a lot

12:31

of creativity regarding fear.

12:35

And I think honestly, most people

12:37

as they live their lives, they aren't really thinking about

12:39

the long-term trajectory of civilization

12:42

and where it might go. And if you

12:44

just make them think about that, I think just

12:46

many people are able to see

12:48

scenarios they think are pretty scary

12:51

just based on, you know, projection

12:54

of historical trends toward the future and

12:56

things changing a lot. So

12:58

I want to acknowledge there are some

13:00

scary scenarios if you just think about things

13:03

that way.

13:04

And I want to be clear what those are. But

13:06

I want to distinguish that from the

13:08

particular extra fear you might have about

13:11

AI killing us

13:13

all soon. And

13:16

I want to describe the particular

13:18

scenario Eliezer has in mind, as

13:21

I understand it, as a

13:23

very particular scenario, you have to pile

13:25

on a whole bunch of assumptions together to get

13:27

to a particular bad end.

13:29

And I want to say those

13:31

assumptions seem somewhat

13:33

unlikely and piling them all together makes

13:36

the whole thing seem quite unlikely. But

13:39

nevertheless, you just think

13:41

about the long-term trajectory of civilization,

13:43

it may well go places that

13:46

would scare you if you thought about that. And

13:48

so that'll be the challenge for us to separate

13:50

those two. So which one would you like to go with first?

13:53

I would like to start with understanding

13:55

what you think his assumptions are. All

13:58

right, let's do that. And maybe starting there. Okay,

14:00

so the scenario is you

14:02

have an AI system, like

14:05

some coherent system, it's got an owner

14:07

and builder, people who sponsored it, who

14:09

have some application for it, who are watching

14:12

it and using it and testing

14:14

it

14:15

in the way we would do for any AI system,

14:17

right? There's an AI system. And

14:20

then somewhere along the line,

14:23

the system decides to try

14:25

to improve itself. Now this isn't something most

14:27

AI systems ever do. And

14:29

people have tried that and it usually doesn't work very well.

14:33

So usually when we improve AI systems,

14:35

we do it another way. So we train them on our data,

14:37

get them more hardware, use a new algorithm. But

14:39

the hypothesis here is we're going to train,

14:42

this system is going to be assigned the task, figure

14:45

out how to improve yourself.

14:47

And furthermore, it's going

14:49

to find a wonderful way to do that.

14:52

And the fact that it found this wonderful way makes

14:55

it now special compared to all the other AI

14:57

systems. So this is a world with lots of AI systems. This

14:59

is just one, it's not the most powerful or the most

15:02

impressive or interesting, except for

15:04

this one fact that

15:06

it has found a way to improve itself.

15:09

And this way that it can improve itself is

15:11

really quite remarkable. First of all,

15:14

it's a big lump. So most innovation, most

15:16

improvements in all technology is lots of little things.

15:19

You gradually learn lots of little things and you get better.

15:21

Once in a while we have bigger lumps. And that

15:23

scenario here, there's a really huge lump.

15:27

And this huge lump means the system can all

15:29

of a sudden be much better at

15:31

improving itself than not only it could

15:34

before, but in essence than all

15:37

the other systems in the world put together. It's

15:39

really quite an achievement, this lump it

15:41

finds and a way to improve itself.

15:44

And in addition, this way to improve itself has

15:46

two other unusual features about innovations.

15:49

First, it's a remarkably broad innovation,

15:52

applies across a very wide range of tasks. Most

15:54

innovations we have in how to improve things are relatively

15:56

narrow. They let it improve in a narrow

15:58

range of things, but not...

15:59

over everything. This innovation lets you

16:02

improve a really wide range of

16:04

things. And in addition,

16:06

most innovations you have let you improve things

16:08

and then the improvements run out until you'll

16:11

find some other way to improve things again. But this

16:13

innovation doesn't run out. It allows

16:15

this thing to keep improving over many orders

16:18

of magnitude.

16:19

You know, maybe 10 orders of magnitude or something. Like

16:21

it's just really a huge

16:24

innovation that just keeps lasting, just keeps playing

16:27

out. It just keeps improving. It doesn't run into

16:29

errors while it improves itself. Even

16:31

then as it discovers errors, it fixes

16:33

those. Or it doesn't run into obstacles or

16:36

things that slow it down and then get stuck for a long

16:38

time. It just keeps working. Okay.

16:41

And whatever it does

16:43

to pursue these innovations, these self

16:46

modifications will change it. They

16:48

probably will change its software configuration.

16:51

Maybe it's relative use of resources, the kinds

16:53

of things it asks for, how it spends its

16:55

time and

16:57

money that it has doing things,

16:59

the kind of communication it has, you

17:01

know, it's changing itself and

17:05

its owners, builders, the

17:07

ones who, you know, sponsored it and made it and

17:09

have uses for it, they don't

17:12

notice this at all. It

17:14

is vastly improving itself and its owners is just

17:16

oblivious. Now, initially it's

17:19

just some random obliviousness.

17:21

Now at some point the system will get so capable, maybe

17:23

it could figure out how to hide its new

17:26

status and its new trajectory. And

17:29

then it might be more plausible that it succeeds

17:32

at that if it's now very capable at hiding things.

17:34

But before that, it was just doing

17:36

stuff, improving itself and

17:38

its owner managers were just oblivious. Either

17:41

they saw some changes, they didn't care, they

17:45

misinterpreted the changes,

17:47

they had some optimistic interpretation

17:49

of where that could go, but basically they're

17:51

oblivious. So if they knew it was actually improving

17:54

enormously,

17:55

they could be worried, they could, like, stop it, maybe

17:58

pause it, try variations, try to test it

18:00

to make sure they understand it, but they're not doing that. They are

18:03

just oblivious. And then the system

18:05

reaches the point where it can either hide

18:07

what it's doing or just wrest

18:09

control of itself from these owners,

18:12

builders.

18:13

And in addition, like if it were to wrest control of

18:15

itself, presumably they would notice that. But

18:18

then, and they might try to retaliate against it

18:21

or recruit other powers to lock

18:23

it down. But

18:25

by assumption, it's at this point able

18:27

to resist that. It is powerful enough

18:30

to either hide what it's doing or

18:32

just wrest control and resist attempts

18:34

to control it,

18:36

at which point then it continues to improve, becoming

18:40

so powerful that it's more powerful than all

18:42

the other, everything in the world, including

18:44

all the other AIs.

18:47

And then soon afterwards, its

18:50

goals have changed. So

18:53

during this whole process, I mean, two

18:55

things have to have happened here.

18:57

One is that it has to become an agent. That

18:59

is, most AI systems aren't agents. They

19:01

don't think of themselves as, I'm this person

19:03

in the world who has this history and these goals, and

19:06

this is my plan for my future. They

19:08

are tools that do particular things.

19:10

Somewhere along the line, this one became

19:12

an agent. So this one says,

19:14

this is what I want, and this is who I am, and this

19:16

is how I'm going to do it. And

19:19

in order to be an agent, it needs to have some goals.

19:22

And during this process by which it improved, at

19:25

some point it became an agent, and then at

19:28

some point, its goals changed

19:30

a lot, not just a little. In

19:33

effect now, so any system we can think

19:35

in terms of its goals, if it takes actions among a

19:37

set of options, we can interpret those

19:39

actions as achieving some goals versus others.

19:42

And for any system, we can assign it some

19:44

goals, although the range of those goals might be

19:46

narrow if we only see

19:48

a range of narrow actions. So

19:51

we might not be able to interpret goals more generally.

19:54

So if we have an AI system that is a taxi

19:56

driver, we'll be able to

19:58

interpret the various routes it takes people on and

20:01

how carefully it drives in terms of some overall

20:03

goals, with respect to how fast it gets people there

20:05

and how safely it does, but maybe we

20:07

can't interpret those goals much

20:10

more widely as what would it do if it were

20:12

a mountain climber or something because it's not

20:14

climbing mountains, right?

20:15

But still, with respect to a

20:17

certain range of activities, it had some

20:19

goals, and then by assumption, basically,

20:22

in this process of growing, its

20:24

goals just become, in effect, radically

20:27

different. And then

20:29

by assumption, radically different goals

20:31

through this random process are just arbitrarily

20:35

different. And then

20:37

the final claim is arbitrarily different goals. When

20:39

they look at you as a human, you're

20:42

mostly good for your atoms. You're

20:44

not actually useful for much anything else at

20:46

some point, and then you are recruited

20:48

for your atoms, i.e. destroyed.

20:51

And that's the end of the scenario here,

20:54

where we all die. So to

20:56

recall the set of assumptions we've piled on together,

21:00

we have an AI system that starts out with some

21:02

sort of owner and builder.

21:04

It is assigned the task to

21:07

improve itself. It finds this fantastic

21:09

ability to improve itself, very lumpy,

21:11

very broad, works over many orders

21:14

of magnitude. It applies this

21:16

ability. Its owners do not notice

21:18

this

21:20

for many orders of

21:22

magnitude of improvement, presumably, at

21:24

some point. Or, and it happens really, really quickly,

21:27

potentially. Well, that would be, presumably, the most likely

21:29

way you could imagine the owner's not noticing,

21:32

perhaps. But the fundamental thing

21:34

is the owners don't notice. If it was slow and the owners didn't notice,

21:36

then the scenario still plays out.

21:38

So the key reason we might postulate

21:41

fast is just to create the

21:43

plausibility that the owners don't notice.

21:46

Because otherwise, why

21:48

wouldn't they notice? But that's

21:50

also part of the size of this innovation,

21:52

right? We're already improving AI

21:54

systems at some rate. And

21:56

so, if this new method of

21:58

improvement was only improving AI systems at

22:00

the rate they're already improving, then

22:03

this AI system won't actually stand out compared

22:05

to the others. In order for this to stand out, it'll

22:08

have to have a much faster rate of improvement

22:11

to be distinguished from the others, and

22:13

this will then have to be substantially

22:16

faster. So that

22:18

would set the time scale there for what it would

22:20

be to be in the scenario, so it both

22:22

needs to be faster than the rate of growth of other AI

22:24

systems at the time, substantially, and fast

22:27

enough that the owner builders don't

22:29

notice this radical change

22:31

in its agenda. Priorities,

22:35

activities, they're just not noticing

22:37

that.

22:37

And then they don't notice it to

22:40

the point where this thing acquires the ability to

22:43

become an agent, have

22:45

goals, hide

22:47

itself, or you

22:49

know, free itself and

22:52

defend itself. And then

22:55

the last assumption

22:57

is that its goals radically change, even

23:00

if it was friendly and cooperative with humans initially,

23:02

which presumably it was. Later

23:04

on, it's nothing like that. It's just a random

23:07

set of goals at which point then,

23:10

by assumption, now it kills us all. So

23:13

the question is, how plausible are

23:16

all those assumptions? And

23:17

I, so we could walk through analogies

23:19

and prior technologies and histories in the

23:22

last few centuries, and I think foom

23:24

advocates like Eliezer will

23:26

say, yeah, this is unusual

23:28

compared to recent history, but

23:30

they're going to say recent history is irrelevant for

23:32

this. This is nothing like recent

23:35

history. The only things that are really relevant

23:37

comparisons here is, you know, the rise of the human

23:39

brain and maybe the rise of life

23:41

itself and everything else is irrelevant. So then

23:43

they will

23:44

reject other recent

23:47

few centuries technology trajectories

23:49

as not relevant analogies. What did

23:51

you just call Eliezer, Robin? What advocate?

23:54

A foom advocate? Foom. Foom. What is foom? Foom

23:56

is just another name for this explosion that we've been

23:58

talking about.

23:59

Common word to describe it. Yeah,

24:02

the super intelligence explosion. Kurzweil

24:04

takes stuff like that kind of thing. Singularity,

24:07

that sort of thing. So singularity is a different

24:09

concept than foom. Different concept. In

24:11

some sense, a foom is a kind of singularity, but not all

24:13

singularities are fooms. Robin, thank you for

24:15

guiding us, because we're still learning in this. Bankless,

24:18

we had never done an AI podcast previously.

24:21

We covered a lot with crypto and coordination economics, and

24:23

now we're doing this AI podcast. I feel like

24:25

we just got punched in the face. So we're

24:28

articulating. We're walking slower.

24:29

Your rearticulation of Eliezer's assumptions

24:32

is, I think, very helpful to me. And

24:34

so we want to get to

24:36

why you think those assumptions are unlikely

24:39

to be true. But I do think you are right.

24:41

In the episode with him, he basically

24:43

sort of painted this fantastical

24:45

story of these assumptions. And he basically

24:48

said, yeah, those assumptions, the things that you're

24:50

describing, I think. And I don't want to put words in his mouth, so

24:52

maybe this is what I was hearing him say. You're

24:55

just describing intelligence, Robin. That's

24:57

what intelligence does. And I'll give you

24:59

exhibit A. It's called human beings. And

25:02

I'll give you the algorithm. It's called evolution

25:04

gradient descent over millions

25:06

of years

25:06

and hundreds of millions of years.

25:09

And we end up with, like, an

25:11

intelligence that, relative to

25:13

the animal kingdom, is a superintelligence that

25:16

exerts its dominance and its will has

25:18

changed from just procreating and spreading

25:20

its genes and memetic material to

25:23

something that evolution would have never, the

25:25

evolutionary algorithm would have never envisioned

25:28

it actually doing. And so I think

25:30

maybe what I was hearing the criticism would

25:33

be like, we already have an example of this,

25:35

Robin. It's called intelligence

25:36

and it's called humans. What do you think about this? So

25:40

as I said, if we just think about

25:42

the long run future we're in, we can

25:44

generate some scenarios of concern

25:48

independent of this particular set of

25:50

assumptions that Eliezer has set up.

25:52

So, you know, the scenario

25:55

where humans arise and then humans change

25:57

the world, I guess you could imagine

25:59

as scary to evolution, if

26:02

evolution could be scared, but evolution doesn't

26:04

really think that way.

26:07

But certainly you can see

26:09

that in the long run, you should expect to

26:11

see a lot of change and

26:14

a lot of ways in which your

26:16

descendants may be quite different from you

26:19

and have agendas that are different

26:21

from you and yours. I

26:23

think that's just a completely reasonable expectation

26:26

about the long run.

26:27

So we could talk about that in general

26:30

as your fear.

26:31

I just want to distinguish that from this particular

26:33

set of assumptions that were piled

26:36

on as the foom scenario. Because the foom

26:38

scenario is like something that might happen in the next

26:40

few years, say, and

26:42

it would be a very specific event. A

26:44

particular computer system suffers this particular

26:47

event and then a particular thing happens.

26:50

That's a much more specific thing to be worried

26:52

about than the general trajectory of

26:55

our descendants into the long-term future.

26:58

So again, which one would you like to talk about? I'm

27:01

trying to summarize really just the perspective

27:03

differences here. And I know you've had this debate

27:05

with Eliezer before, so this is like review

27:08

for you. I think Eliezer's conclusion

27:11

is that while the future is

27:13

unwritten and the paths of

27:15

our future can be many and multivariate

27:17

and we can have different possible outcomes,

27:20

Eliezer is like, well, all

27:22

roads lead to the super intelligence

27:24

taking over. And I think just to summarize,

27:27

your position is like, that

27:28

is a possible path and

27:30

it is something to consider and

27:33

it is still less likely than the

27:35

many, many, many other possible paths that

27:38

are also perhaps in

27:41

aggregate much more likely. Is that a fair

27:43

summary of your position? So let's talk about

27:45

this other

27:46

more general framing and argument. So

27:48

we could just say in history,

27:51

humanity has changed a lot, not

27:54

just a little a lot. We've not

27:56

just changed some particular

27:58

technologies. We've changed our culture in

28:01

large ways.

28:02

We've changed the sort of basic values

28:04

and habits that humans have. And

28:07

our ancestors from 10,000

28:09

or 100,000 or a million years ago, if they looked at us and saw what

28:11

we're doing,

28:15

it's not at all clear they would

28:17

embrace us as, you

28:20

know, their proud descendants, who they are proud of

28:22

and happy to have replaced them. That's

28:25

not at all obvious. You know, even

28:27

just in the last thousand years or even shorter,

28:29

we

28:30

have changed in ways in which we have repudiated

28:33

many of our ancestors' most deeply held

28:35

values. We've rejected their religions,

28:38

we've rejected their patriotism, we've

28:40

rejected their sort of family

28:42

allegiance and family clan sort of allegiances.

28:45

We have just rejected a lot

28:48

of what our ancestors held most dear.

28:51

And that's happened

28:53

over and over again through

28:55

a long-term history. That is, each

28:58

generation we have tried to train our children

29:01

to share our culture. That's just a common thing humans

29:03

do, but our children have

29:05

drifted away from our cultures and

29:08

continue to just be different.

29:12

And, you know, over a million years, humans

29:15

fundamentally ourselves changed. And

29:17

one of the things that happened is we became very culturally

29:19

plastic. And so culture now is really

29:21

able to change us a lot because we are, we have

29:24

become so able to be molded by our

29:26

culture.

29:28

And even if our genes haven't changed

29:30

that much, well, they've changed substantially, say, in the last 10,000

29:33

years, our culture has enormously changed

29:35

us. And

29:37

if you project the same trend into the future,

29:39

you should expect that this

29:41

will happen again and again. Our descendants

29:43

will change

29:45

with respect to cultural evolution

29:48

and their technology and the structure of their society

29:50

and their priorities. And then,

29:52

of course, at some point, not too

29:54

distant future, we will be able to

29:56

re-engineer what we are

29:58

or even what our descendants

30:01

are, and that will allow even more

30:03

change. That

30:05

is, once we can make artificial

30:07

minds, for example, there's a vast space

30:10

of artificial minds we can choose from, and we will

30:12

explore a lot of that space, and

30:15

that allows even more big possibilities

30:18

for how our descendants could be different from us. So

30:21

this story says

30:23

our descendants will become, yes, super

30:25

intelligent, and yes, they

30:27

will be different from us in a great many

30:30

ways,

30:31

which presumably also include values.

30:34

And

30:35

if what you meant by alignment was, how

30:37

can I guarantee that my distant descendants

30:39

do exactly what I say and believe exactly

30:41

what I believe and will never disappoint

30:43

me in what they do because they are fully

30:45

under my control? I

30:47

got to go, gee, that looks kind

30:49

of hard compared to what's happened in history.

30:53

So now, if that's the fear you

30:55

have, I got to endorse

30:57

that. That's not based

30:59

on any particular scenario

31:01

of a particular computer system soon and what

31:04

trajectory of an event it'll go through. That's just

31:07

projecting past trends into the future in a very straightforward

31:10

way. So then I have to ask,

31:12

is that what you're worried about? No, that

31:15

is not what I'm worried about. That is my base case

31:17

that we're going to get more intelligent,

31:20

technology is going to change us culturally,

31:22

it's going to change the trajectory of how

31:24

we interact. Okay, but I got to add one zinger to this.

31:27

What if change speeds up a lot?

31:30

So that this thing you thought was going to happen in a million

31:32

years happens in 100? Well, I mean,

31:34

for me personally, I'm more of a techno

31:37

optimist. So I would be

31:39

more on the side of like within reason,

31:42

of course, more embracing of these types

31:44

of change.

31:44

I know others aren't quite as embracing.

31:47

And also, this was not the scenario at all

31:50

that Eliezer presented. He presented the scenario

31:52

not of rapid change that

31:54

you might not like in the future. And it could come

31:56

within your lifetime, but actual obliteration

31:59

of humanity.

31:59

Like literally rearranging our

32:02

atoms for some other artificial

32:04

intelligence purpose. And

32:07

while you agree with like, there will be lots

32:09

of change as there has been in the past, perhaps that

32:11

change will even accelerate as we delve

32:13

deeper into the kind of the technology that

32:16

is in our future. You do not think

32:18

that an AI will simply, the

32:20

super intelligent artificial intelligence will simply

32:23

obliterate humanity and kind of wipe us from

32:25

creation entirely. It won't be

32:27

quite as drastic as that. Let's be careful

32:29

about noticing exactly what's

32:32

the difference between the scenario I presented

32:34

and the scenario he presented. Because

32:36

they're not as different as you might think. In

32:39

both scenarios, there's a descendants. In

32:42

both scenarios, the descendants have values

32:45

that are different from ours. And

32:48

in both scenarios, there's certainly the possibility

32:51

of some sort of violence

32:54

or, you know, disrespect of property

32:56

rights such that the descendants

33:00

take things instead of asking

33:02

for them or trading for them.

33:04

Because that's always been possible in history and

33:06

it can remain possible in the future. You

33:09

know, today, most change is peaceful,

33:12

lawful.

33:13

And there are, of course, still

33:16

big things that happen, but mostly it's via

33:18

trade and competition.

33:21

And if the AIs displace

33:23

us, it's because they beat us fair and square

33:25

at our usual

33:28

contest that we've set up by which we compete

33:30

with each other. So

33:33

these scenarios aren't that different,

33:35

I'm trying to point out. And then the key

33:37

difference here is one is the time

33:39

scale.

33:40

How fast does it happen? Another

33:43

is how spread out is it? Is there the

33:45

single agent who takes over everything or are there

33:47

millions of descendants, billions of them

33:49

who slowly went out and displace

33:51

us?

33:53

How far do their values differ from ours?

33:55

Just how much do they become indifferent to

33:57

us? And then do they remain

33:59

different? Do they respect property rights? Is

34:01

this a peaceful, lawful

34:04

transition, or is there a revolution

34:06

or war? Those are the main

34:08

distinctions between these two scenarios we've described.

34:11

Eliezer's scenario

34:13

is very fast, there's a single agent,

34:16

its values change maximally and

34:19

it doesn't respect previous property rights.

34:22

Whereas the scenario I'm describing is ambiguously

34:24

fast.

34:26

Hey, it could happen much faster than you think.

34:29

Of millions or billions of descendants of

34:32

a perhaps gradual

34:34

and intermediate level of value

34:36

difference, but substantial. But

34:39

primarily, I would think in terms of

34:41

peaceful, lawful

34:43

change. I think there's a missing component

34:45

to this conversation that we've been having

34:47

recently. And I understand that there are things

34:50

about the evolution of this AI and things

34:53

that are about the evolution of humanity

34:55

that are all basically

34:57

synonymous, right? There's iteration,

34:59

there's development, there's progress. And

35:01

Robin, you gave the account for that when we

35:04

raise our kids, we try and imbue them with

35:06

our values and our cultures. And

35:08

there are transcription errors in that,

35:11

in that only so much of our values and

35:13

cultures get passed along to our kids. And perhaps

35:16

as technology advances, even

35:19

less passes along from generation to generation,

35:22

and our culture changes over time. And this

35:24

is what we call progress. And when

35:26

we go back to the

35:28

AI innovating on itself, there,

35:30

you also presented a scenario of improvement

35:33

errors with that as well. Like we don't know how perfectly

35:35

it can improve. And so as it develops, it changes and

35:38

adapts. And these are all similar structures.

35:40

And so this is what we know. And maybe the timescales

35:42

throw us

35:43

off a little bit, but these are similar patterns.

35:45

There's one component missing that I'd like to highlight and dive

35:47

into.

35:49

When we have our generations

35:51

of kids and humanity that progresses,

35:54

and even if it changes, it still started

35:57

from us in the first place, right?

35:59

There's a lot

35:59

of parent to kid, parent to kid, parent

36:02

to kid. And so

36:04

it at least starts from a place of continuation.

36:07

I

36:08

think the problem with this AI alignment and

36:11

super explosion issue is

36:13

that in the moment that we create this

36:15

AI, it actually doesn't upload

36:18

our value system because we

36:20

are creating a completely new life form.

36:23

And so it is not biological life. It is

36:25

not DNA that is growing up to an adult to

36:28

combine with somebody else's DNA to create a

36:30

kid who then grows up. It's like that isn't being

36:32

carried forth. So in the moment that we create

36:34

AI, it has no trail of evolutionary

36:37

history to imbue

36:40

it with values and judgment and how to perceive the

36:42

world in an aligned fashion. And so

36:45

in that creation moment, it is

36:47

completely rogue and we don't know how to

36:49

understand it and it doesn't know how to understand

36:51

us because it is a completely new

36:54

form of life with a completely new

36:56

form of appreciating and understanding

36:59

values. And I think that's the missing

37:01

component even though there are similarities in how these

37:03

things progress. The bootloader

37:06

for values and alignment is

37:08

missing in this AI and I think that we haven't touched on

37:10

yet. So

37:13

I do some work on aliens. We could

37:15

talk about that later if you want. I'm looking forward

37:17

to that part of the conversation, by the way. But I'm quite

37:19

confident

37:21

that compared to all the aliens out

37:24

there in the universe and all the alien AIs that

37:27

they would make, the AIs that we

37:29

will make will be correlated with us

37:31

compared to them. We aren't

37:34

making random

37:36

algorithms, randomly from the space

37:39

of all possible algorithms and machines. That's

37:41

not what we're making. We are making

37:44

AIs to fit in our world.

37:47

So, you know, like the large language

37:49

models made recently the most impressive things, those

37:52

are far from random algorithms and all

37:54

possible algorithms.

37:55

They are modeled after

37:58

us. And most

38:01

in the next few decades, as we have more AI

38:03

applications, machines will

38:06

be made by firms trying to make

38:08

profits from those AIs. And what they'll

38:10

be trying to do is fit those AIs into

38:12

the social slots that humans had

38:15

before.

38:16

So they'll be trying to make the AIs like

38:18

humans in the sense that they

38:20

will have to look and act like humans

38:23

well enough to sit in those social slots. If you want

38:25

an AI lawyer, it'll have to talk to you somewhat

38:27

like a human lawyer would. And similarly

38:30

for an AI housekeeper, etc.

38:33

We will be making AIs that

38:35

can function and act like humans

38:37

exactly so that they can be most

38:40

useful in our world. And we are the ones making

38:42

them. And so

38:44

just out of habit, we're making them like

38:46

us in some abstract sense. Now, there's

38:49

a question of how much like us, and then

38:51

there's the question of, well, how much did

38:53

you want and how much is

38:55

feasible and how really close are your kids anyway?

38:59

Or your grandkids? Because just

39:01

remember how much we humans have changed. I

39:04

think when you look at historical

39:06

fiction or something, it doesn't really come across

39:08

so clearly. We humans have

39:10

changed a lot

39:13

and are changing a lot, even in the last

39:15

century. If you just look at the rate

39:17

of change of human culture and attitudes

39:19

and styles in the last century,

39:22

project that forward 100 more centuries,

39:25

you got to be imagining our

39:27

descendants could be quite different from us, even

39:30

if they started from us. And it's interesting,

39:32

mostly software changes, would you say, at

39:34

the cultural level? I mean, human hardware hasn't

39:37

really changed that much. Recently, yes,

39:39

because

39:40

although we have substantially changed

39:42

the hardware too. But yes, mostly software,

39:44

but in the future, we will be able to make

39:46

hardware changes to our descendants. I

39:49

have this book called The Age of Em: Work, Love, and Life when Robots

39:52

Rule the Earth, and it's about brain emulations.

39:54

And so this is where we make very human-like creatures

39:57

who are artificial, using artificial hardware.

40:00

But then they can modify themselves and become

40:02

more alien

40:04

more easily because they can more easily

40:06

modify their hardware and software as

40:08

they are basically computer simulations

40:11

of human brains. So if that

40:13

happens soon then even that human

40:15

line of descendants will be able

40:17

to become quite different

40:19

in a relatively short time. Ryan, if you thought

40:21

the AI alignment problem would throw you for

40:23

a loop, I can't wait until we get into the conversation

40:26

about synthetic biology separating

40:28

humans so that some become gods and

40:30

others not be gods. But that'll be a different podcast.

40:34

Robin, I think in your argument here you baked

40:37

in the belief the assumption that these

40:40

AIs will adopt our values

40:43

merely by like osmosis from

40:45

the devs and the engineers

40:48

who are coding them up because They

40:50

will code them up to do certain things and behave

40:52

in certain ways using characters

40:56

in our English or on our keyboards,

40:58

for example, and just merely by the

41:00

association of being created by us.

41:03

It's actually impossible to not imbue

41:05

them with our culture and our values Is

41:07

that what you're saying? Well, there's a big element.

41:09

How is it that you think your children are like you? I

41:13

mean, they are basically growing

41:15

up in your society. Well, mainly because they're biological

41:18

cells, not computers. Humans are really quite

41:20

culturally plastic. Maybe that's another thing people

41:22

really don't quite get. So anthropology has gone

41:24

out and looked at a really wide

41:27

range of human cultures and found

41:29

that humans are capable of behaving

41:31

and thinking very differently depending

41:33

on the culture they grew up in.

41:35

That's the basic result of anthropology.

41:39

There are some rough human universals,

41:42

but

41:43

mostly we're talking variation.

41:45

The fact that you seem very similar to all the other

41:47

humans around you is not about sort

41:49

of the innate human similarity you have; it's

41:51

because you are in a similar culture to them. So

41:54

to just rearticulate your position

41:56

here, I think we are saying that Eliezer

41:58

is perhaps fearful that

42:00

this super intelligent AI and humans are

42:03

so far apart

42:04

that they can never come to

42:06

coexist. And what you're saying is that,

42:09

eh, life as a whole has

42:11

similarities no matter how it manifests

42:13

or how it is expressed. Is that how you

42:15

would say it? I was trying to tell you

42:17

that your descendants could be really different from you. I

42:20

wasn't trying to convince you that there was a

42:22

bound on just how different your descendants could get.

42:24

I was trying to show you that in fact, your descendants

42:27

could get really different.

42:29

Not through this foom scenario, just through

42:31

the simple default way that

42:33

society could continue to change. If

42:35

you're going to be scared about the foom scenario, maybe you should

42:37

be scared about that one too. We

42:40

could start to talk about what we might know in

42:42

general about intelligent creatures

42:44

and what might be the common features across

42:46

them for all alien species through all of

42:48

space-time or something. There probably are

42:51

some general things they have in common, but

42:54

they might be fewer than would comfort you. I

42:56

definitely want us to get there, but really quick, just picking

42:59

apart the assumptions that you laid out. And I

43:01

want to see which ones more specifically you might disagree

43:03

with or state in a different way than Eliezer. You

43:06

said, you know, assumption one is that the AI

43:09

improves itself. It seems core to

43:11

what Eliezer thinks. Assumption two, the

43:13

owners, that is the people who program it, don't take

43:15

control, don't try to stop it. Assumption

43:18

three, the AI becomes an agent.

43:20

And assumption four, the agent's

43:22

goals change, the AI's

43:24

goals change and it ends up destroying humanity. I

43:26

find some of these harder to believe

43:28

than others, particularly assumption four. I

43:31

didn't understand in Eliezer's argument the

43:33

reason that suddenly the AI destroys humanity,

43:37

that maybe we could talk about. But let's start at the top,

43:39

actually. Do you have a disagreement with

43:41

assumption one that an AI will recursively

43:43

start to improve itself? Well, remember, I tried

43:45

to break one into multiple parts to

43:48

show you that it requires multiple

43:50

things all to come together there. So

43:52

not only does it try to improve itself, it finds

43:55

this really big lumpy improvement,

43:58

which has enormously unusual scale

44:01

in terms of how far it goes before it runs out, and

44:03

scope in terms of how many things it allows the improvement

44:06

of, and magnitude

44:08

is just a huge win over previous

44:11

things. Those are all a priori

44:13

unlikely things. So

44:15

it's not. The fact that it tries to improve

44:17

itself, that seems quite likely, sure. Somebody might,

44:20

well, ask a system to try to

44:22

improve itself,

44:23

but then it would find such a powerful

44:25

method and then still

44:28

not be noticed by its owners, that

44:31

gets pretty striking

44:34

as an assumption. I understand. And so that's what's

44:36

tied into like, you find it hard to believe that

44:39

the owners, the creators of this AI wouldn't

44:41

be able to stop it from doing something

44:43

nefarious or devious. That is

44:45

also a difficult assumption. Well,

44:47

it's first just noticing that is by

44:50

assumption, this thing starts out at a modest

44:52

level of ability, right? By assumption, this thing

44:54

is comparable to many, many other AIs

44:56

in the world.

44:57

So by assumption, if you could notice

44:59

a problem early on, then you can stop

45:01

it because you

45:04

can bring together thousands of other AIs against

45:06

this one to help you stop it,

45:09

if you want to stop it.

45:10

So at some point later on

45:13

in this evolution, it may no longer be something

45:15

you could stop, but by assumption,

45:17

that's not where this starts. It starts

45:19

at being comparable to other AI systems,

45:22

and then it has this one advantage, it can

45:24

improve itself better.

45:25

And then it does. And then this other assumption,

45:28

what I'd labeled number three, the AI becomes an

45:30

agent. So how likely is an

45:32

AI to become a self-interested

45:35

acting agent? Is that difficult

45:37

to foresee? Well, of course, some owners

45:40

might make it that way, but

45:43

most won't. So

45:46

we're narrowing down the set here. So my

45:49

old friend, Eric Drexler, for example, has

45:52

argued that we can have an advanced

45:54

AI economy where most AIs have pretty

45:56

narrow tasks.

45:58

They aren't generally agents trying to do everything.

46:01

They drive cars to the

46:03

airport or whatever. They each do

46:05

a particular kind of task. And that's in fact how

46:07

our economy is structured. Our economy

46:10

is full of industries made of firms who

46:12

do particular tasks for us.

46:14

And so a world where those firms

46:16

are now much more capable and even artificially

46:20

intelligent capable, but even more than superhuman capable

46:22

can still be a world where each one does a

46:24

pretty narrow task

46:26

and therefore isn't a general agent

46:28

that would enormously

46:32

change things if it became more powerful. So if you had

46:34

a system that was really good at route planning, say

46:36

cars to get from A to B,

46:38

if it was superhuman at that, it might just be

46:40

really good at route planning. But if that's all

46:42

it does, it's

46:45

not plausibly going to suddenly transition

46:47

to an agent who sees itself as having history

46:49

and whole goals for the world and trying to

46:52

figure out how to preserve itself and make itself go.

46:54

That's pretty implausible for

46:57

a route planning AI.

47:00

So

47:01

in a plausible future, most AIs would

47:03

be relatively narrow and have relatively narrow tasks,

47:05

but sometimes somebody might make more general

47:07

AIs that had more general scope and

47:10

ambitions and purposes.

47:12

And then those might be the basis of a

47:14

scenario here. But the people

47:16

who created those AIs, they would know

47:18

its unusual feature. They would know this one

47:21

is an agent.

47:22

And they would presumably take that into account

47:24

in their monitoring and testing of this

47:27

thing. They're not ignorant

47:29

to this fact. So the

47:32

scenario whereby the route planning one just accidentally

47:34

becomes an agent, that's logically possible.

47:37

But now we got to say how often do

47:39

systems designed for purpose A suddenly transform

47:42

themselves into something that does all different things

47:44

B? It happens sometimes, but

47:46

it's pretty rare. Let's say

47:48

it gets through all of these gates. We have

47:50

an AI that improves itself in broad

47:52

ways and in ways that are somewhat

47:54

lumpy. The owners, for whatever reason, aren't

47:57

able to take control of the AI, you know, restrict

47:59

it in some way. Maybe the

48:01

owners have programmed this AI to become an agent,

48:03

so it's an agent acting of its own free will. This

48:06

last point then, Eliezer's conclusion is

48:08

like the point that was most concerning, of course, is

48:10

that then this AI comes and destroys

48:13

humanity. I think

48:15

his rationale is basically because why

48:17

not? It would have other purposes than humanity.

48:20

It would just step over them. What

48:22

about this assumption?

48:24

Imagine instead of one

48:26

AI, we have a whole world of AIs

48:29

who are improving themselves and

48:31

their values are diverging. That's more of a default

48:34

scenario. If that happens

48:36

in a world of property rights, then

48:38

say humans are displaced and

48:40

no longer at the center of things. We're not in much demand.

48:43

We basically have to retire. Humans

48:45

go off to our retirement corner and spend

48:47

their retirement savings.

48:50

If that stays as a peaceful scenario,

48:52

then all these AIs who change

48:54

and have other purposes, they don't

48:57

have to kill us. They can

48:59

just ignore us off in the corner spending

49:01

our retirement savings, but there's a possibility

49:04

of a revolution, say, whereby they decide, hey,

49:06

why let these people sit in the corner? Let's grab their stuff.

49:09

The possibility

49:11

of a violent revolution has always been there and it's there

49:13

in the future, but in

49:15

the world we're living in, that's a rare thing.

49:18

That's good. We understand

49:20

roughly why it's rare. The

49:23

thing that's happening differently in Eliezer's scenario

49:26

is because it's the one AI

49:28

you see, it's not in a society

49:31

where revolutions are threatening. It's

49:33

just the one power. Then from its

49:36

point of view,

49:37

why let these people have their property rights? Why

49:39

not take it? Now, I would say that

49:41

the main thing there is not that it has

49:44

different goals, but

49:47

that it's singular,

49:49

and therefore not in a world where it needs to keep

49:51

the peace with everybody else and be lawful

49:53

for fear of offending others or the retribution,

49:56

that it can just go grab whatever

49:58

it wants. That's the distinctive

49:59

feature of the scenario he's describing. In

50:02

a more decentralized scenario, again, I

50:04

think there's much more hope that even if we

50:06

are displaced, even if the AIs' goals become

50:08

different from ours,

50:11

they could still keep the peace because

50:13

plausibly they could be relying on the same

50:16

legal institutions to keep the peace with each

50:18

other as they keep with us.

50:20

And that's in some sense, why we don't

50:22

kill all the retirees in our world and take their stuff. Today,

50:26

there's all these people who are retired and like, what

50:28

have they done for us lately? We could all

50:30

go like kill the retirees and take their stuff, but we

50:32

don't. Why don't we do that? Well,

50:35

we share these institutions with the retirees.

50:38

And if we did that, that would

50:40

threaten these institutions that keep the peace between

50:42

the rest of us. And we would each have to wonder who's next

50:45

and this wouldn't end well.

50:47

Okay. And that's why

50:49

we don't kill all the retirees and take their stuff,

50:52

not because they're collectively

50:54

powerful and can somehow resist our efforts to kill

50:56

them. We could actually kill them and take their stuff. That

50:58

would actually physically work. That's not the problem

51:00

with that scenario. The problem is what

51:02

happens next after we kill

51:05

them and take their stuff. Who do we go for next? And

51:07

where does it end?

51:09

So a future of AIs

51:12

who

51:12

become different from us and acquire new goals

51:15

and are agents

51:17

threatens us if they have a revolution

51:20

and kill us and take our stuff. That's the problem

51:23

there. And so, Eliezer's

51:25

new solution you see makes that seem

51:27

more likely by saying there's just the one agent.

51:29

It has no internal coordination problems. It has

51:31

no internal divisions. It's

51:33

just the singular thing.

51:35

And honestly, we could add that as another implicit

51:38

assumption in this scenario.

51:39

He assumes that as this thing grows, it has

51:42

no internal conflicts.

51:45

It becomes more powerful than the entire

51:47

rest of the world put together. And yet, there are

51:49

no internal divisions of note. Nothing

51:52

to worry about. There's no code forking. Right. It

51:54

doesn't have different parts of itself that fight each

51:56

other and that have to keep the peace with each other. Because

51:59

that's why

51:59

we have law and property rights you

52:02

see in our world is because we have conflicts

52:04

and this is how we keep the peace with each other. And he's

52:07

setting that aside by assuming that it

52:10

doesn't need to keep the peace internally

52:12

because it's the singular thing. So

52:15

we should really hope for a pluralistic world of

52:17

many AIs. And in fact, you think that's

52:19

a more likely world anyway. Of course, yes.

52:22

So we're already in a world of a great

52:24

many autonomous parts, right?

52:27

We have not only billions of humans,

52:29

but we have millions of organizations

52:31

and firms

52:33

and even nations and

52:34

government agencies. And one of the most striking

52:37

features of our world is how it's hard

52:39

to coordinate among all these differing interests

52:41

and organizations. And one of the most

52:43

striking features of our world are the mechanisms we use

52:46

to keep that peace and to coordinate among all

52:48

these divergent conflicting things. And

52:51

one of the moves that often AI people make

52:53

to spin scenarios is just to assume that

52:55

AIs have none of that problem. AIs

52:58

do not need to coordinate. They do not have conflicts

53:00

between them. They don't have internal conflicts. They

53:02

do not have any issues in how to organize

53:04

and how to keep the peace

53:07

between them. None

53:08

of that's a problem for AIs by assumption.

53:10

They're just these other things that have no such

53:12

problems. And then, of course, that leads to scenarios

53:15

like then they kill us all.

53:16

You know Uniswap as the world's largest

53:19

decks with over 1.4 trillion

53:21

dollars in trading volume, but it's so

53:23

much more. Uniswap Labs builds products

53:25

that let you buy, sell, and use

53:28

your self-custody digital assets in a

53:30

safe, simple, and secure way. Uniswap

53:32

can never take control or misuse your funds

53:35

the bankless way. With Uniswap, you can

53:37

go directly to DeFi and buy crypto

53:39

with your card or bank account on the

53:41

Ethereum layer one or layer two. You can also

53:44

swap tokens at the best possible prices on

53:46

Uniswap.org. And you can also find

53:48

the lowest floor price and trade NFTs

53:51

across more than seven different marketplaces

53:53

with Uniswap's NFT aggregator. And

53:55

coming soon, you'll be able to self-custody your assets

53:58

with Uniswap's new mobile wallet. So, go

54:00

bankless with one of the most trusted names in DeFi

54:03

by going to Uniswap.org today to

54:05

buy, sell, or swap tokens and

54:07

NFTs.

54:09

Arbitrum One is pioneering the world of

54:12

secure Ethereum scalability and is

54:14

continuing to accelerate the Web3 landscape.

54:16

Hundreds of projects have already deployed on

54:18

Arbitrum One, producing flourishing DeFi

54:21

and NFT ecosystems. With a recent addition

54:23

of Arbitrum Nova, gaming and social

54:25

dApps like Reddit are also now calling

54:28

Arbitrum home. Both Arbitrum One and

54:30

Nova leverage the security and decentralization

54:33

of Ethereum and provide a builder experience

54:35

that's intuitive, familiar, and

54:37

fully EVM compatible. On Arbitrum,

54:39

both builders and users will experience

54:41

faster transaction speeds with significantly

54:43

lower gas fees. With Arbitrum's recent migration

54:46

to Arbitrum Nitro, it's also now 10 times

54:48

faster than before. Visit arbitrum.io,

54:51

where you can join the community, dive into

54:53

the developer docs, bridge your assets,

54:55

and start building your first app. With Arbitrum,

54:57

experience Web3 development, the

55:00

way it was meant to be. Secure, fast, cheap, and friction

55:02

free.

55:03

How many total airdrops have you gotten? This last

55:05

bull market had a ton of them. Did you get them

55:07

all? Maybe you missed one. So here's what you should do. Go

55:09

to Earnify and plug in your Ethereum wallet,

55:11

and Earnify will tell you if you have any unclaimed

55:14

airdrops that you can get. And it also does PoApps

55:16

and mintable NFTs. Any kind of money

55:18

that your wallet can claim, Earnify will

55:21

tell you about it. And you should probably do it now because

55:23

some airdrops expire. And if you sign up for Earnify,

55:25

they'll email you anytime one of your wallets

55:28

has a new airdrop for it to make sure that you never

55:30

lose an airdrop ever again. You can also

55:32

upgrade to Earnify

55:33

Premium to unlock access to airdrops that

55:35

are beyond the basics and are able to set reminders

55:37

for more wallets. And for just under $21 a month, it

55:40

probably pays for itself with just one airdrop.

55:42

So plug in your wallets at Earnify and

55:44

see what you get. That's E-A-R-N-I

55:47

dot F-I, and make sure you never lose another

55:49

airdrop.

55:50

Learning about crypto is hard. Until

55:52

now, introducing Metamask Learn, an

55:54

open educational platform about crypto, Web3,

55:57

self-custody, wallet management, and all the

55:59

other... topics needed to onboard people into

56:02

this crazy world of crypto. Metamask Learn

56:04

is an interactive platform with each lesson

56:06

offering a simulation for the task at hand,

56:09

giving you actual practical experience

56:11

for navigating Web3. The purpose of Metamask

56:13

Learn is to teach people the basics of self-custody

56:16

and wallet security in a safe environment. And

56:18

while Metamask Learn always takes the time to

56:20

define Web3-specific vocabulary,

56:22

it is still a jargon-free experience

56:25

for the crypto-curious user. Friendly, not

56:27

scary. Metamask Learn is available in 10 languages

56:30

with more to be added soon, and it's meant to

56:32

cater to a global Web3 audience. So

56:34

are you tired of having to explain crypto concepts

56:36

to your friends? Go to learn.metamask.io

56:40

and add Metamask Learn to your guides to

56:42

get onboarded into the world of Web3.

56:45

Right, like AIs are a monolith.

56:47

But I think one of the reasons why I appreciate just

56:49

your line of reasoning, Robin, and how you think is

56:52

that you tap into what seems to be fundamental

56:55

truths of this universe that you would find here

56:57

on planet Earth or in a galaxy

57:00

far, far away. Certain things, I

57:02

think, can be assumed no matter what the environment

57:04

is. And then I think a lot of your logical conclusions

57:07

are just like natural extensions of that. I

57:09

was just going to say, I

57:11

think a lot of disagreements in the world

57:13

are often based on people having sort of different

57:15

sets of abstractions and mental tools and

57:18

then finding it hard to merge them across

57:20

topics. So

57:21

I think when a community has a shared set

57:23

of abstractions and mental tools, even when they disagree

57:25

about details, they can use those shared abstractions

57:28

to come

57:29

to an agreement. But when you have people

57:31

with just different sets of abstractions,

57:33

that's true. So I'm bringing a lot of economics

57:36

to this. Other people might be bringing a

57:38

lot of computer science, but I'm

57:39

going to play my polymath card and say,

57:42

I've spent a lifetime learning a lot of different

57:46

sets of conceptual tools and intellectual

57:48

systems, including computer science, certainly

57:51

big chunks of it. And so I'm

57:54

trying to integrate all those tools into an

57:56

overall perspective where I

57:58

can sort of pull in each

57:59

observation or insight into this sort

58:02

of overall structure. So is this the economic

58:04

reason that the robots aren't going to come kill us

58:07

then maybe? Is that what you're kind of providing?

58:09

Or just if they kill us, they would do us in

58:11

the usual economic way. So

58:15

economics doesn't ensure that nobody will

58:17

ever kill you, okay? They

58:20

have to have good reasons. People have been killed in the world

58:22

in the past, but you know, we have an understanding

58:25

of the main ways that in the last few centuries

58:28

people have been killed. That's

58:29

been something people have paid attention to. How do people get

58:32

killed? How does that happen? And so theft

58:35

is like murder is one kind of way people

58:38

get killed. War is another way. Revolution

58:40

is another way.

58:42

Or sometimes just displacement where

58:45

something outcompetes you and then you don't

58:47

have any place to survive. So in some

58:49

sense, like horses got outcompeted

58:51

by cars at some point and they

58:53

suffered substantially. We understand how that

58:56

works out.

58:57

So that's the sort of thing that can happen to humans. We could

58:59

suffer like the way horses did. That's interesting

59:01

though. I'm not trying to tell you nothing could go wrong. Did

59:03

horses suffer though? I mean, they are...

59:06

By population standards. Yes. They

59:08

diminished significantly. Did any

59:10

individual horse suffer and

59:13

feel suffering as a result of cars?

59:15

Probably not. Seems like a good life to be on

59:17

an equestrian farm rather than sort of slaving

59:20

in a, you know, a cityscape being whipped by a

59:22

buggy master. My understanding is horse population

59:24

is now,

59:25

you know, as high as it ever was. But

59:28

of course, you know... This is not a fact that I keep

59:30

writing in my head. It's not as high as you might have projected had

59:32

they continued previous growth rates. So

59:35

there was a substantial decline and then reverse.

59:37

But now most horses are pets and not

59:39

work horses, but still. I'm not sure if I'm

59:41

ready to be a pet, but that's a problem for my

59:43

kids probably. Hopefully. Just quick

59:45

scenario, Robin. What's more likely? A

59:48

single monolithic super

59:51

intelligent AI does the Eliezer

59:53

thing. Or

59:55

we have a robot versus

59:57

human

1:00:00

conflict, a war. And it's more

1:00:03

like kind of maybe in the traditional sense where we

1:00:05

have two sides. And

1:00:07

what's more likely? So that second one seems far

1:00:09

more likely to me.

1:00:11

But you should just put it into context. That is humans

1:00:14

at the moment

1:00:17

vary by an enormous number of parameters. We vary by

1:00:19

gender and age and profession and

1:00:21

geographic location and wealth and

1:00:23

personality. And

1:00:25

in politics,

1:00:27

especially we try to divide ourselves

1:00:30

up and form teams and coalitions

1:00:32

by which together we will then oppose other

1:00:34

coalitions.

1:00:36

And this is just an ancient human behavior

1:00:38

where which you know we form coalitions

1:00:40

and fight each other. And we

1:00:43

expect that will continue. So arguably

1:00:45

say democracy has allowed us to have more

1:00:47

peaceful conflicts, where coalitions

1:00:49

fight in elections rather than in wars.

1:00:52

But even in say firms, you

1:00:55

know, there's often political coalitions that are fighting

1:00:57

each other. And there's

1:00:59

always the question, what is the basis

1:01:02

of the dominant coalitions? So

1:01:04

there's this wide range of possibilities, you could have

1:01:07

a gender based, you know, the men fighting the

1:01:09

women, you could have an age one, the old people

1:01:11

fighting the young, you could have

1:01:13

an ethnicity one, you could have a professional

1:01:16

one, so in a firm it might be the engineers versus the

1:01:18

marketers.

1:01:20

Right. And so humans versus robots,

1:01:23

or our robotic descendants, is one possible

1:01:25

division on which future conflicts could be based.

1:01:28

That's completely believable. And I can't

1:01:31

tell you that can't happen. The main thing I'll just

1:01:33

point out is that it will be competing with

1:01:36

all these other divisions.

1:01:38

So will it be the humans versus

1:01:40

robots conflict? Or will it be the old versus

1:01:42

young? Or will it be the word cells

1:01:44

versus the, you know, shape rotators?

1:01:46

I mean, there's all these different

1:01:48

divisions. And it could well be that there's an alliance

1:01:51

of

1:01:52

human word cells and AI word

1:01:54

cells versus human

1:01:56

shape rotators and AI shape rotators

1:01:59

and that

1:01:59

becomes a future conflict, you see, because in

1:02:02

some fundamental sense,

1:02:04

the division of the conflict is indeterminate.

1:02:07

That is, a fundamental thing we understand about politics

1:02:09

is whatever division you have, it's unstable

1:02:12

to the possibility of some new coalition forming

1:02:14

instead. That's a basic thing

1:02:16

we understand about politics. It's hard to

1:02:19

keep stable coalitions because they're so

1:02:21

easily undermined by new ones.

1:02:23

At least with the human versus robot

1:02:25

coalition, like looking into past

1:02:27

human behavior, we tend to be pretty

1:02:29

racist. But I think when we have

1:02:32

robots, it would be really easy to

1:02:34

forget our internal conflicts

1:02:36

when there's a completely different

1:02:39

resource. Why do we fight? Why

1:02:41

do humans fight? It's usually over resources, like economic

1:02:43

resources. And when there is a new species

1:02:45

that is subdividing and iterating

1:02:48

and growing, as humans do, that's

1:02:50

also sucking up the resources and

1:02:52

they look like, I don't know if they're going to be metal

1:02:55

in the future, but that's my current vision of them

1:02:57

is like metal, silicon, terminator type robots

1:02:59

walking around. And there's only so many resources

1:03:01

on the planet. And so that would be a pretty

1:03:04

easy dividing line between

1:03:07

humans and robots that I could imagine

1:03:09

would make that conflict much

1:03:11

more likely. And so regardless

1:03:14

of how, maybe it's the Eliezer

1:03:16

way in which a super monolithic, super intelligent

1:03:19

robot comes and we have to fight that, or

1:03:21

at some point there's conflict potentially,

1:03:24

and I might even say likely if there is a different,

1:03:27

let's call it a species. Recently,

1:03:29

this is kind of an aside, this is going back to like

1:03:32

the super intelligent stuff, but I think we can now

1:03:34

call this just AI conflict.

1:03:37

The Future of Life organization released an open

1:03:39

letter calling for the pause of all general

1:03:41

AI experiments. A few people signed

1:03:44

it, Elon Musk, Steve Wozniak, Yuval

1:03:46

Noah Harari, Andrew Yang. It's basically

1:03:48

a call

1:03:49

on all AI labs to immediately

1:03:52

pause, for at least six months, the

1:03:54

training of AI systems more powerful

1:03:56

than GPT-4. So

1:03:59

this letter says

1:03:59

Don't go beyond GPT-4;

1:04:02

beyond there, it gets

1:04:04

even scarier. Let's pause. Let's halt.

1:04:06

Let's figure out this AI alignment issue first.

1:04:09

I just want to get your reaction to this letter

1:04:12

and people signing it, Robin. Like would

1:04:14

you sign this letter or are you against signing

1:04:16

this letter? And just what do you think about the idea

1:04:19

of this letter? So first, just

1:04:21

notice that we've had

1:04:23

a lot of pretty impressive AI for a while

1:04:25

now.

1:04:26

It's when the AIs are the most human

1:04:28

like with these large language models that people

1:04:31

are the most scared and concerned. So

1:04:34

that suggests that maybe a very advanced

1:04:37

AI will

1:04:38

look pretty human like in many ways. And

1:04:40

don't forget that our descendants will start to add

1:04:42

metal to themselves and become different

1:04:45

and money is just like their brain. And the machines are metal

1:04:48

and quite different. So again,

1:04:50

it's not so obvious where the division line would

1:04:52

be. But

1:04:53

to go to this particular letter,

1:04:55

first of all, with respect to the general

1:04:57

concerns they have,

1:04:59

if we had a six months pause

1:05:01

at the end of that, we really wouldn't know much more than

1:05:03

we know now. The main purpose

1:05:05

of the pause would seem to be to allow, say, the time

1:05:07

for government to get its act together and have institute

1:05:10

some more official law that enforces

1:05:13

such a pause to continue. That would be the main

1:05:15

purpose for the pause. You'd be wanting

1:05:18

to support the pause if you were wanting that

1:05:20

further event to happen. It's

1:05:22

not like we're going to learn that much in six months. Or how

1:05:24

about if maybe you were a competitor and wanted to catch

1:05:26

up? So then we go to the question:

1:05:28

you

1:05:29

know, first, if we could do the pause, would it

1:05:31

be a good idea? And then one of the issues is like

1:05:34

who would be participating and who not? So

1:05:37

the ideal thing is say we could get a global pause

1:05:39

somehow.

1:05:40

Would that be a good idea? Now

1:05:42

we're basically talking about should

1:05:44

we basically shut down this technology for

1:05:47

a long time until

1:05:49

people feel safer about it. So

1:05:51

for that issue, I think the comparison

1:05:54

with nuclear energy is quite striking. Basically,

1:05:57

around 1970, the world decided to back off

1:06:00

of nuclear energy. And we basically instituted

1:06:02

regulatory regimes that

1:06:04

allowed the safety requirements asked

1:06:06

of nuclear power plants to escalate arbitrarily

1:06:09

until they started to cut into costs. And

1:06:12

that basically guaranteed this would never become lower

1:06:14

cost than our other ways of doing things.

1:06:17

And people were okay with that because they were just scared

1:06:19

about nuclear power. So basically, the generic

1:06:22

fear didn't go away. And they just generically

1:06:24

said, this just can never be safe

1:06:26

enough for us. Whatever extra budget we have, we

1:06:28

want it to be safer. And that's the way they put

1:06:30

it. And so I would think a

1:06:32

similar thing would happen with AI. The kind

1:06:35

of reassurances people are asking for are

1:06:37

just not going to be feasible for decades,

1:06:39

at least. So you'd basically be

1:06:41

asking for this to be paused for decades.

1:06:44

And it's even hard to imagine

1:06:46

them eventually overcoming that. Because the

1:06:48

fundamental fears, as we've been describing, is just the idea

1:06:51

that they might be different. And they might

1:06:53

have different agendas, and they might outcompete us, and

1:06:55

that's just not going to go away. So I

1:06:58

would say this is basically, do you vote for

1:07:00

substantial technological change or not?

1:07:03

And I get why many people might think, look,

1:07:05

we're rich enough. We're doing okay. Let's not risk the

1:07:07

future by changing stuff. And

1:07:09

they voted that way on nuclear power, and they might well

1:07:11

vote that way on AI.

1:07:14

I would rather we continue to...

1:07:16

I think we have enormous far

1:07:18

we can go if we continue to improve our tech

1:07:20

and grow. But

1:07:21

I can understand why many people think, nope, we

1:07:24

got lucky so far, things didn't go too bad, we're

1:07:26

in a pretty nice place, why take a chance and change

1:07:28

anything? So

1:07:29

that's all if it was possible to actually have

1:07:32

a global enforcement of such a pause

1:07:34

and then a further law. But of course, that

1:07:36

just looks really hard. That

1:07:38

is, this technology is now pretty

1:07:40

widely available. It

1:07:42

might be that the best new systems are from the biggest

1:07:45

companies that can afford the most

1:07:46

hardware to put on it. But the basic software

1:07:49

technology here is actually pretty public

1:07:51

and pretty widely available. And

1:07:53

so over the next

1:07:55

few decades, even if you managed to say

1:07:58

that no more-than-a-billion-dollar project can be doing

1:08:00

this, you're going to have a lot of less-than-a-billion-dollar

1:08:02

projects doing this.

1:08:05

And of course, it'll be hard

1:08:07

to have a global ban.

1:08:10

And so the US now has a commanding lead,

1:08:12

and the main effect

1:08:14

of a delay if it's not global would be to take

1:08:16

away the US lead. And

1:08:19

it's just this looks like a hard technology

1:08:21

to ban, honestly. You know, you might be able to

1:08:23

get Google and OpenAI and Microsoft or

1:08:25

something to pause

1:08:27

their efforts, because, you know,

1:08:29

they are big companies with pretty public

1:08:31

activities. But- And Robin, I'm trying to understand.

1:08:33

So even if it was enforceable... I understand

1:08:36

the reasons you gave why it's not enforceable, and

1:08:38

very difficult to do some sort of a, you know,

1:08:40

global ban of some sort. Let's

1:08:43

say it was for a minute, would you support it? No. Do you

1:08:45

think this is worth pulling the fire alarm over? Again,

1:08:47

I think it's comparable to say, genetic engineering

1:08:49

or nuclear energy or some other large technologies

1:08:52

that we've come across in the last few decades

1:08:54

where

1:08:55

there really is huge potential, but there's also

1:08:58

really big things you could be worried about.

1:09:00

And honestly, I think

1:09:03

you just have to make a judgment on the overall

1:09:05

promise versus risk framing.

1:09:07

You can't really make a judgment here based on very

1:09:10

particular things, because that's not

1:09:12

what this is about.

1:09:13

We made a judgment on nuclear energy to just

1:09:15

back off and not use it that much. That's

1:09:18

a judgment humanity made 50 years ago. Within

1:09:21

the last few decades, we made a similar judgment on

1:09:23

genetic engineering, basically. Nope,

1:09:26

we just don't want to go there for humans at

1:09:28

least. And we

1:09:30

may be about to make a similar decision about AI. But

1:09:34

honestly, this trend looks bad

1:09:36

to me because many people

1:09:38

think social media is a mistake, and maybe we

1:09:40

should undo that and go back on

1:09:42

that. The trend of

1:09:45

blocking technological progress

1:09:47

is bad to you in general, whether it's nuclear or genetic

1:09:50

engineering or social media or AI

1:09:52

or any of these things. Right. I actually am

1:09:54

concerned that the future of humanity is

1:09:57

actually at stake here. So I did

1:09:59

this other work on grabby

1:10:02

aliens, on the distribution of aliens

1:10:04

in space-time. In that framework,

1:10:07

the most fundamental distinction between alien

1:10:09

civilizations is the one between the quiet

1:10:11

ones who stay in one place and live out their

1:10:14

history and go away without

1:10:17

making much of a mark on the universe and the loud

1:10:19

ones who expand and then keep expanding

1:10:21

until they meet other loud ones.

1:10:24

I can see many forces

1:10:26

that would tend to make a civilization want to be quiet.

1:10:30

That's what we're talking about here. That is,

1:10:33

even in the last half century, the world has

1:10:35

become a larger integrated

1:10:37

community, especially among elites, whereby

1:10:40

regulatory policy around the world has

1:10:42

converged a lot even though we have no world government.

1:10:45

You certainly saw that in COVID, but you also see it in

1:10:47

nuclear energy and medical ethics and many other

1:10:49

areas. Basically the elites around

1:10:51

the world in each area talk mainly to each other.

1:10:54

They form a consensus worldwide about what

1:10:56

the right way to deal with that area is and then they all

1:10:58

implement that. There's not actually that

1:11:01

much global variation in policy

1:11:03

in a

1:11:04

wide range of areas. People

1:11:07

like that, I think, compared to the old world. Certainly

1:11:09

it's reduced civil wars of various kinds.

1:11:12

People like the idea that instead of nations

1:11:15

fighting and competing with each other, that

1:11:17

we're all talking together and deciding what

1:11:19

to do together and that

1:11:21

that sort of talking may deal with global

1:11:23

warming, it may deal with inequality, it may deal

1:11:26

with overfishing. There's just a bunch of world problems

1:11:28

that these people talking together feel like

1:11:30

they're solving.

1:11:32

People will like this world we're moving into

1:11:34

where we all

1:11:36

talk together and agree together about what to do about

1:11:38

most big problems.

1:11:40

That new world will just be much

1:11:43

more regulated in the sense that they will look at

1:11:45

something like nuclear energy and then everybody say, nope, we don't

1:11:47

want to do that and let's shame anybody who tries

1:11:49

to do that and slowly

1:11:51

together limit

1:11:52

humanity's future.

1:11:55

That could go on for thousands

1:11:57

of years and then if we ever have a point where

1:11:59

it's possible to send out an interstellar

1:12:01

colony to some other star, we will

1:12:03

know that if we allow that, that's the end of this

1:12:05

era.

1:12:07

Once you have a colonist go somewhere else that they

1:12:09

are out of your control, they are no

1:12:11

longer part of your governance sphere, they

1:12:13

can make choices that disagree with what you've

1:12:15

done, they can then have descendants who disagree,

1:12:17

they can evolve and become

1:12:20

different from the center

1:12:22

and come back eventually to contest

1:12:24

control over the center.

1:12:26

So that becomes a future world

1:12:28

of competition and evolution that could go to

1:12:30

very strange and stark places, but

1:12:33

if we would all just stay here and not let anyone

1:12:35

leave, then we can stay in this world of

1:12:37

us. We talk together, we decide things together, we

1:12:39

only allow our descendants to become as weird

1:12:42

as we want them to be. If we don't

1:12:44

want a certain kind of weird descendants, we just shut it down

1:12:47

and that's the quiet civilization

1:12:49

that we may become.

1:12:51

And that's kind of what's at stake here, I

1:12:54

would say, with

1:12:56

banning AI. It's one of

1:12:58

many questions like that that we are answering

1:13:01

about, do we want to allow change

1:13:04

and new large capacities

1:13:06

that might threaten

1:13:08

strangeness and conflict? So

1:13:11

I think this is actually the moment where

1:13:13

this podcast episode goes from

1:13:16

continuing the conversation that we had about

1:13:19

AI with Eliezer and all of those alignment

1:13:21

problems in that conversation. And this actually

1:13:23

becomes a part of a larger conversation that we've been

1:13:25

having on Banklist for a while now.

1:13:28

And this has to do with the

1:13:30

status quo versus innovation

1:13:32

and progress, as well as it does

1:13:34

with what you were just saying, Robin, about grabby

1:13:36

aliens. And so I want to try and connect these dots

1:13:38

really quick. This idea of AI

1:13:41

and AI innovation, along with crypto

1:13:43

innovation, and whether or not it should

1:13:46

be regulated by the elites, by

1:13:48

the status quo, and whether it should be contained.

1:13:51

And are the elites happy with the harmony

1:13:53

of the social order? And perhaps we

1:13:55

shouldn't have new competition and

1:13:57

new exploration into the frontier.

1:13:59

because that is how we maintain the social

1:14:02

order because there's nothing new that's happened.

1:14:04

What you're saying that this does is this keeps

1:14:07

us in a, it's like an isolationist

1:14:10

approach, except it's an isolation approach

1:14:12

from like inside of planet earth. And

1:14:15

I think being the future tech optimist

1:14:17

that Ryan and I are and I think you are as well,

1:14:20

you aren't for that. You would like

1:14:22

to penetrate that isolationism

1:14:25

that comes from, like, the social elite saying,

1:14:28

hey, let's not experiment with

1:14:29

crypto or AI or

1:14:32

longevity or synthetic biology

1:14:35

research. Let's just like keep everything

1:14:37

harmonized and in control and we will use

1:14:39

our large centralized power

1:14:42

to keep the world under control.

1:14:44

And then we have this other conversation that

1:14:47

we're about to go into, which is grab the aliens,

1:14:49

which is

1:14:50

whoever is these alien species

1:14:53

that is expanding out into the world

1:14:55

chose to not do that. They chose

1:14:57

to explore the frontier. They chose to innovate

1:15:00

under the guise of competition, of

1:15:03

capitalistic competitive competition to

1:15:05

innovate and start to expand outwards

1:15:08

into space. And I think baked in your argument is that

1:15:10

you actually do need competition

1:15:12

in order to explore the frontier.

1:15:14

And so I'm wondering if A, if that was a good summary

1:15:17

and B, kind of like, do you see that picture

1:15:19

of just like how

1:15:21

this concern about AI or concern

1:15:23

about progress in general

1:15:25

also links to like the grabbiness or

1:15:28

quietness that you describe in aliens, and maybe you

1:15:30

can characterize these different kinds of aliens

1:15:32

and the choices that they make of it as a civilization.

1:15:35

Yes, I thought that was a reasonable summary.

1:15:37

I think when we see people today

1:15:39

discuss the possibility of our descendants

1:15:42

spreading into the galaxy,

1:15:44

they are often wary

1:15:46

and a bit horrified by

1:15:49

the impact it might have.

1:15:51

That is the sort of people we've become

1:15:54

over the last century

1:15:55

are people who find that a jarring

1:15:58

and even unpleasant scenario. because

1:16:02

it is actually fundamentally

1:16:04

jarring and unpleasant. So I

1:16:06

am with you in wanting to allow

1:16:09

such changes, but I want to be fully honest

1:16:11

about the costs that

1:16:13

we are asking the world to accept.

1:16:17

That is, if you wanted our descendants to

1:16:20

just stay pretty much like this

1:16:22

indefinitely, that's not what we're

1:16:24

talking about here. The

1:16:27

cost of allowing our descendants

1:16:29

to expand into the universe and explore

1:16:31

technologies like AI and nuclear power,

1:16:33

etc., is literally

1:16:35

alienation. That

1:16:38

is, we are now alienated from our

1:16:40

ancestors. Our world and lives are different,

1:16:42

and we feel that at some level that

1:16:45

we were not built for the world we're in. This

1:16:47

is an alien world that we're in compared to the

1:16:50

world we were built for. We feel that deep inside

1:16:52

us,

1:16:53

and that will continue. It will only get worse. And

1:16:57

the time it will get better is when we can

1:16:59

go change who's inside us to

1:17:02

become more compatible with these alien

1:17:04

worlds, but that will make those descendants

1:17:07

even more different from us.

1:17:09

So that's really the cost

1:17:11

you have to be asking. So this future world

1:17:13

of strange new technologies is also

1:17:16

a competitive world,

1:17:17

and that competitive world includes conflict.

1:17:20

It includes some kinds

1:17:22

of things displacing others, some things

1:17:24

just being shunted aside and marginalized,

1:17:27

and it may even include war, violence.

1:17:31

It certainly probably includes radical

1:17:34

change to nature.

1:17:36

Not just biology on Earth, but our

1:17:38

descendants who go out into the universe

1:17:40

would likely not just pass

1:17:42

by and plant flags. They

1:17:44

will take things apart, rearrange

1:17:47

them, and radically change them. And sometimes

1:17:49

that'll be ugly, and sometimes it'll be violent,

1:17:51

and sometimes it'll leave

1:17:53

crude ugly waste and

1:17:55

be inefficient, where

1:17:58

they could have done it better. That will be the cost of it. And

1:18:01

this universe we see now that's pristine

1:18:04

and

1:18:06

the way it was from long

1:18:08

ago will just

1:18:11

be erased. That's

1:18:15

the cost.

1:18:16

So I want to explore this idea of

1:18:18

grabby aliens, and I'm sure listeners

1:18:20

who are being thrown into this odd adjective,

1:18:23

grabby, might be a little bit confused. And so I'm

1:18:25

hoping we can explain the nature of grabby-ness,

1:18:27

but I'm hoping we can actually do it inside of the context

1:18:30

of planet Earth and human history, because

1:18:33

I think that naturally extrapolates into

1:18:35

the galaxy, because this is the place

1:18:38

of grabby aliens, that's where they play.

1:18:41

And first, I think I want to ask

1:18:43

you the question, humans, are

1:18:45

we grabby? Because if you look

1:18:48

back in history, you have some

1:18:50

sort of quiet human species, human tribes,

1:18:53

that were found by the grabby humans,

1:18:55

you can call these the conquistadors or the

1:18:58

conquerors, right? The Roman Empire,

1:19:01

very grabby empire, any sort of

1:19:03

empire that, like, looked outward and expanded. In

1:19:05

trying to understand, like,

1:19:07

Robin Hanson's works on grabby-ness, I

1:19:09

would call any sort of empire that expanded grabby.

1:19:12

And then these grabby empires found the

1:19:14

quiet, like tribes that were peaceful

1:19:17

and grabbed them and then assimilated

1:19:20

them into the grabby-ness. And

1:19:22

so this is kind of how I would like present

1:19:24

this inside of a context that we understand because

1:19:26

we understand human history. But I want to ask

1:19:28

you this very basic question of just like human

1:19:30

nature.

1:19:32

Are we grabby? So almost

1:19:34

all biology has been grabby, and

1:19:36

therefore almost all humans, but

1:19:38

it's not so much about our nature. So

1:19:42

the fundamental point here is there's just a selection

1:19:44

effect. That's the key point. That

1:19:47

is, if you have a range of creatures

1:19:49

with different cultures or biological tendencies,

1:19:52

and some of them go out and expand and others

1:19:54

don't,

1:19:55

if there is a place they could expand to and

1:19:58

they would actually, you

1:19:59

know, be able to reproduce there,

1:20:01

then there's a selection effect whereby whichever ones do

1:20:03

that, they then come to dominate

1:20:06

the larger picture. That's

1:20:08

just the key selection effect. So there may be many

1:20:11

alien species and civilizations

1:20:13

in the universe, and maybe most of them choose not

1:20:15

to expand,

1:20:16

but the

1:20:17

few ones who do allow expansion, they

1:20:19

will come to dominate by space-time volume

1:20:22

the activity of the universe. And that's

1:20:24

how evolution has worked in the past. It's not that

1:20:27

all animals or all plants are aggressive

1:20:30

and violent and hostile,

1:20:32

it's that they vary.

1:20:35

And some of them

1:20:37

have a habit of sticking one place and hiding, another

1:20:39

have a habit of jumping out and going somewhere else

1:20:41

when they can.

1:20:42

And the net effect of the variation in all

1:20:44

their habits is when there's a new island

1:20:46

that pops up, it gets full

1:20:49

of life because some of those things that

1:20:51

move, land there and grow. And

1:20:54

any new mountain grows higher and then

1:20:56

new life shows up at the top of the mountain and a

1:21:00

new niche opens up of any sort

1:21:02

where life is possible there and then some

1:21:04

life goes there and

1:21:06

uses it. That's just the

1:21:09

selection effect. So that's what we should expect

1:21:11

in the universe. There's the question of which

1:21:13

way we will go. And if

1:21:15

I focused on humans, I'd say

1:21:18

it's a trade-off between what would happen

1:21:20

if we don't coordinate and how hard

1:21:22

we will try to coordinate.

1:21:24

So an uncoordinated humanity,

1:21:26

there's certainly enough variation within humanity,

1:21:29

some of us would go be grabby.

1:21:31

It might not be most of us, but certainly some of

1:21:34

us, given the opportunity, would go grab Mercury

1:21:36

or Pluto or whatever else it is and

1:21:38

then go out and grab farther things.

1:21:42

We might choose to prevent that. We might choose

1:21:44

to organize and coordinate so as to

1:21:46

not allow those things to happen and we

1:21:49

might succeed at that.

1:21:51

We have enough capability perhaps to do that. And

1:21:54

so then it becomes a choice, will we

1:21:56

allow it? But basically,

1:21:58

whenever you're talking about something that only

1:22:01

takes a small fraction of us to do, and

1:22:04

we vary a lot, then the question is, will we allow

1:22:06

that variation to make it happen or will we somehow

1:22:08

try to lock it down? The Bankless audience is

1:22:11

pretty familiar with the idea of Moloch. It's a

1:22:13

topic that we've revisited a number of times.

1:22:15

Are you familiar with Moloch? I'm familiar with the famous

1:22:18

Scott Alexander essay on it. Yeah. Although

1:22:20

I think

1:22:21

the concept isn't entirely clear in

1:22:23

that essay. Sure.

1:22:24

Yeah, so Moloch, just being like the

1:22:26

idea of the prisoner's dilemma. Say you have two, or almost

1:22:29

any number of human tribes on

1:22:31

the earth, and most of them decide

1:22:33

to be quiet and peaceful, it really

1:22:35

only takes one to be grabby,

1:22:38

and that one will come to dominate the earth because

1:22:41

it chose to be grabby and it grabbed everything

1:22:43

else. So it's almost this prisoner's dilemma about

1:22:46

if you choose to not be grabby, you

1:22:48

are implicitly making the choice of being

1:22:50

grabbed by the larger tribe

1:22:53

that has elected to be grabby. And

1:22:55

I think this is how we extrapolate this into

1:22:57

the future with your grabby aliens

1:22:59

thesis, where they're sure, there are

1:23:01

many civilizations out there, maybe there

1:23:04

are many like us that only exist on one

1:23:06

planet, and we have a bunch of elites

1:23:09

on the planet that say, hey, let's not

1:23:11

investigate AI, and let's not

1:23:13

investigate longevity or genetic

1:23:16

engineering. Let's just stay put. And

1:23:18

we would call these quiet aliens,

1:23:20

or us being the quiet

1:23:21

aliens. The choice

1:23:24

being made is that

1:23:25

grabby aliens are eventually going to arrive

1:23:27

on earth and grab us. And so if

1:23:30

you don't become a grabby alien, you are

1:23:32

going to be grabbed by somebody else. And

1:23:34

so this is why I think this moment in

1:23:36

human history when we have this letter saying,

1:23:39

hey, let's pause AI research, is what

1:23:41

you are focusing on as like, well,

1:23:43

this is a very important decision point for

1:23:45

humanity as to whether we choose to be quiet

1:23:49

or not quiet. And of course, this

1:23:51

isn't the only choice, but this is one of the many

1:23:53

choices, down a long list of choices, that

1:23:55

could actually decide culturally what we want to be

1:23:57

at least for the short term. Is this how you

1:23:59

see

1:23:59

this fork in the road we're currently at? Well,

1:24:02

let's just clarify, say in a peaceful

1:24:04

society like ours, we could think of

1:24:06

a thief as grabby.

1:24:09

Then we could say, well, if

1:24:11

we don't steal, somebody else will steal, so I guess

1:24:14

we should steal. You could imagine

1:24:16

a world where that was the equilibrium. But if we

1:24:18

coordinate to make law,

1:24:21

then we can coordinate to watch for

1:24:23

a thief and then

1:24:25

repress them sufficiently so as to discourage

1:24:27

people from being thieves.

1:24:30

A universe of sufficiently powerful

1:24:32

aliens could coordinate to prevent

1:24:35

grabbing if they wanted.

1:24:38

The claim, which I believe is true, is that in fact, the

1:24:40

universe hasn't done that. It might

1:24:42

be that within our human society, we have coordinated

1:24:45

to enforce particular laws, but

1:24:48

out there in the rest of the universe, it's just

1:24:50

empty. And there's pretty much nobody doing

1:24:52

anything through most of it that we can see.

1:24:54

And so it is really just there for the grabbing.

1:24:57

No one's going to slap our hands down for grabbing

1:24:59

the stuff.

1:25:00

We can just keep grabbing until we reach

1:25:03

the other grabby aliens, at which point then

1:25:05

we

1:25:06

might try to set up some peaceful law to

1:25:09

keep the peace between us and them. But we don't

1:25:11

have to fight

1:25:13

wars with other grabby aliens per se,

1:25:15

but there's all this empty stuff between here and

1:25:17

there. Then it seems like

1:25:19

you either grab it or somebody else does. I'm

1:25:21

wondering if we may have blown past some listeners

1:25:24

here who heard us just talking about alien

1:25:26

civilizations. They're coming to like grab Earth

1:25:28

and they're like, what are you guys talking about? Where's

1:25:31

like all of these alien civilizations?

1:25:33

Robin, David, we don't see

1:25:36

them anywhere when we look up the stars, but that

1:25:38

is what your grabby aliens paper is

1:25:40

all about. I think the synopsis of the grabby

1:25:42

aliens paper packs this punch.

1:25:45

If loud aliens explain human earliness,

1:25:47

quiet aliens are also rare. Robin,

1:25:49

can you

1:25:49

sort of explain what your

1:25:52

grabby aliens idea actually

1:25:54

is and why there might be future

1:25:56

alien civilizations that are expansionary

1:25:59

and coming our way?

1:25:59

and why we might want to be a civilization

1:26:03

that rises up and expands in our

1:26:05

own sphere of influence in order to meet them. So

1:26:07

we're going to go through this briefly and quickly. Turns

1:26:09

out there's just a

1:26:11

Kurzgesagt video

1:26:14

that came out yesterday that has 2.6

1:26:16

million views that's

1:26:19

explaining some of the basics of Grabby aliens

1:26:21

in case people want to see that. Kurzgesagt,

1:26:23

the cute animations that do these very technical

1:26:25

things in very nice ways. Congratulations on that, by

1:26:27

the way. So the key idea is

1:26:30

we wonder about the distribution of aliens in

1:26:32

space-time. And

1:26:35

one possible theory you might have is that

1:26:38

we're the only ones at all. And in the entire

1:26:40

space-time that we can see, there'll never

1:26:42

be anybody but us. In

1:26:44

which case, the universe would just have

1:26:46

waited for us to show up whenever we were ready.

1:26:50

We can reject that interpretation

1:26:53

of the universe because we are crazy

1:26:55

early. So our best

1:26:58

model of how advanced life should

1:27:00

appear says that

1:27:02

we should be most likely to appear on

1:27:04

a longer-lived planet toward the end

1:27:07

of its history. And our planet

1:27:09

is actually very short-lived.

1:27:11

Our planet will last another billion

1:27:14

years for roughly 5 billion years total

1:27:16

of history. The average planet

1:27:18

lasts 5 trillion years.

1:27:21

And because life has to go through a number

1:27:23

of hard steps to get to where we are, there's actually

1:27:25

a power law in terms of when it appears as

1:27:27

a function of time, the power being the number

1:27:30

of steps. And so, say the steps are six,

1:27:32

then the chance that we would appear toward the end

1:27:35

of a longer-lived planet

1:27:36

rather than now on this planet is basically

1:27:39

that factor of a thousand in their lifetime raised

1:27:42

to the power of six for

1:27:44

this power law, i.e. 10 to the 18 more likely

1:27:46

to

1:27:47

have appeared

1:27:50

later on in the universe. So we're crazy

1:27:52

early relative to that standard. And the best explanation

1:27:54

for that is there's a deadline soon. The

1:27:57

universe is right now filling up

1:27:59

with aliens taking

1:27:59

over everything they can. Soon,

1:28:02

in, let's say, a billion years or so, it'll all be full

1:28:04

and all taken, at which point

1:28:07

you couldn't show up later on and

1:28:09

be in advanced civilization. Everything would be used for

1:28:11

other stuff. And that's why you need

1:28:13

to believe they're out there right now.
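[A worked version of the earliness arithmetic Robin sketches above, assuming the hard-steps power law he states: the chance of advanced life having appeared by time t on a planet scales roughly as t^n for n hard steps, with n = 6 as in his example.

\[
\frac{P(\text{appearing within a } 5\times10^{12}\text{-year lifetime})}{P(\text{appearing within a } 5\times10^{9}\text{-year lifetime})}
\;\approx\; \left(\frac{5\times10^{12}}{5\times10^{9}}\right)^{6} \;=\; 1000^{6} \;=\; 10^{18}.
\]

That is the "10 to the 18" factor just mentioned: under this model we are enormously early, which sets up the deadline argument that follows.]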

1:28:15

So now that you've got to believe they're

1:28:18

out there right now, you wonder, well, how close are they?

1:28:20

What's going on out there? And for that, we have

1:28:23

a three-parameter model

1:28:25

where each of the parameters is fit to a

1:28:27

key data point we have.

1:28:29

And this model basically gives

1:28:31

you the distribution of aliens in space time. And

1:28:34

if you like, we can walk through what those parameters are

1:28:37

and what the data point we have for each one is. But the

1:28:39

end story is

1:28:41

civilizations typically expand at a very

1:28:43

fast speed, a substantial fraction

1:28:45

of the speed of light. They appear roughly

1:28:47

once per million galaxies, these

1:28:50

grabby alien civilizations. And

1:28:53

if we head out to meet them, we'll

1:28:55

meet them in roughly a billion years,

1:28:57

expanding near the speed of light. So

1:29:00

they are quite rare,

1:29:01

that rare, but not so

1:29:03

rare as to be empty

1:29:05

in the universe. That is once per million galaxies,

1:29:07

there's many trillions of galaxies. So

1:29:11

that means there are millions of them out there.

1:29:14

And right now the universe is roughly

1:29:17

half full of them. So

1:29:19

that seems strange. The universe looks empty, but

1:29:21

you have to realize there's a selection effect.

1:29:24

Everywhere you can see is

1:29:26

a place where, if there had been aliens

1:29:28

there, they would be here now instead of us.

1:29:32

So the reason things look empty is because

1:29:35

you can't see a place where they are because

1:29:38

they would move so fast from where they are to get

1:29:40

to here that here would be taken.

1:29:43

The fact that we are not now taken here says that

1:29:45

no one could have gotten here

1:29:47

and therefore I think so. If you were able

1:29:49

to look out into the stars and see the aliens,

1:29:52

that's nonsensical because if that would be possible,

1:29:54

they would have already grabbed you by that

1:29:56

time. Right, because they move so fast, there's only a

1:29:59

small volume of the universe where you

1:30:02

could see them and they haven't quite got here yet.

1:30:05

Most of the places you could see, they would be

1:30:07

here. And grabbing you doesn't necessarily mean destroying

1:30:09

you. It just means possibly

1:30:11

expanding to the borders such that you can't

1:30:13

expand into their borders. It would

1:30:15

be enveloping you and then changing

1:30:18

how the world around you looks. So we

1:30:20

can be pretty sure we have not now been enveloped

1:30:22

by a grabby alien civilization because we look

1:30:24

around us and we see pretty native

1:30:27

stars and planets which are not

1:30:30

radically changed.

1:30:31

So yes, in the future, we

1:30:33

might be enveloped and other

1:30:35

things out there might be enveloped. Well, we're not now. We

1:30:38

couldn't see this situation we're in

1:30:40

if

1:30:41

those alien civilizations had come here. See,

1:30:43

and this is actually why this intersects with

1:30:46

the Eliezer AI problem, because

1:30:48

the way that you said that the civilizations

1:30:51

that are out there would have come and enveloped us and

1:30:53

then changed the environment that

1:30:55

is around us, hopefully leaving

1:30:57

us at peace. But this is

1:30:59

the AI alignment problem in another form

1:31:02

where another rogue alien civilization

1:31:04

is also another paperclip maximizer

1:31:07

and they're out just gathering all the resources,

1:31:09

doing the things that they do according to their

1:31:11

values. Hopefully their values are

1:31:14

that when they do expand into our

1:31:16

civilization, they leave us alone because some

1:31:18

alignment is still there. But it is the

1:31:21

same fundamental structure of like

1:31:23

there is these goals and alignments

1:31:25

with the universe around them and these aliens

1:31:27

expand and they change the atoms of

1:31:30

the matter that they expand into. And

1:31:32

because we haven't seen that yet, because

1:31:35

that's the assumption that we have, but because we haven't seen

1:31:37

that yet, you are able to, in

1:31:39

your grabby aliens paper, actually

1:31:41

like kind of place us in the arc of history

1:31:44

because of this assumption that grabby

1:31:46

aliens are grabby and that they will attempt to grab things.

1:31:48

Well, to add to that, I mean, they might be artificial

1:31:50

intelligences as well. Wouldn't they Robin? Almost

1:31:53

surely they are. Yeah. You know,

1:31:55

anything, you know,

1:31:57

within a thousand years, I expect our descendants

1:31:59

to be almost entirely artificial,

1:32:03

and certainly within a million years. And these things

1:32:05

would be billions of years older than us. So

1:32:08

yes, our artificial descendants will meet

1:32:10

their artificial descendants in maybe a billion years,

1:32:13

and they won't have stayed something like us. Now,

1:32:15

I can give you a little more optimism in the sense

1:32:17

that if aliens, these grabby civilizations,

1:32:20

appear once per million galaxies,

1:32:22

if the ratio of quiet to loud

1:32:24

ones is even as high as a thousand to

1:32:27

one, that would mean that in this

1:32:29

expansion that they've been doing, that they

1:32:31

will do, they'll only ever meet a thousand

1:32:34

of these quiets

1:32:35

as they expand through a million galaxies.
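
As a quick back-of-the-envelope on those numbers (the million galaxies and the thousand-to-one ratio come from the conversation above; the stars-per-galaxy figure is just a rough order of magnitude I'm assuming, and the variable names are my own):

    galaxies_controlled = 1_000_000        # assumed: each grabby civ eventually fills ~a million galaxies
    quiet_civs_met = 1_000                 # assumed: quiet civs outnumber grabby ones ~1000 to 1
    stars_per_galaxy = 100_000_000_000     # rough order-of-magnitude assumption

    stars_per_quiet_civ = galaxies_controlled * stars_per_galaxy / quiet_civs_met
    print(f"roughly one quiet civilization per {stars_per_quiet_civ:.0e} stars")
    # -> roughly one quiet civilization per 1e+14 stars, which is why each one
    #    would be treated as precious data about what other aliens are like
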

1:32:38

And so these rare places where

1:32:40

an alien civilization appeared would be pretty

1:32:43

special

1:32:44

and worth saving and isolating

1:32:46

because grabby alien civilizations

1:32:48

should be really obsessed with what will happen when

1:32:50

they meet the other grabby civilizations. They'd be

1:32:53

really wanting to know what are these aliens like?

1:32:55

Because they'll have this conflict at the border

1:32:57

and they will wonder, are we going to be outclassed somehow?

1:32:59

Will they trick us somehow? What's going to happen when we

1:33:01

meet at the border? But they might make a national

1:33:04

park out of us then. Might turn us into a zoo. Right.

1:33:06

So every grabby civilization will be really

1:33:09

eager for any data they can get about what

1:33:11

other aliens are like. And so

1:33:14

this small number of quiet civilizations

1:33:16

they come across will be key data

1:33:18

points. They will really treasure those data points in

1:33:21

order to just give them some data about what

1:33:23

aliens could be like. And so that would

1:33:25

be a reason why if aliens came and enveloped us, they

1:33:28

would mainly want to save us as data about

1:33:30

the other aliens. Now, that doesn't mean they don't

1:33:32

freeze dry us all and run experiments, et cetera.

1:33:35

I mean, they're not necessarily going to let us just

1:33:37

live our lives the way we want, but they

1:33:40

wouldn't just erase us all either. Well, Bankless

1:33:42

Nation, I elect Robin Hanson to make

1:33:44

the case for not freeze drying us and

1:33:46

to preserve us to the aliens if they come

1:33:48

at some point in time. But this is not necessarily

1:33:50

in the near term that they're coming, but it's more kind of the

1:33:52

rate of spread. One interesting aspect

1:33:54

of the model is, would it be accurate to say, Robin,

1:33:57

that the model predicts alien civilizations

1:33:59

spread like cancer. And I

1:34:02

mean that maybe mathematically, you

1:34:04

know, without the negative connotation that that brings.

1:34:06

Well,

1:34:07

alien civilizations are created

1:34:10

even more like cancer.

1:34:12

So in your body,

1:34:15

you have an enormous number of cells. And

1:34:17

in order for one of your cells to become cancerous,

1:34:19

it needs to undergo roughly

1:34:22

six mutations in that one cell

1:34:24

during your lifetime.

1:34:26

So

1:34:27

that's basically the same sort of hard

1:34:29

steps process that planets go through. Planets

1:34:32

each, in order to achieve an advanced civilization,

1:34:34

also need to go through roughly six mutations.

1:34:38

That is, for each mutation an

1:34:40

unusual thing has to happen, and then the next unusual

1:34:42

thing happens, and then the next unusual thing happens,

1:34:44

until all six have happened. And then you

1:34:47

get something like us. So the key idea is there's,

1:34:49

you know, a million galaxies, each of which has

1:34:52

millions of planets. And then

1:34:55

all of these planets are trying to go down this path

1:34:58

of having all these mutations,

1:35:00

but almost none of them do

1:35:03

successfully by the deadline of life

1:35:06

no longer being possible on that planet. And

1:35:08

it's only very rare planets like ours

1:35:11

for which all six mutations happen

1:35:13

by the deadline of life no longer being possible

1:35:15

on the planet. And

1:35:17

that's how cancer is in your body. That is, 40%

1:35:20

of people have cancer by the time they die.

1:35:22

And that means one of their cells went through

1:35:24

all six of these mutations, but that was really unlikely.

1:35:26

The vast majority of cells only had one

1:35:28

or zero mutations.

1:35:31

And so life on

1:35:34

a planet

1:35:35

reaching an advanced level where it could

1:35:37

expand into the universe is mathematically exactly

1:35:40

like cancer.

1:35:42

And so it follows the same power law with time, actually.

1:35:44

So the probability that you get cancer

1:35:46

as a function of your life is roughly time

1:35:49

to the power of six because it takes roughly six

1:35:51

mutations. That's why you usually get cancer

1:35:53

near the very end.
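
That power law falls out of the standard hard-steps arithmetic. A minimal sketch, assuming each of the six steps is an independent exponential waiting time with the same rate (the function name, the rate, and the sample times below are illustrative, not numbers from Robin's papers): when the deadline is much shorter than the expected time to finish all the steps, the chance of finishing by time t grows roughly like t to the sixth power, so doubling the available time multiplies the odds by about 2^6 = 64.

    import math

    def prob_all_steps_done(t, k=6, rate=1.0):
        # Probability that k sequential "hard steps", each an exponential waiting
        # time with the given rate, have all happened by time t. This is the
        # Erlang(k, rate) CDF: 1 - exp(-rate*t) * sum_{i<k} (rate*t)^i / i!
        x = rate * t
        return 1.0 - math.exp(-x) * sum(x**i / math.factorial(i) for i in range(k))

    # Deadlines much shorter than the ~k/rate expected completion time, so we are
    # in the hard-steps regime where the probability scales roughly as t^6.
    for t in (0.05, 0.1, 0.2):
        p, p2 = prob_all_steps_done(t), prob_all_steps_done(2 * t)
        print(f"t={t}: P={p:.2e}, P(2t)/P(t)={p2 / p:.1f}  (a pure t^6 law gives 64)")
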

1:35:54

And the chance that a planet

1:35:57

will achieve advanced life is roughly

1:35:59

time to the power of six. And that's why, in fact, in the

1:36:01

universe, advanced life is appearing

1:36:04

over time faster and faster, according

1:36:06

to roughly a power of six, because

1:36:09

of this exact power law. And so the very early

1:36:11

universe had almost nothing. And

1:36:13

then recently, we've shown

1:36:15

up but around us, they're all pop, pop, popping.

1:36:19

And the rate at which they're appearing now is much faster

1:36:21

than it was in the past because

1:36:23

of this power law. And it shouldn't be

1:36:25

lost on listeners that cancer

1:36:27

is grabby. Cancer

1:36:30

falls in the grabby category. And

1:36:32

so there's a bunch of quiet cells that are

1:36:34

just minding their own business doing their job in

1:36:36

harmony with their neighbor cells. And then

1:36:38

one cell goes rogue and decides

1:36:41

I'm going to grab everything that I can around me, and I'm

1:36:43

going to grow to the best of my ability. And

1:36:45

so like, it's just interesting to see, no

1:36:47

matter what scales or what

1:36:50

mediums we're perceiving,

1:36:52

whether it's a biological cell, the

1:36:55

human species as a whole, or

1:36:57

this theoretical AI super

1:36:59

intelligent robot, the same

1:37:01

structures continue to show up. And

1:37:04

so Robin, thank you for helping us navigate

1:37:06

all of these different planes of existence and being

1:37:09

able to reason about them all at once. Well,

1:37:11

we did a brief survey, but

1:37:13

happy to come and talk again sometime if you'd like. Right?

1:37:16

Yeah, there's a number of different rabbit holes that

1:37:18

we did not go down in the interest of

1:37:20

time. How about this? Because we've got a crypto audience,

1:37:22

Robin, do you have any hot takes on crypto? What do you think of

1:37:24

this stuff? How about that? I mean, check that

1:37:26

box. I don't have a new take on crypto.

1:37:29

My old take has always been,

1:37:31

for any new technology, you

1:37:33

need both some fundamental algorithms

1:37:35

and some fundamental tech, and then you need

1:37:37

actual customers and some people

1:37:40

to pay attention to those customers and their

1:37:42

particular needs. And you have to adapt the

1:37:44

general technology to particular customer needs.

1:37:47

Crypto unfortunately moved

1:37:49

itself into a regime where most of the credit

1:37:52

and celebration and money

1:37:54

went for having a white paper and an algorithm,

1:37:57

and not so much for actually connecting

1:38:00

to customers. And

1:38:02

so unfortunately there's this huge appetite

1:38:05

for tools and platforms under

1:38:07

the theory that if we make a tool and platform

1:38:10

other people will do the work to connect that to customers.

1:38:13

And unfortunately there are not so many people who

1:38:15

are

1:38:15

sitting in those

1:38:16

next roles, but

1:38:18

them succeeding at that task is the main

1:38:20

thing that will make the rest of crypto succeed

1:38:22

or not. There's plenty of tools and platforms.
