#157 – Ezra Klein on existential risk from AI and what DC could do about it

Released Monday, 24th July 2023

Episode Transcript
0:00

My view is you try to slow this down

0:02

to the extent you do through forcing

0:04

it to be better. I don't think, hey, we're going

0:06

to slow you down is a strong or winning

0:08

political position.

0:10

I do think you need to achieve

0:12

X before you can release a product is

0:15

how you slow things down in a way that makes

0:17

sense. So I think it would be possible

0:19

to win a political fight that

0:21

demands a level of interpretability

0:24

of AI systems that basically

0:26

renders the major systems null

0:28

and void right now.

0:30

Maybe explainability, interpretability is not

0:32

possible, but it's an example of something

0:34

where if Congress did say you have to do

0:36

this, particularly for AI that does X,

0:39

it would slow things down because frankly, they don't know how to do it

0:41

yet.

0:44

Hi, listeners. Rob Wiblin here, head of research

0:46

at 80,000 Hours. Over at the

0:48

New York Times, Ezra Klein has been producing

0:51

some great content on artificial intelligence

0:53

this year. So I asked him to come

0:55

on to share his opinions about a number

0:57

of high level strategies for regulating AI

0:59

that have been getting a lot of play recently, or

1:02

at least that I've been seeing discussed a lot recently. I

1:04

think he had some useful takes on what

1:07

approaches are more or less

1:09

viable, which are likely

1:11

to be more or less effective, and also what's

1:13

necessary to make any of those things potentially happen.

1:16

Oh, and also some helpful advice on dealing

1:18

with sleep deprivation when you're the parent of a young

1:20

child. A quick announcement first,

1:23

if you liked episode 149 of

1:25

the show, Tim LeBon on how altruistic

1:27

perfectionism is self-defeating, or

1:30

episode 100, having a successful career with

1:32

depression, anxiety, and imposter syndrome, then

1:34

I can really strongly recommend looking at

1:36

our second podcast feed called 80K

1:39

After Hours for the new interview that

1:41

we have out with Hannah Boettcher about the

1:43

mental health challenges that come with trying to have a

1:45

big impact. People on the team here loved

1:47

it, and it's over on 80K After Hours

1:49

rather than this feed because it was made for people

1:51

with a big and serious interest in

1:54

effective altruism and doing good in particular, which

1:56

I know is only a fraction of the people listening to

1:58

this show these days.

1:59

All right, without further ado, I bring you Ezra

2:02

Klein. Today, I'm

2:05

speaking with

2:08

Ezra Klein. To

2:16

an audience of podcast fans, Ezra probably needs

2:18

a little introduction. He first rose

2:20

to prominence in the mid 2000s for his individual

2:22

blogging, before being picked up to blog for

2:25

the American Prospect and then the Washington Post. In 2014,

2:28

he co-founded Vox.com, where he worked as

2:30

executive director and hosted the enormously popular

2:33

podcast, The Ezra Klein Show.

2:34

In 2020, he moved to the New York Times, where he

2:36

continues to produce The Ezra Klein Show and now also

2:38

writes regular columns. Thanks for coming back on the show,

2:41

Ezra.

2:41

Happy to be here. I hope to talk about

2:44

what governments and labs ought to be doing differently

2:46

in light of recent advances in AI capabilities.

2:49

But for once this time, I'd actually like to start with a question

2:51

we got from a listener, which reads, what

2:54

does Ezra make of the tensions between

2:56

people focused on existing harms caused

2:58

by AI and people focused on harms that

3:00

could occur in the future? It's odd

3:02

to me because in most areas, people who are focused

3:05

on different harms that spring from the same

3:07

thing are naturally political allies,

3:09

because often there will be policy responses

3:12

that can help address both concerns

3:14

simultaneously. What do you think of that

3:16

one? That's interesting. I'd want to think more if I

3:19

think that's true, that people who

3:21

are focused on harms from the same thing are often allies

3:23

as opposed to... I

3:25

often find that the deepest political divisions are

3:28

between the people nearest to each other on

3:30

the political spectrum. So I would not be surprised

3:32

if it's a more generalizable problem than you think.

3:35

But I do think what you're talking about

3:37

here, as I understand it, is the tension between

3:39

the AI ethics and the AI risk communities,

3:41

and in particular, the sort of long-termist

3:44

community worried about super-intelligent

3:46

AGI

3:46

and the people worried about

3:49

biased AI, disinforming

3:52

AI, et cetera. And I

3:54

think you do have there

3:56

things that on one level could be natural

3:58

alliances.

3:59

But one place where maybe

4:02

that question is missing some

4:04

of the argument is that they're not focused

4:07

on problems from the same thing. In fact, they're arguing

4:10

about what kind of thing we are facing. And I

4:13

take the critique of at least many of the

4:15

AI ethics people as being: you long-termists

4:19

who keep saying we're gonna invent super-intelligent

4:21

AGI that can destroy the entire

4:23

world are in fact wittingly

4:26

or not participants

4:28

in a ridiculous hype system

4:30

that is funneling money to this

4:32

like set of two or three or five companies

4:35

and, on the one hand, maybe

4:37

making more likely the thing you fear, but

4:39

at any rate distracting people from

4:41

focusing on the things we should actually fear and

4:44

vice versa. I think that there is a critique

4:47

within the sort of more long-termist community

4:49

that, yeah, sure, algorithmic

4:51

bias might be a problem, but it's sure a pretty

4:54

small problem if you're weighing it up against

4:56

this is gonna kill everybody, right? And

4:59

then there are just, I think, cultural

5:00

frictions between the two communities. The

5:03

way I think AI regulation is gonna happen is: something

5:06

is going to go wrong. There is going

5:08

to be some event that focuses

5:10

attention again on AI, right? There's been a sort

5:13

of reduction in attention over the past couple months. We've

5:15

not had a major new release in the way we did with GPT-4,

5:17

say. And sort of people

5:19

are drifting on to other topics. But at some

5:21

point, you know, there will be a new release. Maybe DeepMind's

5:23

Gemini system is unbelievable or something and

5:26

then at some point there's gonna be a system powerful enough

5:28

or critical enough that goes bad and

5:31

I don't think it's gonna go bad in

5:33

you know, foom and then we're all dead, or if it

5:35

does, you know, this scenario is not relevant.

5:38

But I think it'll go bad in a more banal

5:41

way. Somebody's gonna die, a critical

5:44

infrastructure is gonna go offline, there's

5:46

gonna be a huge scam that

5:48

exploits a vulnerability

5:50

in operating systems all across the internet

5:53

and tons of people lose their money or they lose their passwords

5:55

or whatever. And Congress, which is nervous,

5:58

is gonna... like, that'll be the moment that people

5:59

begin to legislate. And once you get into

6:02

a process where people are trying

6:04

to work towards an outcome, not

6:07

just positioned within a debate, I suspect

6:09

you'll find people

6:11

finding more points of common ground and working

6:13

together a little bit more. I already

6:15

feel like I see from where we were six

6:18

or eight months ago, people coming a little bit

6:20

more to earth and a little bit nearer to each other in

6:22

the debate, not every sort of loud voice on Twitter,

6:25

but just in the sort of conversations I'm

6:27

around and in. And

6:29

I think you'll see something like that eventually.

6:32

I just don't think we're there yet. Yeah.

6:34

If legislation is going to happen here through this kind of

6:36

crisis model where something goes

6:38

really obviously wrong and that causes everyone

6:41

to agree that there's a problem, at least

6:43

like there's at least this one problem that has to be solved.

6:46

What does that imply maybe about what

6:48

people who are worried about these issues should be doing

6:50

now? I guess one approach you might take is just

6:52

to have a whole lot of quite ambitious ideas in

6:54

your drawer that you're ready to pull out if

6:57

your predictions about the ways that things could go wrong do

6:59

actually play out in some way. And then people are going to be very

7:01

interested to hear what

7:03

ideas you have for

7:04

them. Yeah, you need a couple things. You

7:06

need ideas on the shelf, not in

7:09

your drawer. Don't put them in your drawer.

7:11

They need to be on a shelf where other

7:13

people can reach them to shift metaphor

7:15

a little bit here. You need ideas

7:17

that are out there. So this is a governing

7:20

model that in the political science literature

7:22

is called punctuated equilibrium.

7:24

Nothing happens, and then all of a sudden it does.

7:26

All of a sudden there's a puncture

7:28

in the equilibrium and new things are possible. Or

7:31

as it's put more commonly, you never let

7:33

a crisis go to waste. And when there is a crisis,

7:36

people have to pick up the ideas that are around.

7:39

And a couple things are important for that.

7:41

One is that the ideas have to be around. Two

7:44

is that they have to be coming from a source people trust

7:47

or have reason to believe they should trust.

7:50

And three, they have to have some relationship with

7:53

that source. So what

7:55

you want to be doing is building relationships with the

7:57

kinds of people who are going to be making

7:59

these decisions. What you

8:01

want to be doing is building up your own

8:03

credibility as a source on these issues.

8:07

And what you want to be doing is actually building

8:09

up good ideas and battle testing them

8:11

and getting people to critique them and putting

8:14

them out in detail. Right. I

8:16

think it is very unlikely that AI regulation is going to come

8:18

out of a LessWrong post. But

8:21

I have seen a lot of good ideas from LessWrong posts

8:23

ending up in, you know, different white paper

8:25

proposals that now get floated around. And

8:28

you need a lot more of those. It's funny because,

8:30

you know, and I've seen this happen in Congress again

8:32

and again and again. You might wonder, like, why

8:34

do these think tanks produce all these white

8:36

papers, you know, or reports that truly

8:38

nobody reads? And there's a panel that nobody's at. It's

8:42

a lot of work for nobody to read your thing and nobody

8:44

to come to your speech.

8:45

But it's not really nobody.

8:48

It may really be that only seven people

8:50

read that report, but five of them

8:52

were congressional staffers who had to work on this issue.

8:55

And like, that's what this whole economy is.

8:57

It is amazing to me.

8:59

The books that you've never heard of that have

9:01

ended up hugely influencing national

9:04

legislation.

9:04

Right. Most people have not read Jump

9:06

Starting America by Jonathan Gruber and Simon Johnson.

9:09

But as I understand it is actually a pretty important part of the

9:11

CHIPS bill. And so you

9:13

have to build the ideas. You have to

9:15

make the ideas legible and credible

9:18

to people. And you have to know the people you're trying to

9:20

make these ideas legible and credible to like

9:22

that is like the process by which you

9:25

become part of this when it happens.

9:27

Back in March, when you interviewed

9:30

Kelsey Piper, you were

9:33

kind of positive on the idea of just trying to slow down

9:35

advances in AI capabilities so that society

9:38

would have more time to notice the problems

9:40

and fix them. Do you have any view on

9:43

what might be the best mechanism by which to

9:45

slow down the rate at which the frontier

9:47

advances?

9:48

My view is you try to slow this down

9:50

to the extent you do

9:51

through forcing it to

9:53

be better. I don't think, hey, we're going to

9:56

slow you down is a strong or winning political

9:58

position.

9:59

I do think you need to achieve

10:02

X before you can release a product

10:04

is how you slow things down in a way that makes

10:07

sense. So I've used the example,

10:09

and I recognize this example actually

10:11

may be so difficult that it's not possible.

10:14

But you could really imagine, I think

10:16

it would be possible to win a political fight

10:18

that demands a level of interpretability

10:21

of AI systems that

10:24

basically renders the major systems

10:26

null and void right now.

10:28

If you look at Chuck Schumer's

10:31

speech that he gave on safe innovation,

10:33

which is his pre-regulatory

10:36

framework, his framework for discussion of a regulatory

10:38

framework. One

10:41

of his major things is explainability.

10:44

And he has talked to people, I know

10:46

I've been around these conversations, and people have

10:49

told him, this may not be possible.

10:52

And he's put that in there, but he still wants it there.

10:54

Frankly, I want it too. So maybe

10:57

explainability, interpretability

10:58

is not possible. But it's

11:00

an example of something where if Congress

11:02

did say, you have to do this, particularly for

11:05

AI that does X,

11:07

it would slow things down because frankly, they don't know how to do it

11:09

yet. And there are a lot of things

11:11

like that that I think are less

11:13

difficult than interpretability. And

11:15

so I think the way you will end up slowing some of these systems

11:17

down is not, we need to pause

11:20

because we think you're going to kill everybody. I don't think that's

11:22

going to be a winning position. So you

11:24

need to slow down because we need to be confident

11:27

that this is going to be a

11:29

good piece of work when it comes out. I mean,

11:31

that's something we do constantly. You can't just

11:33

build, I mean, in this country, you kind

11:35

of can't build a nuclear power plant at all. But

11:37

you definitely can't build one as quickly

11:40

as you can cutting all the corners. And

11:43

then there are other things you could do that would slow people down. One

11:45

of the things that I think should get more,

11:47

I have written about this, at least some

11:50

attention, is a question of where

11:52

liability sits in these systems.

11:54

So if you think about social media, we basically said

11:56

there's almost no liability on the

11:58

social media companies.

11:59

They've created a platform, the liability

12:02

rests with the people who put things on the platform.

12:05

I'm not sure that's how it should work for AI. When

12:08

you're, I think most of the question is how the

12:10

general underlying model is created.

12:13

And so if OpenAI sells our model

12:16

to someone and that model is used for

12:18

something terrible, is that just

12:20

the buyer's fault or is that OpenAI's

12:23

fault? I mean, how much

12:25

power does a buyer even have over the model? But

12:28

if you put a lot of liability

12:29

on the core designers

12:32

of the models, they would have to be pretty

12:34

damn sure these things work before they release them.

12:37

And so things like that could slow people down. So

12:40

forcing people to make things up

12:42

to a higher standard of quality or reliability

12:46

or interpretability, et cetera, that

12:48

is a way of slowing down the

12:50

development process and slowing it down for a

12:52

reason, which is, to be fair, what

12:54

I think you should slow it down for. Yeah, you've

12:57

now brought up, yeah, most

12:59

of the different kind of regulatory philosophies that

13:02

I was going to ask about. So maybe we can go through them

13:04

one by one. It's on the liability one.

13:06

It's a really interesting question to me. So

13:08

if a company trains a model that

13:11

then is used by someone else

13:12

down the line to successfully create a bioweapon

13:14

or successfully harm people in a really

13:17

big way, who should be legally

13:19

accountable for that? I think currently our

13:21

idea with product regulations is that if you manufacture

13:24

a weapon and then someone uses it, it's the person

13:26

who uses it who's responsible, but you're not

13:29

on the hook. But maybe the incentives for these

13:31

firms would be a lot better and a lot more aligned with

13:33

society. If we said, no, if you

13:35

train and release a technology

13:37

that is then used to harm people in a massive

13:39

way, you've been negligent and

13:42

you should be held accountable in some legal

13:44

framework for the harm that has resulted

13:46

from your decisions of what to do. What

13:49

do you think of that?

13:50

The way a lot of legal regimes work around

13:52

questions like this is they put a lot of weight

13:55

on words that are like

13:57

reasonably or

13:58

predictably,

13:59

or something like

14:02

that, right? So if you think about liability

14:04

in this context, even

14:06

if what you were doing was shifting liability a little

14:09

bit back on to the core model builder, I

14:11

think the way it would work is not to say that, you

14:13

know, anything that happens is their fault, but

14:16

it would be some language like anything

14:19

that happens that reasonably should

14:21

have been predictable or prevented or tested

14:23

for is their fault. And then

14:26

what you would have, functionally, is court cases over what is reasonable,

14:28

right? Which is what you have all the time in different areas

14:31

of law. And, you know,

14:33

I wouldn't try to

14:35

decide that perfectly at

14:37

the outset, but I

14:39

think what you would, the way you would think about that as a company if

14:41

something like that happened

14:43

is you would say, okay, we need

14:45

to have done a level of red teaming here that

14:48

if a court needs to see what we

14:50

did, it is extremely impressive,

14:53

right? It is extremely thorough. And

14:57

if they see it and it's not impressive,

14:59

like we could, you know, be on the hook for a lot of

15:01

money. And so I don't think you can,

15:03

I think it'd be crazy on some level to

15:06

create a level of liability that, you

15:08

know, OpenAI or Google or whomever is

15:12

liable for anything that is done with their models.

15:15

But, you know, this is a place where

15:18

we actually have a lot of experience in consumer law.

15:20

I mean, if I pick up my microwave

15:23

and I hit you with it, my microwave maker

15:25

is not on the hook for that.

15:27

If my microwave blows up because they

15:30

made it poorly, they actually are. And

15:32

the difference there is like they don't need to,

15:34

you know, take into account that somebody

15:36

might pick up the microwave and use it in a totally

15:39

unintended way to bash somebody else's head in. But

15:41

it is on them to make sure that, you know,

15:44

the thing doesn't explode if my

15:47

four-year-old begins, you know,

15:49

pounding his hand on all the buttons all

15:51

at once. And so I don't think this

15:53

is actually as weird as sometimes

15:56

people suggest. Around

15:58

consumer products, we have a

15:59

lot of experience saying that,

16:02

you know, this has to be pretty well tested to not, you

16:04

know, create a problem under

16:07

normal parameters, even of misuse

16:10

or imprecise use. And it's actually

16:12

social media, I think, in the internet that got this

16:14

huge carve out from liability to slightly

16:17

reset people's expectations, such

16:19

that it's like, oh, well, things that are digital, like the

16:21

core company has almost no relevance

16:23

to it. But like, that's not how we've done other things. Yeah,

16:27

it's I think it's interesting that today, if

16:29

one of these

16:29

models was used to create a bioweapon,

16:32

then I'm not sure it would pass a

16:34

common sense standard of reasonableness and foreseeability,

16:37

or at least to get off the hook, that you could say, well,

16:39

people were shouting about how this was

16:41

a potential risk. It was all over the

16:43

media. And you know, what have you

16:45

and you know, there's all of these jailbreaks that allows people

16:48

to get around all of the controls that you have on your

16:50

model. So maybe you have

16:52

been negligent here in releasing this

16:54

product in this way.

16:55

I don't think there's any doubt. I mean, there is no doubt

16:57

in my mind, at least, that if these models

17:00

currently were good enough to provide real

17:02

help at building bioweapons, which I don't think they are,

17:05

they'd be negligent to be releasing them in their current forms.

17:08

Right. I mean, that's just I think that is a totally

17:11

clear thing. They know they cannot protect

17:14

their models from being jailbroken.

17:16

So the saving grace here is the models

17:19

are not good enough to actually do

17:21

that much damage if they're jailbroken. But

17:23

if they were, then you cannot release a model that can easily

17:25

be jailbroken. Like that is

17:28

what a liability standard like that is

17:30

trying to get at. It is on you

17:32

to make sure you can't do

17:34

this. And if you release something

17:36

where actually we know, you know, when

17:39

we do discovery, turns out there are a bunch of emails,

17:41

you know, inside OpenAI where people are like, look, like,

17:44

I don't think this is quite ready. Like, we still think there

17:46

are a lot of ways to jailbreak it. But, you know, the leadership

17:48

is like, no, we got to get this out. We got to beat Google to market.

17:51

That's where we get into a lot of trouble. Yeah.

17:53

So that's kind of creating

17:55

incentives through the tort law system, perhaps

17:57

a different philosophy. But

18:00

often it's called independent auditing,

18:02

evaluations, licensing,

18:04

and so on. On that approach,

18:06

basically before a product could go to market, before

18:09

a model could be trained and released, it

18:11

might have to be sent to a third-party auditor who

18:13

would actively try to get the model to

18:15

do something like spread autonomously to new

18:17

servers in order to avoid being turned off, or

18:19

to help someone produce a bioweapon, or

18:22

commit crimes of various other kinds if it's

18:24

instructed to do that. And if it could successfully

18:27

be made to do those things, then it's determined

18:29

that it's clearly not yet ready for the general

18:32

public, and it would just have to go back for further refinement

18:34

to fix those problems. Yeah, what do you think

18:36

of that broad approach to regulation?

18:39

I think the question there is it depends

18:42

on how good we think the auditors are, where

18:44

that auditing happens.

18:46

And yeah, just how much we believe there's a process

18:49

there that can stay ahead of systems

18:52

that are getting released, even as we don't understand

18:54

them, number one. And then as we get

18:56

systems that have more working memory, and

18:58

so there are systems that are learning post-release,

19:01

how are you auditing a system

19:03

that is changing in theory in a dynamic

19:06

way on the market?

19:08

I learn things every day.

19:10

Right now the systems don't really learn things every day, or

19:12

at least a lot of them don't. They're

19:14

not reabsorbing the data of my conversation

19:17

with them and using that to get smarter.

19:20

But if they were, or if they were

19:22

doing that in real time rather than in batches, what would

19:25

that imply for the auditing? So I think auditing is

19:27

a good idea. And to a point I was making

19:29

earlier about building institutions, I think you want to think about

19:31

building institutions for things like auditing, and

19:33

you want to get a lot of talent into things like auditing.

19:36

But I've talked to some of the auditors,

19:39

and I

19:40

personally am very far from convinced

19:43

that we understand these models well enough to

19:45

audit them well. And if you believe what

19:48

is basically lurking in your question, which

19:51

is huge

19:51

exponential continued

19:54

curves in model capability,

19:56

then I'm even more skeptical. So

19:59

I'm not

19:59

skeptical of the idea in a theoretical

20:02

way, if we could audit, auditing's great. I

20:04

am skeptical, I'm a little worried about basically

20:07

audit washing AI capabilities. Like, oh,

20:09

this went through audit, so now we know it's fine. Like,

20:11

do we? Like, how would we know that? So

20:13

that's a little bit of my concern there. And

20:15

that's a place where we probably just need to do a lot more work and

20:17

research and spend money and get great people

20:20

into that field and so on.

20:21

If that's right, though, that's, we

20:24

can't tell what these models are capable of doing. And

20:26

they're constantly changing, so it's a moving target. So

20:29

we're never really going to have solid

20:31

answers. Isn't that completely alarming?

20:33

It seems like that itself should give us massive

20:36

pause about rolling these things out.

20:39

I mean, I do think it's quite alarming. Yeah, I guess

20:41

that's... I don't even know what

20:43

to tell you. Yeah. Yeah.

20:46

I mean, I think the thing, I think that the place

20:48

where it's very alarming is if you believe

20:50

in a very, very, very rapid

20:53

capabilities curve.

20:54

And this is the thing that I'm currently watching

20:57

to see,

20:57

right? I want to see if,

21:01

you know, I don't know when, if, et cetera,

21:03

GPT-5 is coming or, you know, Gemini

21:06

is coming or whatever.

21:08

I want to see if the next jump is big.

21:10

Yeah. I'm not totally convinced

21:12

yet that at least on the large language models, it will be.

21:15

And so I'm just interested to see that because

21:18

one thing that I think lurks in the head of the

21:21

AI risk people is, foom,

21:24

right? Is this constant sense that

21:26

we're going to be on this curve that it's going to get

21:28

better so quickly we can't keep up with it. If

21:31

that's not true, then actually auditing

21:33

makes a ton of sense. Yeah. If

21:35

it is true, yeah, then we're in a really, really

21:37

weird

21:39

place where probably we don't have a lot of very

21:41

good policy options.

21:42

Yes. I think it's just a really open question

21:45

whether we'll see the rate of progress speed

21:47

up or slow down. But

21:49

both seem really live options. Policy does not

21:51

stay ahead of exponential progress curves.

21:54

Let me just say that as a flat, a flat

21:57

finding from my years doing this work. Policy

21:59

is a

21:59

lagging field.

22:01

Yeah. Yeah, on that point of

22:04

general lessons that you've learned from following

22:06

policy debates, I imagine you've probably

22:09

seen a lot of cases of ideas being

22:11

turned into legislation and then gradually being converted

22:13

into agencies, which then actually

22:15

have to take actions that impact people. Have

22:18

you learned any kind of general lessons about

22:20

what factors people need to keep in mind at the idea

22:22

generation stage that seem relevant here? Yes,

22:25

but I'm going to do this in a weird way. Let me ask you a question.

22:27

Of the different proposals that are floating around Congress right now,

22:30

which one do you find... which do you

22:31

find most interesting? I

22:34

guess the interpretability

22:36

stuff does seem pretty promising

22:38

or requiring transparency, I

22:40

think in part simply because it would

22:42

incentivize more research into how

22:44

these models are thinking, which could be useful from a

22:47

wide range of angles. But from who? Like

22:49

whose package are you most interested in or who do you think is

22:51

the best on this right now? Yeah, I'm not following

22:53

the US stuff at a sufficiently fine-grained

22:55

level to know that. So

22:57

this is the thing I'm getting at here a little bit. I

23:00

think this is a very weird thing happening to me when

23:02

I talk to my AI risk friends, which

23:05

is they on the one hand are so

23:07

terrified of this that they truly think,

23:09

right, that all of humanity might die

23:12

out.

23:12

And they're very excited to talk to me about it. But

23:15

when I'm like, what do you think of what Alondra

23:17

Nelson has done? They're like, who? Well,

23:19

like she was a person who ran the AI Blueprint Bill

23:22

of Rights. I mean, she's not in the administration now. Or

23:24

did you read Schumer's speech? No, didn't read Schumer's speech.

23:26

Like, are you looking at what Ted Lieu is doing? Who's

23:29

Ted Lieu? Like, where is he?

23:32

And one answer to your question in terms of how policy gets

23:34

done is it gets done by policymakers.

23:37

And I am really

23:39

struck, like really struck and have been

23:41

now for many months by the distance

23:44

between the community that understands

23:46

itself as so worried about this and policymakers

23:50

that they're not really trying to reach out.

23:52

They're not really trying to familiarize themselves with them. And

23:54

so what you actually are having happen, which

23:57

I don't really think is great, but I

23:59

think there's

23:59

actually a weird level of reliance by

24:02

the policymakers on the people building the AI systems

24:04

right now.

24:05

Right? Like, who does Biden have to talk to? You know,

24:08

he talks to Sam Altman, he talks to

24:11

Demis Hassabis, he talks to, you know, other

24:13

people kind of making the systems. And

24:16

you know, so one just like very basic

24:18

thing is that there is a beginning

24:21

right now of like a, like, this is

24:23

a kind of relational in what gets called on

24:25

the hill, like educational phase. So

24:28

what Schumer really announced was not

24:31

that he's going to do interpretability or anything else, but

24:34

he's going to convene a series of functionally

24:36

forums through

24:37

which he's going to try to get him and other

24:39

members educated on AI.

24:42

And it's like, if I was worried about this, you know,

24:44

around the clock, I would be trying to get my people

24:46

into these forums, like I'd be trying to make sure Chuck

24:48

Schumer's people knew that they should

24:50

be listening to us. And this person in particular,

24:53

like we think this is like the best articulator

24:55

of our concerns. And

24:57

I would just say that it is unbelievable

25:00

how human and relational

25:03

a process policymaking is. It

25:05

is crazy how small a number

25:07

of people they rely on, right? It

25:10

is just nuts that a key policy

25:12

will just be because like the person in charge

25:14

of the subcommittee happened to

25:17

know

25:17

this policy analyst

25:19

going way, way, way, way, way back. And

25:22

that that's a big part of it. I think that there's a lot more

25:25

weirdly interest right now in sort

25:28

of

25:28

like people want to talk to other people who share

25:30

their level of concern and I think are not

25:33

really enjoying the process or not really

25:35

engaging that much in the process of

25:37

trying to get beyond that.

25:39

Right. I know you've been in like a little bit of a spat with Tyler

25:41

Cowen about, you

25:43

know... I saw you sort of tweet like,

25:46

you know, the people who think, you know, who are worried

25:48

about x-risk have won and we don't need to talk to the deniers

25:50

anymore. And like he says, no, they haven't. I'll

25:53

say I'm a little bit more on his side of the 'no, they haven't.'

25:55

But even putting that aside, the

25:57

question really, which actually a lot

25:59

of us don't even know the answer to is, what

26:02

even do the key members of Congress here believe?

26:04

What are their intuitions? Who does

26:06

need to be convinced? Because a couple

26:09

members of Congress are going to be the people all the other members

26:11

of Congress listen to on this. And

26:14

I just cannot emphasize enough to people who've

26:16

not covered policy, which for my sins I have

26:18

for many years, it's really,

26:20

it ends up being on everything. It's like seven people

26:23

end up mattering. And it's really important

26:25

to identify the seven people and then figure out who they're listening

26:27

to.

26:28

Yeah, the message I was trying to send

26:30

with those tweets that you're referring

26:32

to was that my impression was that

26:35

at least so for me as someone who's been worried

26:37

about this for 10 or 15 years, there's now been such

26:39

an increase in awareness and concern among

26:42

you know, in other communities about the possibility

26:44

that AI could go really wrong. That

26:47

now I feel there's a sufficient level of

26:49

interest and concern that it's possible to

26:51

make a whole lot of progress potentially. And

26:53

that rather than try to convince everyone,

26:56

you know, rather than try to go from like, you know, 50% support

26:59

to 100% support, people should be trying to come up with

27:01

ideas now trying to actually come up

27:03

with concrete ideas for what people ought to be doing

27:05

and harnessing the support that is out

27:07

there. Do you think that is a

27:09

is a kind of sensible attitude to have that, you

27:12

know, enough people are troubled and on board now

27:14

that useful work can be done and it doesn't

27:16

all have to be advocacy in the way that it used to be?

27:19

I do think a lot of useful work can be done. I

27:21

think I've seen more things and covered

27:23

more things where you would have thought

27:26

the consensus for action had existed

27:28

for a very long time and yet nothing

27:31

happened year after year after year after year.

27:33

And so this feels a bit like

27:35

that to me right now.

27:37

When I listen to the policymakers,

27:39

what I would say in general is

27:41

there is much more fear

27:44

of slowing innovation down

27:47

or getting the wrong regulation in place

27:49

than

27:49

there is of what happens if innovation

27:52

moves too fast.

27:54

And so, you know, if you look at, say, Schumer,

27:56

and I think the single most important statement here is

27:59

Schumer's speech.

27:59

That's the Senate Majority Leader, and he's taken a personal

28:02

interest in this issue. And he

28:05

calls it safe innovation. And his point is that the innovation

28:07

has to be the thing we protect. And

28:10

I'm not saying he's wrong on that,

28:12

but I do think that's an interesting signal, right?

28:14

He is more worried, I think, in a

28:16

lot of ways that you will get

28:18

the innovation side of this wrong than that you will get the safety

28:21

side of this wrong. And maybe that's

28:23

unfair because I don't want to say

28:25

I'm seeing into his mind here, but

28:28

it is always much harder to have anything

28:30

happen in Congress than not happen. And

28:32

right now where we are is in the not happening side.

28:35

And so the

28:36

fact that there are a lot of news articles and the fact

28:38

that more extreme opinions on this get a lot of attention,

28:41

I just take that as a

28:43

much more distant signal from anything happening

28:46

than I think it might look like. In

28:48

many cases, that's actually a reason

28:51

things don't happen. It

28:53

would in some ways be likelier you would get

28:55

strong legislation if there was

28:58

a special committee who was

29:00

working on this already and there

29:03

wasn't a ton of attention around it, but for whatever

29:05

reason, there was a process, than

29:07

for it to be like such a huge point of contention.

29:11

The more polarizing and the more like

29:13

heated a disagreement or question gets,

29:15

oftentimes the harder it is to get anything done on

29:18

it. So the dynamics here, I think, are less linear

29:20

from attention to action than

29:22

one might hope. And that's true on a lot of things like climate

29:25

change has been like that. I mean, immigration

29:27

is like that. Like making something a big issue does

29:29

not signal

29:30

that it will become a successful issue.

29:32

Yeah. It's interesting to me that here

29:35

living in London, it seems like the extinction

29:38

risk from AI is more prominent

29:40

in the policy conversation than it is in DC.

29:43

And I think in the EU as well. I mean, you've

29:45

got, like, Sunak taking meetings with people who are worried

29:47

about extinction. It's like higher on the

29:50

agenda for the global summit on AI safety. They've

29:52

appointed someone to lead the foundation models task

29:55

force who's definitely concerned about extinction

29:57

risks, among other things. If

29:59

that all goes very well in the UK, I wonder whether

30:02

that would have an influence on the US or the EU or

30:04

whether these are just separate ecosystems

30:07

largely.

30:07

I've been interested in this, what also

30:10

kind of looks to me like a cultural divergence. And

30:12

I get the sense that the EU and

30:14

particularly the UK sees

30:16

itself as playing the

30:19

more regulatory role, right? I think

30:21

they, even though DeepMind

30:23

is based in London, it's owned by

30:25

Google. So functionally, the

30:28

AI race, to the extent it is a race, is

30:30

between the US and China. And

30:33

Europe doesn't see itself as dominating

30:35

the technology or having the major corporations

30:38

on this. And as such, they can

30:40

be more worried about the harms of it. But

30:42

because the technology is going to be developed in the

30:44

US and China,

30:46

what happens there is going to be more meaningful.

30:48

Yeah, there's another big cluster

30:50

of proposals, maybe the largest that is

30:53

a combination of requiring organizations

30:55

to seek government licenses if they're

30:57

going to be training really large or very

30:59

general AI models. And in the

31:01

process of getting a license, they'd have to demonstrate

31:04

that they know how to do it responsibly or at least as responsibly

31:06

as anyone does at the time. And

31:08

those rules could potentially be assisted by legislation

31:11

saying that only projects with

31:14

those government licenses would be allowed to access

31:16

the latest and most powerful AI specialized

31:18

supercomputers, which is sometimes

31:20

called compute governance. How does that approach,

31:23

how do you think that would come out of the messy

31:25

legislative process? I'm interested in

31:27

that. I don't know. I could

31:29

see this going a lot of ways.

31:32

And that one in particular,

31:34

I really got back and forth

31:36

on this because I've talked about it with a lot of people.

31:39

And the reason

31:43

you're hearing me hesitate is that I think it's actually

31:45

a very...

31:47

So here's the question, right? On the

31:49

one hand, yeah, if you take AI, take the metaphor

31:51

basically, that what you're developing

31:53

now is a very, very powerful weapon,

31:55

right? Well of course, if you're developing

31:57

a very powerful, very secret weapon, you want that done

31:59

in a highly regulated

32:02

facility.

32:03

Or you want that done by a facility

32:05

that is highly trusted, right? And workers

32:08

who are highly trusted and everything from their technical

32:10

capacity to their cybersecurity practices.

32:13

So that makes a ton of sense.

32:15

On the other hand, if

32:17

what you say is you're developing the most

32:19

important consumer technology

32:22

of this era,

32:24

and in order to do that, you're going

32:26

to need to be a big enough company to get

32:29

through this huge regulatory gauntlet that

32:31

is going to be pretty easy for Google or a Meta

32:34

or Microsoft to do because they

32:36

have all the lawyers and they

32:39

have the lobbyists and so on.

32:41

I could imagine as that goes through Congress,

32:43

people get real antsy about the idea

32:46

that they're basically creating almost

32:49

a government-protected monopoly, entrenching

32:51

the position of these fairly small number

32:54

of companies

32:55

and making it harder to

32:57

decentralize AI if that's

33:00

something that is truly possible, right?

33:02

And some people believe it is, right? I mean, there's a Google

33:04

thing about how there's this internal Google document

33:06

that leaked about how there's no moat. Meta's

33:08

tried to talk about open sourcing more

33:10

of their work, right? Who knows where it really goes over

33:12

time. But I think the politics

33:16

of saying the government is going to centralize

33:18

AI development

33:19

in private actors is pretty

33:22

tough. There's a different

33:24

set of versions of this, and I've heard many of the

33:26

top people in these AI companies

33:28

say to me, oh, what I really

33:31

wish is that as we get closer to AGI,

33:33

that all this gets turned over to some kind

33:35

of international public body,

33:37

right? You hear different versions and

33:39

different metaphors, a UN for AI,

33:42

CERN for AI, you pick the

33:46

group, an IAEA for AI.

33:49

So I don't think it's going to happen because it's

33:52

first and foremost a consumer technology or

33:54

is being treated as such. And

33:56

the idea that you're going to nationalize

33:59

or internationalize

33:59

a consumer technology that

34:02

is creating all these companies and spinning

34:04

all these companies off is very, there's

34:07

functionally no precedent for that anywhere.

34:10

So this is a place, and this goes maybe

34:12

back a little bit to the AI ethics versus AI

34:14

risk issue, where

34:18

it looks really, really, really reasonable

34:20

under one dominant

34:23

internal metaphor. We're creating

34:25

the most dangerous weapon humanity has ever held. And

34:27

it looks really, really unreasonable

34:29

if your view is this is a

34:32

very lucrative

34:35

software development project that we

34:37

want lots of people to be able to participate in. And

34:40

so yeah, I imagine that I think

34:42

that will have a harder time

34:45

in a

34:46

legislative process once it gets out

34:48

of the community of people who are operating off

34:50

of this sort of shared, this

34:52

is the most dangerous thing humanity has ever done, sort

34:55

of internal logic. I'm not saying

34:57

these people are wrong, by the way, that's just my assessment

34:59

of the difficulty here. Yeah, it

35:01

does seem very challenging to get the level

35:03

of support that you'll require to get

35:06

the level of

35:06

coverage to truly be safe if

35:09

you think that these are incredibly

35:11

dangerous weapons. But I wonder

35:13

if, as you were saying earlier, there's some kind of catastrophe.

35:16

What if someone does use AI technology

35:18

as a weapon and a million people end up dead?

35:20

Has that changed the game enough

35:23

that these things that currently seem not

35:25

really viable might become viable?

35:28

Yeah, I mean, if a million people end up dead, then yes, it

35:30

does. If a couple people at

35:32

a time, I mean, look

35:34

at US gun control laws.

35:36

Yeah, so it would just depend

35:38

on the nature of the... Yeah, I mean, it would

35:40

depend on the nature

35:42

of the problem. Also, I mean, it's not crazy for

35:44

the solution to be proportioned

35:47

to the size of the problem. If

35:49

what you have is a critical infrastructure

35:51

failure, but the outcome of that

35:53

is that Houston, Texas

35:55

has no electricity for three days.

35:58

I mean, that'd be bad.

35:59

But that would not lead to the nationalization

36:02

of all AI. That would lead to

36:04

a set of regulatory safeguards and testing

36:06

and so on about putting AI or some

36:09

kind of system in charge of critical infrastructure. Or

36:11

a

36:12

cybersecurity thing would have

36:14

a different set of ideas. I

36:16

think the thing where there's an AI

36:18

powerful enough that somebody uses it to somehow

36:21

get

36:23

in touch with a wet lab somewhere that doesn't know

36:25

what it's doing and print a synthetic biology

36:29

super weapon and we only break

36:31

up that plot at the last minute or it does kill

36:33

a bunch of people and then we, whatever it is, then

36:36

you could get into scenarios like that.

36:38

Yeah. I mean, so right now

36:40

it makes sense that the frame that people are thinking

36:42

about this through usually is the consumer

36:45

product frame.

36:47

But looking forward, I guess we don't know how long it

36:49

will be, but like five, 10, 15, 20, 35 years,

36:53

at some point

36:54

these models presumably will be capable of causing

36:56

a lot of problems. A lot of havoc. They will

36:58

be up to that task. And then I wonder, like, what

37:00

will the national security establishment think once it just

37:03

becomes

37:04

very clear that these

37:06

could be used for terrorism or they can be used for

37:08

military purposes in a way that's really troubling?

37:12

At that point, do they jump into action and this is

37:14

now like packs a punch within

37:16

their framework? Yeah, but does it pack

37:18

a punch in the sense that they want to regulate

37:21

it or that they want to have the most of it and control it?

37:23

Right? And that's the danger of how the national

37:26

security system

37:28

operates around these things. On the one hand,

37:30

yeah, there are international treaties

37:32

and work governing nuclear

37:34

weapons. And on the other hand, we sure have a hell of a lot

37:36

of nuclear weapons because

37:38

the main lesson a bunch of the

37:40

countries took is we need to have the most

37:42

or we at least need to have deterrence power. So

37:44

I think that's one reason to worry a little bit

37:47

about that sort of metaphor

37:49

or approach. National security

37:51

tends to think in terms of dominance over others,

37:53

not really in terms, I think, of just

37:55

like generalized risk to the population. And

37:58

so...

37:59

doesn't necessarily help. I have a lot of concerns

38:02

about national security here. Yeah. Yeah.

38:05

I think that's true about the competition between countries

38:07

aspect. But I suppose if you're trying to

38:09

limit access within a country, then

38:12

I mean, the national security establishment

38:14

is familiar with the idea of wanting to limit

38:16

access to really dangerous biological

38:19

weapons, for example, for people

38:21

who are inside the United States. I guess what

38:23

we're kind of dancing around, a lot of people have suggested,

38:25

including Sam Altman and actually

38:28

the Secretary General of the UN, they've been pointing to this

38:30

idea of doing the International Atomic Energy

38:32

Agency for AI. And

38:35

the bargain of the International Atomic Energy

38:37

Agency is that under the Nuclear

38:39

Nonproliferation Treaty, the IAEA

38:43

inspects nuclear facilities

38:45

in countries, basically all countries,

38:47

to ensure that they're only being used for peaceful purposes.

38:50

And in exchange, the nuclear superpowers

38:53

transfer peaceful nuclear

38:55

applications to other countries to allow

38:57

them to use it for medical purposes or for

38:59

energy purposes. I guess that's something

39:01

that the superpowers wanted because they didn't

39:03

want proliferation of this. They wanted to

39:05

maintain their monopoly. And I wonder, yeah, could we imagine

39:08

a bargain like that in future at the point where

39:10

it is just very clear to everyone how

39:12

these could be used as very dangerous weapons in a war?

39:15

I have a lot of questions about this,

39:17

to be honest. So let me

39:19

carve out the part that I think we should definitely have it

39:21

and that, you know, it'd be very high on my list right now

39:24

because I think you want to begin building these institutions

39:26

nationally. You need really strong

39:28

national institutions stocked. I mean, they

39:30

should have high pay scales given, you know, how

39:32

much money you can make in AI right now. You

39:34

need really strong national institutions with people

39:36

who understand this technology really well

39:39

and can be in an advisory, a regulatory,

39:42

an auditing, etc., capacity,

39:45

right? Maybe even are creating, you know, autonomous

39:47

public capacities, right? You know, just

39:49

like AI models for the, you know, public good

39:51

oriented for things the public wants that don't have a business

39:54

model. But whatever it is, right? I think it's actually really

39:56

important to begin standing up and

39:59

probably on its own, you know, just

40:01

places in the government where it's like

40:03

you just have 300 excellent AI experts from

40:09

different domains. So that's one thing.

40:12

The question of the international IAEA

40:14

model, it's just really tough because

40:18

I'm

40:18

not saying I oppose it or I just, when

40:20

I try to think about how it would work,

40:23

on the one hand, a lot of what makes it

40:25

possible to do that is that uranium is kind of hard

40:27

to get and hard to enrich. Right.

40:30

And also that system

40:32

has only been so effective. I mean,

40:35

look at Israel, look

40:37

at Iran, look at North Korea, look

40:39

at Pakistan. So that's

40:42

a little tricky. Also,

40:45

again, the reason you could do it is that nuclear

40:47

weapons

40:48

were from the beginning nuclear weapons.

40:50

I mean, we dropped the bomb on Hiroshima.

40:53

We dropped it on Nagasaki. And

40:55

that's why you have something like that,

40:57

because from the beginning, what people

40:59

saw here was the unbelievable

41:01

destructive power

41:03

of these weapons.

41:05

Right now, most people, whatever

41:07

the stories are that pop around the media, just don't

41:10

think these are that destructive.

41:12

So I think that one of

41:14

the most worrying things in this whole

41:16

area is that it doesn't look that

41:18

bad till it's too late, till you have something that's

41:21

actually genuinely destructive. But

41:23

I don't think you're going to have a

41:25

powerful preventive

41:28

regulatory structure that

41:31

is going to keep other countries from having

41:33

their own autonomous, like really profound

41:36

AI models. And like what? I

41:38

mean, if Brazil wants

41:41

to create an AI, like

41:43

a really good AI,

41:45

and wants to put it under, you know, give

41:47

it some national defense authority, are we going to bomb

41:49

Brazil? Like, what is the implied threat

41:52

that is being offered here? Because

41:54

in some cases, like we would go to war, right? I

41:56

mean, we went to war to stop Iraq

41:59

from getting...

41:59

nuclear weapons that it wasn't even trying

42:02

to get. So, you know, there are cases

42:04

where we would actually, you know, take

42:06

that as a reason to go to war. Unlike in

42:09

the nuclear weapons case, are we really going to go to war

42:11

with other countries on AI or maybe just sanctions?

42:14

And then the more central AI becomes

42:17

to economies, to kind

42:20

of everything, the more countries are going to want

42:22

ones that they control, which is completely natural. It's

42:25

just a hard equilibrium for me to imagine

42:27

working, which doesn't mean it won't.

42:29

And again, specifically in a case

42:32

where you have, you know, these

42:34

kind of super AGI models and there's

42:36

a disaster, you know, you can imagine very different

42:38

worlds coming out of, you know, very, very big disasters.

42:41

But

42:42

in this case, it's just,

42:44

you know, it's just very hard for me to picture. Yeah. Another

42:48

broad approach that's out there is sometimes

42:50

branded as a Manhattan Project for

42:52

AI safety, basically the US and UK

42:55

and I guess the EU governments spending billions of

42:57

dollars on research and development to solve

42:59

technical problems that exist around keeping

43:01

AGI aligned with our goals and having sufficiently

43:04

strong guardrails that they can't easily be retrained

43:06

to commit all sorts of crimes, for example. The

43:08

CEO of Microsoft, Satya Nadella,

43:11

has talked in favor of this

43:12

and the economist Samuel Hammond

43:14

wrote an article in Politico that we'll link to. Yeah.

43:17

What do you think of that broad approach? Yeah, that I'm very

43:19

much for. I don't think I would

43:22

choose a metaphor of a Manhattan Project

43:24

for AI safety just because I

43:26

don't think people believe we need that and that's not going to be

43:28

much of a political winner. But it's

43:30

a great thing to spend lots of R&D money

43:32

on and have a really strong public research

43:35

infrastructure around. And a good amount of that

43:37

research should be on safety and interpretability.

43:40

And, you know, we should really want this to work and it should

43:42

happen. And yeah, I mean, I think that makes

43:44

a ton of sense. And I think that's actually a possible

43:46

thing you could achieve. Look, I

43:48

don't trust any view

43:50

I hold about takeoff

43:52

rates. But what

43:54

I do think is that

43:56

if we are in like a sort of vertical takeoff

43:59

scenario,

43:59

the

44:01

policy is just going to lag so far behind that we almost

44:03

have nothing we can do but hope for the best. If

44:05

we're in more modest takeoff scenarios, which

44:07

I think are more likely in general, well

44:10

then building institutions can

44:12

really work and we

44:14

can be making progress alongside

44:17

the increasing capability and capacity and danger.

44:20

And so that's where I think coming up with ideas

44:23

that also just play into the

44:25

fact that

44:26

different countries want to dominate this, different countries

44:28

want to get the most that they can out of this, different

44:30

countries want to make sure a lot of this is done for the public

44:33

good.

44:34

And that it's actually not that expensive.

44:37

It is expensive for most companies, which is

44:39

why OpenAI has to be attached to Microsoft

44:42

and DeepMind had to be part of Google and so on. But

44:45

from the perspective of a country's

44:48

budget,

44:49

it's not impossible to have real traction

44:51

on this. Now, getting the expertise

44:54

and knowing how to get the right engineers

44:56

and so on, that's tougher. But it's doable.

44:59

And so, yeah, I think that's somewhere where there's

45:01

a lot of promise. And

45:04

the good thing about building institutions like that, even if

45:06

they're not focused on exactly what you want them

45:08

to be, is that then when they do need

45:10

to refocus, if they do need to refocus, you

45:12

have somewhere to do that, right? You

45:14

have something that can become, if you

45:17

have a Manhattan Project just

45:19

for AI, well

45:21

then you could have a Manhattan Project for AI safety

45:23

because it was already happening there, you just have to expand it.

45:26

So that's where I think beginning to see yourself

45:28

as in a foundation building phase is

45:31

useful.

45:33

I mean, it's, again, it's why I emphasize that at

45:35

this point, it's good to think about your

45:37

policies, but also think about the

45:39

frameworks under which policy will be made. Who

45:42

are the members of Congress who

45:45

understand this really well and you're

45:47

hoping will be a leader on this and you

45:49

want to have good relationships with and keeping

45:51

their staff informed and so on? And what

45:54

are the institutions where all this work is

45:56

going to be done and do they need to be built from scratch and

45:58

what kind of people go into them and how do you get the

46:00

best people into them. And

46:02

all of that is not like the policy

46:04

at the end of the rainbow, but you

46:06

need all that for that policy to ever happen and to ever

46:09

work if it does happen. I

46:11

guess the dream here would be, I think at

46:13

the moment, the ratio of research that

46:15

enhances capabilities in

46:18

AI versus trying to steer

46:20

them and align them is something like 100 to 1. And

46:22

maybe it would be great if we could get that to 10 to 1

46:24

or something like that. Yeah, I totally

46:27

agree. Yeah. What sort of design

46:29

details might affect whether

46:32

the

46:32

Manhattan Project for AI safety or whatever we end up

46:34

branding it, whether that actually ends up helping

46:37

or I mean, you could imagine a failure scenario where

46:39

almost all of it ends up being co-opted for capabilities

46:41

research anyway, because that's to

46:43

many people more appealing and it's certainly more

46:45

profitable. Yeah. Would you have any advice on

46:47

how people can kind of guide a broad

46:50

project like that towards funding the kinds of things

46:52

that they think are most valuable? I mean, that's pretty,

46:54

I think, straightforward, which is that

46:56

in the appropriation, the goals

46:58

of the research are written into it. I mean,

47:00

that happens all the time. When you think about

47:03

how money is apportioned

47:06

for ARPA-E or different programs of the

47:08

Department of Energy

47:11

or the NIH, when Joe Biden

47:13

has his cancer moonshot from a few years

47:15

back, it isn't any

47:17

kind of new or unsolved political problem. How

47:20

do you tell an agency what this

47:23

appropriation is actually for?

47:25

So that's about getting congressional support to

47:27

do the thing you want it to do as opposed to do the

47:29

thing you don't want it to do. And again, that goes back

47:32

to relationships. And again,

47:34

one thing I am trying to emphasize in this conversation

47:36

a little bit is that there is just a lot of boring

47:38

work here that I don't exactly see happening, right?

47:41

That it's a lot of, you know, making sure that

47:44

the people who eventually are going to write this bill are

47:46

listening to you when they write it.

47:48

Yeah. I mean, the sheer number

47:50

of people

47:51

who have experience in this sort of work, you know, is

47:53

really very small, I think, relative to the size of the problem

47:56

and certainly maybe relative to the appetite for

47:58

assistance that exists now.

47:59

Do you have any advice on how do you scale

48:02

up a community that's interested in a policy

48:04

problem when maybe it needs to be 10 or 100 times

48:06

bigger than it is? I don't think it's that small actually.

48:10

And again, part of this is my experience of,

48:12

I lived in DC for 14 years, I cover politics. You

48:15

cannot imagine how small

48:17

the organizations that dramatically

48:20

affect what happens in Washington DC are.

48:23

I mean, the Center on Budget and Policy Priorities is

48:25

just one of over a long period of time,

48:27

the most effective consequential nonprofits

48:30

like anywhere. The

48:33

amount of good they have done on the

48:35

social safety net is incredible.

48:37

And there's not 20,000 people working at

48:39

CBPP. I'd be surprised if

48:41

there were more than 100. I mean, there might be more than 100. I

48:43

don't actually know the staffing, but it's

48:46

not going to be more than 500. I mean, it's not going to be more than 200.

48:49

And so I

48:50

don't think this is that small. I

48:53

don't think that people are located in the right place.

48:55

I don't think they've been trying to build

48:58

a bunch of DC institutions. I

49:00

noticed this on crypto a few years ago, and I apologize

49:02

because I'm going to forget the name of the institution that

49:04

I'm thinking of here. But Jerry

49:06

Brito, who is

49:09

in DC trying to do crypto regulatory

49:11

work, and it was like he had like

49:14

a little crypto outfit, little crypto

49:16

regulatory

49:18

nonprofit trying to create

49:20

crypto favorable laws. And

49:23

I think it had like six people in it, a dozen

49:25

people in it. And then when there was this big fight

49:27

over crypto in Congress, all of a sudden this group was important

49:29

and they were getting calls because they'd been there like working

49:32

on building relationships. And

49:34

when somebody needed to call somebody, they were actually

49:37

there. And so

49:39

it is not by any means beyond the

49:41

capabilities of

49:45

this community, these

49:47

companies, these organizations, these nonprofits

49:49

to be setting up shops, fairly

49:52

well-funded shops in Washington, DC,

49:54

where the point is that they're turning out good

49:56

research and trying to meet people. Yeah,

49:58

this does get a little bit to, like,

49:59

how scared are you, right? If

50:02

you're so scared that you want to devote your life to

50:04

this, but not if you have to live and work in

50:06

D.C., you're not that afraid. A

50:09

lot of people want to be out in San Francisco where the action is,

50:11

but the regulatory action is going to be in D.C.

50:14

Well, yeah, I guess on the question

50:16

of where to locate, when you were talking about the takeoff

50:18

speeds, it kind of occurred to me that in

50:20

a slow or medium kind of takeoff scenario,

50:23

then the D.C. policy seems really quite

50:25

important. In a fast takeoff scenario,

50:28

the policy and governance that seems to matter is the policy

50:30

and governance inside the AI lab. I

50:32

mean, it's an extremely bad situation to be

50:34

in in the first place, if things are taking off really quickly. But

50:37

then the organization that can potentially

50:39

react and do something useful is, you know, OpenAI

50:42

itself, perhaps, and, you know, who's making the

50:44

decisions there and on what basis

50:46

and, you know, what sort of information that they have

50:48

to rely on. That stuff seems like it might

50:50

be able to help in that case.

50:52

I find the number

50:55

of AI risk people

50:57

who seem to me to be working inside

51:00

AI shops, building the AIs

51:02

they are terrified of, caught in a competitive

51:04

dynamic they are perfectly happy to admit to me

51:06

that they cannot stop, to just

51:08

be a little bit of a puzzling sociological outcome

51:11

here.

51:12

And I think it's because working on AI is really

51:14

cool and fun. I don't think it's specifically because, like,

51:16

they're motivated by profit, but they do want to work

51:19

on AI,

51:20

whereas, you know, spending

51:22

your time in D.C. working on AI regulation is like kind

51:24

of a pain in the ass. But

51:27

I don't know. I think there's something a little bit weird about

51:29

this. Like, again, as somebody who's been, you know,

51:32

as you know, like very friendly to this community and

51:34

is probably among, I don't know, national political

51:36

columnists, probably in touch with more AI risk

51:38

people than just about anybody else.

51:41

I find the number of them who

51:44

seem to me to be accelerating the

51:46

development of AGI to be a little weird

51:48

compared to the number who seem to have set

51:50

up shop in Washington to try to convince

51:53

Washington to not let AGI happen. It

51:56

doesn't look to me like it's working out the way they wanted it

51:58

to, but I don't see people, you know,

51:59

radically all leaving the companies

52:02

and then setting up the shops. There's just something here

52:04

that makes me wonder

52:06

what's actually going on in people's motivation systems.

52:09

We have an article on exactly this question

52:11

of whether it's good or bad to take roles at AI

52:13

labs. We'll stick up a link to that in the show

52:15

notes. I think one thing that is driving

52:18

that phenomenon is that until recently, I think people

52:20

were just extremely pessimistic about

52:22

whether government would be able to have a useful role

52:25

here. They thought, I think most people thought that

52:27

there was just not going to be significant interest from

52:29

mainstream politics. To me, that seems like it was a

52:32

massive blunder. I think thinking

52:34

through more concretely how this

52:36

would play out would have revealed that there

52:39

was going to be a big policy opportunity here. There was

52:41

potentially going to be a big role for government to make things better

52:43

or worse. That's maybe

52:46

something that I wish had gone differently. One

52:48

thing I will say is that I don't want to suggest that there's

52:50

absolutely nobody doing this work. There's a

52:52

really good group at Georgetown, CSET,

52:55

the Center for Security and Emerging Technology;

52:57

they've been doing this work. It's

52:59

really notable, I think, that when Chuck

53:01

Schumer, the majority leader, wanted to give a

53:04

speech announcing his big SAFE

53:06

Innovation

53:06

framework, he went to them. They're

53:08

not a huge deal. They don't have 6,000 people. They're

53:11

not the Brookings Institution.

53:14

There they were. That's where Chuck

53:16

Schumer gave his speech. He's clearly in touch with them

53:18

and thinking about things they say. There

53:20

are some people doing this. Also

53:23

I know that they were funded by people in the EA

53:25

community. I would just

53:27

say that there is payoff to that.

53:29

Hey, everyone. I just wanted to note that when we

53:31

were looking up a link for this one, we realized

53:33

that Schumer had actually given this talk, not

53:36

at CSET, but at the very similarly

53:38

named CSIS, which is

53:40

just a different think tank in DC, one that's more

53:42

focused on international relations. CSIS

53:45

is a bit bigger and older than CSET,

53:48

but we kept this question and answer in because we

53:50

thought you should get the chance to hear Ezra's broader

53:52

point here, which may well stand, even

53:54

if this isn't a perfect example of the

53:57

phenomenon that he's trying to describe. Okay,

53:59

back to the interview.

55:59

really shape it in Washington or in Brussels

56:02

or in some state capitals or

56:04

whatever. Have

56:07

people actually adapted to that world?

56:10

Are people making the investments in terms of their time

56:12

and energy and money and institution building

56:15

that fit where we are now

56:17

as opposed to where we were four or five years ago? Yeah,

56:21

maybe it's hard for me to fully

56:23

buy into that explanation just because just personally,

56:26

I find AI so boring. I feel

56:28

like I've been dragged, kicking and screaming, into

56:30

having to think about AI from a technical point of view

56:33

just because I think it's so incredibly important. But

56:36

yeah, have you ever tried to sit down

56:38

and read an AI safety paper? I guess

56:40

because I'm not a technical person. It

56:44

doesn't get me that excited.

56:46

I don't really believe you. You really don't believe

56:48

me? Listen, I've

56:50

read 'How do you catch a Chinchilla?'

56:52

and all that. Some of the papers are boring. I

56:55

think this stuff is interesting. It's gotten more interesting

56:57

recently. Maybe we've got to go back

56:59

to the 2017 stuff. Yeah, I have heard

57:02

a lot of your podcasts on AI. I

57:04

think I'm pretty good at telling as a professional here

57:06

when a podcast host is not into the thing they're talking

57:09

about. Even if you don't wish

57:11

you were talking about this, I think you're pretty into it.

57:13

Well, I'm interested in a lot of different topics.

57:18

I guess I'll just have to accept that you're

57:21

not convinced of this one. There's a strikingly

57:23

large number of different mechanisms by which AI

57:25

could end up causing harm which various different people

57:27

have pointed to. I want to, of course, try

57:30

clustering them into groups that have something in common like misalignment,

57:32

misuse, algorithmic bias,

57:34

natural selection perspective, and so on. I

57:36

know from listening to the extensive coverage

57:38

of AI on your show over the last year that you're personally

57:41

engaged with a wide range of these possibilities

57:43

and take many of them pretty seriously. What

57:46

possible ways that advances in AI could go

57:48

wrong are you

57:48

likely to prioritize in your coverage of the

57:50

issue over the next year or two? I don't

57:52

know that I'm going to prioritize any one over a set of others.

57:55

I find the whole question

57:58

here to be almost...

57:59

almost unbearably speculative,

58:02

right? That we're operating in a space

58:04

of pretty radical uncertainty. And

58:07

so a number of the

58:10

most

58:11

plausible and grounded ways

58:13

AI could go wrong are also

58:15

in certain ways the least spectacular, right? AI

58:17

will be bad in the ways our current society is bad

58:20

because it is trained on the data of our current society.

58:22

That is both a clear harm

58:25

that is going to happen and is not

58:27

civilization ending. And then as you

58:29

get up the ladder to civilization ending harms

58:32

or civilization threatening harms, you

58:35

are working with obviously more speculative

58:37

questions of how AI will develop, how it will be used,

58:40

et cetera. And so one

58:42

of the things that I'm interested in is

58:44

not so much trying to tell

58:46

policymakers or my audience, you

58:48

know, you should think about this harm and not that harm, but

58:51

that we need a structure.

58:53

We need systems. We need expertise

58:56

and institutions and expertise in the correct

58:59

institutions to have visibility

59:02

on how artificial intelligence is developing.

59:05

We need to be thoughtful about the business

59:07

models and structures around

59:09

which it is being built. So this

59:11

is something I keep emphasizing that I think other people

59:14

really underemphasize: the kinds

59:16

of artificial intelligence we have are going to be highly

59:19

governed by the kinds of artificial intelligence

59:21

that get a

59:23

quick market share and that seem to be profitable.

59:25

So already I think it is a kind of harm

59:28

that is emergent that more

59:30

scientifically oriented systems like AlphaFold

59:33

are getting a lot less attention than just

59:35

an endless series of chatbots because the

59:37

chatbots have such a clear path to

59:39

huge profitability. And so systems

59:42

that I think could be better for humanity are much

59:44

less interesting to the venture and financier

59:47

class than systems that could be plugging

59:49

into search engines right now. And

59:51

so being thoughtful about what the

59:53

monitoring systems are, what the business models

59:55

are, you know, how we're doing audits, in

59:58

many ways I think we're in a period more of

59:59

institution building and information gathering

1:00:02

than saying like, this is what's going to go wrong and here's

1:00:04

how we're going to prevent it. Yeah. You've

1:00:07

made these points about business models quite

1:00:09

a few times and I think it's a good one and it's

1:00:11

not one that comes up a whole lot elsewhere. Do

1:00:14

you have a view on what sort of business model

1:00:16

would be the best one to

1:00:18

take off if we could affect what sort of business

1:00:21

model AI companies are using?

1:00:23

Yeah, I think I

1:00:25

do on a couple of levels.

1:00:28

One is I just think the competitive race

1:00:30

dynamics between the different companies are worth

1:00:32

worrying about.

1:00:33

I basically

1:00:36

understand

1:00:37

the incentive structure

1:00:40

of AI development right now as being

1:00:42

governed by two separate races,

1:00:44

one between different companies, right?

1:00:47

You have Microsoft versus Google versus Meta, somewhat

1:00:50

versus Anthropic, and then you have some other players,

1:00:53

and then between countries, the

1:00:55

US versus China. You can maybe say,

1:00:58

given that DeepMind is in London, the West versus

1:01:00

China, something like that. Then

1:01:02

of course, as time goes on, you're going to have more

1:01:04

systems coming out of more countries. The

1:01:08

problem, and this is a very banal point

1:01:10

that many other people have made, is that there's

1:01:12

going to be more direct pressure to

1:01:14

stay ahead in the race than

1:01:17

there is to really do anything else. You can have all these

1:01:19

worries and all these concerns, but

1:01:22

it's really a trump card to say, or it

1:01:25

certainly acts in our system like a trump card to say,

1:01:28

well, if you don't do this, or if you slow

1:01:30

down and do that, they're going to get ahead of you over

1:01:32

there. That to me is one

1:01:35

set of problems I think we should worry about around business

1:01:37

models, for instance. If there's a very near-term

1:01:39

path to massive profitability, then people

1:01:41

are going to take that path and they're going to cut a lot of corners to get there.

1:01:44

I think when people think of business models, they're primarily

1:01:47

thinking

1:01:47

of things like hooking it into advertising,

1:01:50

and I am too. Just think about

1:01:53

algorithmic trading funds that

1:01:55

have billions of dollars to throw at this

1:01:57

and that might want to create, but not really understand,

1:02:00

what they're creating in terms of some

1:02:02

kind of artificial system that

1:02:04

is inhaling data from the markets, that

1:02:07

is hooked up to a fair number of tools, and that

1:02:09

is turned loose to try to make as much

1:02:11

money as it can in an automated way. Who knows what a

1:02:13

misaligned system like that can end up doing. So

1:02:16

how you make money, that

1:02:17

I think is important. And

1:02:20

in general, one reason I focus on it, I should say,

1:02:22

is that I think the people who focus

1:02:25

on AI risk somehow

1:02:27

have a bit of a blind spot here. I think there's

1:02:29

a little bit of a weird forgotten

1:02:32

middle between what I think of as

1:02:34

the AI ethics concerns, which

1:02:37

are around algorithmic bias and misinformation

1:02:39

and things like that, and what I think of as the AI

1:02:41

risk concerns, which are more existential.

1:02:44

And I think that the sort of more banal, like,

1:02:46

how is everybody going to make money on this? And

1:02:48

what is that race going to do to the underlying technology

1:02:51

has been a little neglected. Yeah. I

1:02:54

wonder if one reason it might be neglected is that

1:02:57

people aren't sure, even if we would prefer the scientific

1:03:00

AI models to flourish more

1:03:02

than others, say, and to be more profitable. People

1:03:04

might wonder how, like, what

1:03:06

policy options are there to really influence which

1:03:09

of these business models end up being most

1:03:11

successful. Did you have any ideas there for how

1:03:13

one could push things in one direction rather than another?

1:03:16

I think, given where I am and where I'm

1:03:20

talking, one reason it's neglected is that in

1:03:20

general, one blind spot of effective altruism

1:03:23

is around capitalism. And there, for

1:03:27

a lot of reasons, there's just, like, not that much

1:03:29

interest in or comfort with critiquing the incentives

1:03:32

of

1:03:33

business models and systems and wealthy people

1:03:36

within effective altruism. So I just want to note that to

1:03:38

not let you and your audience off the hook here.

1:03:40

I don't think it's totally accidental

1:03:42

that this has happened. I think many

1:03:44

people have said more or less it looks like capitalism

1:03:47

is going to plausibly destroy the world, basically,

1:03:49

because of this race dynamic that you described. That's a very

1:03:52

common line. So I think people

1:03:54

at least step into noticing some ways in which

1:03:56

the incentives are poorly aligned. Yeah, I think all of

1:03:58

a sudden people now see the race

1:03:59

dynamic, but I just think in general this is

1:04:02

a slightly neglected space in the EA

1:04:04

world. Anyway, the point is not to make this into a critique

1:04:07

of EA. Look, I think this

1:04:09

is hard. Do I have a plausible

1:04:12

policy objective in my pocket?

1:04:14

Not really. If it were me

1:04:17

at the moment, and I were king, I would

1:04:19

be more restrictive on business

1:04:22

models rather than less. I would

1:04:24

probably close off a lot of things. I would say you

1:04:26

can't make any money using AI to

1:04:28

do consumer manipulation. I

1:04:31

think the possible harm of

1:04:33

having systems that are

1:04:35

built to be relational, so

1:04:38

think of things like what Replika is doing or

1:04:41

I'm very impressed by Po, what

1:04:43

Inflection.ai has built. I think it's

1:04:45

a pretty interesting... Is it called Po? I might

1:04:47

have the name of it wrong. Pi? You think of the personal name? Pi.

1:04:50

Pi. Maybe it's Pi. But the Reid

1:04:52

Hoffman-oriented AI company.

1:04:55

I think that's a very impressive model. It's very, very

1:04:57

personal. It's really nice to talk to. I think

1:04:59

if you imagine models like that that build

1:05:02

a long-term personal relationship with people,

1:05:05

understand things about the people they're talking

1:05:07

to, and then use that to manipulate what they do,

1:05:09

I think that's pretty scary. So I'd do

1:05:11

things like that, but on the other hand, I would be putting a lot more

1:05:14

public money and public resources

1:05:17

into AI. Something that I've talked

1:05:19

about at different times on the show and talked about with other people

1:05:22

is I would like to see more of a vision

1:05:24

for AI for the public good. What do we want

1:05:27

out of AI? Not just how do we get to it as

1:05:29

fast as we possibly can, but what do we want out

1:05:31

of it? What would it mean to have some

1:05:33

of this actually designed for public benefit

1:05:36

and oriented towards the public's problems? So

1:05:39

it might be that the

1:05:41

public, quote unquote, is

1:05:44

much more worried about a set of scientific

1:05:47

and medical problems as opposed

1:05:49

to

1:05:51

how to build chatbots or help

1:05:53

kids with tutoring or something, but

1:05:56

because the others have more obvious business

1:05:58

models, we get the latter and not really the former.

1:06:01

And so I think that some

1:06:03

of this is just you would have to actually

1:06:05

have a theory of doing

1:06:07

technology for the public good as opposed

1:06:09

to just having a regulatory opinion

1:06:12

on technology to the extent you have

1:06:14

any opinion at all on it. And

1:06:16

we tend to be more comfortable, at least in America,

1:06:19

with the latter. And so some

1:06:21

of the reason it's hard to come up with some of the things

1:06:23

I would like to talk about is that they feel very distant

1:06:26

from our instincts

1:06:29

about

1:06:29

and our sort of muscle memory about how to approach

1:06:32

technology.

1:06:33

Yeah, I guess one change of incentives you

1:06:35

could try to make is towards, like, very

1:06:38

narrow systems that are just extremely good at doing one

1:06:40

thing like a model that is extremely

1:06:42

good at folding proteins. They don't tend

1:06:44

to generate nearly so much concern

1:06:46

because they're not likely to be able to act that autonomously

1:06:49

because just their abilities are so

1:06:51

narrow. And it seems like to do an awful

1:06:53

lot of good, we don't need necessarily

1:06:55

general AIs that are capable of doing, you know,

1:06:58

most of the things that humans are able to do; we probably can

1:07:00

do an awful lot of good just by training these narrow

1:07:02

systems. And those ones are just a

1:07:04

lot less troubling from many different points of view.

1:07:07

This is my gut view. And

1:07:09

in addition to that, there's always

1:07:12

the prospect out there

1:07:13

of achieving

1:07:15

generalized artificial intelligence. And

1:07:17

if you can get the AGI, then you

1:07:20

get to sort of pull out of your argumentative pocket.

1:07:23

Well, once we hit that moment,

1:07:25

then what that self improving, generalizable

1:07:29

intelligence can do, you know, will

1:07:31

so outpace all the narrow systems that

1:07:33

it'll be ridiculous that we wasted all this time doing

1:07:35

these other things. So blah, blah, blah, blah. But

1:07:38

if you're skeptical, and I do have

1:07:40

still a fair amount of skepticism that we're going to hit AGI

1:07:43

or the kinds of super capable

1:07:45

AGI that people believe in anytime

1:07:47

soon, then actually you would want a

1:07:50

lot more narrow systems. And one reason you'd

1:07:52

want them is you might believe as I believe that

1:07:55

the chatbot

1:07:55

dynamics don't

1:07:58

actually orient

1:07:59

themselves to things that are that good for society.

1:08:02

So technology always comes with a point of view. Technology

1:08:05

always comes with things that it is better at and worse at.

1:08:08

And something I have said on my show before

1:08:10

and talked about in conversation with Gary Marcus

1:08:12

who's more of a critic of these systems, but this

1:08:14

is a point I agree with, is

1:08:16

that I think you're basically in chatbots

1:08:19

creating systems that are ideally

1:08:21

positioned to bullshit. And I mean here,

1:08:23

bullshit in the Harry Frankfurt version

1:08:25

of the term, right, where bullshitting is

1:08:28

speaking without regard to the truth, not specifically

1:08:31

lying, just not really caring if it's true or

1:08:33

not, not even really knowing if it's true or not, right?

1:08:35

That's in some ways the whole point of hallucination

1:08:37

or the whole point of when I go to an AI system and I say

1:08:40

to it, hey, can you write me a college essay,

1:08:42

a college application essay that is

1:08:44

about how I was in a car accident as a child?

1:08:47

And it wrote me an amazing essay when I did that and

1:08:49

talked about how I got into martial arts and, you

1:08:51

know, learned to trust my body again and how

1:08:53

I worked at a hospital with other survivors

1:08:55

of car crashes, just none of it had happened,

1:08:58

right? It just made up this whole backstory for me off

1:09:00

of like a one sentence prompt. And

1:09:02

so

1:09:03

when you have a system like that, what you have is a system

1:09:05

that is well-oriented

1:09:08

towards people doing work without much regard for the truth.

1:09:11

And I think there's actually a lot of reason to think

1:09:13

that that could be a net negative on society. And

1:09:15

you don't even have to be thinking about

1:09:17

high levels of disinformation or deepfakes

1:09:20

there, just a gigantic

1:09:22

expansion in the amount of garbage

1:09:24

content that clogs up the human processing

1:09:26

system and the sort of collective intelligence

1:09:29

of humanity. Like that too would just

1:09:32

be sludge. That would just be a problem

1:09:34

if everything got way more distracting and

1:09:36

way harder to work with

1:09:39

and way harder to separate signal from noise. Like that

1:09:41

would just be bad. Meanwhile, a

1:09:43

lot of these narrow systems, I think there's incredible work

1:09:46

you can do, right? And if the money

1:09:48

and investment and excitement that's going into the chatbot race

1:09:50

were going into trying to figure out lots

1:09:53

more predictive systems for finding

1:09:55

relationships between real things that human

1:09:57

beings don't have the cognitive

1:10:00

capacity to master, I think that

1:10:02

would be great. And so to me,

1:10:04

that's where, again, business models

1:10:07

matter, but also that's somewhat

1:10:09

on the public and on the government. You don't

1:10:12

just want the government to say

1:10:14

this business model is bad. You

1:10:16

want it to say that one is good sometimes,

1:10:19

or you want it to make that one viable. I mean, the whole

1:10:21

idea of something like carbon pricing, or

1:10:23

separately what we actually did in the Inflation Reduction Act,

1:10:25

where you put huge amounts of subsidies into

1:10:28

decarbonization,

1:10:29

is you are

1:10:31

tilting towards a business model. You're saying, if you do

1:10:33

this, we are going to make it more profitable for you to do it. You

1:10:35

can imagine prizes with AI, right, where

1:10:38

we set out this set of drug discoveries

1:10:40

we would like to make, or scientific problems we would like

1:10:42

to solve. And if you can build an AI that will

1:10:44

solve them, like the protein folding problem,

1:10:46

we will give you a billion dollars. It's

1:10:49

a problem to me that DeepMind made no money

1:10:51

from AlphaFold. Or I mean, I'm sure

1:10:53

they did in some kind of indirect way, and

1:10:56

obviously they're trying to spin it out into Isomorphic, which will

1:10:58

do drug discovery. But AlphaFold's

1:11:01

great, right? They solved the protein folding problem. Nobody, to

1:11:03

my knowledge, cut them a check for doing so. And there

1:11:06

should be something that is cutting checks

1:11:10

if you can invent an AI to solve fundamental

1:11:12

scientific problems, not just cutting

1:11:14

checks if you can invent an AI that is

1:11:16

better at selling me Hydro Flask water

1:11:18

bottles as I travel around the internet. Like that's just

1:11:21

a problem.

1:11:22

Yeah. I know

1:11:25

you've got a sick kid and you've got to go. But I guess,

1:11:27

yeah, a final question for you is, I recently

1:11:30

got married and I'm hoping to start a family in the next few

1:11:32

years. I guess you've been a dad for a couple years

1:11:34

now. What's one or two pieces of advice you've

1:11:36

got for me if things work out?

1:11:37

Ooh, what a fun question.

1:11:40

Could do a whole 80,000 hours on parenting.

1:11:43

Not that I'm an expert on it.

1:11:46

I think one is that,

1:11:49

and this is like a very long running

1:11:51

piece of advice, but kids see what

1:11:53

you do. They don't listen to what you say. And

1:11:56

for a long time, they don't have language. And

1:11:58

so

1:11:59

what you are modeling is always a thing that

1:12:02

they are really absorbing.

1:12:03

And that includes, by the way, their

1:12:06

relationship to you and your relationship to them.

1:12:08

And something that really affected my parenting is, I believe

1:12:10

it's a clip of Toni Morrison, if I'm not wrong,

1:12:13

talking about how she realized at a certain point

1:12:16

that when she saw her kids,

1:12:20

that she knew how much she loved them.

1:12:23

But what they heard from her sometimes

1:12:25

was the stuff she was trying to fix, right? Your shoes

1:12:27

are untied, your hair's

1:12:30

all messed up, you're dirty, you need to whatever.

1:12:32

And that she had this conscious moment of trying

1:12:35

to make sure that the first thing they saw from her was

1:12:37

how she felt about them. And

1:12:39

so I actually think that's a really profound thing as

1:12:41

a parent, this idea that I

1:12:44

always want my kids to feel like I'm

1:12:46

happy to see them.

1:12:47

All right, that's like they feel that

1:12:50

they are seen and wanted to be seen. So

1:12:52

that's something that I think about a lot. And then

1:12:55

another thing is you actually have to take care of yourself

1:12:57

as a parent. And one

1:12:59

thing you're about to learn, and you're getting it here: I worry

1:13:02

I'm a little more grumpy on this

1:13:04

show today than I normally am because my kid had croup

1:13:06

all night and I'm just tired. And

1:13:09

the thing that I've learned as a parent is that just 75% of how

1:13:11

I deal with the

1:13:14

world, like how good of a version of me the world gets is

1:13:16

how much sleep I got.

1:13:17

And you gotta take care of yourself. And

1:13:20

that's not always the culture of parenting,

1:13:23

particularly modern parenting. You

1:13:25

need people around you, you need to

1:13:27

let off your own steam, you need to

1:13:29

still be a person. But

1:13:31

a huge part of parenting is not how you

1:13:34

parent the kid but how you parent yourself.

1:13:36

And I'm just like a pretty crappy

1:13:39

parent when I do a worse job of that. And

1:13:42

a pretty good parent when I do a good job of that. But

1:13:44

a lot of how present I can be with my child

1:13:47

is, am I sleeping enough? Am I meditating

1:13:49

enough? Am I eating well? Am

1:13:51

I taking care of my stress level? So

1:13:54

it's

1:13:54

not 100% of parenting a child is parenting

1:13:57

yourself, but I think about 50% of parenting

1:13:59

a child is parenting yourself.

1:13:59

And accepting yourself.

1:14:02

And that's an easy thing to forget. Yeah.

1:14:05

It is astonishing how much more irritable I

1:14:07

get when I'm under slept. That's maybe

1:14:10

my greatest fear. Yeah,

1:14:12

it's bad. I really, I mean, again,

1:14:15

even in this conversation, I've been like, I'm a

1:14:17

little probably

1:14:19

edgier than I normally am. And I've just felt terrible

1:14:22

all day. And there's just, it's a crazy

1:14:24

thing when you become a parent and you realize other parents

1:14:26

have been doing this all the time. Like, and you

1:14:28

see them, it's

1:14:29

cold and flu season. And you understand

1:14:32

that you didn't understand what they were

1:14:34

telling you before. And somehow

1:14:36

all these people are just running around doing

1:14:39

the same jobs they always have to do and carrying

1:14:41

the same amount of responsibility at work and so on,

1:14:44

just

1:14:45

operating at 50% of their capacity all

1:14:47

the time and not really complaining about it that much.

1:14:50

And a

1:14:52

whole new world of admiring

1:14:55

others opens

1:14:58

up to you. It's like I have two kids and now like my

1:15:00

admiration of people who have three or four

1:15:02

is so high.

1:15:06

So, you know, it's a real thing. But

1:15:08

it does open you up to a lot of beautiful vistas of

1:15:11

human experience. And to somebody who, you know,

1:15:13

is interested in the world, it was

1:15:15

really undersold to me how interesting

1:15:18

kids are and how interesting

1:15:20

being a parent is. And it's worth paying attention to,

1:15:22

not just because you're supposed to, but because

1:15:25

you learn just a tremendous amount

1:15:27

about what it means to be a human being. My

1:15:29

guest today has been Ezra Klein. Thanks so much for

1:15:31

coming back on the podcast, Ezra. Thank

1:15:33

you.

1:15:35

I'm

1:15:39

worried that I might have offended some

1:15:41

technical AI safety people a minute ago by saying

1:15:43

that I found their work hard to get into.

1:15:45

I think it's possible I even used the word

1:15:47

boring. The trouble, and

1:15:49

I probably want to say this here because

1:15:52

I expect I'm not the only one who has privately

1:15:54

experienced this, is that I don't feel

1:15:56

like I've had enough of a gears-level

1:15:59

understanding of how

1:15:59

machine learning works to judge which ideas

1:16:02

in the field are good or bad, at

1:16:05

least not so long as some fraction of

1:16:07

serious domain experts say that they're into

1:16:09

a strategy and back it. And

1:16:12

in practice, that makes it a bit unrewarding

1:16:14

to dig into proposals because I know that at

1:16:17

the end from experience, I'm just going to have

1:16:19

to walk away shrugging my shoulders

1:16:21

more or less. That was more

1:16:23

so the case five years ago when there weren't

1:16:26

really products available to make

1:16:28

how AI works

1:16:29

concrete in my mind. And it was even

1:16:32

more so 10 or 15 years ago when

1:16:34

nobody had a clear picture of what general AI

1:16:36

systems might ultimately end up looking like.

1:16:39

This is one reason why we've been doing more episodes

1:16:42

on AI policy issues where I

1:16:44

think I do have some non-zero ability

1:16:46

to pick winners and losers out

1:16:49

of the ideas that are out there. All of this

1:16:51

is changing though now that I

1:16:53

guess the rubber has hit the road and

1:16:55

it's becoming clearer what we're dealing with

1:16:57

and maybe what actually has to be done. Yesterday,

1:17:00

I spoke with Jan Leike, who leads Open

1:17:03

AI's alignment work. And I think I basically

1:17:05

understood everything that he was saying and I reckon

1:17:07

I could perhaps even visualize how he hopes it's all

1:17:09

going to work and explain it to someone else. But

1:17:12

anyway, if like me, you didn't

1:17:14

study computer science and you felt at

1:17:16

sea reading about technical AI progress

1:17:19

in the past, know that I

1:17:21

sympathize with you. And indeed, I have been secretly

1:17:24

sympathizing with you since about 2009. And

1:17:27

if you're a technical alignment researcher, know

1:17:29

that I've been really appreciating your work from the bottom

1:17:31

of my heart, even if my head has been finding it

1:17:34

hard to fully understand. Finally,

1:17:37

before we go, I'll remind you about the

1:17:39

excellent new interview we've done, which is available

1:17:41

on 80K After Hours, Hannah Boettcher

1:17:44

on the mental health challenges that come with trying

1:17:46

to have a big impact. And

1:17:48

if you're enjoying these AI focused episodes

1:17:50

that we've been producing recently, then you

1:17:52

might like the compilation that I put

1:17:55

together of 11 excellent episodes of the show,

1:17:57

looking at all different angles of AI.

1:17:59

That compilation is titled the 80,000

1:18:02

Hours Podcast on Artificial Intelligence. And

1:18:05

you can search for that and listen to the feed anywhere

1:18:07

that you're listening to this or find it in the top menu

1:18:10

on 80,000hours.org. All right,

1:18:12

the 80,000 Hours Podcast is produced and edited by Keiran

1:18:14

Harris. The audio engineering team is

1:18:17

led by Ben Cordell with mastering and technical

1:18:19

editing for this episode by Milo McGuire. Full

1:18:21

transcripts

1:18:22

and extensive collection of links to learn more are available on our

1:18:24

site and put together by Katie Moore. Thanks for joining. Transcribed

1:18:30

by https://otter.ai
