Episode Transcript
0:02
Welcome to EconTalk, Conversations for
0:04
the Curious, part of the Library of Economics
0:07
and Liberty. I'm your host, Russ Roberts
0:09
of Shalem College in Jerusalem and
0:11
Stanford University's Hoover Institution. Go
0:14
to econtalk.org where you can subscribe,
0:16
comment on this episode, and find links and other
0:18
information related to today's conversation.
0:21
You'll also find our archives with every
0:23
episode we've done going back to 2006. Our email address
0:26
is mail at econtalk.org.
0:30
We'd love to hear from you.
0:37
Today is June
0:39
27th, 2023. My guest is Zvi Mowshowitz.
0:42
His substack is Don't Worry
0:44
About the Vase. It is a fantastic,
0:47
detailed, in-depth compendium
0:50
every week and sometimes more than once a week about
0:52
what is happening in AI and
0:55
elsewhere. Our topic for today is
0:57
what is happening in AI and elsewhere, particularly
1:00
a piece that you wrote recently we will link to
1:02
called The Dial of Progress, which by
1:04
itself, regardless of its application
1:06
to AI, I found very interesting. We're going to explore that
1:09
in
1:09
our conversation. Zvi, welcome to EconTalk.
1:12
Honored to be here. First, on
1:15
just the technical capabilities
1:17
of where we are right now with AI, where do you
1:20
think we are?
1:22
I think it's still very early days.
1:25
AI has been advancing
1:27
super rapidly in the last few years
1:30
as OpenAI and others
1:32
have thrown orders of magnitude more compute,
1:35
orders of magnitude more data, and
1:37
superior algorithms continuously
1:39
at the problem, including many more people working
1:41
on how to improve all of these things. The
1:44
result of this recently was a giant breakthrough
1:46
of ChatGPT and GPT-4,
1:49
which is also used in Microsoft's Bing search,
1:52
which is a tremendous jump in
1:54
our ability to
1:57
just talk with it like we would talk to a human,
1:59
to have it be a better way of learning
2:02
about the world, getting your questions answered, exploring
2:05
issues than say a Google search,
2:07
or in many cases, going to a textbook
2:10
or other previous information sources.
2:12
It's amazing at things like editing,
2:15
translation, creating images
2:17
with things like Stable Diffusion and Midjourney. It's
2:20
very, very good at allowing
2:23
us to do things like perform tasks, to translate
2:25
styles, to allow us to understand
2:28
things that we're confused by, and continuously
2:31
learning. Every month, we
2:33
learn about new developments. Every week, I
2:36
have this giant list of, here's introducing
2:38
these... There
2:39
are people who compile the 100 new AI
2:41
tools that you'll be able to use this week, and mostly
2:43
they're slight variants on things that happened
2:45
last week or the week before,
2:47
but iteratively all these things improve.
2:51
Now we're starting to see multimodal
2:53
approaches
2:54
where not only can you use text,
2:56
you can use pictures, and soon it will also
2:58
be video. AIs are
3:00
starting to generate voices more and more
3:03
accurately. They can now match human voices
3:05
very accurately on almost no data.
3:08
They'll soon be able to generate videos.
3:11
Their context windows, the amount of information
3:14
they can hold in their storage and
3:16
react to at any one time, keeps expanding. We're
3:19
now up to the length of books like The Great Gatsby in
3:21
some cases, or at least by Anthropic
3:23
in a model called Claude.
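As a rough illustration of what a context window means in practice, here is a minimal sketch that counts the tokens in a document and compares the count to a model's window size. It assumes the tiktoken library and uses OpenAI's cl100k_base tokenizer purely as a stand-in; Claude uses its own tokenizer, so treat these numbers as approximations.

```python
# Minimal sketch: estimate whether a document fits in a model's context window.
# tiktoken's cl100k_base encoding (used by GPT-4-era models) is an illustrative
# stand-in; other models tokenize differently, so counts are approximate.
import tiktoken

def fits_in_context(text: str, context_window: int = 100_000) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens vs. a {context_window}-token window")
    return n_tokens <= context_window

# Example usage with a novel-length manuscript (file path is hypothetical):
# with open("great_gatsby.txt") as f:
#     fits_in_context(f.read())
```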
3:27
The sky's the limit
3:28
in many of these ways. It's all very exciting.
3:30
I am substantially more productive
3:33
in many ways than I would have been a few months ago
3:35
because when I see something without
3:37
a reference, I'll say, oh, okay, where's
3:39
that reference? I'll just ask Bing and Bing
3:42
searches the internet for me without me having to think about the query
3:44
and find the reference, explains
3:46
the information. I can ask for details.
3:48
I can ask for summaries. I can ask about
3:50
details of papers. Whenever
3:52
I'm confused about something, I can ask it about what
3:55
that's about.
3:56
These things are just scratching the surface of
3:58
what I can do.
3:59
But it is going for the roof as well.
4:02
So that's where we are now. And
4:07
it's interesting. I think the world is divided
4:10
up into people who are using it frantically,
4:13
frenetically would be a better word, not frantically. Frenetically
4:16
like you or Mark
4:18
Andreessen, who we recently interviewed.
4:21
And those who think,
4:23
who've never heard of it, don't know how to use
4:25
it, think it's something weird. And
4:27
I'm in the in-between group. I'm somebody who thinks, if
4:30
I could
4:31
use this, I'd be more, if I use this more often,
4:33
I'd be more productive. But I don't think to use it. It's not my
4:35
habit yet. I
4:38
don't rely on it in any sense whatsoever.
4:40
And I love it as a novelty item, but
4:43
it's much more than a novelty item. And the question,
4:47
when we made this original leap from 3.5
4:49
to four, there was this view that
4:53
we were now soon gonna take off.
4:55
Then very quickly, shortly after that, if
4:58
it was strategic or just accurate,
5:00
Sam Altman said, we've kind of exhausted some
5:03
of the range of stuff we can
5:05
do with bigger data sets, more data
5:07
for AI to be exposed
5:09
to, for ChatGPT to be exposed to.
5:12
Where do you think,
5:14
where do you think we're headed relatively soon?
5:17
And where do you think we're headed relatively
5:19
farther, further down the road
5:22
from now? Yeah, I think that's an important distinction
5:24
to draw. And also to keep in mind that like
5:26
what soon means is
5:28
constantly changing. If you had told me five years ago
5:31
about the pace of developments in the last six
5:33
months or so, where
5:34
like every week I have this
5:37
giant array of things to handle, just in terms
5:39
of practical things you can use today, even if
5:41
you exclude all the speculations about what might happen years
5:43
from now,
5:44
it just would have blown my mind.
5:46
And it's a really scary pace of development.
5:49
But what's going on is that as the promise
5:51
of transformers and the uses
5:54
of stacking more layers of transformers, which is the type
5:56
of method of implementing
5:59
AI and doing calculations
5:59
within AI, has caused them to
6:02
spend orders and orders of magnitude more money and
6:04
gather orders of magnitude more data and use orders
6:06
of magnitude more compute and more hardware
6:08
and more electricity and so on
6:10
to do all of these problems.
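As a rough illustration of what "stacking more layers of transformers" means, here is a minimal sketch assuming PyTorch: the depth of the stack is literally a single hyperparameter, and parameter counts (and the compute needed to train) grow with it. The layer sizes here are arbitrary toy values, not those of any real model.

```python
# Minimal sketch (assuming PyTorch): "stacking more layers of transformers"
# is a single knob. Deeper stacks mean more parameters and more compute.
import torch.nn as nn

def build_encoder(num_layers: int, d_model: int = 512, nhead: int = 8) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)

def n_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

small = build_encoder(num_layers=6)    # toy-scale stack
large = build_encoder(num_layers=48)   # deeper stack: roughly 8x the parameters

print(n_params(small), n_params(large))  # parameter count grows ~linearly with depth
```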
6:12
They're starting to hit walls, right? So we've had Moore's
6:14
law operating for many years. We've had
6:16
hardware getting more capable and
6:19
manufacturing more of it.
6:21
And what happened was we weren't using anything like
6:23
the maximum amount of capacity that we physically had
6:25
available, that we knew how to build and we knew how to use
6:27
and that was available to purchase. And
6:30
now with GPT-4, we're starting to get
6:32
close to
6:33
the limits of what we know how to do such that when
6:35
we get to like a 4.5-style level,
6:38
it's possible to say that you're
6:40
gonna have to get more creative. You can't simply
6:43
throw an extra zero on your budget, assemble 10
6:46
times as much stuff
6:47
and get the next performance jump just on its own
6:50
because you're starting to run into
6:52
issues where everything is more duplicated than it used to be.
6:55
And in order to get that next order of magnitude
6:57
of jump
6:58
in effective compute, you need to be more
7:00
creative than that or you have to wait for
7:02
our technologies to improve some more. So
7:05
I do think that like we're not gonna see that
7:07
much more time of the same level of
7:10
underlying jumps in capabilities as rapidly
7:12
as we saw from two to three to four, where
7:15
we saw orders of magnitude jumps
7:17
that were not like the progress we make in hardware. They were
7:19
vastly faster than the progress we make in hardware. But
7:22
over time, we will still make progress
7:24
on the hardware and we're seeing jumps in algorithmic progress,
7:26
especially often coming from
7:28
open source models
7:30
that are starting to figure out how to imitate the results that
7:32
we did get from GPT-4
7:33
and similar models more and more effectively
7:36
using less and less compute and
7:38
using more and more tricks. And we're only just
7:40
now beginning to figure out what we can do
7:43
with GPT-4. So like we have this amazing
7:45
new idea. We have a companion, we have an assistant,
7:48
we have a tremendous knowledge base, we have a new interface
7:50
using computers, we have a new way of structuring
7:52
information, we have a new way of coding, we have so many other things.
7:56
And we've only had this thing around for a few months. And
7:58
even the people who are just
8:01
focusing on how to use it for productivity,
8:04
who are just building apps on top of it, just
8:06
haven't had the human time necessary to
8:09
unpack what it can do and
8:10
to progress the capabilities we can build on top of what
8:12
we have. I think that
8:14
even if we don't see a more
8:16
advanced model for several years, we're
8:19
still going to be
8:21
very impressed by the pace of what we
8:23
can do with it.
8:24
In particular, I think things like the integration
8:26
into Microsoft 365 Copilot and
8:29
into the Google suite of products,
8:31
where the machine starts to look at, okay,
8:33
here are your emails and your documents in
8:36
a way that feels secure and safe for people and which
8:38
they know how to implement without having to go through a lot of
8:41
technical details that are harder for people even like me
8:43
and say, okay, given
8:45
that context, I now know the things you know that
8:48
you have written down. I know who these people are that
8:50
you're talking to. I have all this context.
8:53
And now I can address what you
8:55
actually need me to address in this place
8:57
that's seamlessly integrated into your life.
9:00
And this becomes a giant boost to the effective
9:02
capabilities of what you can do.
9:04
Plugins are an area where we're just exploring like
9:06
what can you attach? And then the idea of,
9:08
you know, if
9:09
every website that starts building up,
9:11
okay, I now have a chat interface
9:13
with an LLM that's trained particularly
9:15
for the questions that are going to be asked on my website
9:18
to help people with my products to
9:20
help them get the most out of this thing and to help me make, you
9:22
know, have the best customer experience.
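One common way such a site-specific chat interface gets wired up is retrieval plus prompting: embed the site's help pages, pull the most relevant one for a given question, and hand it to the model as context. Here is a minimal sketch assuming the OpenAI Python client (v1.x); the help-page text, model names, and question are placeholder values for illustration, not a recommended production design.

```python
# Minimal sketch of a site-specific assistant: retrieve the most relevant help
# page and include it in the prompt. Assumes the OpenAI Python client (v1.x);
# document contents and model choices below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()
help_pages = ["How to reset your password ...", "Shipping and returns policy ..."]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [d.embedding for d in resp.data]

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    doc_vecs = embed(help_pages)  # in real use, precompute and cache these
    # dot product works as a similarity score because these embeddings are normalized
    best = max(range(len(help_pages)),
               key=lambda i: sum(a * b for a, b in zip(q_vec, doc_vecs[i])))
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Answer using this help page:\n{help_pages[best]}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How do I reset my password?"))
```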
9:25
We're just starting to get into those things. We're just starting
9:27
to get into applications for AR and VR. We're
9:29
just starting to get into the ideas of just
9:31
what do people want from this technology?
9:34
And we're also seeing penetration. Like the majority
9:36
of people still haven't even tried this, as you
9:38
pointed out. And we're going to see what
9:40
those less technical people, what those less savvy people
9:43
actually can benefit from. Because in many ways, they're
9:45
the people who most need a
9:47
more human, less technical way
9:49
of interacting with these systems.
9:52
And in some ways, they can benefit the most.
9:54
So just getting started, basically.
9:57
So AR and VR are augmented reality
9:59
and virtual reality. When
10:02
Google search came along, it
10:04
was really exciting. You
10:07
know, I've used the example a few
10:09
times of my, my grandfather
10:12
who remembered a phrase, the strong man must
10:14
go. He knew it was from a poem he
10:16
couldn't figure out, couldn't remember. And then
10:18
one day, years after it had been bothering him, he yelled
10:21
out
10:22
in a crowded
10:24
restaurant, it's Browning.
10:26
It's Robert Browning. And
10:29
poor guy, Google
10:31
finds that in a fraction of a second. And that's really
10:33
it's a wonderful thing on so many dimensions. Google
10:36
search
10:37
is quote smarter than I am in the very
10:39
narrow sense, but not a trivial one,
10:42
that it knows more than I do by
10:44
an unimaginable amount, obviously.
10:47
So ChatGPT, understandably,
10:50
is only the
10:52
current generation of artificial
10:54
intelligence. It
10:55
quote knows more than I do, it can do many
10:58
things that I can do, write poetry, write
11:00
a memo, code quicker
11:03
than I can,
11:04
sometimes better than I can. And
11:06
in some
11:08
dimension, it's smarter than I am in the
11:11
similar way to Google search, but
11:13
a more interesting way I would say, and therefore
11:16
it's much more productive,
11:18
potentially, in making my life better. Google
11:21
search helps me find things I can't
11:23
find. This is going to do many
11:25
things beyond that.
11:27
But in what sense
11:30
would you say the current generation of models
11:32
as they improve, and we get more plugins
11:35
and we get more websites that are optimized
11:37
for having them built in? And what
11:39
sense is it going to be smart? And I asked
11:41
that question to
11:42
head us, of course, to the question of sentience.
11:45
Now,
11:47
we can talk all we want about Google being smart
11:49
or Siri
11:52
being smart on my iPhone. It's
11:54
not smart. It just has
11:56
access to more stuff than I can access in
11:59
my hard drive, which is
11:59
much smaller. Is
12:02
chat GPT really different or is it kind
12:04
of the same thing but more so?
12:07
I think it's somewhere in the middle. I think that
12:09
when you see someone say, you know, it has an IQ
12:12
of 155, that just shows you the IQ
12:14
test is not measuring what
12:15
you thought you were measuring. When you go out of distribution
12:18
and you see a very different thing that's
12:20
being tested, similar to how you've noted, you know,
12:22
Bryan Caplan gave it an examination in economics. Some
12:25
of the questions were, what did Paul Krugman say? And of
12:27
course, it just had the answer memorized. So it just regurgitates.
12:29
It doesn't mean that you're smart, doesn't mean you understand economics.
12:32
But on other questions, it shows that it actually has some
12:34
amount of understanding.
12:35
And the
12:37
AI is going to have a natural... I think
12:40
of it this way. You have
12:42
this thing that I like to think of as being
12:44
smart, being intelligent, the
12:47
ability to think and apply logic
12:49
and reason and figure unique things out. And
12:52
I think of that as distinct from certain other
12:54
aspects of the system, like memory,
12:56
and what knowledge you have,
12:59
and processing speed. And so
13:01
there are certain abilities that the
13:03
system just doesn't have. And no matter how
13:05
much data you fed into it,
13:07
it would not be able to do these things unless
13:10
it simply had so many close
13:13
facsimiles in its training data that it was just doing
13:15
so in a kind of imitative way. That
13:17
wasn't the same thing as doing it the
13:20
way that a
13:21
person who actually understood this thing
13:24
would do it. And often people actually are,
13:26
in fact, acting in this imitative-style way themselves.
13:29
You can make it, in some sense, smarter
13:32
by doing what's called prompt engineering. So
13:34
what you can do is you can ask it in
13:36
a way that makes it think that it is trying
13:38
to imitate a smarter person, that
13:40
it is trying to act in a smarter way, that
13:42
it's dealing with a smarter
13:44
interaction,
13:45
and to frame the questions in the right
13:48
way and guide it. And it will give you much smarter answers
13:50
to that. And that's one area where I
13:52
feel like not only have we generally not scratched the
13:54
surface on this, but that I'm definitely underinvesting
13:57
in this and almost everyone who
13:58
uses the system is, in the sense of giving up too
14:00
early when the system just doesn't give you
14:03
what you wanted it to give, what you thought it maybe had
14:05
the ability to do.
14:06
Then you just don't try and then it ends up like
14:09
you get disappointed and you move on and then you
14:11
don't realize that you could have put in more work. The
14:13
same way as with a human: if you ask stupid questions
14:16
or you frame it in a way that makes them think you're stupid or
14:18
that you don't want a smart answer, they're going
14:20
to give you a stupid answer. You
14:22
have to ask the right questions in an interview if you want to get thoughtful
14:25
responses and it's the exact same thing.
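A minimal sketch of the prompt-engineering point being made here, assuming the OpenAI Python client (v1.x): the same question asked plainly versus framed to elicit a more careful, expert-style answer. The system-prompt wording and model choice are illustrative assumptions, not a recipe.

```python
# Minimal sketch: the same question, asked plainly versus framed to invite a
# more careful, expert-style answer. Assumes the OpenAI Python client (v1.x);
# the prompt text is one illustrative choice, not a recommended recipe.
from openai import OpenAI

client = OpenAI()
question = "Why do context windows matter for language models?"

def ask(system_prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content

plain = ask("You are a chat assistant.")
framed = ask(
    "You are a world-class machine learning researcher. Reason step by step, "
    "state your assumptions, and flag anything you are unsure about."
)
print(plain, framed, sep="\n\n---\n\n")
```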
14:27
I think that
14:30
the current version is not so smart, but
14:32
that it's not zero smart and
14:34
that we will see them get smarter
14:36
as we see them expand over time. Smart's
14:39
complicated and I feel like
14:42
I should tell my listeners over
14:46
the last few weeks I've thought to myself, well, this is the
14:48
last episode we'll do on AI
14:51
for a while. I've been wrong. I
14:54
find them, they still
14:57
are very interesting to me and as long as I learn something
14:59
and I hope you learn something, we'll continue
15:02
to do them because I believe it's
15:04
the most exciting technology that's
15:06
out there that's come along in a long, long time.
15:09
I think it's quite important that we understand
15:12
it, but one of the topics I haven't spent much time on with my
15:15
guests is this question of intelligence.
15:18
We gave an example earlier of intelligence
15:20
having a big memory. It helps.
15:22
Having a big memory, whether you're a human
15:24
or a search engine or ChatGPT, really helps.
15:27
Having an accurate memory really helps.
15:30
ChatGPT is famous now in its early days
15:32
for making things up,
15:34
but it's really the
15:36
next step that we would call creative,
15:39
synthesizing applications
15:41
that didn't immediately come to mind, that weren't
15:44
in the prompts. Those
15:46
are the things that are both exhilarating
15:48
and potentially scary. You
15:51
think they're coming?
15:53
Are they already here? We've given
15:56
GPT-4 various tests of creativity,
15:58
and sometimes the results come back,
16:00
oh, GPT-4 is actually more creative than the average
16:02
human, because the type of creativity
16:04
they were measuring
16:06
wasn't the type of creativity that you're thinking about.
16:09
It's this sort of more narrow, like, you know, there's a thousand household uses for this
16:11
piece of string. How many of them can you name? And
16:13
GPT-4 does vastly better than the average human
16:15
at being creative in this strange
16:18
sense. That's not the thing that we
16:20
care about. That's not the thing that we want.
16:23
And I think that a lot of what we think of as human
16:25
creativity is just someone else sort of has
16:29
different training data and
16:31
different connections in their brains and thinks about different
16:33
things. And then they'll output something that
16:35
to them is not necessarily especially creative
16:38
in that way, but that seems creative in that way
16:40
to you. And they've
16:42
been exploring a different area of the space. And I
16:44
think with better prompt engineering, you can get what seem
16:46
like much more creative answers
16:48
out of the system than you would normally get the same way you
16:50
can do so with a person.
16:52
I think that, you know, creativity in that
16:54
sense, it's definitely a relative weakness
16:57
of the system. If you almost
17:00
by definition say, okay, this is a system that's
17:02
trained on this data, find things that are
17:04
maximally different
17:06
from that data and ask it to, like, produce
17:08
good quality things that are maximally different
17:10
from that thing. So it's going to lag behind other
17:12
capabilities if we continue
17:14
to use this particular architecture and set of algorithms
17:17
to train systems, which we might continue
17:20
to do so for a while, we might not. But
17:23
by any definition of creativity you put together, there's
17:25
not zero creativity in what
17:27
ChatGPT does. It's just not
17:30
as good as its other aspects. And I think we will see it
17:32
improve over time. Let's take a couple examples.
17:35
I
17:37
have an upcoming interview
17:40
with Adam Mastroianni
17:42
about how we perceive,
17:44
how we learn, and why
17:46
it is that when
17:48
I tell you something,
17:50
you don't really absorb it?
17:51
You're younger than I am, Zvi. And I say, look,
17:53
Zvi, I'm 68. I've lived a long time.
17:56
Here's an insight. It's really valuable to you. I
17:58
wish I'd known it when I was a kid
17:59
your age. And you listen,
18:02
you hear it, it goes in one ear and out the other,
18:05
very rarely changes your life. And
18:07
even if I care deeply about you, as I do about
18:09
my own children, for example,
18:11
they're either not interested because they're my children.
18:14
That's a tricky relationship there, but you don't
18:16
have any of those, that baggage that my kids have.
18:19
You're just a thoughtful, curious person and I
18:21
have wisdom for you. But
18:23
strangely enough, you don't always get
18:25
it and, or
18:26
maybe rarely get it. And so Adam wrote
18:29
a very thoughtful essay. That's what I'm going to
18:31
do an interview about, about why that is.
18:33
Now I've thought about this problem a lot. And
18:36
in theory, I'm
18:37
not an expert on it, but I've thought about it.
18:39
It intrigues me. And when
18:40
I read his essay, I thought, wow, well, that's cool. I've learned
18:42
something. Similarly,
18:45
you wrote an article that we're going to get to in a little bit
18:48
about why certain people are unafraid of ChatGPT.
18:51
And you created a metaphor. It's
18:54
called the dial of progress. And we get to it,
18:56
listeners will understand why it's a metaphor and whether
18:58
it's interesting to them or not, I don't know. But
19:01
I found it extremely interesting.
19:03
It's the kind of thing a human comes up with, the kind of human
19:05
I like to hang around with, where you hear that
19:07
idea and you go, wow,
19:09
I haven't thought about that.
19:11
That's intriguing. And it causes
19:13
other connections in your brain, as we'll
19:15
see. And you connect it to other things that
19:17
you know a little bit about, not as much as
19:20
chat GPT knows.
19:22
But I don't know if chat GPT could come up
19:24
with those kind of metaphors yet.
19:26
Do
19:26
you think it could? To change my
19:28
way of seeing the world? Not,
19:30
not coming up with a bunch of stuff I haven't, you
19:32
know, encountered. Sure, it's good at that. It's
19:35
better than me, maybe than any human,
19:37
in that kind of area. But this kind of area is what
19:40
I think of as creativity. I mean, there's
19:42
other kinds of creativity, artistic, poetic,
19:44
musical, or, you know, it's visual.
19:47
But this idea of,
19:49
here's a thought, no one's ever written about it. No one's ever written
19:51
about the Dial of Progress. You're the first person. And
19:54
I found it interesting. That's why we're talking.
19:57
Could ChatGPT do that?
19:59
So right now, it definitely wouldn't
20:02
do that spontaneously. If you didn't ask
20:04
it for a metaphor, if you didn't say, I have
20:06
this concept, I'm trying to think of a name for it, or
20:08
I observed this particular phenomenon,
20:11
is there some metaphor that would summarize
20:13
it, that would help me think about it better? It's going
20:15
to have no hope. If you
20:18
use the specific prompting and leave it in that direction
20:20
and ask it for what it might come up with, it might
20:22
be able to get to something interesting.
20:25
I know the thought process
20:28
that led me to that point,
20:30
and it actually involved some things
20:33
that ChatGPT is relatively strong at if it
20:35
was directed in that direction, and some things
20:37
where it's relatively weak.
20:39
One area... One of the things
20:42
that ChatGPT is best at is what I call vibing.
20:44
It's the idea of getting
20:46
the overall sense of... If you look at the subtleties
20:49
of the word choices that people made and
20:51
the associations of the concepts that people
20:53
were talking about,
20:55
what type of feeling
20:57
is this person trying to present to the conversation?
20:59
What are the unconscious biases
21:01
that are operating in people's heads? How
21:04
are people associating these things with
21:06
other things, and what are they trying to invoke
21:08
consciously or unconsciously
21:10
by talking about these things?
21:12
That was a lot of the key path
21:14
that I went down in the thought process that led
21:16
me to this,
21:18
was like, well, what's happening here? People
21:20
seem to be doing things that I don't understand.
21:23
The response was, well, what's
21:25
going on is that people are
21:27
thinking about how other people will
21:29
vibe
21:30
based on the statements that they are making,
21:32
and perhaps they are vibing themselves.
21:36
This is somewhat predictive in a
21:38
sense of how they're going to talk
21:41
and how they're going to think or how they're going to represent
21:43
that they're thinking. Then
21:47
I asked myself, okay,
21:50
could there be an overall structure here? That's
21:52
the kind of synthesis that I think GPT is going to have more trouble
21:55
with.
21:56
Did you use ChatGPT to help you generate
21:58
that thesis?
22:00
No, I didn't. Okay. All right.
22:03
Just want to make sure. If you did,
22:05
you remarked in a different
22:08
essay than the one we're going to talk about that
22:11
it's going to be kind of difficult
22:14
to get people to admit or
22:18
acknowledge that they got help from chat
22:20
GPT or any helper
22:22
like it
22:24
because everybody's going to be using it soon. It's
22:26
just not even going to be, it's going to be so normal. Is that an accurate
22:29
assessment of what you think? I think that's right. Everyone
22:31
uses a spell checker, or almost everyone does.
22:33
And then they started introducing grammar checkers, which are
22:35
a little bit more complex and are saying, well, I think this word
22:38
choice is wrong. You should reconsider
22:40
it. And
22:41
the grammar checker is a lot less accurate than the spell checker,
22:44
but I still find it
22:45
net useful reasonably often.
22:47
And then you get to this point where I say, okay, you can
22:49
feed your entire set of paragraphs
22:51
into chat GPT. And it will tell you if it thinks
22:54
there are refinements you can make, or there are points that
22:56
are not clear or something like that.
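A minimal sketch of that editing workflow, assuming the OpenAI Python client (v1.x): paste a draft in and ask only for unclear passages and possible refinements, rather than a rewrite. The draft text and model choice are placeholders.

```python
# Minimal sketch of the editing workflow described above: submit a draft and
# ask for unclear passages and suggested refinements, without a rewrite.
# Assumes the OpenAI Python client (v1.x); the draft below is a placeholder.
from openai import OpenAI

client = OpenAI()
draft = """(paste your paragraphs here)"""

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "You are an editor. Do not rewrite the text. List any sentences that are "
            "unclear, any claims that need support, and any refinements worth considering."
        )},
        {"role": "user", "content": draft},
    ],
)
print(resp.choices[0].message.content)
```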
22:57
And because of the
23:00
way that my stuff is created and the
23:02
cycle in which it has to operate and the extent to which it
23:04
is constantly sort of
23:05
forcibly shifting context in ways that make it
23:07
very hard for a GPT system to follow, I
23:10
don't actually take advantage of that.
23:13
But if I was operating
23:15
at a much more relaxed pace, I definitely would. And
23:18
I think it's a sort of invisible, hard
23:21
to define line where it goes
23:23
from GPT is helping me
23:25
express what I'm expressing, but it's
23:27
helping me with the technical details of what I'm doing, which I think
23:30
more or less everybody thinks is good, to
23:32
this point where the GPT is actually sort of doing the
23:34
thinking for you in some important sense and is generating
23:37
the essay, where you would almost
23:39
call GPT the author, or at least the co-author
23:42
of the piece and not the editor.
23:44
And that's where people go, oh, I don't want people
23:46
to know this
23:47
was kind of a GPT generated piece.
23:53
So let's move to the dangers
23:56
and fears that people have. I
24:00
want your view on that. There's
24:02
two facets to what I want to think about. One is just
24:04
this extinction risk, which of course
24:06
is
24:08
in the air and many people are very
24:10
worried about it. The two guests I've interviewed, I think
24:13
were most worried about it early, Eliezer Yudkowsky
24:15
and Erik Hoel.
24:16
I found them very provocative. We
24:19
have cheerier people, Tyler Cowen
24:21
and Mark Andreessen. And
24:23
although you haven't heard the Mark Andreessen episode, you know
24:25
what he's going to say. I'll summarize it for
24:27
you, but I'm pretty sure you've read and
24:30
know what he's going to say. Before
24:33
we get to other people and your take
24:35
on them,
24:36
where are you on this issue of just danger
24:38
to us as a species? And
24:41
we'll talk about daily life and whether it's going to get better
24:43
or
24:44
worse, which is a separate issue. It
24:46
came up in the episode that we released
24:48
today with Jacob Howland and we've talked about
24:50
it with others.
24:51
I try to draw a distinction.
24:53
People used to talk about short-term risk versus
24:56
long-term risk. The problem with that is
24:58
that what we call long-term risks
25:00
are relatively short-term risks
25:03
with some probability, so it gets very confusing. Instead,
25:05
I talk about
25:06
extinction risks
25:08
and mundane risks.
25:10
It's the way I draw the distinction. Mundane
25:12
risks is things like people will lose their jobs,
25:14
or we will put out deepfakes and people will be confused
25:16
by what is real,
25:17
or people will lose meaning in their lives
25:20
because the AI can do certain things better than they
25:22
can and they feel bad about this. There
25:25
will be just shifts in the economy and
25:27
people will not understand what's going on.
25:30
I am an optimist about those levels of risks.
25:33
I have very similar views in many ways to people like
25:35
Cowen and Andreessen. I think that we
25:37
have differences in details of models, but
25:40
what we're seeing now is
25:42
unabashedly good.
25:45
Bringing more intelligence, bringing more optimization
25:47
and power, giving humans more abilities, and
25:50
letting them loose has generally
25:52
been the greatest boon to humanity.
25:54
I expect us to be able to handle these things.
25:57
There
25:58
are a lot of jobs waiting under the surface. The
25:59
reason why the unemployment rate
26:02
stays so low is because there are
26:04
so many things that we want to do. We don't even
26:06
have to find the new jobs. The new jobs are just waiting there
26:08
for there to be workers for them
26:10
for an extended period, and we're going to be fine.
26:12
However, in the longer term,
26:15
what we're doing is we're creating
26:18
new entities that have more
26:21
capabilities than us, are better at optimizing
26:23
for whatever goals or whatever optimization
26:25
targets they've been set at,
26:28
that are going to become at some
26:30
point smarter than us in whatever sense
26:32
that we talk about. They're
26:33
going to be able to match almost every capability, likely
26:35
every capability that we have,
26:37
and are going to be much more efficient at many of these
26:39
things, are going to be able to do this faster, are going
26:41
to be copyable, are going to be configurable,
26:44
and
26:44
which are going to operate much more efficiently
26:46
when we are out of the loop than when we are in the loop,
26:49
and which are going to exhibit capabilities and
26:52
actions that we can't predict because by
26:55
definition,
26:56
they're smarter than us. They can figure things
26:58
out that we don't know. They're going to explore realms
27:00
of physics in practical
27:03
ways, not necessarily theoretical ways that
27:05
we haven't explored. They're going to figure out what the affordances
27:07
of our universe are, what
27:09
the affordances of our systems and our configurations
27:11
are that we don't know. They're going to figure out how our minds
27:14
work, how you respond to various stimuli, how you
27:16
respond to various arguments in ways that we don't know.
27:18
They're
27:18
going to figure out coordination mechanisms we don't know,
27:20
and so on.
27:23
And when I look at that future, I
27:25
see as a default, these
27:27
things are going to be, in some important sense, set loose.
27:30
We're not going to be able to keep control of them by default, given
27:37
the forces of selection, the forces of economics
27:39
and capitalism and evolution,
27:42
as it were, and selection. Even if
27:44
we do a relatively good job on some very hard problems
27:47
and get them to the point where
27:49
they act in ways that we would more or less
27:51
recognize and that don't
27:53
actively try to immediately
27:55
go for weird things or act in hostile
27:57
ways or act super
27:59
surprisingly,
27:59
in these ways
28:01
we're going to deal with a future that's out of our control
28:03
and
28:04
that is optimizing for things that we don't
28:06
particularly want
28:07
as such and that don't reflect
28:09
the details of what we need or
28:11
what we value. And that
28:14
in the medium term, it seems like if we
28:16
don't solve a bunch of very hard problems, that
28:18
we're not going to survive.
28:20
And I certainly think there's
28:22
a substantial risk that what Yudkowsky
28:25
talks about, this immediate,
28:28
this thing becomes smarter than us very quickly
28:30
and then we all die almost immediately. I
28:33
think we have to solve some hard problems for that not to be
28:35
a very
28:37
likely outcome.
28:38
But I don't think that's like, that's not
28:41
even the thing that keeps me up at night. The thing that keeps me up at night
28:43
is, okay, suppose we get past that,
28:46
then how do we avoid the standard
28:49
economic slash incentive
28:52
situations of unleashing these new beings to
28:54
not be the end of us inevitably?
28:57
I don't know the answer. So I
29:00
want to respond to that in two different ways and
29:03
ask you to try to push your analysis
29:05
a little bit. The first would be that as an
29:08
economist in the spirit of FA
29:10
Hayek, I believe
29:12
there
29:12
are a lot of problems in the world
29:15
where the
29:17
problem isn't we don't have enough data.
29:19
The problem is we don't fully understand
29:21
the complexity of the interactions and
29:23
the interrelationships that human beings
29:25
have. And the smartest person in the
29:27
world,
29:29
Adam Smith's so-called
29:31
man of system in The Theory of Moral Sentiments, the
29:33
person who thinks that
29:35
he can move the pieces of society
29:37
around like pieces on a chessboard because he
29:40
thinks he understands their motion, but
29:42
in fact they have a motion all their own.
29:44
I'm in that camp. And
29:47
in addition,
29:49
most of the interesting problems are,
29:51
as I quote Thomas Sowell all
29:53
the time, no solutions, only
29:56
trade-offs. Some
29:58
AI of the future will not solve these problems; it'll
30:00
still face the same trade-offs, and there's no
30:03
simple answer. You could program an answer into it.
30:05
It wouldn't be a meaningful answer about what's
30:06
better,
30:07
unless you were... I just can't
30:09
imagine that. That doesn't mean it can't happen, but I can't
30:12
imagine it. So I
30:13
wonder...
30:16
So I think there are fundamental limits
30:18
on its ability to either
30:21
make the world better or control it
30:23
in an authoritative way.
30:26
So do you disagree
30:28
with me? So
30:31
I am very, very much in agreement about
30:34
the man of systems when we're talking about human
30:36
interactions. I think the Hayekian view is
30:38
very, very strong. I mostly agree with it.
30:40
What you have to project forward is ask yourself,
30:43
okay, what is going wrong
30:44
in some important sense
30:46
with the man of systems? It's because
30:48
the man of systems has a very limited
30:51
amount of compute in some important sense. This man of
30:53
systems is a man. He
30:54
can only understand systems that are
30:56
so complex. He can have all the data
30:58
in the world in front of him. He can't actually meaningfully
31:01
use that much of it. Even
31:03
if he could somehow remember it, he couldn't think about it. He
31:05
wouldn't know what to do with it.
31:07
He wouldn't know how to think about it.
31:09
And he's trying to be one man dictating all
31:11
of these things. He's got a hopeless task in front of him; he's
31:13
going to fail.
31:15
However, when we're talking about the
31:17
AI, we're not even necessarily talking about the
31:19
one AI in this sense that's
31:22
trying to figure all this out. Okay, it can take a thousand times
31:24
faster, and it has all this more information, but
31:26
it's got fundamentally the same problem. Maybe that's just not enough.
31:29
What we're talking about is
31:32
instead, I am a
31:34
Hayekian citizen trying to optimize
31:36
my little corner of the
31:39
world, but there's also this other AI amongst
31:41
many other AIs in this potential future.
31:43
And that AI
31:46
is being trained on
31:48
all the data that I would look at to try and figure
31:50
out this corner of the world, except it can process so
31:52
many more details than I can. It can look at so many more
31:55
connections, and it can
31:56
process so many more of these things, and it can
31:58
think in many of these ways. You know,
32:00
it can simulate all the different
32:02
local calculations in the way that I would, and
32:05
can operate only locally, right, in this sense,
32:07
right? It doesn't necessarily have to think about the bigger
32:09
picture. And it can come to a more efficient,
32:11
better solution to my local problems
32:14
than I would. Oh, the
32:17
part of the
32:18
power of the Hayekian idea is
32:20
that
32:21
a lot of the data is not explicit. It's
32:23
not out in the open. It's in my
32:25
head. And it's not in my head in the way that the
32:27
capital of France is in my head. It
32:30
only emerges and
32:31
then
32:33
interacts with everybody else's activity to
32:36
allow other things to emerge, like prices and quantities
32:39
and actions and plans,
32:41
when things change
32:44
and therefore can't be measured in advance. And some people
32:46
believe, I know, that,
32:47
oh, the AI will go into my brain. It'll
32:49
know what I would do when X
32:52
happened. It'll know me so well, it'll
32:54
be much better than myself, of course. I'm
32:57
only a mere human. Do you think we're going there?
32:59
I think that we're already seeing some of that
33:02
with Apple's Vision Pro. They talk about observing
33:04
where your eyes go and the different facial expressions
33:06
you make and trying to figure out your emotional reactions to
33:09
various actions. And that we will,
33:11
in fact, advance on these things over time. But,
33:13
you know, at most, I can understand
33:16
those things in me. So the only thing I see
33:18
is what my eyes and ears can
33:21
process, which is a very small amount of stuff and
33:24
how much information I can think about and how many
33:26
conclusions I can draw. And
33:27
I'm missing so much of the information that's coming at me
33:30
because I just don't have the... I'm not equipped.
33:32
And an AI in this situation can get
33:34
vastly more information and
33:36
can anticipate vastly more of these systems. And also
33:38
these multiple AIs can also potentially
33:40
interact in these kind of Hayekian ways amongst
33:42
themselves
33:44
and do this much faster in ways we wouldn't understand.
33:46
I guess it's possible it could have access to my
33:49
skin temperature, my
33:51
dopamine, a whole bunch of things that literally,
33:54
again, I don't even have my own access to
33:56
my own, and it could have access to the world
33:59
population, if
33:59
everyone was giving it data like that.
34:02
I think that points in some direction
34:04
toward where we might think about being careful.
34:06
I remember in the
34:08
early days of Zoom,
34:10
there was a worry that the servers
34:12
were in China. China was mining
34:15
information off of business meetings held in America
34:18
and elsewhere. And I don't know if
34:20
that was true or not, but you can imagine that
34:22
if
34:24
AI had access to everything that was
34:26
said on Zoom all the time by everybody, it
34:28
would get smarter at what I'm scared
34:31
of, all kinds of things. And
34:33
then my body temperature and everything else as it tries
34:35
to develop the perfect cancer
34:37
drug for me, and so on, which
34:39
we'd definitely want, of course, but
34:42
of course, not without giving up control of my life. But the
34:44
other question I would ask
34:46
that I think the worriers have
34:48
failed to make, the case that they
34:50
failed to make is how this
34:53
is going to happen. To me, there's sort of two
34:55
pieces to it. There's the run amok
34:58
piece, which I kind of get,
35:00
kind of. And then there's the, and
35:03
it will want to destroy us piece. So
35:05
it's two things. It's
35:07
interest and capability. And
35:10
you need them both together to be afraid about
35:12
the extinction. The kind of argument I hear
35:14
often from the worriers is, oh,
35:16
everything that's smarter than
35:19
other things treats them badly.
35:22
Somebody I follow on Twitter and like a lot
35:25
made the following analogy. Said,
35:27
you know, it's like, it's like mice. You know,
35:29
we're so much smarter than mice, we don't think about, we
35:32
don't have any ethical compunctions about mice. Well,
35:34
some people do. And my second thought is, there
35:36
are a lot of mice in the world. We might
35:38
wish they were extinct, but we're not doing a very
35:41
good job. In fact, I'm guessing
35:43
there may be more mice in the world today than there were 100 years
35:45
ago.
35:46
So what's
35:49
the, besides the fact
35:51
that people who are smarter than me can
35:54
do damage to me if they want to,
35:57
where's the danger in and
35:59
of itself or is that not the argument?
36:02
So I think there's at least two separate
36:04
things that you're asking about here
36:06
that
36:07
like deserve separate answers, right? So the first
36:09
question is, you know,
36:11
what,
36:12
why do we keep mice around, like
36:15
the kind of question of like, what's going on there? And
36:17
I think the answer to that is because
36:19
we don't have a practical way to
36:22
get rid of the mice.
36:23
We don't have the affordances and capabilities necessary
36:26
to do that with our current levels of technology
36:28
and intelligence that wouldn't mess
36:31
with our ecosystem in ways that we would find unacceptable
36:34
or involve costs that we don't want to pay. And
36:38
as technology advances further, like we're starting to get
36:40
these new proposals for mosquitoes, right?
36:42
Where they have these mosquitoes that sterilize
36:44
other mosquitoes effectively, so they don't actually breed properly,
36:46
leaving us without mosquito populations. And
36:49
we see a lot of people who are pretty enthusiastic
36:51
about doing this. And
36:53
if the technology were there such that we could do this
36:55
at a reasonable price, and we didn't think it would damage the rest of the
36:57
ecosystem, I predict we're totally going to do that.
37:00
And I think that if New York City could
37:02
wipe out the mice and the rats in New York
37:04
City
37:06
with something similar, I
37:07
think we totally do that.
37:09
It's a question of do we have the affordance
37:11
to do that? And what
37:13
do we value? And do
37:16
we want to? Yeah. Right. And do we want to? So,
37:19
you know, the question of will the AI
37:21
be capable of doing these things? I think
37:23
it will be more like, you
37:25
know, in some cases, it might be that they do it intentionally,
37:28
right? Because it's what something they want to do. But like, we
37:30
don't think that mice are inherently bad,
37:32
right? We don't think that mosquitoes are inherently
37:35
bad. We think that mosquitoes cause bad things to happen.
37:38
Or we think the mice are consuming resources or making our
37:40
environment worse in various ways. Or we
37:42
just think the mice are, you know, the mice are using atoms
37:45
we could use for something else in some important sense.
37:48
And so we prefer if the mice weren't
37:50
around, or we take action such that
37:52
we don't particularly care if we're
37:54
leaving the mice the supports they need.
37:58
Okay, so let's start with the dial of progress.
37:59
This is a good segue, your essay, we're 45
38:02
minutes in. That
38:04
was all interesting, but this, I think, is more interesting
38:06
now.
38:08
You say, you write the following, and you
38:10
say, quote,
38:12
recently, both Tyler Cowen, in response
38:14
to the letter establishing consensus on
38:17
the presence of AI extinction risk and Mark Andreessen
38:20
on the topic of the wide range of AI dangers and
38:22
upsides have come out with posts whose
38:24
arguments seem bizarrely poor.
38:26
These are both highly intelligent thinkers,
38:29
both clearly want good things to happen to humanity
38:31
and the world. I am confident they both
38:33
mean well, and yet,
38:36
and then you ask, so what is happening?
38:39
And this to me is the flip side
38:42
of Mark Andreessen's argument. I recently
38:44
interviewed Mark. Mark is super smart,
38:46
talks, I think, even faster than you, Zvi, but maybe
38:49
it's close. See,
38:52
with Mark, an hour interview, you get two hours of
38:54
material.
38:55
And Mark was very dismissive
38:58
of the worriers. He called them
39:00
members of an apocalyptic cult or something
39:03
similar to that. It's not very nice.
39:05
I apologize to my listeners. I didn't push back harder
39:08
on just that style of reasoning.
39:11
It's effectively an ad hominem argument. And
39:13
you're doing something similar here, not as disrespectful,
39:16
perhaps, as Mark did. But
39:18
you're saying, here are these two super smart people, which you
39:20
can see. And you say
39:22
their arguments are bizarrely poor.
39:24
So you're suggesting that
39:26
they believe and agree
39:29
with you.
39:30
They believe their arguments are bizarrely poor, because
39:32
they're too smart
39:33
to make these bad arguments. There is an alternative,
39:36
though, is that you're wrong, that they're
39:38
good arguments. First, let's start with, why are
39:40
you so dismissive?
39:42
We can start with Mark's argument that the
39:44
reason this isn't scary is it didn't evolve.
39:47
Unlike our brain, that evolved through
39:49
the process of evolution
39:52
over millennia,
39:53
centuries, thousands of years.
39:56
AI is not that kind of thing. And
39:58
so it's not going to happen. It's not the same kind
40:00
of intelligence. Do you find his argument
40:03
just wrong? Do you think that's a poor argument? I'm
40:06
not sure it's even wrong, that particular
40:08
argument, in the sense that it
40:10
doesn't address the question
40:12
of what would the system that was trained in
40:14
this particular way be likely to
40:16
do, right?
40:17
Like natural selection just says
40:19
that the
40:20
things that act in ways that cause their
40:22
components to
40:24
exist in the next generation to multiply
40:27
themselves will see more of them.
40:30
And certainly, we're seeing
40:32
very close cousins of that
40:34
in reinforcement learning, and I know that Mark understands
40:36
these principles very well.
40:38
And the idea that if we unleashed AIs on
40:40
the world, the ones that successfully gather
40:43
resources, the ones that successfully get humans
40:45
to copy them or allow them to be copied,
40:47
the ones that we
40:48
use and that maximize their metrics,
40:51
we'll see more of those and we'll train, we'll
40:53
use more training systems that lead to more of those outcomes
40:55
than we'll use less of the ones that lead to less
40:57
of those outcomes. And we will
40:59
see these kinds of preferences
41:02
for survival
41:04
and reproduction in
41:06
that sense definitely emerge.
41:09
But also, this is a failure to engage with
41:12
the many very detailed arguments that I'm sure again,
41:14
Mark is very familiar with about
41:17
the fact that we will give AIs
41:19
explicit goals to accomplish.
41:22
These goals will logically necessarily
41:25
involve being around to accomplish those goals and
41:27
having the affordances and capabilities and power
41:30
to accomplish those goals and such that
41:32
the behaviors that he's talking about will inevitably
41:34
arise unless they are stopped.
41:36
And we're not talking about Mark arguing, we
41:39
might not be wiped out by
41:41
AIs, right? That's a perfectly reasonable point of view and I totally agree.
41:44
We're talking about Mark saying we couldn't
41:46
possibly,
41:47
right? It's a logical incoherence.
41:50
And similarly, Tyler's argument is
41:53
you haven't made the case,
41:55
you're trying to scare me, you're trying to uproot
41:58
our way of life,
41:59
you're willing,
41:59
this
42:01
is Tyler speaking, you're willing to
42:04
do incredibly violent things to
42:06
people who continue to work on AI
42:09
because they don't think it's dangerous or they don't care and
42:12
you haven't given us a model.
42:14
You haven't told even a
42:17
plausible story that can allow us to test
42:19
whether this is really something to be afraid of. So
42:22
you're willing to destroy potentially
42:24
our current way of life to
42:26
prevent something that you can't specify, that we
42:29
can't test,
42:30
and that we can't assess.
42:32
How do you answer him?
42:34
So I would say simultaneously that
42:35
you are mischaracterizing
42:39
what we are requesting and
42:41
what we are saying needs to happen.
42:43
And also that your complaining
42:45
that we don't have a model
42:47
is an isolated demand for a very specific
42:49
kind of rigor and
42:51
a very specific form of argument
42:53
and formalization that simply
42:56
doesn't match what would make sense
42:58
if you were trying to see truth in this particular
43:00
situation.
43:02
And that
43:04
if you tell me more about
43:06
specifically what you mean by a model,
43:08
I can potentially give you a model, but
43:10
that we have broad uncertainty about
43:12
many of the details. He likes to draw the
43:15
parallel to the climate models when he talks about
43:17
this. He talks about, give me a model
43:19
similar to these climate models where you have these 70 different inputs
43:21
of these different aspects of the physical world. And then
43:23
we run them through a physics engine over the course
43:26
of 50 years, and we determine what the temperature is
43:28
and what this does to the polar ice caps and, you know, all the other
43:30
things. And that results in this distribution
43:32
of potential outcomes that then people can talk about.
43:35
And then certain technical people are convinced
43:37
by this. And I think it drives the actual conversation
43:40
around climate change remarkably little
43:42
compared to other things. But it provided a kind
43:44
of scientific grounding that is helpful for someone
43:46
who actually wants to figure out what's true.
43:49
And my answer to that is that when you're
43:51
talking about
43:52
inherently social, inherently intelligence-based
43:55
dynamics that surround things that are inherently smarter
43:58
than yourself, with a lot of
43:59
unknown technological capabilities of
44:02
areas we haven't explored. We don't know what is possible.
44:05
And a lot of uncertainty that
44:07
creating any specific model here
44:09
wouldn't actually be convincing or enlightening to very
44:11
many people.
44:13
That if you, you, Tyler Cowen,
44:15
gave me a specific
44:17
set of assumptions that you have for how this would
44:19
go, I can model that for you.
44:20
And I can explain why I think that under your set of
44:22
assumptions, we are in about this much danger
44:25
from this particular sources and that we would have to
44:27
solve these particular problems. But it's just a mismatch
44:29
to
44:30
many of the problems that we face.
44:32
And I've been trying
44:35
to understand, tackle
44:37
this problem, I've spent weeks, effectively,
44:39
of time trying to figure out how to do something that
44:41
might
44:42
actually address this question in a way that was satisfying
44:44
because
44:46
the world isn't fair; I have to try and convince
44:48
people on their own terms.
44:51
And it's been very difficult to figure out what would be satisfying
44:53
because, as Tyler opened his interview
44:55
with you by asking, well, different people
44:58
get off the train at different places, they have different
45:00
objections. And I found this to be overwhelmingly true.
45:03
And so, if you don't tell me
45:05
which model I'm building to explain which
45:07
step of this thing, I have to build 40 different models.
45:10
And I can only build one at a time. So help
45:12
me out here, right? Now, I'll just say for
45:14
the record that when you started
45:16
addressing Tyler's objections, I actually
45:18
thought you were, you used the word you and I thought you were talking to me
45:21
for a moment and I had a weird
45:24
physiological reaction and
45:27
I'm sure
45:28
when I have my chat, my AI chip
45:30
built into my brain would automatically
45:32
sense it. And you might be able
45:34
to hack into it. You would have realized, oh, I better
45:37
reassure him. Anyway, that's along the lines of the
45:39
things you were talking about before. So
45:41
I wanna put for the moment,
45:44
Tyler and Mark as human beings to the side,
45:47
because
45:47
obviously we don't really know
45:49
what is driving them.
45:51
But
45:53
what's interesting about your piece is that you wrote,
45:56
you made a claim that
45:57
there's a strategic
45:59
reason
46:02
that they're optimists and that strategic
46:04
reason
46:06
may not even be realized by them. So
46:09
again, I don't want to speculate whether mark or
46:11
Tyler in this group literally or figuratively
46:14
But I think it's a really interesting argument
46:16
about how one might think about
46:18
marketing, social
46:21
change, strategy,
46:24
lobbying, progress,
46:26
and so on. So talk about the dial of progress
46:28
and why it applies here
46:30
Yeah, so the dial of progress
46:32
concept is that you
46:35
know, we as a society collectively make
46:37
this decision on
46:38
whether or not we're going to encourage
46:41
and allow in various senses including
46:43
social allowances and legal allowances and regulatory
46:45
allowances: are people gonna be able to go out
46:48
and do the things that they locally think are the
46:50
right things to do, and are they gonna be able to develop new technologies?
46:52
Are they gonna be able to build new buildings, and are they gonna be able to lay
46:54
new roads? Are they gonna be able to build power plants? Are they gonna
46:56
be able to
46:58
deploy new ideas, new business concepts,
47:00
anything across the board?
47:02
Or are we gonna require permissions? Are
47:05
we gonna say that, you know, if you want to take this job you need
47:07
to get a license for this job? If you want to build this road
47:09
you need to clear it with the following
47:11
100,000 page reports for NEPA.
47:14
Are you going to be able to build an apartment building? Are you
47:16
gonna need to get community feedback and, like, have 57 different
47:19
veto points and five years of waiting if you want to
47:21
open an ice cream shop? And
47:24
Over the years we've moved from the United
47:26
States that was very much on the you
47:29
go out there and there's an open field
47:31
and you do more or less whatever you want to do
47:33
as long as you don't harm someone else or someone else's property
47:35
to a world in which vast
47:39
majorities of the economic system require
47:41
detailed permissions that are subject to very detailed regulations
47:44
that make it very very hard to
47:46
innovate and improve and
47:47
I strongly agree
47:49
with Andreessen and Cowen and many,
47:51
and, I think, you and many other people, that this
47:54
is very much holding us back. This is making us much
47:56
less wealthy. This is making us much worse off,
47:58
and that we would be much better off
47:59
loosening the reins.
48:01
And I would just add, and it stunts
48:04
what it means to be a human being,
48:06
to strive, to
48:07
innovate, to be creative. It
48:11
cedes power to the people who are
48:13
more eager just to maintain the status
48:16
quo. Yeah, I strongly agree with that.
48:18
And I think that the people who do this, you know,
48:20
often are well-meaning. Sometimes they're
48:22
trying to protect
48:23
their rent seeking or their
48:25
particular means of
48:27
way of life or making a profit or their personal local
48:29
experiences at greater broad expense. But
48:32
collectively, if we all loosen the reins, it would
48:34
help almost all of us. And over time, the
48:37
results would come back. Right. And this has been
48:39
true throughout human history. We have been very fortunate
48:42
that we haven't had a regime
48:44
this tight,
48:45
in this sense, until very recently.
48:48
And if we kept tightening it, there's
48:50
the risk that we would lose our ability to do
48:52
things more and more and that we would even
48:54
stagnate and decline.
48:56
When you say this tight, you meant in
48:58
the United States, a regime, because
49:00
there are plenty of other regimes that you
49:02
don't even get to ask permission. It's just
49:05
you can't do anything. Yeah, I mean, not just in the
49:07
United States in particular, but around the world in
49:09
general, you know, you see the same rising
49:11
tide of restrictions pretty much
49:13
everywhere.
49:15
And, you know, there are a
49:17
lot of people, and I try to be one
49:19
of them, who are fighting the good fight to point out this is
49:21
a problem and that we need to reverse these
49:23
trends.
49:24
And we need to where we do intervene, do
49:26
a better job. Because one of the problems is, when
49:29
we do require permission, when we
49:31
do try to regulate, we do a very
49:33
bad, Hayekian job of figuring out what
49:36
would in fact,
49:37
you know, mitigate the bad circumstances without
49:40
interfering with the good circumstances, what
49:42
would allow more competition rather than end up becoming
49:44
less competition, what regulations wouldn't
49:46
get captured and so on.
49:48
And so, by 2023, it's a reasonable thing
49:50
to say
49:51
that there are very
49:54
few areas left
49:55
in which, you know, we
49:58
still have the ability to move. And so, I
50:01
also would say that you might say, okay,
50:03
once we've protected ourselves, once we've decided to slow down
50:06
our construction of apartments, well,
50:08
now we feel safer and we feel okay to
50:11
then build power plants or roads. But
50:15
what we've observed, and I think this is correct
50:17
over the years, is this is not actually how it works. What
50:19
happens is there's a faction that goes,
50:22
we should be safe. We should be preserving.
50:24
We should be careful. We should regulate.
50:27
We should require permissions. And
50:29
the more
50:29
of these things you require, the stronger
50:31
this faction gets, the stronger this rhetoric gets, the
50:33
stronger this background belief gets, and the easier it
50:36
becomes to regulate other things. And
50:38
the more free we are, the more provisions we
50:40
don't have to ask for, the more things we do,
50:42
the more people see the benefits of this approach,
50:45
the more people understand what it can do for them,
50:48
and the more they have the
50:50
expectation that it's only normal to
50:52
be able to go out there and do useful things,
50:55
and the more progress they make.
50:57
And then the question
50:59
is, well, okay, so if I see a particular
51:01
thing that I want to restrict, right?
51:04
This is a standard libertarian-style thought,
51:07
then I shouldn't just beware of the fact that I'm going
51:09
to screw this thing up. I'm going to make it easier for everyone
51:11
to screw everything else up in a very similar way.
51:14
And even if I get this particular intervention right and I do some
51:16
local good,
51:18
I'm risking bad things happening somewhere
51:20
else. And so I
51:22
said, what if we imagine this as like
51:24
one dial? And then I remembered that
51:27
this metaphor had been used in a fashion before,
51:30
in fact, by Tyler when discussing
51:32
COVID. Because early on in COVID, Robin
51:35
Hanson had this theory
51:38
that if we all let young
51:40
people who were at very little risk from COVID, it was very,
51:42
very safe for them to get COVID relatively speaking, except they
51:44
might infect others, let them get infected first.
51:47
This could create effective herd immunity
51:49
while the older people were in relative
51:52
hiding, and then we could take much less precaution afterwards
51:54
and
51:55
get through the pandemic that way.
51:57
And Tyler's response was essentially,
51:59
Well, this is just advocating for Yay COVID,
52:02
as opposed to Boo COVID.
52:04
This is just saying that we should just let COVID run rampant
52:06
and people wouldn't be able to hear you, right?
52:08
And Robin's like, I think you're acting like there's one dial from
52:10
Yay COVID, Boo COVID, and Tyler said yes.
52:13
And I remembered that
52:15
and I thought, okay, so what if there was a dial that was more general
52:17
than that? What if there was a dial of progress,
52:19
right? And the idea was, you
52:21
know, there are people who advocate, okay, we should let people
52:23
in general do more things.
52:25
You know, we should require less permissions.
52:28
We should open things up. We should let human
52:30
ingenuity run free.
52:32
And there are people who say, no, we should keep a close eye
52:34
on things. We should regulate them. We should require
52:36
permissions.
52:38
And then, you know, what if you thought
52:40
that one of the major dangers to humanity right now
52:42
in terms of being able to sustain and expand the
52:44
civilization and make, you know, life worth living
52:47
was that we've moved
52:49
the dial too far down.
52:51
Right? I made sure to make it up/down and not left/
52:53
right, to avoid confusion, because it's not a partisan issue. And
52:56
say, okay, so what if we
52:58
have cranked this too far down, and what if
53:00
one of the few places left
53:02
where we have the dial at almost the maximum,
53:05
full-speed-ahead mode,
53:06
right? Locally speaking,
53:08
is AI because AI wasn't the thing
53:11
that was on people's radars until very recently.
53:13
So like Peter Thiel
53:15
talks about the great stagnation,
53:17
right? This concept that many
53:19
things aren't advancing and Peter Thiel talks about how,
53:21
you know, the world of atoms is restricted
53:24
and we can't do things there, but
53:26
the world of bits, you can still do some stuff, right? So hence
53:29
we see a lot of innovation with computers and
53:31
we saw it with crypto.
53:33
We see, you know, why are so many intelligent,
53:35
you know, driven people getting into crypto? Well, it's because they
53:37
can't be out there building power plants.
53:40
Right? They don't see opportunity there. They'd love to,
53:42
but instead this is the place they can go. So that's where they go.
53:45
And so now it's AI,
53:47
right? And AI has tremendous promise to restart economic
53:49
growth,
53:50
to provide more human intelligence, to make life a
53:52
richer, better place, to solve our other
53:54
problems, even potentially
53:56
prevent other extinction risks.
53:58
And so... If
54:00
you want to
54:02
let us proceed forward, if you want to give us a chance,
54:05
what if our only chance is AI?
54:08
On some important things, we've lost this war everywhere
54:10
else, and if we also restrict
54:13
AI in this way, if you lock down AI, what
54:16
if there's nothing left and this just shuts
54:18
down our last hope, and that in and of itself
54:21
is kind of an existential threat, even
54:23
if it's not extinction as such? What
54:27
do we do about that?
54:29
That would be the 19... In
54:31
an extreme version, it would be 1984. 1984, everybody's
54:34
still alive, but no one wants to live... None of us want
54:36
to live in that world. Very few of
54:38
us want to live in that world.
54:40
So just to repeat the metaphor a little bit, and
54:42
by the way, I see it as going from...
54:45
It's not up or down, it's just round. So it
54:47
goes from 0 to 10, and 10 is
54:50
anything you want, any time, and 0 is you
54:54
have to go to the Soviet Union in 1928 to get permission
54:56
to do stuff. And
55:00
so I
55:01
think what I'm
55:03
intrigued by is the idea that this might... You
55:06
don't write about this, so I want you to speculate on if
55:08
you would. It's
55:09
kind of the way the brain works. We
55:12
don't really have the sophistication to hold
55:14
multiple ideas in our mind at the same time. Like,
55:16
I want to be really free
55:18
in this area, but not so free in this
55:20
area. That's just too hard for
55:22
me, because there's more than two. And then
55:25
I'm going to hear people making arguments
55:27
for each one, and I have to weigh each one, and it's
55:29
just better to be yay COVID or boo COVID.
55:32
And you use COVID and lockdown,
55:34
but for me, it's the kind
55:36
of insanity that we're living in right
55:38
now is vaccines.
55:41
No thought... I only got three shots. And,
55:45
ooh, only three? Are you anti-vaccine? No,
55:48
I'm not anti-vaccine. Are you an idiot? I'm
55:50
pro-vaccine. I mean, up to a point. And then
55:52
after a while, I can imagine there's decreasing
55:54
returns, and taking a shot that
55:57
has never been widely tested on people's immune
55:59
systems for the nth time, with a new
56:01
technology that's never been used before, seemed kind of imprudent. Given
56:03
that I'm not overweight,
56:05
I'm not obese. I don't have any horrible underlying
56:08
comorbidities. So it seemed to be prudent
56:10
to stop at three. Okay. And
56:13
this guy's I'm talking to in a party and he says,
56:15
Oh,
56:16
you stopped at three. He
56:17
says, I think you need to take it. You shouldn't
56:20
let your politics get in the way of your
56:22
health.
56:25
What? He meant,
56:27
Oh, obviously you're one of those Trump voters
56:29
and they're anti-vaccine. So you're
56:32
not thinking,
56:33
you're just going with your
56:35
bias. I just looked at him and
56:37
smiled and said, I don't think you know who you're talking to, you know,
56:39
this is kind of something I think a little bit about,
56:42
but anyway,
56:42
that's a great example for me of the dial.
56:45
You're pro vaccine or anti. Well, it's a really
56:47
horrible, terrible question for
56:50
a thoughtful person, but
56:52
it's the way our society quote,
56:54
thinks about it. Society is a thing. It's
56:57
a complex emergent
56:59
set of opinions and, and
57:01
interactions and media. And it's
57:03
not well defined,
57:05
but somehow it's come down to pro or con
57:08
in an age when we're supposed to be really smart and
57:11
nuance is dead. And
57:13
we should be really good at nuance. We
57:15
have all of that; we have more information. And
57:18
yet
57:19
it's just a question of which expert you decide
57:21
to trust the pro vaccine guy or the anti
57:23
vaccine guy. And that's it.
57:25
Of course,
57:26
part of the problem is that
57:27
being pro or anti, as opposed to nuanced, get
57:30
you attention, you get more clicks, you
57:33
sell more ads. There's
57:34
a, there's a lot of return to
57:36
being unnuanced and very little return to being nuanced
57:38
other than that you might be right. You
57:40
understand there are trade-offs.
57:42
I find this a very
57:45
common
57:46
and potentially very
57:48
useful way of understanding why things
57:51
that don't make any sense
57:53
are actually maybe sensible.
57:54
Yeah.
57:57
So I think of this as, you know,
57:59
sort of: our brains evolved
58:01
with very limited compute, very limited ability
58:03
to think about things in detail, very limited ability
58:05
to process bits of information in ways that,
58:08
you know, AIs will often have
58:10
more affordances in these areas.
58:12
But in order to be able to reasonably process
58:14
information, especially in places
58:17
where our major goal in evolving was
58:19
to avoid tail risks, to avoid dying, or to
58:21
be driven out of the tribe,
58:23
we evolved these kinds of shortcuts and heuristics and
58:25
associations. And that's
58:27
just how the human brain inevitably works.
58:29
And
58:30
my guess is it's actually better than it used to be. And we
58:33
just like have higher standards now we see the possibility
58:35
of being able to do better. But
58:37
that, you know, common discourse has
58:40
always in some sense been us.
58:43
And
58:44
as a result, yes, absolutely. Like with COVID,
58:46
you know, I wrote a column before I was writing about AI, I
58:48
was writing about COVID, because that was what was on my mind.
58:51
And I did it, at first, because writing
58:53
is how I learn to think about things, it's
58:55
how I understand things. I wrote it so
58:57
I could understand it. And then I wrote it for other people,
58:59
because other people were getting benefit out of it.
59:02
And, you know, the entire time, I was definitely
59:04
trying to not get COVID, I'm not boo COVID,
59:06
I'm trying to particularly
59:08
figure out what would actually
59:10
work.
59:11
And this is something that like has a very niche audience,
59:13
right? It's definitely an
59:15
acquired taste that, like, some people can handle
59:17
and most people can't and I accept that.
59:20
But if that's true, and you want
59:22
to influence public policy, you have to
59:24
understand that and you have to adapt your messaging
59:27
and your strategy to that situation.
59:30
And so someone could reasonably say,
59:32
okay, you are saying
59:35
Boo AI,
59:36
because you see extinction risk, you
59:38
see a very huge extinction risk
59:41
if we don't take
59:42
a very narrow particular set of interventions.
59:45
But all anybody is ever going to hear if you call for a
59:47
particular narrow set of interventions, is
59:49
Boo AI, and they're going to do a completely different
59:51
set of interventions. And even you agree those interventions
59:54
are bad. Those interventions are going
59:56
to prevent us from unlocking this amazing
59:58
potential that we all agree AI
1:00:01
can offer us to improve our lives in the short run.
1:00:03
And it's not going to stop dynamics that
1:00:07
you are worried about that are inevitably going to
1:00:09
lead to more and more capable systems
1:00:12
that we don't know how to control, that are going
1:00:14
to end up in control of our future.
1:00:17
And so you're better off not doing
1:00:20
that and then trying to figure
1:00:22
out a better way forward when we
1:00:24
get there because your path is
1:00:26
hopeless and will also damage our ability to build houses
1:00:29
and roads and energy plants and
1:00:31
everything else. And
1:00:35
I think I'm a little
1:00:37
uncomfortable with the broadness of Boo Progress,
1:00:40
Yay Progress. I think
1:00:42
it might be a little complicated, but we'll
1:00:45
talk about in a sec. But for me, the other area
1:00:47
I think this works is
1:00:50
you do a survey of people and you say,
1:00:53
do
1:00:53
you think we should spend more money on education?
1:00:56
And a lot of people say yes,
1:00:58
and they don't know how
1:01:00
much we spend on education. They've never
1:01:02
looked at a study of whether it's effective.
1:01:04
They just say, yeah, education.
1:01:07
And now in one sense,
1:01:08
it's just expressive voting. They're just conveying
1:01:11
to you the pollster that they
1:01:13
like education. They don't really mean more. But
1:01:16
I think most of them do mean more. I
1:01:18
think they assume that, okay,
1:01:20
well, you
1:01:21
know, it's true that
1:01:24
maybe it doesn't work so well. And
1:01:27
it's true that the bang for the buck might be limited.
1:01:30
But more is always better than less of education.
1:01:33
And it's not even so much that they're one issue
1:01:35
people. They have lots of issues they
1:01:37
want more of. And they don't have to worry, they don't ever
1:01:39
think about whether there's a budget constraint or limited
1:01:42
resources. Just that I want that to get a vote.
1:01:44
So yeah, more education, more
1:01:47
fill in the blank, you know, more of what I care
1:01:49
about. And that's what the political
1:01:51
process is going to listen to. This
1:01:54
is a little bit more complicated, I
1:01:57
think.
1:01:58
But I...
1:02:00
You might think about it more as
1:02:02
more control versus less control
1:02:05
rather than progress versus, you know, a
1:02:07
stagnation.
1:02:08
I think the people who are boo progress
1:02:11
are uneasy with the uncontrolled aspect of
1:02:13
it,
1:02:14
the idea that you have to ask for forgiveness
1:02:17
rather than permission. And
1:02:18
so I think
1:02:21
part of what's going on in this dynamic, if you're right,
1:02:23
and I have
1:02:25
no idea whether you're right or not; it's just really
1:02:27
interesting to think about, is that if
1:02:29
I have this underlying idea
1:02:32
that
1:02:33
control is good, or I have an underlying
1:02:36
view that control is bad,
1:02:38
the idea that I would pick, oh,
1:02:40
well, it's worse in this area or this one, or
1:02:42
it's good in this area but bad in this one,
1:02:45
that's really hard. So I'll just get to
1:02:47
pick one.
1:02:48
Yes or no on the control. I'm only going to pick one side.
1:02:50
Yes or no on control.
1:02:51
And so you get people
1:02:54
who basically want to control
1:02:56
things or want others to control them for them, regardless
1:02:58
of whether it's possible, regardless of whether they're
1:03:01
going to do well,
1:03:02
which seems to be irrelevant, by the way. Again, whether
1:03:04
the money is spent well in education also seems to be irrelevant.
1:03:07
Whether the control actually accomplishes what people really
1:03:09
want, they pay very little attention to.
1:03:12
You can argue they don't have much incentive to and they don't have much ability
1:03:14
to understand it. But I'm actually arguing something
1:03:16
different. I don't think they care so much.
1:03:18
Because it's just that's a comfort thing.
1:03:20
It's a security thing. So what do you
1:03:22
think of that idea about that it's more of control
1:03:25
than progress, no progress?
1:03:28
Yeah, I don't think people are actually saying,
1:03:31
boo, progress any more than they were ever saying, yea,
1:03:33
COVID. I think that that's
1:03:36
the opposite. If someone's called you anti-choice
1:03:40
or anti-life, people would say, that's
1:03:42
not a good characterization of my position. And
1:03:44
they would both be right.
1:03:45
And that's not how I think about myself. So like for
1:03:48
the education people, I do see
1:03:50
exactly this mistake. Like if you look at like sources like
1:03:52
Piketty and other people who model education
1:03:55
as an input, they often literally just say,
1:03:57
you know, these people have more education because there were more inputs,
1:03:59
because you spent more
1:04:01
hours, more years in a school,
1:04:04
you have more human capital. And it's like-
1:04:06
It's curating. Yeah, it's- It
1:04:09
curates me. It's completely silly. It's just not
1:04:11
how this works. They don't think about the quality or the
1:04:13
effects. And similarly, I
1:04:15
think if you ask people, are you in
1:04:17
favor of control? Are you in favor of restrictions?
1:04:20
They would say locally sometimes,
1:04:23
yes, once they talk themselves into that,
1:04:25
but mostly they would not generally say that. They would say
1:04:27
things like they're pro-safety
1:04:29
or they're pro-responsibility,
1:04:30
right? They'd use their own words.
1:04:32
And
1:04:33
that's how they think about this, or they're
1:04:36
against risk,
1:04:38
or they're against
1:04:40
recklessness, or
1:04:43
they're, and they might
1:04:45
have various levels of specification of arguments,
1:04:48
and sometimes they have good arguments, and
1:04:50
sometimes they just use broad
1:04:51
emotional heuristics
1:04:53
and everything in between. But
1:04:56
it's the tendency, and one of the things that happens every time,
1:04:59
and one of the first things people say is, well, we don't
1:05:01
let you carry a nuclear weapon, and
1:05:04
we don't want you to use a nuclear weapon. So
1:05:07
why should we let you do that? Or
1:05:10
if you don't let someone build a house, people
1:05:12
can say, well, if you can't even build a house,
1:05:14
why are we letting you?
1:05:16
If
1:05:19
a hairdresser needs a license, right? If
1:05:21
a pilot needs 1500 hours of time
1:05:23
to fly in the air, which they obviously
1:05:25
don't, then, well,
1:05:27
clearly any job
1:05:29
you want to do, you should have to ask permission of the state, that's
1:05:31
just the standard that we've set. And then it becomes the
1:05:33
baseline of the argument that we have to make, right?
1:05:36
And this becomes a very, very difficult prior to overcome.
1:05:39
And that's a very sophisticated version
1:05:42
of this that I observed
1:05:44
from your essay,
1:05:46
and maybe because I misread it, or maybe because it wasn't as
1:05:49
fully developed as you're developing it now. And
1:05:52
the idea that I'm gonna take the logic of
1:05:55
my application here and then apply it somewhere
1:05:57
else.
1:05:59
I don't know if that's true, but I really
1:06:02
think it's provocative and interesting to think about.
1:06:04
What kind of reaction have you
1:06:06
gotten from it? So
1:06:09
I know that
1:06:10
Tyler specifically
1:06:12
thinks that this is not what he is doing.
1:06:14
And then we're planning to talk
1:06:16
and hopefully figure things out
1:06:19
more because I want to understand
1:06:21
and I try to respond to many of his ideas in
1:06:23
specific detail.
1:06:25
I do want to understand
1:06:28
whatever is going on there and to the
1:06:30
extent that he's making these mistakes, hopefully figure
1:06:32
out how to go forward the most productive
1:06:35
way possible.
1:06:36
Mostly people have seen this as a very
1:06:39
interesting proposal, something to think about. I've had
1:06:41
a mostly positive reaction.
1:06:43
I haven't seen a reaction from Mark Andreessen, but
1:06:46
the only other response
1:06:49
I saw, serious response to him was
1:06:51
from a man named Dwarkesh Patel who runs the Lunar
1:06:53
Society podcast,
1:06:54
who wrote a very thoughtful point by point
1:06:57
response, which was the first
1:06:59
thing I... At first I thought I would write some more
1:07:01
response. And then I said, no, it doesn't
1:07:03
make sense because if somebody is not actually...
1:07:07
If the load bearing is not in the individual points, if
1:07:09
somebody is not looking to
1:07:12
actually
1:07:13
have good logic that they examine carefully
1:07:15
and figure out the understanding, then addressing their individual points
1:07:18
just doesn't address their cruxes. It won't be convincing.
1:07:20
So it's not the right thing to do, but Dwarkesh
1:07:23
did the service of responding point by point and pointing
1:07:25
out many of the conceptual
1:07:27
problems in the essay,
1:07:29
pointing out why many of the arguments don't
1:07:31
really make sense the way they were spoken in detail.
1:07:34
And the response that Mark
1:07:36
Andreessen did was to block him on Twitter almost
1:07:39
immediately and otherwise say nothing.
1:07:42
And that was about... That
1:07:45
tells me what I need
1:07:47
to know. I would love to engage in
1:07:49
detail with such people and
1:07:52
actually talk about these disagreements in
1:07:54
any form, but it's difficult.
1:07:57
Do you think there's any tribalism involved in
1:07:59
these...
1:08:00
in these early days, I mean,
1:08:02
regardless of which side one is on these issues,
1:08:05
people ask me all the time, so where do you come down? I said, I don't
1:08:07
really, I don't have a,
1:08:09
I'm a little worried, I'm more worried than I was a year ago,
1:08:11
but I'm not scared. And
1:08:13
maybe I should be. So I'm open to that. That's
1:08:16
not very helpful. It's not what anybody wants to hear.
1:08:18
And I wonder, I
1:08:21
wonder how
1:08:24
much of where people come down on this issue
1:08:27
is a tribal identification with people
1:08:29
making kind of arguments along the lines of what you're
1:08:31
saying,
1:08:32
that, that
1:08:33
I'm going to sympathize with Tyler say,
1:08:35
or
1:08:36
Mark, because I am kind
1:08:38
of pro progress. Not kind
1:08:40
of, I've always been a huge advocate. I want
1:08:43
to be in that group. And I'm going to look for ways
1:08:45
to feel good about it.
1:08:47
And maybe that's what I'm really doing
1:08:49
if ultimately I come down on the side of,
1:08:52
let's let her rip.
1:08:53
And I do think
1:08:56
I've always, when I was younger, I liked
1:08:58
to believe that people looked for the truth. As
1:09:00
I get older, I'm not as
1:09:03
convinced of that. So
1:09:05
some of it is your suggestion, a
1:09:07
kind of a Machiavellian
1:09:09
strategic argument. I'm suggesting
1:09:12
it could be as simple as, again,
1:09:14
I'm not trying to explain Tyler or Mark, but in general where
1:09:16
people come down on these issues is
1:09:18
like,
1:09:18
I want to be like that person, or I don't want to be like that person. That's
1:09:21
a worldview. There's other pieces of that, that
1:09:24
creep me out. You know, it's a version of sort of intersectionality,
1:09:27
right?
1:09:27
It's like, I can't be nuanced. I
1:09:29
can't go case by case, I'm just gonna take everything,
1:09:32
they'll line up together. And so if
1:09:34
I'm against AI, I'm
1:09:35
against nuclear plants too.
1:09:37
And I think that
1:09:39
I don't want to be against nuclear plants,
1:09:42
right?
1:09:43
Me personally, I
1:09:44
think they're a really good idea. And I really think it's a terrible
1:09:46
mistake that we have so few of them. So
1:09:49
maybe that's why I'm more susceptible to the pro
1:09:51
AI side. That's your argument, I think, about the dial
1:09:54
of progress.
1:09:56
I think that we've definitely seen that,
1:09:58
you know, people who think very well about
1:09:59
economics in other realms
1:10:02
in my mind, right, that share these perspectives
1:10:04
and they do that professionally,
1:10:06
often come out with these very
1:10:08
pro AI,
1:10:10
anti-regulatory principles that are
1:10:12
exactly what I would predict them to have in any other circumstance.
1:10:15
And in almost every other circumstance, I would mostly agree with
1:10:17
them.
1:10:18
And it makes sense that they would have
1:10:20
these perspectives on many levels.
1:10:23
And I don't think this comes from a Machiavellian perspective
1:10:26
for most of them. I think it definitely comes from a, you know,
1:10:28
my heuristics all tell me this is almost always the right
1:10:30
answer.
1:10:31
This is where my priors should heavily
1:10:34
land. You need to overcome that in order
1:10:36
to convince me otherwise. And
1:10:38
then this leads sometimes to,
1:10:40
you know, not considering the arguments, not giving them
1:10:42
the time of day or just like finding,
1:10:45
exploring possibility space to find a way to tell
1:10:47
yourself, you
1:10:48
know, a plausible story that you buy
1:10:51
that says that this is going to be okay.
1:10:53
And then, you
1:10:55
know, the amount of this is conscious or unconscious or
1:10:57
that you know, you planned is an open
1:10:59
question. And I
1:11:01
don't mean to imply ill motives and
1:11:03
anyone again, I think that everybody involved
1:11:05
wants the best outcomes for everyone.
1:11:07
I think there are very, very few people who don't
1:11:09
want that and they tend to be very vocally saying they
1:11:11
don't want that and you can tell who they are and you can
1:11:13
react accordingly. Occasionally someone
1:11:16
says, oh, you're a speciesist, how dare you not want AI
1:11:18
to flourish and
1:11:20
it's going to wipe out humanity and that's good. But
1:11:22
like, then you say, okay, I now know that you think that
1:11:24
and thank you and I hope more people hear you because
1:11:26
I expect their reaction to be helpful
1:11:28
to not having that happen, right? Like it
1:11:30
goes to the opposite and that's good.
1:11:32
And I believe it opened the day. So
1:11:35
when
1:11:39
I see these reactions, but I also noticed that
1:11:41
like
1:11:42
the people who thought long and hard
1:11:44
about the risks, the extinction risks from artificial intelligence
1:11:46
from long before the current boom in AI and
1:11:48
who are the loudest people advocating
1:11:51
in favor of doing something about extinction
1:11:53
risks or focusing on extinction risks, they
1:11:55
tend to also like have come
1:11:58
to these realizations in the
1:11:59
economic sphere much more than
1:12:02
most other people. Whereas the
1:12:04
traditional tribal affiliations, most things
1:12:06
in American politics get quickly labeled as,
1:12:08
this is red,
1:12:09
this is blue, this is the blue position, this is the
1:12:11
red position like COVID was, right? And
1:12:14
we haven't seen that in AI, right? If you ask the surveys,
1:12:17
you see almost no partisan split
1:12:19
whatsoever, this is exactly the least tribal thing
1:12:22
of this level of importance that we've ever seen
1:12:24
for this long.
1:12:25
And we've been very fortunate
1:12:27
and like, I love that, I hope we can sustain
1:12:29
that for as long as possible and have
1:12:31
relatively intelligent dialogue. And instead we have this
1:12:33
kind of
1:12:34
weird discussion
1:12:37
where we have like economically
1:12:39
very good views actually on both sides
1:12:42
of the discussion. We're able to have a
1:12:44
very new conversation where
1:12:45
those views happen to like,
1:12:48
you know, bias people in a particular direction.
1:12:51
And yet the people who are worried
1:12:53
have managed to overcome this because they've
1:12:55
thought long and hard
1:12:56
about these other aspects of it. And
1:12:58
I
1:13:00
do think that like, the human
1:13:02
brain, as I said, that works on these shortcuts, works
1:13:04
on these heuristics. And so we will always
1:13:06
to some extent pick up from vibing, from
1:13:09
heuristics, from general
1:13:11
associations, from simplifications, and
1:13:14
from noticing other people will act this way too, and that we
1:13:16
can't speak with too much nuance if we want to be heard.
1:13:19
And so we will always have these tribal,
1:13:21
as it were,
1:13:22
issues, these partisan and context issues
1:13:26
where we go back and forth. And
1:13:28
so we do our best not to do that
1:13:30
and try to be charitable on the other side, try
1:13:33
to engage with their actual arguments.
1:13:35
I try to implore people to form their
1:13:37
own models.
1:13:38
Like I talk about models not in the formal sense
1:13:40
that Tyler Cowen says, you know, you should write down a scientific
1:13:43
model and submit it to a journal and have lots of math
1:13:45
and have all these dependencies and have
1:13:47
a precise equation pointed out. I'm talking
1:13:49
about like the kind of model you form in your head where
1:13:52
you think carefully about a situation, you
1:13:54
have an understanding on some level of all
1:13:57
these different dynamics and you try to bring it together to
1:13:59
form as solid a distribution as you
1:14:01
can over
1:14:04
what you think might happen. In
1:14:08
my use of the word model, I could ask, like, you know,
1:14:10
Tyler, what is your model of what happens
1:14:12
after AI? And he's talked about some aspects of
1:14:14
his model. He said, AIs will have their own economy,
1:14:17
like they'll use crypto to exchange
1:14:19
value because it's the efficient way for AIs
1:14:22
to exchange it. And that
1:14:24
maybe humans will be the dogs to the AIs.
1:14:27
He's talked about this metaphor a few times, I believe, including on
1:14:29
your podcast,
1:14:30
and that the humans will train the AIs, the way the dogs
1:14:32
train the humans. And
1:14:34
you know, my, my imploring would be think
1:14:37
more carefully about why that
1:14:39
equilibrium is true
1:14:41
between dogs and humans in the real
1:14:43
world, and whether or not those
1:14:46
dependencies hold in the case that you're
1:14:48
imagining in the way that you're imagining. Yeah,
1:14:51
I don't know if Tyler really expects a mathematical
1:14:54
model. And,
1:14:57
I don't know if we talked about it
1:14:59
when I interviewed him,
1:15:01
but he actually
1:15:02
encouraged people to put
1:15:04
things into the peer review process, which is a process
1:15:07
I don't
1:15:08
believe anymore leads to truth. So it seems
1:15:10
like a bit of
1:15:12
a red herring. I
1:15:15
think the word
1:15:16
that describes what you're talking about is, is a narrative.
1:15:19
And some narratives are more plausible
1:15:21
than others. And I'm okay
1:15:23
with the narrative rather than a mathematical model
1:15:26
on either side of this debate.
1:15:28
I think the people who are on
1:15:30
each side need better narratives. They
1:15:32
need a better story that I'm going
1:15:34
to find convincing. I find neither side convincing.
1:15:37
Eliezer Yudkowsky came up with the most
1:15:40
creative narrative. There were some really wonderful
1:15:43
flights of intellectual fantasy there. And I don't mean that
1:15:45
in a derogatory way at all. I found it extremely
1:15:48
mind expanding, but not quite to the level
1:15:50
of convincing. And it ratcheted up my fear
1:15:53
a little bit though, because I thought that was a narrative
1:15:55
I hadn't thought of and is somewhat
1:15:58
possibly worrisome.
1:15:59
And I think, until we... we're never
1:16:02
gonna have any data.
1:16:03
I don't think, I mean, almost
1:16:05
by definition, by the time we get the data on whether it's
1:16:08
sentient and destructive, it'll
1:16:10
be, I mean, it'll be a great science fiction movie, but, you
1:16:12
know, we'll already be in prison and they'll be harvesting our kidneys,
1:16:15
you'll just be in line to get
1:16:17
your kidney removed for
1:16:19
the paperclip factory and that's,
1:16:23
but I think we need
1:16:25
better narratives. I think we need stories
1:16:28
and
1:16:29
logic, not formalism,
1:16:32
but logic
1:16:33
about why a narrative is plausible either
1:16:36
because it mirrors past narratives
1:16:39
that
1:16:40
turned out to be
1:16:41
plausible or better.
1:16:43
It fits this particular
1:16:46
unique, very different case. And I think, if I just
1:16:48
speculate on why really smart
1:16:50
people are wildly divergent
1:16:53
on this,
1:16:54
it's that there's no data,
1:16:57
there's almost
1:16:58
very little evidence
1:16:59
and we're speculating about which
1:17:01
narrative is more plausible.
1:17:03
That seems like unlikely to be
1:17:06
resolved in the near term. Maybe
1:17:08
AI can help us fix it, I don't know.
1:17:11
So I think that, you know, when we talk about narratives,
1:17:14
an important question is to what extent is your narrative
1:17:16
or model made of gears, right? To what extent does
1:17:18
it have details about how different
1:17:20
parts of it lead to other parts? Like what are the physical
1:17:23
mechanisms underlying it? Get
1:17:25
Hayekian detail into what you're talking
1:17:27
about in a real way.
1:17:29
And so in
1:17:31
my observation, when I look into the
1:17:33
narratives that are told about why
1:17:35
everything will be fine, right?
1:17:37
I don't see very many plausible gears there. There
1:17:39
are gears, I see them as like, the gears don't
1:17:41
turn that way, right? These gears wouldn't work. This
1:17:44
is not the outcome that you would expect.
1:17:46
You
1:17:46
know, your gears lead to something else.
1:17:49
And I strive to continuously
1:17:51
improve the gears in the other models.
1:17:55
And it's definitely difficult
1:17:58
to work out the details, but I do think we have more
1:17:59
data than
1:18:01
no data. I think that this idea
1:18:03
that all we can do is tell each other stories
1:18:05
and proposals. But we
1:18:08
know a lot of things in particular about what
1:18:10
capabilities these systems will have.
1:18:13
And also, in what ways will humans react
1:18:15
to these systems? What actions will they take?
1:18:17
So one of the big emphases is, back
1:18:20
in the day, one
1:18:21
of the questions was, how cautious will
1:18:25
we be
1:18:26
dealing with these systems as they gain these capabilities?
1:18:29
What affordances will we hold back from these systems?
1:18:32
What capabilities will we try to not add to these systems
1:18:34
to contain our risk? So
1:18:36
for example, one idea was, what we wouldn't be so foolish
1:18:38
as to hook up
1:18:39
potentially dangerous artificial intelligence just straight
1:18:41
to the internet, let it do whatever it wanted, ping
1:18:44
any website with any piece of data and
1:18:46
do anything. Because then if it was a dangerous system, we'd be
1:18:48
in real trouble. And
1:18:49
it turns out no, humans find that useful. So we
1:18:51
just do that, including during the training of the system,
1:18:53
just right off the bat, every time. And we've
1:18:55
gotten used to it. Now it's just fine. So we can stop
1:18:58
worrying. That theory has been proven
1:19:00
wrong and other theories have been proven right. Another
1:19:02
question is, one of the things
1:19:05
Mark talks about is, well, systems don't have goals. AI
1:19:07
systems are math. They don't have goals.
1:19:10
Well, maybe they don't have goals inherently.
1:19:12
That's a question that's interesting
1:19:14
that we can speculate about as to whether
1:19:16
they would evolve goals on their own. What
1:19:19
we do know is that humans
1:19:21
love achieving goals. And
1:19:23
when you give an AI system goals, it helps you
1:19:26
achieve your goals. Right?
1:19:29
At least on the margin, at least to starting out,
1:19:32
people think this. And so we see Baby
1:19:34
AGI and AutoGPT and all these other
1:19:36
systems that it turns out for like 100 lines of
1:19:38
code,
1:19:39
you can create the scaffolding around GPT-4
1:19:41
that makes it attempt to act like it
1:19:43
has goals, right, to take actions
1:19:46
as if it had goals, and to act as
1:19:48
a goal motivated system. And
1:19:52
it's not great because the underlying technologies aren't
1:19:54
there. And we haven't gone through the iterations of building the right
1:19:56
scaffolding. And we don't know a lot of the
1:19:58
tricks. And it's still very, very early days.
1:19:59
But we absolutely are going
1:20:02
to turn our systems into
1:20:04
agents with goals that are trying
1:20:06
to achieve goals, then create sub-goals,
1:20:09
then plan, then ask themselves, what
1:20:11
do we need to do in order to accomplish this thing? And
1:20:14
that will include, like, oh, I don't have this information.
1:20:16
I need to go get this information. I don't have this
1:20:18
capability. I don't have access to this tool.
1:20:21
I need to go get this tool.
1:20:23
And it's a very small leap from there to, I'm going to need
1:20:25
more money, right? Or
1:20:27
something like that. And from there, the sky's the limit.
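To make the scaffolding point concrete, here is a minimal sketch of the kind of goal-directed loop being described. It is an illustration only, not the code of any particular project: call_model and run_tool are hypothetical placeholders standing in for whatever chat-completion API and tools a real system would wire in.

```python
# Minimal sketch of goal-directed scaffolding around a chat model.
# Hypothetical placeholders: call_model() stands in for a chat-completion
# API call; run_tool() stands in for whatever tools (search, code, etc.)
# the agent is given. Neither is a real library function.

def call_model(prompt: str) -> str:
    """Placeholder for a chat-completion call (assumed, not a real API)."""
    raise NotImplementedError

def run_tool(request: str) -> str:
    """Placeholder for executing a tool the model asked for."""
    raise NotImplementedError

def agent(goal: str, max_steps: int = 10) -> str:
    """Loop: ask the model for the next action toward the goal,
    execute it, feed the result back, repeat until it says DONE."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        reply = call_model(
            "You are pursuing the goal given in the context below.\n"
            + "\n".join(history)
            + "\nReply with 'ACTION: <tool request>' or 'DONE: <answer>'."
        )
        if reply.startswith("DONE:"):
            return reply[len("DONE:"):].strip()
        # Execute the requested action and feed the observation back in.
        observation = run_tool(reply[len("ACTION:"):].strip())
        history.append(f"Action: {reply}")
        history.append(f"Result: {observation}")
    return "Step limit reached."
```

A loop like this, rather than anything inside the model weights, is what supplies the persistence, the sub-goals, and the tool use that make the system behave as if it has goals.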
1:20:29
So we can rule out through experimentation
1:20:32
in a way that we couldn't two years ago,
1:20:34
right? This particular theory of Mark's,
1:20:37
that the systems in the future won't
1:20:39
have goals. I mean, plus that's unless we take action
1:20:41
to stop it. I
1:20:43
mean, I think that's an interesting intellectual question.
1:20:46
And I think part of the reason that the skeptics,
1:20:49
the optimists
1:20:52
are more optimistic. And part of the reason, I
1:20:55
think we are, in some sense, just telling
1:20:57
different narratives and some are more convincing
1:21:00
than others, and it's mainly stories, is that we
1:21:02
don't have any vivid examples today
1:21:05
of my vacuum
1:21:07
cleaner wanting to be a driverless
1:21:09
car. The example I've used before
1:21:11
doesn't aspire.
1:21:14
Now
1:21:15
we might see some aspiration
1:21:17
or at least perceived aspiration in
1:21:19
chat GBT at some point. But I think
1:21:21
part of the problem getting people convinced
1:21:24
about its dangers
1:21:26
is that
1:21:27
that leap, the sentient leap,
1:21:29
the conscious leap,
1:21:31
which is where goals come in,
1:21:34
doesn't seem credible, at
1:21:37
least today,
1:21:38
maybe it will be. And I think that's where
1:21:41
you and others who are worried about AI need to help
1:21:43
me and others who are less worried to see.
1:21:46
But either way, isn't the
1:21:49
much more worrisome thing that
1:21:51
a human being will use it
1:21:54
to destroy things? I mean, I
1:21:56
don't, it's just like
1:21:58
saying, well, we've got this sort of automatic rifle,
1:22:02
what if it jumped out of the hands
1:22:04
of a person and started spraying bullets
1:22:07
all around because you've given it this
1:22:09
motor or something that causes some
1:22:11
centripetal movement, blah, blah, blah. That's
1:22:15
not the problem. The problem is a person's going to grab
1:22:17
it, use it to kill people.
1:22:19
And it seems to me that that
1:22:21
is, you know, could it get out?
1:22:24
That's not the bigger worry. The bigger worry is someone
1:22:26
lets it out and harnesses
1:22:29
it to do evil, because they want
1:22:31
to be noticed, because their life is miserable
1:22:34
for 100 reasons. Humans are not
1:22:36
just creative, but also destructive.
1:22:39
It's
1:22:41
going to be really hard to keep that from happening. I
1:22:44
can't imagine stopping it. So I
1:22:47
think in the short term, that's absolutely
1:22:49
like the only risk, right? Like in the next month
1:22:51
or the next year, even, right? Like if AI
1:22:53
does harm, it's because some human directed it to do harm.
1:22:56
But
1:22:57
I do think that, like, you know, even without malicious
1:22:59
intent, there's going to be tremendous economic
1:23:01
incentive, tremendous, just personal incentive
1:23:04
to hand over more and more of the reins to AIs
1:23:06
that are operating more and more without checking
1:23:08
with humans, without getting permissions from humans,
1:23:11
because this is what gets us what we want. This
1:23:13
is what makes the corporation more profitable. This is what
1:23:16
allows us to do our job better. This
1:23:17
is what
1:23:19
just achieves all of our goals, right? And
1:23:21
so a lot of these goals are going to be maximalist
1:23:23
goals. And really things like, you know, maximize profits for the
1:23:25
corporation.
1:23:27
And so with these AIs,
1:23:29
you know, on the loose in the sense, even without malicious
1:23:31
intent, we're going to have a serious problem because
1:23:34
the AIs are not, you know,
1:23:36
it's going to be very, they're going to be competing and
1:23:39
they have to keep each other in check.
1:23:40
And you have the obvious externality problems that
1:23:43
arise in these situations that they're not going to internalize
1:23:46
and so on. Yeah.
1:23:48
Well, you know, we just did this episode with Mike
1:23:50
Munger
1:23:52
on obedience
1:23:54
to the unenforceable. And
1:23:56
it's this idea that norms
1:23:58
are a very powerful way that we restrict things.
1:24:01
And I start to say,
1:24:02
well, yeah, I can't really expect AI to have norms
1:24:05
or ethics. People are talking about ethics, give it ethics.
1:24:08
They just program ethics into it.
1:24:09
Like that's easy. But
1:24:11
if people are right, that it's going
1:24:13
to have some sense, it could develop norms,
1:24:16
maybe. But why it would develop norms that would be
1:24:18
good for humans would be
1:24:21
hard to argue, it would seem to me.
1:24:22
Yeah, the problem is that, you know, if it develops norms
1:24:25
that make it less competitive, make it worse
1:24:27
at getting the human that's operating it
1:24:29
what they want, that human is going to select against
1:24:32
those norms. And so it's not going to go
1:24:34
the way that we want, even if we get this kind of lucky,
1:24:36
right, that some of them happen to evolve
1:24:38
these norms, we'd have to do it
1:24:40
intentionally and carefully. We
1:24:42
have an interesting situation. So like,
1:24:44
AIs are really, really bad at
1:24:47
observing norms that they don't actually
1:24:50
get rewarded or punished for observing or
1:24:52
not observing. But they
1:24:54
are also very good at actually
1:24:57
obeying the rules if you
1:25:00
tell them they have to obey the rules. And so this
1:25:02
middle ground that was talked about in that episode gets
1:25:05
completely destroyed, right, you can move a lot
1:25:07
more things into the rules set, right, things
1:25:09
where the human knows the speed limit is
1:25:11
not actually 65. The human knows it's actually 72.
1:25:14
And the human knows in the situation you're supposed to break the traffic
1:25:16
laws because that's silly.
1:25:18
Whereas the AI literally cannot break
1:25:21
the traffic laws, right? It has a restriction on it, like,
1:25:23
nope, never allowed to break the traffic laws, it
1:25:25
will not obey any of those heuristics or norms,
1:25:28
including some norms that actually require
1:25:30
breaking the technical law.
1:25:32
And so the old human solution doesn't work at
1:25:32
all anymore, right? Our old human solution
1:25:36
of norms
1:25:37
breaks entirely. I also really appreciated
1:25:40
when you have AI on the brain, everything is in some way a
1:25:42
metaphor for the problem. And
1:25:45
so this idea of, you know, Marx writes this
1:25:47
huge thing about how capitalism is terrible, we're going
1:25:49
to overthrow it, we're going to create this communist utopia,
1:25:51
and he writes five pages that
1:25:53
are completely vague
1:25:55
about what the communist utopia is going to be.
1:25:57
And, you know, similarly, we have many
1:26:00
other people who do similar things. And
1:26:02
that to me is like, AI is another example of
1:26:04
this, where a lot of people are
1:26:06
saying, we're going to build this amazing
1:26:08
AI system, they're going to have all these capabilities,
1:26:11
and then we're going to have this brave new world where everything
1:26:13
is going to be awesome
1:26:14
for us humans and we're going to live great lives.
1:26:17
And then they spend
1:26:19
one paragraph trying to explain
1:26:22
what are the dynamics of that world? Like,
1:26:25
what are the incentives? Why is this a system in equilibrium?
1:26:28
Why do the humans survive over
1:26:31
the long run,
1:26:32
given the incentives that are inherent in all
1:26:34
the dynamics involved?
1:26:36
And that's assuming we've done a lot of the hard work already,
1:26:38
which I think is going to be very, very hard and which
1:26:40
I'm not confident we will successfully do, but let's say that we
1:26:42
do it.
1:26:43
We can align the system. Well, what exactly did you
1:26:45
do when you aligned the system?
1:26:47
What did you tell the system to do under what circumstances,
1:26:49
what rules is it applying? It can't just be a bunch
1:26:51
of fuzzy human norms that just
1:26:54
sort of come together. Have you gamed out what
1:26:56
happens after that?
1:26:57
When you put all of these new entities into
1:26:59
the system and then let normal dynamics play?
1:27:02
And the answer is that if they have answers
1:27:04
to these questions,
1:27:06
they're very, very basic, simple
1:27:10
models that have no detail to them.
1:27:12
Whenever I try to flesh them out, I can't. I
1:27:14
don't know how.
1:27:17
One of the things I like, I like all
1:27:19
of what we just said, I think that's really interesting.
1:27:24
I've noticed in the last six months, 12 months,
1:27:27
it must be a European Union thing or some US
1:27:29
law that came into effect. There's a lot more
1:27:33
websites asking me if I want cookies.
1:27:36
Now
1:27:36
deep down, I think cookies are kind of creepy
1:27:38
and dangerous and allow some surveillance
1:27:41
that I'm not excited about, but I
1:27:43
just take the cookies. Darn it. I
1:27:45
want the website. I want to get to the answer. I'm
1:27:48
in a hurry. I think that the really good science fiction
1:27:51
Brave New World 2.0
1:27:52
is going to be exploiting
1:27:54
that human desire for ease,
1:27:57
comfort,
1:27:58
productivity, whatever it is, achieving goals
1:27:59
as you mentioned, and we're not going
1:28:02
to worry so much as individuals.
1:28:04
And it's going to be hard to get people to
1:28:07
reign the whole system in. So
1:28:08
I think we're headed there.
1:28:11
I'm not sure the quality of life will be better. I've
1:28:14
been skeptical, if you've heard
1:28:16
it, and certainly Eliezer
1:28:19
and I got a word in about it, and with Mark as well,
1:28:21
but intelligence
1:28:24
by itself doesn't have a great track record, in my
1:28:27
view, in human affairs:
1:28:29
some rather than none,
1:28:31
great.
1:28:32
A lot rather than a good amount,
1:28:35
very mixed record. Now,
1:28:37
maybe this will be different, but I
1:28:41
... the hopeful story in and of itself, forget the existential
1:28:43
risk part, I'm not convinced of. I'm
1:28:45
not convinced that
1:28:47
if we turn to it for parenting, say, or
1:28:52
romantic advice, I'm
1:28:54
not sure that's going to make us feel better as humans. It might make
1:28:56
us feel a lot worse. And
1:28:58
I talked about it a lot with this episode that came out today
1:29:00
with Jacob Howland that some of our
1:29:03
most human skills are going to atrophy.
1:29:05
Maybe it won't matter. But
1:29:07
I do feel like
1:29:10
it's time to put seat belts on folks, we're
1:29:12
going to get into a very bumpy
1:29:14
ride and it's coming
1:29:16
very much in our own lifetimes.
1:29:20
Oh, definitely put your seat belts on. Everything
1:29:22
is going to change. A lot of these norms
1:29:25
and ways of doing things are going to fall away and
1:29:27
things are going to have to adapt to
1:29:29
the new world. I think with atrophying,
1:29:32
different humans are going to have a choice to make
1:29:34
because we're going to have to decide what
1:29:36
are the things we're going to pursue for their own sake
1:29:39
or because we don't want to atrophy them
1:29:41
and what are the things that we do want to pursue.
1:29:43
Another thing that
1:29:45
I think is interesting with intelligence is this idea
1:29:48
of a shift in how you do something.
1:29:50
So like GPT right now
1:29:52
is often doing something in
1:29:54
an imitative way. It's doing something in a less intelligent
1:29:56
way than we are. We're doing something where
1:29:58
we understand the underlying...
1:29:59
structure of the system. And
1:30:02
then we think about it
1:30:03
logically, and then we figure out from very
1:30:05
little data from very little, with
1:30:07
very little memory, with very little compute,
1:30:10
what the answer is. And the GPT just brute forces
1:30:12
this through vibe and association and correlation,
1:30:14
because that's the way that it currently
1:30:16
works. And then over time, it hopes
1:30:19
to approximate that and then as
1:30:21
it gets better, it moves through the system.
1:30:23
And humans often have the same thing where
1:30:26
you start by imitating
1:30:28
the sounds and
1:30:30
physical actions that you see around you and then you slowly
1:30:32
understand the underlying structure. And
1:30:34
then suddenly,
1:30:36
it snaps into place. And now instead of just
1:30:38
trying to vaguely imitate the jazz sounds
1:30:40
you're hearing around you, you can start improvising
1:30:43
and making real jazz and you start to understand the thing and then
1:30:45
it evolves from there.
1:30:46
And then there are many of these leaps we make.
1:30:48
And what's
1:30:51
often going to happen is we have a certain way of doing
1:30:53
things
1:30:54
that is not the most efficient way
1:30:56
of achieving things, but
1:30:58
the more efficient ways
1:31:00
are well beyond... They require a
1:31:03
leap,
1:31:04
a phase shift in how we understand things.
1:31:07
And as the AI gains its capabilities,
1:31:10
it's going to start doing these phase shifts. They call this grokking
1:31:12
in training, as I understand it,
1:31:14
where the loss function says that
1:31:16
you're not doing a very good job.
1:31:18
You're doing a pretty lousy job at figuring out what you're supposed
1:31:20
to do here and
1:31:21
you keep on doing a slightly less lousy job, slightly less lousy
1:31:23
job and then suddenly wham and
1:31:25
the system gets much better at it.
1:31:27
It's called grokking. It's the system coming
1:31:29
to understand, in some sense. It developed
1:31:31
a different way of understanding things.
1:31:33
And humans do this as well.
1:31:35
And so what will often happen is that the AI will shift
1:31:37
from the current
1:31:40
way that a goal is achieved that something is
1:31:42
done to
1:31:43
a new way. And one of the worries is
1:31:45
this kind of shift breaks
1:31:48
our current ways of
1:31:50
keeping the system in check, of understanding the system, of predicting
1:31:52
the system, of making sure the system
1:31:54
is going to do the things that we want.
1:31:56
And this is one of the things that I worry about.
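A toy version of the grokking setup described above can be sketched in a few lines. The published experiments train a small network on modular arithmetic with a held-out set and strong weight decay, and test accuracy sits near chance long after training accuracy saturates, then jumps. The sketch below only illustrates that recipe; it is not the original experiment, and whether this exact tiny model shows the delayed jump depends on hyperparameters, which are assumptions here.

```python
# Toy sketch in the spirit of the grokking experiments: modular addition,
# half the pairs held out, heavy weight decay, long training, and periodic
# logging of train vs. test accuracy. Illustration only; hyperparameters
# are assumptions, not any particular paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

P = 97  # modulus for (a + b) mod P
pairs = [(a, b) for a in range(P) for b in range(P)]
torch.manual_seed(0)
perm = torch.randperm(len(pairs))
split = len(pairs) // 2

def encode(indices):
    # One-hot encode the two operands and compute the target sum mod P.
    a = torch.tensor([pairs[int(i)][0] for i in indices])
    b = torch.tensor([pairs[int(i)][1] for i in indices])
    x = torch.cat([F.one_hot(a, P), F.one_hot(b, P)], dim=1).float()
    return x, (a + b) % P

x_train, y_train = encode(perm[:split])
x_test, y_test = encode(perm[split:])

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if epoch % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(x_train).argmax(1) == y_train).float().mean().item()
            test_acc = (model(x_test).argmax(1) == y_test).float().mean().item()
        print(f"epoch {epoch}: train acc {train_acc:.2f}, test acc {test_acc:.2f}")
```

The thing to look for in the log is the gap: training accuracy reaching 1.00 early while test accuracy lingers low, and then, if the run grokks, a late and fairly abrupt climb in test accuracy.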
1:31:58
Yeah, it's a little like...
1:32:01
Chess or Go programs, they
1:32:03
start very primitively and they get a lot better. Now, life
1:32:06
isn't like Chess or Go, very different board,
1:32:09
even though the Adam Smith chessboard metaphor I use fits
1:32:11
pretty well.
1:32:14
I just can't stop thinking about this idea
1:32:17
that
1:32:20
even if it's not really good for us, we're going to probably do it
1:32:22
anyway because it's pleasant.
1:32:23
Certainly
1:32:25
if it's not good for us as a species, I'm
1:32:27
thinking about the Amish.
1:32:29
A lot of people decrying the
1:32:32
cell phone these days, I worry about it sometimes,
1:32:35
I talk about it on your compulsion,
1:32:37
I have with it sometimes, I view it
1:32:39
as unhealthy for me.
1:32:41
And the Amish,
1:32:43
they go case by case, by the way,
1:32:45
they take a technology, they say, we'll
1:32:47
use a wagon, we're not going to drag stuff
1:32:49
on the ground, we could use a wagon, and
1:32:51
they could have a car if they need to get to the
1:32:53
hospital, say, but they're not going to buy a sports
1:32:56
car and they're not going to do other things,
1:32:58
and they're not going to change their farming in certain ways.
1:33:02
I think those of us on the outside, of
1:33:05
course it's true of a religious life in many, many
1:33:07
ways, on the outside, there's something really
1:33:09
beautiful about that. But the fact
1:33:12
is, most people don't find it beautiful enough
1:33:14
to adopt because it's hard.
1:33:17
There's
1:33:17
hard parts to it. And, almost by definition,
1:33:20
stories that hold together and that
1:33:23
have all kinds of returns
1:33:25
and belonging and meaning and purpose
1:33:28
have hard parts because otherwise
1:33:30
it's not interesting.
1:33:32
It doesn't hold together. And so the idea
1:33:34
that, like I like to say on this show a lot, I
1:33:36
think it's probably not true now that I
1:33:39
think about it, that oh, norms will evolve
1:33:41
to cope with this, and we'll understand
1:33:44
you shouldn't use AI for this, and
1:33:46
it's okay to use it for that. These
1:33:48
norms will come along and they'll keep it within a human
1:33:50
context.
1:33:52
I don't think that's true, probably for most
1:33:54
people. I think they're going to use it a lot because
1:33:57
it makes life easier, it makes
1:33:59
them look better, look smarter, make
1:34:02
more money, and it's going to be really hard. I'm
1:34:05
not sure. Like I said, I don't think
1:34:08
we're going to slow this down much.
1:34:12
The problem is, you know, there's, it's
1:34:14
very difficult to slow this down in a
1:34:16
meaningful way. It's also very difficult to ensure
1:34:18
a
1:34:19
good outcome. And then if you have, you know,
1:34:21
two impossible things and you need to solve one of them,
1:34:23
or you're in a lot of trouble, you need to pick which
1:34:26
one you think is the best one to act on
1:34:28
and do the best you can.
1:34:30
With the Amish, you know, I think they achieve a
1:34:32
lot of very good things. And it's a question of,
1:34:34
you know, is this more valuable in the things that you give
1:34:36
up or not?
1:34:37
But they do this because they have the
1:34:39
economic,
1:34:40
right, affordance to be able to do that,
1:34:42
that
1:34:43
their lifestyle has
1:34:45
these costs, these economic costs,
1:34:47
these economic benefits that allows
1:34:49
them to produce the things they need to survive,
1:34:51
that allows them to turn a profit, that allows them
1:34:54
then to be continuously purchasing more land and
1:34:56
having more people that can survive on that land.
1:35:00
And, you know,
1:35:01
as we are forced to compete,
1:35:04
we have more and more capable AIs and AI
1:35:06
systems that have more and more affordances.
1:35:08
You
1:35:09
know, will that kind of
1:35:11
system actually continue to be
1:35:13
economically competitive? And
1:35:16
if it's true locally, that it can survive
1:35:19
on its own resources, will it be left alone
1:35:21
to do so?
1:35:22
Right? Like from something that like covets those
1:35:24
resources simply from an 'I can achieve more of
1:35:26
my goals if I have more of the land, if
1:35:28
I have more of the energy from the sun
1:35:31
available to me,' or, you know, whatnot.
1:35:34
And so, you know, it's not necessarily safe
1:35:36
to be Amish, even if you can make the
1:35:38
very difficult
1:35:40
emotional decision to stay on the farm
1:35:43
and enjoy this very carefully selected lifestyle
1:35:45
that you think has value.
1:35:49
My guest today has been Sve Moshowitz. His
1:35:53
substack is Don't Worry About the Vase. I
1:35:55
strongly recommend it if you're interested in keeping
1:35:58
up. It's hard to keep up. He
1:36:00
keeps up quite
1:36:01
well, but I'm sure even Sve,
1:36:03
there are a few things he doesn't know. But
1:36:05
he does know a lot of them, more than most. Sve,
1:36:07
thanks for being part of EconTalk. Thanks for having
1:36:09
me.
1:36:16
This is EconTalk, part of the Library of Economics
1:36:19
and Liberty. For more EconTalk, go to econtalk.org,
1:36:22
where you can also comment on today's podcast
1:36:24
and find links and readings related to today's
1:36:27
conversation. The sound engineer
1:36:29
for EconTalk is Rich Goyette. I'm
1:36:31
your host, Russ Roberts. Thanks for listening.
1:36:34
Talk to you on Monday.