Episode Transcript
0:05
Welcome
0:06
to the Making Sense podcast. This
0:09
is Sam Harris. Just
0:11
a note to say that if you're hearing this, you are not currently
0:14
on our subscriber feed and will only be
0:16
hearing the first part of this conversation. In
0:18
order to access full episodes of the Making Sense
0:20
podcast, you'll need to subscribe at SamHarris.org.
0:24
There you'll find our private RSS feed to add
0:26
to your favorite podcatcher, along with other subscriber-only
0:29
content. We don't run ads on the
0:31
podcast, and therefore it's made possible
0:33
entirely through the support of our subscribers. So
0:36
if you enjoy what we're
0:36
doing here, please consider becoming one.
0:46
Welcome to the Making Sense podcast.
0:48
This is Sam Harris. Okay,
0:52
just a reminder that subscribers
0:54
to the podcast can now share full
0:57
episodes by going to the episode
0:59
page on my website
1:01
and getting the link.
1:04
And you can share one-to-one with friends
1:06
and family, or you can
1:08
post to social media, whatever
1:11
you like. Okay,
1:14
today I'm speaking with Mustafa Suleyman.
1:16
Mustafa is the co-founder
1:19
and CEO of Inflection AI
1:21
and a venture partner at Greylock,
1:24
a venture capital firm. Before
1:26
that, he co-founded DeepMind, which
1:28
is one of the world's leading artificial intelligence
1:31
companies, now part of Google.
1:34
And he was vice president of AI product
1:36
management and AI policy at
1:39
Google.
1:40
And he is also the author of a new book,
1:42
The Coming Wave,
1:44
Technology, Power, and the 21st Century's
1:46
Greatest Dilemma, which is the focus
1:49
of today's conversation.
1:51
We talk about the new book. We talk
1:53
about the progress that was
1:55
made in AI by his company,
1:57
DeepMind, various landmarks
2:00
they achieved: Atari DQN,
2:03
AlphaGo, AlphaZero, AlphaFold. We
2:06
discuss the amazing fact that we now
2:08
have technology that can invent
2:11
new knowledge. The risks of our
2:13
making progress in AI,
2:15
super intelligence as a distraction from
2:17
more pressing problems, the inevitable
2:19
spread of general purpose technology,
2:22
the nature of intelligence, productivity
2:24
growth and labor disruption, the
2:27
containment problem,
2:28
the importance of scale, open
2:31
source LLMs, changing
2:33
norms of work and leisure,
2:35
the redistribution of value,
2:37
introducing friction into the deployment of
2:40
AI,
2:41
regulatory capture, the
2:43
looming possibility of a misinformation
2:45
apocalypse, digital watermarks,
2:48
asymmetric threats, conflict
2:51
and cooperation with China, supply
2:53
chain monopolies
2:55
and other topics.
2:57
Anyway, it was great to get Mustafa here. He's
2:59
one of the pioneers in this field.
3:01
And as you'll hear, he shares many of my concerns but
3:04
with different points of emphasis.
3:07
And now I bring you Mustafa Suleyman.
3:15
I am here with Mustafa Suleyman. Mustafa,
3:17
thanks for joining me.
3:19
Great to be with you, Sam. Thanks for having me.
3:21
So you have a new book which the
3:24
world needs because this is the
3:26
problem of our time. The title
3:28
is The Coming Wave, Technology,
3:31
Power and the 21st Century's
3:33
Greatest Dilemma. And
3:35
we will get into the book because
3:37
it's really quite a good read. And
3:41
we will talk about what that coming wave
3:44
is. But
3:45
you're especially concerned about AI
3:47
which is your
3:49
wheelhouse, but also you're talking
3:51
about synthetic biology
3:54
and to a lesser degree robotics
3:56
and some other
3:57
technologies that are gonna
3:59
be more and more...
3:59
present, if things don't run totally
4:02
off the rails for us. But before we jump
4:04
into the book, let's
4:06
talk about your background. How would
4:08
you describe the
4:09
bona fides that have brought you to
4:11
this conversation? Yeah, I
4:14
mean, I started life, I
4:16
guess, as an entrepreneur.
4:18
When I was 18, I started my first company,
4:20
which was a point of sale system
4:23
sales company. And we were sort
4:26
of installing these sort
4:28
of very early PDAs back
4:31
in 2002, 2003, and networking equipment. It wasn't successful,
4:35
but that was my first attempt. I
4:38
dropped out of Oxford at the end of my
4:40
second year, where I was reading philosophy
4:43
to start a charity.
4:45
And I helped two or three other people get a
4:47
telephone counseling service off the ground. It was
4:50
a secular service for
4:52
young British Muslims.
4:54
I had just turned atheist
4:56
at the time, having been to
4:57
Oxford, discovered
4:59
human rights principles and the
5:01
ideas of universal justice,
5:04
and managed to sort of move
5:06
out of the faith,
5:07
and decided
5:09
that I really wanted to dedicate
5:12
my life to doing good and studying
5:15
philosophy, and the theory
5:16
was too esoteric
5:19
and too distant from action. I'm a very
5:21
kind of practical, action-focused
5:23
person.
5:25
So I spent a couple years doing that.
5:28
A little bit after that time, I spent
5:30
a year or so working in local government
5:32
as a human rights policy officer
5:34
for the mayor of London at the time. I think
5:37
I was 21 when I started that job. It
5:39
was very
5:40
big and exciting, but
5:41
ultimately quite unsatisfying and frustrating.
5:45
Who was the mayor? Was that Johnson?
5:47
That was before Johnson, yeah, quite a bit
5:49
before. It was Ken Livingstone
5:51
back in 2004. So quite a while back
5:55
in London.
5:56
And then from there, I wanted
5:58
to see how I could... scale up my impact
6:01
in the world and
6:03
you know, I helped to start
6:06
a conflict resolution firm. I was very
6:08
lucky, at the age of twenty-two,
6:10
to be able to
6:11
co-found this consultancy with
6:13
a group of some of the most
6:16
practiced negotiation
6:18
experts in the world,
6:20
some of the people involved in the peace and reconciliation
6:22
process in South Africa post-apartheid.
6:25
There was a big group of us coming together with very different
6:27
skills and backgrounds, and
6:30
I had an incredible three years there
6:32
working all over the world, and
6:34
as an inside person for the Dutch government,
6:37
you know, on the Israel-Palestine question,
6:40
in many different places.
6:42
And it was hugely inspiring and taught
6:44
me a lot about the world. But I
6:46
sort of fundamentally realized from there that if
6:48
I didn't get back to technology,
6:51
I would miss the most important transition,
6:54
you know, wave if you like, happening in my lifetime.
6:57
And, you know,
6:59
shortly after the climate negotiations
7:01
that we were working on in 2009 in
7:03
Copenhagen,
7:05
everyone left feeling frustrated
7:06
and disappointed that we couldn't manage to reach
7:09
agreement. And this was the year that,
7:11
sort of, Obama was coming over and everyone had a lot
7:13
of hope,
7:15
and it didn't happen, turns out, for another
7:16
ten or twelve years.
7:19
And I sort of had this aha moment.
7:20
I was like, if I don't get back to
7:22
technology, then I'm gonna miss the most important thing
7:25
happening. And so I set about this
7:27
question: I sort of went to find anyone,
7:29
anyone who I knew even
7:31
tangentially, who was working in
7:34
technology.
7:36
And my best friend from
7:37
when we were teenagers, his
7:40
older brother was Demis Hassabis.
7:42
And we were playing poker together
7:44
one night in the Victoria Casino in
7:47
London, and we got
7:49
chatting about the ways that, at the time,
7:51
you know, we framed it as robots were gonna transform
7:53
the world and
7:55
deliver enormous productivity boosts
7:57
and improve
7:59
efficiency in every respect.
8:01
And we were sort of debating like,
8:03
how do you do good in the world? How do you
8:06
get things done? What is the real
8:08
set of incentives and efforts that really makes
8:10
a difference?
8:12
And both very
8:14
passionate about science and technology and having a positive
8:16
impact in the world. And
8:18
one thing led to another and eventually
8:20
we
8:21
ended up starting DeepMind. I
8:23
did that for 10 years.
8:26
Yeah, along with Shane Legg, right?
8:28
Shane is our
8:30
other co-founder, exactly. Shane was at the Gatsby
8:33
Computational Neuroscience Unit in London
8:35
at the time. And
8:38
he had just finished his PhD a few years earlier. He was
8:40
doing postdoctoral research
8:42
and his PhD was on definitions of intelligence,
8:44
which was super interesting. It was very
8:47
obscure and really, really
8:49
relevant. He was sort of trying to synthesize 60
8:51
or
8:52
so different definitions of intelligence
8:55
and tried to sort of abstract
8:57
that into an algorithmic construct, one that we could
9:00
use to
9:00
measure progress towards some
9:03
defined goal. And
9:05
his frame was that intelligence
9:07
is the ability to perform well across
9:09
a wide range of environments.
9:11
So the core emphasis was that
9:14
intelligence was about generality, right?
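For reference, Shane later formalized this intuition with Marcus Hutter as a universal intelligence measure; a sketch of that definition, summing an agent's expected performance over all computable environments, weighted toward simpler ones:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected cumulative reward agent $\pi$ achieves in $\mu$.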
9:15
And we, you know, we can get into
9:17
this. There's lots of different definitions of intelligence,
9:19
which
9:20
place emphasis on different aspects of our capabilities,
9:23
but generality has become the
9:26
core concept that sort of dominated the field
9:28
for the last sort of 12, 15 years. And of course, the
9:32
term AGI, I mean, that predated Shane,
9:35
but I
9:36
think it was very much popularized by
9:38
our
9:39
kind of mission, you know, sort
9:41
of, it was really
9:42
the first time in a long time that a company
9:45
had been
9:45
founded to invent general
9:48
intelligence or AGI.
9:50
And that was our mission to try and build safe and ethical
9:53
artificial general intelligence.
9:56
So I'm trying to remember where we met. I know we
9:58
were both at the Puerto Rico
9:59
conference at the beginning of 2015
10:02
which I
10:03
think was
10:06
the first
10:07
of these meetings, or at least the first that I was aware
10:09
of that really focused the conversation
10:11
on AI safety and risk.
10:13
And I know I met Demis there.
10:16
I think you and I met
10:18
in LA subsequent to that. Is that right?
10:21
Yeah, I think we
10:23
met... I can't remember if we met before or
10:25
after that, but I think we had a common interest
10:28
in our LA conversation. It might
10:30
have been just before that
10:32
talking about extremism and radicalization
10:34
and terrorism and... Oh, within Islam, yeah. That's
10:37
right, yeah.
10:38
Yeah, so I can't... I don't think we met
10:40
in Puerto Rico, but that conference was very
10:43
formative of my... It
10:45
was really my first impression of
10:47
how big a deal this was going to be ultimately.
10:50
And then there was a subsequent conference
10:52
in 2017 at Asilomar, where
10:56
I think we
10:58
met again. I think I met Shane there as
11:00
well.
11:00
So let's... Before we jump into,
11:02
again, the book and
11:04
what you're doing currently, because you've since moved
11:07
on from DeepMind and you have a new company
11:09
that we'll talk about,
11:10
but let's talk about DeepMind
11:13
because it really was...
11:16
You know, it has lately been eclipsed
11:19
in the popular consciousness
11:21
by OpenAI, with the
11:24
advent of ChatGPT
11:26
and large language models.
11:29
But prior to that, really DeepMind
11:32
was the preeminent...
11:34
It may in fact still be the
11:37
preeminent AI company, but
11:39
it's now a branch of Google. Give
11:42
us a little bit of the history there and
11:45
tell us what
11:46
was accomplished because at
11:48
DeepMind, you had several breakthroughs that
11:51
were just fundamental and you
11:54
really put AI back
11:56
on the map and
11:58
prior to what you did there... We were
12:00
in a so-called AI winter, where
12:03
it was just common knowledge that
12:05
this artificial intelligence thing wasn't really panning
12:08
out, and then all of a sudden everything changed.
12:10
So I think pre-acquisition, which
12:13
was in 2014, I think
12:16
there were probably two principal contributions
12:18
that we made. I think the first is
12:21
we made a very early bet on deep
12:23
learning. I mean, the company was founded in 2010, in the summer of 2010.
12:26
And it really wasn't for
12:30
a couple of years that deep learning had even
12:32
appeared on the field, even academically,
12:35
with the ImageNet Challenge a few years after we founded.
12:38
So that was a very significant bet that we
12:40
made early and that we got right. And
12:42
the consequence of that was that we were able
12:44
to
12:45
hire
12:46
some of the best PhDs and postdoctoral
12:48
researchers in the world, who
12:51
at the time were working on this
12:53
very obscure,
12:54
very uninteresting, largely
12:57
not very valuable
12:58
subject. In fact, Jeff
13:00
Hinton was one of our consultants, so
13:03
was his
13:04
student at the time, Ilya Sutskever,
13:06
who's now chief scientist and co-founder
13:08
of OpenAI,
13:10
along with many others from OpenAI and
13:12
elsewhere who basically
13:14
either worked with us full-time
13:16
or worked with us as consultants.
13:19
That was largely reflective of
13:21
the fact that we got the bet right
13:23
early on deep learning.
13:25
The second contribution, I would say, was
13:27
the
13:28
combination of deep learning and reinforcement learning.
13:30
I mean, if deep learning was obscure,
13:33
reinforcement learning was even more theoretical.
13:36
And we were actually quite careful
13:38
to frame our mission among
13:41
academics,
13:42
less around AGI
13:44
and more around applied machine learning. Certainly
13:46
in the very early days, we
13:49
were a bit hush-hush about it. But as
13:52
we got more traction in 2011, 2012, it
13:54
became
13:55
very attractive to people who were otherwise
13:58
quite theoretical in their outlook
14:00
to come work on problems like reinforcement learning
14:02
in a sort of more engineering-focused
14:05
setting, albeit still a research lab.
14:08
And it was the combination of deep learning and reinforcement
14:10
learning that
14:11
led to our
14:13
first, I think, major
14:15
contribution, which was
14:17
Atari DQN.
14:20
And DQN
14:22
was a pretty incredible
14:24
system. I mean, it essentially learned
14:26
to play
14:27
50 or so of the old-school,
14:29
sort of, Atari
14:32
games
14:33
to human-level performance, simply
14:35
from the pixels, learning to correlate
14:37
a set of rewarding moments in
14:39
the game, via score, with
14:42
the set of frames
14:44
that led to that score in the run-up,
14:46
and any actions that were taken there.
14:48
And that was a really significant achievement.
14:50
It was actually that which caught
14:53
Larry Page's attention and
14:55
led him to email us,
14:57
you know, and sort of invite us to
14:59
come and be part of Google.
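To make that learning setup concrete, here is a minimal sketch of the core DQN update in PyTorch. The architecture loosely follows the published DQN, but the shapes, hyperparameters, and random tensors standing in for replayed Atari frames are illustrative assumptions, not DeepMind's actual code:

```python
# Minimal sketch of a DQN update: a convolutional Q-network maps a stack of
# frames to one value per action, trained toward r + gamma * max_a' Q_target.
import torch
import torch.nn as nn

N_ACTIONS = 6  # e.g., the joystick actions of one Atari game (assumption)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),  # 4 stacked frames in
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),  # one Q-value per action
        )

    def forward(self, frames):
        return self.net(frames)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())  # periodically synced copy
optim = torch.optim.RMSprop(q_net.parameters(), lr=2.5e-4)
gamma = 0.99

# One training step on a random mini-batch standing in for replayed experience:
# (state, action, reward, next_state) transitions from a replay buffer.
s = torch.rand(32, 4, 84, 84)         # 84x84 grayscale frames, stacked 4 deep
a = torch.randint(0, N_ACTIONS, (32,))
r = torch.rand(32)                    # reward = change in game score
s_next = torch.rand(32, 4, 84, 84)

with torch.no_grad():
    target = r + gamma * target_net(s_next).max(dim=1).values
pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.smooth_l1_loss(pred, target)
optim.zero_grad(); loss.backward(); optim.step()
```

The reward being nothing but the change in game score is the "correlate rewarding moments with frames" idea in code.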
15:02
And then Google acquired you. And
15:06
what was the logic there? Just
15:08
to have Google's resources to
15:10
scale, or...? I mean, Larry made a very
15:13
simple
15:14
claim, which was, you
15:16
know, I've spent
15:18
the last, you know,
15:20
ten years or so
15:22
building a platform with
15:24
all the resources necessary to
15:26
make a really big bet on AGI.
15:29
Why should you guys go through all
15:31
of that again?
15:32
You know, we'll give you the freedom you need to
15:34
carry on operating as,
15:36
essentially, an independent subsidiary,
15:38
even though we were part of Google. Why
15:40
wouldn't you just come and work with us and
15:43
have all the resources you need
15:45
to scale,
15:46
you know, significantly? Which is what we did. And
15:48
it was a very compelling
15:51
proposition, because at the time, you know, monetizing
15:54
deep learning back in 2014 was
15:56
gonna be really tough. So... But
15:59
Google had its own AI
16:01
division as well that was just kind of working
16:03
in parallel
16:05
with DeepMind. At some point, you guys
16:08
merged. I don't know if that happened
16:11
after you left or before, but was
16:14
there a firewall between the two
16:16
divisions for a time and then that came
16:18
down or how did that work?
16:20
Yeah, so the division you're referring to is
16:22
Google Brain, which is run by Jeff Dean. And I think
16:24
that started in 2015 with
16:28
Andrew Ng actually as well.
16:30
And in some ways,
16:32
that's the kind of beauty of Google scale, right? That
16:34
it was able to
16:35
run multiple huge billion
16:37
dollar efforts in parallel. And
16:40
the merger,
16:41
which I think has been long coming,
16:43
actually only happened this year. So Google Brain
16:46
plus DeepMind is now Google DeepMind.
16:49
And most of the kind of
16:51
open-ended research on AI is now
16:53
consolidated around Google DeepMind and
16:56
all of the sort of more focused applied
16:59
research that helps
17:01
Google products more directly in the short term
17:03
is focused on a separate division,
17:05
Google Research.
17:06
Right. So you had the
17:09
Atari game breakthrough,
17:11
which caught everyone's attention because,
17:14
if memory serves,
17:17
you managed to
17:19
build a system that
17:21
had achieved
17:23
human level
17:25
competence and beyond and also achieved
17:27
novel strategies that
17:30
many humans wouldn't come up with. But
17:32
then the real breakthroughs that
17:35
got everyone's attention
17:36
were with AlphaGo and AlphaZero
17:39
and
17:39
AlphaFold. Perhaps you can run through those
17:42
because that's when, at least
17:44
to my eye, things just became unignorable
17:47
in the AI field.
17:49
Yeah, that's exactly right. I mean, it's
17:52
pretty interesting because sort of after
17:54
we got acquired, it was actually Sergey
17:57
that
17:58
was sort of insisting that we tackle Go.
18:00
I mean, his point was that
18:03
Go is a massively complex space,
18:06
and all the traditional
18:08
methods that
18:09
had previously been used for games before
18:12
DQN, which essentially involved
18:14
handcrafting rule-based features,
18:17
which is really what drove the work behind
18:19
Deep Blue, IBM's model
18:22
a long time ago, in 97, I think it was.
18:25
Go has something like 10 to
18:27
the power of 170 possible
18:30
configurations of the board. So it's a 19 by 19
18:33
board with black and white stones,
18:35
and the rules are very simple. It's a
18:37
turn-based game where each player simply
18:39
places
18:42
one stone on the board,
18:44
and when you surround your opponent's
18:46
stones, you remove them from the board, and the goal
18:48
is to surround your opponent.
18:51
And so it is a very
18:53
simple rule set, but it's a massively
18:55
complicated
18:57
possible set of different configurations
18:59
that can emerge.
19:00
And so you can't search all
19:02
possible branches of that
19:05
space because it's so enormous. 10
19:07
to the 170 is more atoms
19:09
than there are in the known universe, approximately. Yeah.
19:12
I think that's something like 10 to
19:14
the 80 that gets you
19:16
all the protons in the universe. So yeah,
19:18
it gets bigger still when
19:20
you're talking about Go. Right.
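For the arithmetic behind those figures: each of the $19 \times 19 = 361$ points on the board is empty, black, or white, so there are

$$3^{361} \approx 1.7 \times 10^{172}$$

raw arrangements, of which roughly $2 \times 10^{170}$ are legal positions (that count is an external estimate, not from the conversation), against roughly $10^{80}$ protons in the observable universe.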
19:23
So this needed
19:25
a new suite of methods, and
19:28
I think it was an incredible experience
19:30
seeing AlphaGo progressively
19:32
get better and better. I mean, we already had
19:34
an inkling for this when we saw
19:36
it play the Atari games, but this was just
19:39
seismically more
19:40
complicated and vast, and yet it was
19:42
using the same basic principle, actually
19:45
the same principle that has subsequently been applied
19:47
in protein folding too.
19:50
So I think that's
19:52
what's really interesting about this is that it's the generality
19:55
of the ideas that simply scale with
19:57
more compute.
19:58
Because a couple of...
19:59
years later,
20:01
AlphaGo became AlphaZero,
20:03
which essentially achieved
20:06
superhuman performance without any
20:08
learning from prior games.
20:12
Part of the trick with AlphaGo is that it
20:14
looked at hundreds of thousands of prior
20:17
games. It's almost like the expert knowledge of
20:19
existing players that has been handed
20:21
down for centuries of playing the game.
20:24
Whereas AlphaZero
20:25
was able to learn entirely through self-play,
20:28
almost like I think the intuition is spawning
20:31
instances of itself in order to play
20:33
against itself
20:34
in simulated environments, many,
20:36
many hundreds of millions of billions of times. It
20:40
turns out to be way more valuable than bootstrapping
20:43
itself from the first principles
20:45
of human knowledge, which if you think about the
20:47
size of the state space represents a minor
20:50
subset of all possible configurations
20:52
of that board. That was a kind
20:54
of remarkable insight. Indeed, it did
20:56
the same thing for other games, including
20:58
chess and shogi and so on.
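To give a feel for the self-play recipe in miniature, here is a hedged, drastically simplified sketch: tabular value learning for the toy game of Nim, improving purely by playing against itself, with no human game records. AlphaZero itself couples self-play with deep networks and Monte Carlo tree search; nothing below is its actual algorithm.

```python
import random
from collections import defaultdict

# Nim: remove 1-3 stones per turn; whoever takes the last stone wins.
# V[s] estimates the value of position s for the player about to move.
V = defaultdict(float)
V[0] = -1.0                     # no stones left: the player to move has lost
EPS, ALPHA, N_GAMES = 0.2, 0.1, 20000

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def pick(stones):
    if random.random() < EPS:                    # explore
        return random.choice(moves(stones))
    # exploit: leave the opponent in the position that is worst for them
    return min(moves(stones), key=lambda m: V[stones - m])

for _ in range(N_GAMES):
    stones, history = random.randint(4, 21), []
    while stones > 0:
        history.append(stones)
        stones -= pick(stones)
    # Whoever made the last move won; propagate the outcome backwards,
    # flipping sign each ply since the players alternate.
    result = 1.0
    for s in reversed(history):
        V[s] += ALPHA * (result - V[s])
        result = -result

# The learned values recover Nim's known theory: multiples of 4 are losing
# for the player to move (V negative), everything else is winning.
print({s: round(V[s], 2) for s in range(1, 13)})
```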
21:00
Yeah, that's a
21:03
really fascinating development where it's
21:05
now uncoupled from the
21:07
repository of human knowledge. It plays
21:10
itself, and over the course of, I think
21:12
it was just a day of self-play,
21:14
it was better than
21:17
AlphaGo and any other
21:19
system, right?
21:21
Right. That's exactly right. Obviously,
21:23
that's partly a function of compute,
21:25
but
21:25
the basic principle gives an important
21:28
intuition which is that
21:29
because these methods are so general, they
21:31
can be parallelized and scaled up.
21:34
That means that
21:36
we can take advantage of all of the
21:39
traditional assets of computing
21:42
infrastructure rather than relying on
21:44
old-school methods,
21:46
perfect memory, parallelizable compute,
21:49
Moore's
21:49
law, daisy-chaining
21:52
compute together just like we do with GPUs
21:55
these days.
21:56
In some ways, that's
21:58
the key intuition because...
22:00
It means the sort of barrier
22:02
to application of the quality
22:04
of the algorithm is lower because
22:06
it's turbocharged by all these other
22:09
underlying drivers which are also
22:12
improving the power and performance of these models.
22:16
And also AlphaGo,
22:18
when it was playing the world champion
22:20
came up with a move that all
22:23
Go experts thought they immediately
22:25
recognized as a mistake, but
22:27
then when the game played out it turned
22:30
out to be this
22:31
brilliant novel move that no human
22:33
would have made, and it was just
22:36
a piece of discovered Go knowledge.
22:39
Yeah, I mean, I remember sitting
22:41
in the commentary
22:43
room live watching that unfold
22:45
and listening to
22:47
the commentator who was himself a nine
22:50
dan expert
22:52
say that it was a mistake. He was like, Oh, no,
22:54
we've lost. And it took 15 minutes
22:57
for
22:57
him to correct that and sort of come
22:59
back and reflect on it. It was a really
23:02
remarkable moment.
23:03
And actually,
23:05
for me, it was a great inspiration,
23:07
because this is why we
23:10
started the company. I mean, the quest was to
23:13
try to invent new knowledge. I
23:15
mean, our goal here is to try
23:17
to design algorithms
23:19
that can teach us something that we don't know,
23:21
not just reproduce existing
23:23
knowledge and synthesize information in new ways,
23:26
but genuinely discover new strategies
23:29
or
23:29
new molecules or new
23:32
compounds, new ideas, and
23:34
contribute to the
23:35
kind of well of human
23:37
knowledge and capability. And this
23:39
was a kind of first, well, actually, it
23:42
was the second indication because the first instinct
23:44
I got for that was watching the Atari games player
23:46
learn new strategies from scratch. This
23:49
was kind of the second, I think.
23:59
Well, protein folding
24:02
is a long-standing challenge,
24:06
and we actually started working on this as
24:08
a hackathon, which started in my group back
24:12
in 2016.
24:13
And it was really just an experiment
24:15
to see if, you know, some of the AlphaGo
24:17
models could actually make progress
24:19
here. And the basic idea
24:21
is that if you can
24:23
sort of generate
24:24
an example of the way
24:27
a protein folds, this folding
24:29
structure
24:30
might tell you something about
24:33
the value
24:35
of that molecule in practice, what it can do,
24:38
what
24:38
its strengths and weaknesses are, and so on.
24:40
And so, the nice
24:42
thing about it is because it operated in a simulated
24:45
environment, it was quite similar to some of
24:47
the games that we had been
24:48
playing, you know, teaching
24:50
our models to play. And,
24:53
you know, previously the experiments had
24:55
done something like 190,000 proteins, which is about 0.1% of all the proteins
24:57
in
24:58
existence.
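(That fraction squares with the AlphaFold 2 figure he gives next: $\frac{1.9 \times 10^5}{2 \times 10^8} \approx 0.095\% \approx 0.1\%$.)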
25:02
But
25:03
in AlphaFold 2,
25:05
the team actually open sourced something
25:07
like 200 million protein structures all
25:09
in one go, which is sort of all known
25:12
proteins. This is a massive breakthrough
25:14
that took, you know, four or five
25:16
years of work in development. And
25:19
I think just gives an indication
25:22
of the kinds of things that become possible with
25:24
these sorts of methods.
25:25
Yeah, I forget. Someone
25:27
gave a, well, what was reported to be a kind
25:29
of a straightforward
25:30
comparison between what
25:33
AlphaFold did there and
25:35
the academic years of PhD
25:38
theses. And it was something like, you
25:41
know, 200 million PhD theses got
25:43
accomplished in a few years there
25:45
in terms of solving those protein folding
25:47
problems.
25:48
Yeah, I mean, those kinds
25:50
of insights, those kinds of sort
25:53
of compressions are similar to,
25:55
you know, across the board with many technologies.
25:57
Like
25:58
another one that's sort of similar to that is... that
26:00
the amount of labor
26:02
that once produced 50 minutes
26:04
of light in the 18th
26:07
century now produces 50 years
26:09
worth of light. And that just
26:11
gives us a sense for
26:14
how technology has this massive
26:16
compressive effect that is
26:19
hugely leveraging in terms of what we can do.
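For scale, that compression works out to roughly half a million to one:

$$\frac{50 \text{ years}}{50 \text{ minutes}} = \frac{50 \times 365 \times 24 \times 60}{50} \approx 5.3 \times 10^5$$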
26:21
Yeah, there's another crazy analogy
26:23
in your book talking about the
26:25
size of these
26:28
parameters of these new large
26:30
language models, which we'll get to. But the
26:32
comparison was something like executing
26:34
all of these floating point operations. If
26:38
every operation were a drop
26:39
of water, the
26:41
largest large language models
26:43
execute as many calculations
26:46
as would fit into the entire Pacific
26:48
Ocean. So it's just the scale
26:51
is astounding. Right.
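As a rough sanity check of that analogy, a back-of-envelope sketch; the ocean and drop volumes are order-of-magnitude assumptions, and the training-compute figure is a public estimate rather than a number from the book:

```python
# Order-of-magnitude check: drops in the Pacific vs. training-run FLOPs.
pacific_m3 = 7e17            # ~volume of the Pacific Ocean, cubic meters
drop_m3 = 5e-8               # a ~0.05 mL drop, in cubic meters
drops = pacific_m3 / drop_m3
print(f"{drops:.1e} drops")  # ~1.4e25, the same ballpark as the ~1e25 FLOPs
                             # publicly estimated for frontier training runs
```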
26:53
So your book was a bit of a surprise
26:56
for me because you
26:58
are more worried than
27:00
I realized about how all of this can go
27:03
wrong. And I got
27:05
the sense... you and I haven't spoken
27:07
very much, but
27:08
in talking to you and Demis and Shane,
27:10
I
27:11
got the sense that,
27:13
and this is these conversations are now
27:15
several years old, that you were
27:17
more sanguine about our
27:20
solving all of the relevant
27:23
problems, alignment being the
27:25
chief among them, but
27:27
other concerns of bad incentives and arms
27:29
race conditions and
27:31
et cetera.
27:33
You all were putting a fairly
27:35
brave face on
27:37
a problem that was making many
27:40
of us increasingly
27:41
shrill, not to say hysterical.
27:44
And so
27:47
I guess the most hysterical voice of
27:49
the moment is someone like Eliezer Yudkowsky.
27:52
And there was obviously Nick Bostrom and others who were
27:56
issuing fairly grave warnings
27:58
about how it was more likely
28:00
than not that we were going to screw this up and build
28:03
something that we really can't
28:05
control ultimately and that could well
28:07
destroy us. On the way to
28:09
the worst possible outcome,
28:11
there are many bad, very
28:14
likely outcomes like a
28:16
misinformation apocalypse and
28:18
other risks. But
28:21
in your book, you
28:22
don't give the risks
28:25
short shrift. I mean, you
28:27
do seem to suggest that,
28:30
and certainly when you add in the attendant
28:32
risks of synthetic biology
28:34
here, which we'll talk about, you
28:37
are quite worried and yet there's no,
28:40
as you agree with a
28:42
point I made
28:43
early on here, which is that
28:46
as worried as we are, there really is no
28:48
brake to pull. I mean, the incentives are such
28:51
that we're going to build this. And so
28:53
we have to sort of figure out
28:55
how to repair the rocket as it's
28:58
taking off and align it properly as it's
29:00
taking off because there's just no getting
29:02
off this ride at the moment, despite the fact
29:04
that people are calling for a moratorium
29:07
or some people are.
29:08
So I guess before we jump into the book,
29:12
when did you get worried? Were you always
29:14
worried or
29:17
are you among the newly worried people like
29:19
Jeff Hinton. I mean, Geoffrey
29:22
Hinton, who you mentioned, is
29:24
really the godfather of this technology. And
29:27
he just recently resigned
29:30
from Google so that he could express his worries
29:32
in public.
29:33
And he seems to have just become worried
29:36
in the presence of these large
29:38
language models.
29:39
And it's quite inscrutable to me
29:42
that he
29:44
suddenly had this change of heart because in
29:47
my view, the basis for this concern
29:49
was always self-evident. So
29:52
give me the memoir of your
29:54
concerns here.
29:55
Yeah, so this is not a new consideration
29:58
for me. I've been worried
29:59
about this from the very
30:02
first days when we founded the company. And in
30:04
fact,
30:05
our strap line on our business plan
30:08
that
30:08
we took to Silicon Valley in
30:11
2010 was building artificial general intelligence
30:14
safely and ethically for the benefit of everyone. And
30:17
that was something that was critical to me all the
30:19
way through. And when we sold the company, we
30:22
made it a condition of the acquisition that we have
30:24
an ethics and safety board with some
30:26
independent members overseeing technology
30:29
in the public interest that our technologies
30:31
wouldn't be used for military purposes
30:34
like lethal autonomous weapons or surveillance
30:36
by the state. And since
30:39
then at Google, I went
30:41
through lots and lots of different efforts to experiment
30:43
with different kinds of oversight boards
30:45
and charters and external
30:48
scrutiny and independent audits and all
30:50
kinds of things. And so I'd say it's definitely been
30:53
top of mind for me all the way through. I think
30:55
where I diverge from the
30:57
sort of Bostrom camp a bit
30:59
is that I think that
31:01
the language around super intelligence
31:04
has actually been a
31:05
bit of a distraction.
31:07
And I think it was quite obviously a distraction
31:09
from fairly early
31:11
on. I think that the focus
31:13
on
31:14
this sort of intelligence
31:17
explosion, this AI that recursively
31:19
self improves and suddenly takes over everybody
31:21
and turns the world to paperclips.
31:23
I think has consumed way more
31:25
time than the
31:27
idea justifies.
31:29
And actually, I think there's a bunch
31:31
of more near term, very practical
31:34
things that we should be concerned about.
31:36
They shouldn't create shrill alarmism
31:39
or panic, but they are
31:41
real consequences that if we don't take
31:43
them seriously, then they have the potential to
31:45
cause serious harm. And
31:48
if we continue down this path
31:51
of complete openness
31:52
without any sort of checks
31:54
and balances on how this technology arrives
31:57
in the world, then essentially
32:00
it has the potential to cause a great deal of chaos.
32:02
And I'm not talking about
32:04
AIs running out of control and
32:06
robots and so on. I'm really talking
32:08
about
32:09
massively amplifying the spread
32:12
of misinformation
32:13
and more generally reducing the
32:16
barrier to entry to be able to
32:18
exercise power. That
32:20
is fundamentally what this technology is.
32:23
In my book, I have a framing, which
32:25
I think is more helpful around a
32:27
modern Turing test, one that evaluates
32:30
capabilities, like what can an
32:33
AI do. And I think that we should be much
32:35
more focused on
32:36
what it can do
32:38
rather than what it can say. What it
32:40
can say is important and it has huge influence,
32:42
but increasingly it's going to have capabilities.
32:45
And so an artificial capable
32:48
intelligence, an ACI,
32:50
is something that has the potential
32:52
not just to influence and persuade,
32:55
but also to learn to
32:57
use APIs and initiate actions,
33:00
queries, calls in third party
33:02
environments. It'll be able to use
33:05
browsers and parse the pixels
33:07
on the browser to be able to click buttons and
33:09
take actions in those environments.
33:11
It'll be able to call,
33:13
phone up and speak to,
33:15
communicate with other AIs
33:17
and other humans.
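A schematic sketch of that "capable" loop: a model that chooses actions against external tools rather than only emitting text. Everything here is a hypothetical stand-in (the model call, the tool names, the JSON action format), not any real product's API:

```python
import json

def call_model(prompt: str) -> str:
    """Stand-in for a language model that returns its next action as JSON."""
    return json.dumps({"tool": "search", "args": {"query": "flights to Lisbon"}})

# Hypothetical tools the AI can invoke: a search engine and a generic API call.
TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "api_call": lambda url, body: f"POSTed {body!r} to {url}",
}

def act(goal: str, max_steps: int = 3) -> None:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = json.loads(call_model("\n".join(history)))
        observation = TOOLS[decision["tool"]](**decision["args"])  # take the action
        history.append(f"{decision['tool']} -> {observation}")     # observe, repeat
    print("\n".join(history))

act("book the cheapest flight to Lisbon")
```

So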
33:20
these technologies are getting smaller
33:22
and smaller and more and more capable,
33:25
are getting cheaper to build. And so
33:27
if you look out over a 10 to 20 year period,
33:30
I think the story is one of a
33:32
proliferation of power in the conventional
33:34
sense,
33:35
not so much an intelligence explosion, which
33:38
by the way, just for the record,
33:39
I think is an important thing for us to think about.
33:42
And I care very deeply about
33:44
existential risk and AGI safety,
33:47
but I think that the more practical
33:49
risks are not getting enough consideration.
33:52
And that's actually a big part of the book. In
33:54
no way does that make me a pessimist. I mean,
33:56
I'm
33:57
absolutely an optimist.
33:59
I'm
33:59
hopeful and positive about technology. I want to
34:02
build things to make people's lives
34:04
better and to help us create more value in
34:06
the world and reduce suffering. And I think
34:08
that's
34:09
the true upside of these technologies, and we will
34:11
be able to deliver them
34:12
on that upside.
34:13
But no technology comes without risk,
34:16
and we have to
34:17
consciously and proactively attend
34:20
to the downsides.
34:22
Otherwise,
34:24
we haven't really achieved our full
34:26
objective. And that's the purpose of speaking up about
34:28
it. Well,
34:29
before we get into details
34:31
about the downsides,
34:33
let's talk about how this
34:34
might go well.
34:37
I guess before we talk
34:39
about the upside, let's just define the
34:41
terms in the title of your book. The
34:43
title is The Coming Wave. What
34:45
is the coming wave?
34:47
So when you look back over
34:49
the millennia, there have been
34:52
waves of general purpose technologies,
34:55
from fire to the invention of the wheel to
34:57
electricity.
34:59
And each of these waves,
35:01
to the extent that they have been lasting and valuable,
35:03
are general purpose technologies,
35:06
which enable other technologies. And that's
35:08
what makes them a wave. They're enablers of other
35:10
activity. They're general purpose in
35:12
nature.
35:14
And as they get more useful,
35:16
naturally, people experiment
35:18
with them. They iterate, they invent, they
35:20
adapt them, and they
35:23
get cheaper and easier to use. And
35:25
that's how they proliferate.
35:27
So in the history of technologies,
35:29
all technologies that have been useful, that are real
35:32
general purpose technologies, have spread far
35:34
and wide and got cheaper.
35:36
And
35:37
almost universally, that is an incredibly
35:39
good thing. It has transformed
35:42
our world.
35:43
And I think that that's
35:46
an important but very simple
35:48
concept to grasp. Because
35:51
if that is a law of technology,
35:53
if it is a fundamental property
35:55
of the evolution of technology, which
35:58
I'm arguing it is, then
35:59
that has real consequences for the
36:01
next wave, because the next wave
36:04
is a wave of intelligence
36:07
and of life itself. Intelligence
36:11
is the ability to take actions. It
36:14
is the ability to synthesize
36:17
information,
36:18
make predictions,
36:20
and affect the world around you.
36:23
It's almost the definition of power.
36:25
Everything that is in our visual
36:28
sphere, everything in our world, if you look around you
36:30
at this very minute today, has
36:31
been affected
36:33
in a very material way by intelligence.
36:36
It is the thing that has produced all of the value
36:39
and all of the products and all of
36:41
the affected the landscape that you can
36:43
see around you in a huge
36:45
way.
36:46
And so the prospect of being able to
36:48
distill what makes us unique
36:50
as a species into
36:52
an algorithmic
36:54
construct that can benefit from being
36:56
scaled up and parallelized, that can
36:58
benefit from
36:59
perfect memory and compute and consuming
37:02
vast amounts of data, trillions
37:04
of words of data,
37:06
is enormous. That in itself
37:08
is almost like gold.
37:12
It's like being able...it's like alchemy. It's like
37:14
being able to capture the essence of what has
37:16
made us capable
37:18
and add more
37:19
knowledge and
37:21
essentially science and technology
37:23
into the
37:24
human ecosystem. So
37:26
imagine that
37:27
everybody will now in the future,
37:29
in 10 years, 15 years, have access to
37:32
the very best doctor
37:35
in the world, the very best
37:37
educator,
37:39
the very best personal assistant and chief
37:41
of staff. And any one of these roles,
37:44
I think, is going to be very,
37:46
very widely available to billions of people.
37:48
And people often say to me, well, aren't
37:52
the rich going to benefit first? Or is it going to be unfair
37:54
in terms of access? Yes, for a
37:56
period of time.
37:57
That's true.
37:58
But we're actually living...
37:59
in one of the most meritocratic moments
38:02
in the history of our species.
38:04
Every single one of us, no matter how
38:06
wealthy you are, every one of us
38:08
in the Western world,
38:10
really the top 2 billion people on the planet,
38:12
have access to the same smartphone.
38:16
No matter how much you earn, you cannot buy
38:18
a smartphone or a laptop that
38:20
is better than the very richest.
38:22
That's an unbelievably meritocratic moment
38:25
that is worth really meditating on.
38:28
And that is largely a function
38:30
of these exponentials. The
38:33
cost of chips has exponentially
38:35
declined over the last 70 years,
38:38
and that's driven mass proliferation.
38:40
And if intelligence and life are
38:43
subject to those same exponentials,
38:45
which I think they are, over the next two to
38:47
three decades,
38:49
then the primary trend that we have
38:51
to cope with in terms
38:53
of our culture, and our politics and commerce,
38:56
is this idea that intelligence,
38:58
the ability to get stuff done,
39:00
is about to proliferate.
39:02
And that's going to produce a
39:04
Cambrian explosion of productivity.
39:08
Everybody is going to get access to a tool that enables
39:10
them to pursue their agenda, to make us all
39:12
smarter and more productive and more capable.
39:15
So I think it might be one of the most
39:18
productive periods in the
39:20
history of humanity.
39:22
And I think, of course, the challenge
39:24
there is that it may also be one
39:26
of the most unstable over the next 20 years.
39:29
Yeah. So that
39:31
cornucopia
39:33
image immediately
39:35
begets the downside
39:38
concern of
39:39
massive labor disruption, which
39:42
many people doubt in principle. They
39:44
just think that we've learned over the course
39:46
of the last 200 years of
39:49
technological advancement and economic
39:51
thinking that there is no such
39:53
thing as a true
39:56
canceling of a need for human
39:58
labor.
39:59
And so people draw the obvious analogies
40:01
from agriculture and other previous
40:03
periods
40:04
of labor disruption
40:06
and
40:08
conclude that
40:09
this time is no different and while
40:12
there might be a few hiccups,
40:15
what's going to happen here is that all
40:17
of these productivity gains and job
40:19
canceling innovations born
40:22
of AI will
40:24
just open new lanes for
40:27
human creativity and there'll be better
40:29
jobs and, you know, we're
40:32
just as we were happy to get rid of jobs
40:34
in agriculture and coal mines
40:37
and open them up in the service sector,
40:40
we're going to do the same with AI. I remain
40:42
quite skeptical
40:45
that this
40:47
time is the same given
40:49
the nature of the technology. As
40:51
you just said, this is the first moment
40:54
where we are
40:56
envisioning a technology
40:59
which is a true replacement
41:02
for human intelligence.
41:04
If we're talking about general intelligence
41:07
and we're talking about the competence
41:09
that you just described, the ability
41:12
to do things in addition to saying
41:14
things,
41:15
we are talking about the
41:17
cancellation of human
41:20
work,
41:21
at least in principle, and
41:24
strangely, I mean, this
41:26
is not a terrible surprise
41:28
now, but it would have been a surprise probably 20 years
41:30
ago, this
41:31
is coming for
41:33
the higher cognitive, higher status,
41:35
white-collar jobs before it's coming for
41:38
blue-collar jobs. How do you view
41:40
the prospect of labor
41:42
disruption here, and how
41:46
confident are you that everyone
41:48
can be retrained
41:51
with their nearly omniscient
41:53
AI
41:54
assistants and chiefs of staff
41:56
and find something worth
41:59
doing that other people
41:59
will pay them to do.
42:01
Yeah, I mean, I'm with you. I've
42:04
long been skeptical of people who've
42:06
said that this will be just like
42:09
the agricultural revolution, or
42:11
this will be like the horse and cart and
42:14
cars. People will
42:16
have more wealth.
42:18
The productivity will drive wealth
42:20
creation, and then that wealth creation will drive
42:22
demand for new products. And we couldn't possibly imagine
42:25
what people are going to want to consume
42:28
and what people are going to create with this new
42:30
wealth and new time. And that's typically
42:32
how the argument goes. And I've
42:35
never found that compelling. I mean,
42:37
I think
42:38
that if you look at it, it's been quite
42:41
predictable the last decade. I mean,
42:43
these models are deliberately
42:46
trying to
42:47
replace human cognitive abilities.
42:50
In fact, they have been slowly climbing
42:52
the ladder of human cognitive
42:55
abilities for many years. I mean, we started
42:57
with image recognition and
43:00
audio recognition, and then moved
43:02
on to audio generation,
43:05
image generation, and
43:06
then text, understanding,
43:09
text recognition, and then
43:11
now text generation. And
43:13
it was kind of interesting because
43:15
if you think even just two or
43:17
three years ago, people would have said,
43:20
well, AIs will never
43:22
be creative. That's not achievable.
43:25
That creativity will always be
43:27
the preserve of humans, and
43:30
judgment is somehow unique and special to
43:32
what it means to be human. Or like AIs
43:34
will never have empathy. We'll always be able
43:36
to do care work, and emotional
43:38
care is something that's special. You can never
43:40
replace that connection. I mean, both
43:42
of those are now self-evidently not
43:44
true, and I think have been quite predictable.
43:47
So I think that
43:49
the honest way to look at this is that these
43:51
are only temporarily augmenting human
43:53
intelligence. If you think about the trajectory
43:56
over 30 years, I mean, let's not quibble over whether
43:58
it's five years, 10 years, or 15 years; just
44:00
think about it long-term. I think we can all agree long-term.
44:03
If these exponential trajectories
44:05
continue, then they're
44:08
clearly only temporarily going
44:10
to turbocharge an existing human.
44:13
And so we have to really think, okay, long-term,
44:15
what does it mean to have systems that
44:18
are this powerful, this cheap,
44:20
this widely proliferated?
44:22
And that's where I think the broad concept
44:25
I have in the book of containment comes in, because you
44:27
can start to get an intuition for
44:29
the massive consequences of
44:31
the spread of this kind of power,
44:33
and then start to think about what are the sorts
44:36
of things we would want to do about it. Because on the face
44:38
of it, like you said earlier, the incentives
44:40
are absolutely overwhelming. Technology
44:43
has always been a machine
44:45
of statecraft. It's been used
44:48
by militaries and used by nation states to
44:50
serve citizens and drive
44:53
us forward. And now
44:54
it is the fundamental driving force of
44:56
nation states, being commercially
44:58
competitive, having the best companies, having
45:01
the best labor market,
45:02
that drives our competitive edge. So
45:05
from a state perspective, a nation state
45:07
perspective, from an individual
45:09
scientific perspective, the huge
45:12
drive to explore and invent and discover,
45:14
and of course from a commercial
45:16
perspective, the profit
45:18
incentive is phenomenal. And all of these
45:20
are good things, provided they can be
45:23
well managed and provided we can mitigate
45:25
the downsides. And I think we have to be focused
45:28
on those downsides and not be afraid to talk
45:30
about them. So I definitely
45:34
experience when I bring up these topics over
45:36
the years, this kind of, what
45:39
I describe in the book as a pessimism aversion.
45:41
There's people who are just sort
45:44
of constitutionally unable to
45:46
have a dark conversation about how things
45:48
may go wrong. And I'll get accused
45:51
of not being an optimist or something
45:53
as though that's like a sin
45:55
or something, or that being a pessimist
45:57
or an optimist is somehow...
45:59
a good way of framing things. To me, both
46:02
are biased. I'm just observing the facts
46:04
as I
46:06
see them. And I think that's an important
46:08
misconception and unhelpful
46:10
framing of pessimism and optimism, because
46:13
we have to start with our best assessment
46:15
of the facts and try to reject those facts
46:17
if they're inaccurate in some way,
46:19
and then try to collectively predict
46:21
what the consequences are going to be like. And
46:24
I think it's another trend over the last decade
46:26
or so. Post-financial crisis,
46:29
I feel like people,
46:30
public intellectuals and elites in general
46:32
and everyone in general, has sort of just
46:34
got a bit allergic to predictions. We've
46:37
got a bit scared of being wrong. And I think
46:39
that that's another thing that we've got to shed.
46:41
So we've got to focus on trying to make some of these
46:44
predictions. They may be wrong. I may have
46:46
got this completely wrong, but it's
46:48
important to lay out a
46:50
case for what might happen and
46:53
start taking steps towards mitigation
46:55
and adaptation. Well,
46:57
you invoke the concept of containment,
47:00
which does a lot of work in the book, and you have
47:02
this phrase, the containment problem
47:05
that you use throughout. What
47:07
is the containment problem?
47:09
In its most basic form, the idea
47:12
of containment is that we
47:14
should be able to demonstrate to ourselves that
47:16
technologies that we invent
47:19
should always be
47:20
accountable to humans
47:23
and within our control. So it's the
47:25
ability to close down or
47:27
constrain or limit a new technology
47:30
at any stage of its development or
47:32
deployment.
47:34
And that's a grand claim.
47:36
But actually put in the most simple terms,
47:39
it basically says we shouldn't
47:41
allow technologies to run out of our control.
47:43
If we can't say what
47:46
destiny we want for how a technology
47:48
impacts our species, then we're at the
47:50
mercy of it. And I think the
47:54
idea is if we don't
47:56
have mechanisms to shape that and restrict
47:58
its capabilities... then it potentially
48:02
leads us into some quite catastrophic
48:04
outcomes over a 30-year period. Do
48:08
you think
48:09
we've lost the moment
48:12
already? I mean, it seems like the digital
48:14
genie is more or less out of the bottle.
48:17
I mean, this is something that, if anything
48:19
surprised me, and I know
48:22
certainly surprised the people who were more
48:24
focused on AI safety, and again,
48:26
people like Yudkowsky, in
48:28
recent developments around these LLMs, was
48:31
that we missed a moment that many
48:34
of us, more or less expected, were more
48:36
or less sure was coming, which was there'd
48:39
be a breakthrough at some company
48:41
like DeepMind where
48:43
the people building the technology
48:46
would recognize that they had finally gotten
48:48
into the end zone or close
48:51
enough to it so that they're now in the presence of something
48:53
that's fundamentally different
48:55
than anything that's come before, and
48:58
there'd be this question, okay, is
49:00
this safe to work
49:02
with? Is this safe to release
49:04
into the wild? Is this safe to create
49:07
an API for? So
49:10
the idea was that you'd have this digital
49:13
oracle in a box that
49:16
would already have
49:18
been air gapped from the internet and incapable
49:21
of doing anything until we let it out, and
49:23
then the question would be, have we done enough
49:25
safety testing to let it out? But now
49:28
it's pretty clear that everything
49:30
is already more or less out, and we're building
49:32
our most powerful models
49:35
already in the wild, right?
49:38
And they're already hooked up to things, and they
49:40
already have millions of people playing with them,
49:42
and they're open source versions of the
49:44
next best model. And so is containment
49:48
even a dream at this point?
49:50
So it's definitely not too late.
49:52
We're a long, long way away. This is
49:54
really just the beginning. We have
49:57
plenty of time to address.
49:59
this. And the more
50:02
that these models and these ideas
50:04
happen in the open,
50:06
the more they can be scrutinized and
50:08
they can be pressure tested and held accountable.
50:10
So I think it's great that they're happening in
50:12
open source at the moment. So,
50:15
you like Sam Altman. This is what
50:17
Sam has always said, that the
50:20
philosophy behind OpenAI is do this stuff
50:23
out in the open, let people play
50:25
with it, and we will learn a lot as
50:28
we get closer and closer to building
50:30
something that we have to worry about.
50:32
I think that we have to
50:34
be humble about the practical
50:36
reality about how these things emerge. So
50:39
the initial framing
50:42
that it was going to be possible to invent this
50:44
Oracle AI that stays in a box, and
50:46
we'll just probe it and poke it and test it
50:48
until we can prove that it's going to
50:50
be safe, and that we'll
50:52
stay in the bunker and keep it hidden from everybody.
50:55
I mean, this is a complete nonsense and it's attached
50:57
to the super intelligence framing. It
50:59
was just a completely wrong metaphor
51:02
that totally ignores the history of all technologies.
51:04
And actually, this is one of the core motivations for me in
51:06
the book is that I had time during the pandemic
51:09
to really sleep and reflect
51:11
and really deeply think, okay, what is
51:13
actually happening here on a multi-century
51:16
scale? And what are the patterns of history
51:19
around how inventions end
51:21
up proliferating? And it's
51:24
really stating the obvious, it's almost like ridiculously
51:26
simplistic, but it needed to be said that actually,
51:30
as soon as an idea is
51:32
invented,
51:33
millions of other people
51:35
have approximately the same idea
51:37
within just weeks, months, years,
51:39
especially in our modern digitized
51:42
world. And so we should
51:44
expect, and as we do see, the open
51:46
source movement to be right
51:48
hot on the heels of the
51:50
absolute frontier. And so,
51:52
I mean, just one small example
51:54
of that to give an intuition,
51:56
GPT-3, it was
51:58
launched in the summer of 2020.
51:59
So three years ago, 175 billion
52:02
parameters, and is now regularly
52:05
being trained at 2 billion parameters.
52:08
And so that is a massive
52:10
reduction in serving cost, you
52:12
know, that now means that people
52:15
can have open source versions
52:17
of GPT-3 that have broadly
52:19
the same capabilities, right, but
52:21
are actually extremely cheap to serve and
52:24
indeed to train.
52:25
So if that trajectory
52:28
continues,
52:29
then we should expect that what is
52:31
cutting edge today, frontier models
52:33
like ours at Inflection and like
52:36
GPT-4, GPT-3.5
52:39
even, will be open source in the next
52:41
two to three years. And so what does it
52:43
mean that those capabilities are available
52:46
to everybody, right? And I think that is a great
52:48
thing for where we are today. But if
52:50
the trajectory of exponentially
52:52
increasing compute and size of models
52:55
continues for another three,
52:57
four, five generations, which we all
52:59
expect it to,
53:00
then that's a different question. We have
53:03
to step back and honestly ask ourselves, what does it mean
53:05
that this kind of power is going to proliferate
53:07
in open source, number one?
53:09
And number two, how do we hold accountable
53:11
those who are developing these mega models, even
53:14
if they are centralized and closed, myself
53:16
included, OpenAI, DeepMind, et cetera?
53:18
And if you just look at the amount of compute, it's predictable
53:22
and breathtaking. And I think people forget
53:24
how predictable this is.
53:26
So going back to Atari DQN,
53:28
we developed that model in 2013 and it used two
53:30
petaflops of
53:35
computation, right? So
53:38
a petaflop
53:39
is a billion million
53:41
operations, right? So imagine
53:43
a billion people,
53:44
each holding one million calculators
53:47
and doing a complex calculation
53:50
all
53:50
at the same time pressing equals, right?
53:52
So that would be one petaflop, and Atari
53:55
used two petaflops over several
53:57
weeks of computation.
53:59
A decade later, the cutting
54:01
edge models that we develop at Inflection for
54:03
Pi, our AI, use
54:07
five billion times the
54:09
compute that was used to play Atari DQN.
54:11
So 10 billion, billion, million.
54:14
It's just like-
54:17
Now you're sounding like Ali G.
54:19
Exactly. That's
54:23
basically 10 orders of magnitude
54:25
more compute in a decade. So
54:27
one order of magnitude every year. So 10X
54:30
every year for 10 years,
54:32
which is way more than
54:34
Moore's Law. Everyone's familiar with Moore's Law, 70
54:37
years of doubling, doubling every 18 months
54:39
or whatever. I mean, that is minuscule
54:42
by
54:42
comparison now.
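As a back-of-envelope check of those growth rates (the inputs are just the figures quoted in this conversation, so this is arithmetic, not new data):

```python
import math

ratio = 5e9                    # Pi's compute vs. Atari DQN's, as quoted
years = 10                     # 2013 to 2023
print(math.log10(ratio))       # ~9.7, i.e. roughly 10 orders of magnitude
print(ratio ** (1 / years))    # ~9.4x per year, roughly 10x
# Moore's law over the same decade: doubling every ~18 months
print(2 ** (years * 12 / 18))  # ~100x per decade, minuscule by comparison
```

If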
54:45
you'd like to continue listening to this conversation, you'll
54:48
need to subscribe at SamHarris.org. Once
54:51
you do,
54:51
you'll get access to all full-length episodes of
54:53
The Making Sense podcast, along with other
54:55
subscriber-only content, including
54:57
bonus episodes and AMAs, and
55:00
the conversations I've been having on the Waking Up app.
55:03
The Making Sense podcast is ad free and
55:05
relies entirely on listener support, and
55:07
you can subscribe now at SamHarris.org.