Episode Transcript
1:57
In
2:02
his new book, The Coming Wave, Mustafa
2:05
describes what the AI future
2:07
might look like. In our conversation,
2:10
he told us how we'll all be interacting
2:13
with and living with AIs in the future. And
2:15
he gave an assessment on how worried we
2:18
need to be about the potential harms. Mustafa
2:21
also told us why he thinks that people
2:24
and governments aren't yet ready
2:26
for the sweeping changes that the coming
2:28
technologies could deliver. This
2:31
is Babbage from The Economist. I'm Alok Jha.
2:34
Today, Mustafa Suleyman on
2:36
what our future relationship with artificial
2:38
intelligence will look like and how
2:40
we can all prepare for it. Just
2:45
before we start, we should mention that Mustafa is
2:47
a non-executive director on the board of
2:49
The Economist Group, which is our parent company. But
2:51
this interview is editorially independent.
2:53
Mustafa Suleyman, thank you so much for
2:56
joining us. Alok, thanks for having me. It's
2:58
great to be here. Well, let's start from the beginning. You're
3:00
not an engineer or a computer scientist. And
3:02
in your book, you talk about your life before tech. Describe
3:05
that life for me. What were you doing? And then let's get
3:07
to the point where you discovered technology. Yeah,
3:09
I've got quite an unusual background in
3:11
the sense that you're right. I started off doing
3:13
philosophy and theology when I was at Oxford. I
3:16
ended up dropping out of my degree after a
3:18
couple of years and I went back
3:20
to London to help to start a telephone
3:23
counselling service, a charity.
3:24
And that was a great transition
3:26
for me at the age of 18. And from then
3:29
on, I really just wanted to see if I
3:31
could have the most positive impact in the world.
3:34
So I ended up co-founding a
3:36
conflict resolution firm. And I soon
3:38
sort of realised I actually wanted to scale
3:40
up that impact and doing it locally in
3:43
small groups was
3:45
not something that was going to change the world
3:48
fast enough. And I realised how important
3:50
technology was at that moment. I could see
3:52
that Facebook was exploding, having grown
3:54
to 100 million monthly active users in two
3:57
years, between 2007 and 2009, and
4:00
I wanted to get involved in technology. I did everything
4:03
I possibly could to just meet people,
4:05
talk to people, anyone who would teach me about
4:07
tech.
4:08
Was it the fact that these sorts of platforms
4:11
allowed, as you said, scale, allows
4:13
you to do things for many, many people? Was
4:15
that the thing that turned your head? And what was it about
4:17
that that attracted you? Yeah, it's
4:20
a good question. I could see that
4:22
the structure
4:24
of the page itself, just
4:26
the colours, the information
4:29
hierarchy, the layout, the incentive
4:31
structure of Facebook, was
4:34
creating new relationships in the
4:36
real world. It was driving very
4:39
specific behaviours that were going to change
4:41
ultimately what people think, what
4:44
people feel, how they entertain themselves,
4:46
how they learn. It was
4:48
obvious to me that it wasn't just a neutral platform
4:51
that was facilitating an underlying
4:54
existing social cultural state.
4:57
It was massively changing it and
4:59
adapting it in the process. We
5:01
sort of design technology tools. They
5:04
shape our values, our behaviours and practices.
5:07
Those tools then end up shaping us. And that closed
5:10
loop cycle is,
5:12
I think, now self-evident to people, but
5:14
it was observing that in
5:17
Facebook and seeing that I was essentially
5:19
trying to do that in the real
5:21
world with conflict resolution or
5:23
a new way of designing an intervention in a complex
5:26
social problem. And I
5:28
could see that technology was about to massively
5:31
accelerate that feedback loop.
5:32
But then how did you move from that to artificial
5:35
intelligence? What was the point you realised that
5:37
machine learning type models would be important?
5:40
What was the point where you thought, you know, this is something
5:42
we can build into some amazing
5:44
stuff?
5:45
Well, it wasn't until sort of
5:47
approaching the end of the first year of the company,
5:49
DeepMind, the summer of 2011. So
5:52
we started in 2010. By 2011,
5:56
it was becoming clear that there were some methods
5:58
in deep learning that were starting to work.
7:53
of
8:00
over a decade of fairly predictable
8:03
progress in the field of deep learning.
8:06
The way to look at it is that there's sort of been two
8:09
phases. The first phase,
8:11
which went from 2013 or so to 2020 or so, was
8:17
the phase of classification. And
8:19
that's where the deep learning models were learning
8:21
to recognize pixels,
8:24
understand the content of images, understand
8:27
the content of audio well enough to transcribe
8:29
the sound into written text.
8:33
And naturally having established
8:36
what things look like and understanding the perceptual
8:38
inputs of large datasets,
8:41
the next phase, which began in the
8:44
last sort of two or three years or so, is this
8:46
generative AI phase. Once
8:49
the model has learned to understand,
8:52
say visual input or text
8:54
input or audio input, it
8:57
knows something about the concepts
8:59
that are in that data stream well
9:01
enough that it can actually generate a
9:04
new example of the content that it's
9:06
been fed. So if you say to it, produce
9:09
a crocodile with wings that
9:11
is pink, right, it is essentially
9:14
interpolating between those three concepts
9:16
that it already holds and generating
9:18
a new example of the thing that you've asked it to
9:21
produce.
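A toy way to picture that interpolation idea, with made-up low-dimensional vectors standing in for the concept representations a real model would learn (the `concepts` dictionary below is purely hypothetical):

```python
import numpy as np

# Hypothetical concept vectors; a real model learns high-dimensional
# representations like these from its training data.
rng = np.random.default_rng(0)
concepts = {
    "crocodile": rng.normal(size=64),
    "wings": rng.normal(size=64),
    "pink": rng.normal(size=64),
}

# "Interpolating" between concepts: a weighted blend of their vectors.
weights = {"crocodile": 0.5, "wings": 0.3, "pink": 0.2}
blend = sum(w * concepts[name] for name, w in weights.items())

# A generative model would decode a point like `blend` back into an image
# or a sentence; here we only show that the blended point exists.
print(blend.shape)  # (64,)
```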
9:21
So for you, it wasn't a surprise because it
9:24
was predictable in terms of how computing
9:27
power was improving, how much data was improving and
9:29
so on. And I just wonder, large
9:32
language models right now, the clue is in the name
9:34
large, are they gonna continue to grow
9:36
at the same pace? Because they're gonna need more
9:38
and more power to operate, they're gonna
9:40
need more data. I just wonder what your thought is on how much
9:42
bigger these language models can get.
9:45
Yeah, I think that's exactly the right question.
9:47
And the reason why I think it's been somewhat
9:50
predictable is because you can look back
9:52
over the last decade and quantify the amount
9:54
of computation used to train the
9:57
largest models, the most cutting edge models
9:59
in each year. And you know
10:02
since 2013 that
10:04
amount of compute has 10x'd
10:07
every single year for the last 10 years.
10:09
That's pretty steady right? Yeah
10:12
I mean, it's remarkable. You know, 10 orders
10:14
of magnitude is a growth rate that we
10:16
just don't see in any other area of life. It's remarkable.
10:19
it's remarkable. Now that the
10:21
absolute number has got so large
10:23
you know you mentioned power
10:25
for example you mentioned data
10:27
and training tokens. I'm
10:30
not sure that it will 10x every year
10:32
for the next five or ten years but it's certainly
10:34
going to 10x another three
10:37
or four times. It's really very
10:39
predictable. I mean I think even
10:41
at my own company Inflection we will 100x
10:44
the compute investment of the
10:46
current frontier models in the world over
10:49
the next 18 months so two
10:51
more orders of magnitude in size.
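The growth figures quoted here compound in a straightforward way; a quick back-of-the-envelope check of the arithmetic, using only the numbers mentioned in the conversation:

```python
import math

# 10x per year for 10 years compounds to ten orders of magnitude.
growth_per_year = 10
years = 10
print(growth_per_year ** years)   # 10000000000, i.e. 1e10

# A further 100x is two more orders of magnitude; spread over
# 18 months, that is roughly 10x every nine months.
further = 100
print(math.log10(further))        # 2.0
print(further ** (9 / 18))        # 10.0
```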
10:53
And as these models get bigger they start
10:55
to have really interesting emergent properties. I
10:58
mean people are probably familiar and have played with
11:01
chat GPT and other language models that
11:03
can talk to them, can respond
11:06
to queries in a very human-like way. This
11:08
is sort of almost like a suspiciously
11:10
human-like way. Paint me a picture of what
11:13
these kinds of AI's can do well right now and
11:16
what they can't do well. So then we can talk about what
11:19
the sort of future looks like.
11:21
One very crude intuition
11:23
is that the model is learning an all-to-all
11:26
connection between all of the
11:28
input data that it has been given.
11:31
And so at the scale
11:33
of trillions of words of
11:35
training data size
11:38
really does matter because if you have more
11:40
computation then you can sort
11:42
of basically have more parameters
11:45
which represent the relationship between
11:47
all of the different tokens or words
11:50
that go into the training set.
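A crude way to picture that all-to-all intuition: one score for every pair of tokens. The toy similarity matrix below uses random vectors purely for illustration; real models learn these pairwise relationships through their parameters, and this sketch is not the architecture of any particular model:

```python
import numpy as np

# Toy token vectors (random, low-dimensional, purely illustrative).
rng = np.random.default_rng(1)
tokens = ["the", "cat", "sat", "on", "the", "mat"]
embeddings = rng.normal(size=(len(tokens), 8))

# An all-to-all relationship: one score for every pair of tokens.
# Real models learn these pairwise relationships via their parameters;
# here the scores are just dot products between random vectors.
scores = embeddings @ embeddings.T
print(scores.shape)  # (6, 6): every token scored against every other token
```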
11:52
And so what we see with scale
11:56
is
11:57
increased accuracy and more
11:59
controllability. And for
12:01
some people that was counterintuitive because
12:04
the primary criticism three
12:06
years ago when GPT-3 first came out was
12:09
that these models were biased and toxic and they
12:11
would always produce inaccurate
12:13
or even offensive outputs. That's
12:16
turned out not to be true. The larger models
12:19
are in fact more accurate, they have fewer
12:21
hallucinations and they're easier
12:23
to control. So our AI
12:25
Pi, which stands for personal intelligence,
12:28
is incredibly subtle and
12:30
nuanced and precise and it has very deliberate
12:34
behaviours, it has very strict guardrails,
12:37
it's highly conversational. And
12:39
I think what that indicates to me is
12:41
that the good news is that
12:43
as we scale up the size of these training
12:46
runs, we'll be able to produce much
12:48
more precise and therefore useful
12:50
AI models. Are those positives
12:52
you just mentioned are they a result of scale,
12:56
training data and compute
12:57
or is it also because in
12:59
your case with Pi, you've
13:02
put guardrails in, you've decided what it can and can't
13:04
do. These kinds of things are going to be important
13:06
to shape how these things interact with
13:08
the people in the world. How much is it about the fact
13:11
that these models just get bigger and they can
13:13
perform better and how much is it the human input
13:15
as well?
13:16
Yeah, you're definitely right. The human input
13:18
is essential. These models will reflect
13:21
the very best of us and the very worst of us.
13:23
They essentially amplify the
13:25
values that we put into them and so
13:28
that's where the big responsibility and
13:30
opportunity comes to try to really
13:33
shape models that respect the
13:35
values that we care about, whether it's your business'
13:37
values or whether they're your society's
13:40
values. We will have a very different type
13:42
of AI to what
13:44
we have in China, for example. We hope
13:46
that it won't be used in the way that it might
13:48
be used over there. The values
13:51
really matter. What I would like to emphasize
13:53
is that the capabilities don't
13:55
emerge. We sculpt the capabilities.
13:58
We reveal capabilities, but we can
14:01
choose to reveal some capabilities
14:03
and not others. We're very,
14:05
very deliberate and careful about shaping certain
14:08
capabilities and suppressing others.
14:10
OK then, prediction time. What
14:12
does the future look like for
14:14
people using AIs in five
14:16
years? What kind of day
14:18
or lifestyle incorporates those sorts
14:21
of models that you're talking about?
14:23
Over the next five years, I think that there are going
14:25
to be many, many different AIs. Everyone
14:28
is going to have their own AI and
14:31
some of those will be more enterprising and some
14:33
of those will be more personal. I'm making
14:36
a big bet that every individual
14:38
consumer and every person, every citizen
14:40
will also want their own AI.
14:44
What that will essentially do, your personal intelligence
14:47
or your personal AI, is it will come
14:49
to know you like a personal assistant.
14:52
It will help you to organize and prioritize
14:55
your day, it will help you to plan
14:57
things, book things, arrange
14:59
things. Think of it as having the
15:02
ultimate personal tutor in your pocket.
15:04
It's going to know exactly what your style
15:07
is and how you like to consume information.
15:10
It'll be able to prioritize what
15:12
you learn, when you learn it. If
15:15
I were a researcher or a scientist,
15:17
I'd want a tool like this to help distill
15:20
all the latest academic
15:22
literature. What new papers have come out? What should
15:24
I be paying attention to? I'd want to
15:26
say, here's a new idea that I've been thinking
15:28
about. Can you go away and do some research or
15:30
can you think of any similar concepts
15:33
like this? Is this novel or is there any prior
15:35
work in this area? All those kinds
15:38
of very practical questions, I mean that's just
15:40
in the case of a research scientist, but for every
15:42
single other work domain
15:45
or even individual hobby and pursuit, you're
15:48
now going to have a really smart,
15:50
capable research
15:51
and planning assistant at
15:54
your side. Just to be clear, that's an
15:56
app on your phone or your computer you talk
15:58
to or interact with
15:59
in some way. The way I think about an AI
16:02
is that it's much more than just
16:04
an app. An AI is a relationship.
16:07
It's a friend. An AI will
16:09
be wherever you are. So you
16:11
can actually talk to our AI, Pi, today
16:14
on WhatsApp, Telegram, Discord,
16:17
Facebook, Instagram Messenger, SMS.
16:20
You can actually phone Pi. You
16:22
can have a fluent conversation
16:24
with Pi on the phone when you're walking
16:26
home or when you're driving, when you're taking a walk
16:29
in the park, when you're doing the dishes. And
16:31
in time, it can't do this yet, but you'll
16:34
be able to tell it to remind you of things,
16:36
you know, order the groceries. You'll
16:38
say, I've just come out of the shower. I've got this great
16:40
idea. I'm thinking to do X. It's
16:43
really going to be ever present and alongside
16:45
you, living with you, basically on
16:48
your team. I like to think of it as like having
16:50
a great coach in your corner. And
16:52
what about 20 years from now?
16:55
20 years from now, I think is much harder to
16:57
predict. I think that
17:00
we'll start to see the consequences
17:03
of having made everybody much,
17:05
much more productive. Right. So I
17:07
think that a lot of the sort
17:09
of drudge-like cognitive
17:11
manual labor is going to
17:14
get extracted from the
17:16
production chain. Right. So let's set aside
17:19
actual manual labor because that
17:21
involves robotics and it's a whole different trajectory.
17:24
But cognitive manual labor, I think
17:26
of as the day-to-day tasks of
17:29
administration, back office
17:31
work, accounting, finance,
17:34
payroll, you know, databases,
17:36
supply chain management, project management,
17:39
most of those things I think are going to get
17:42
much, much more efficient with these tools.
17:44
And so the question is, with this new
17:47
time and space, what
17:50
kind of creative endeavors and entrepreneurial
17:52
endeavors are people going to get up to?
17:55
And I think that overall
17:57
is set to make us much, much more productive.
18:00
as a species, because we're going to have time
18:02
to invent and create and
18:05
essentially produce new value in the world.
18:07
Now
18:07
you've been explaining to me how transformative
18:10
AI is going to be in the future. It's a very sort of enticing
18:13
vision, and it makes sense as well. But
18:15
of course you yourself have been front and centre
18:17
in this revolution, and as you've described
18:19
with Inflection, you've got a company
18:22
which is building some of the largest language
18:24
models. So forgive my journalistic
18:26
cynicism. You would say all this, wouldn't you? I mean,
18:28
it makes sense for you to say all this. Tell listeners
18:31
why your vision goes beyond
18:33
just sort of your own interest in this.
18:35
Well, I mean, I guess
18:39
kind of like what I just said, really. I mean, I'm
18:41
describing what I see, and I'm
18:45
also betting that my prediction
18:47
about the way that the invention
18:50
ecosystem will unfold is
18:52
also just going to be how it happens. And therefore
18:54
I'm sort of placing two big bets. I've
18:57
written a book laying down
19:00
my ideas in the most transparent way
19:02
as best as I can, and that is an exercise
19:04
in, I think, accountability. We
19:07
can see in five or ten years whether I
19:09
was completely wrong. And likewise with
19:11
the company, we'll be able to see if I'm completely
19:13
wrong. If I've over bet
19:15
on this and it turns
19:16
out not to be possible for some reason in the next
19:18
two or three years, then you'll see a
19:21
burning hole in the ground of
19:23
very seismic scale. So, you
19:26
know, I will see. OK,
19:29
well, let's get into the nitty gritty of what's
19:31
going to make this happen or not. There are big
19:33
hurdles to growing
19:35
these language models even bigger. You
19:37
talked about how there could be a few more
19:39
orders of magnitude in terms of how big they can
19:41
get in the next few years and before
19:44
we start to hit limits. For example, GPT-4,
19:46
which is the latest model from OpenAI,
19:49
seems to have scraped pretty much all the data available
19:51
on the Internet. If you want to train an even bigger
19:53
model, if you want to do it at Inflection
19:56
or Google wants to do it or Facebook, whoever else,
19:58
where is this extra data going to come
19:59
from? A lot of
20:02
people have been concerned about that. I'm not entirely
20:04
sure it's going to be a constraint. So it's
20:06
an open research question, but there are a number
20:09
of possible paths. I
20:11
think the first thing to say is
20:13
that it may be possible to generate synthetic
20:16
data that is of high enough quality.
20:18
So that means have an AI that
20:20
is good enough to write an entire book accurately
20:23
that you can then feed that back into the training pipeline.
20:26
There are some issues with that direction, but
20:28
it may be possible. Second is that
20:31
there's some signs that it's possible to achieve
20:34
improved performance through repetitions.
20:37
The third is that there's actually a lot more data
20:39
out there than I think people realize. People
20:41
say all the data on the open web, it's just
20:43
not quite true. There's a lot more. Well,
20:45
you mean hidden inside companies or people's
20:48
own computers or what would you mean? Correct.
20:51
Yeah, not on people's computers, no. But
20:53
a lot of companies have a ton of data. So
20:55
there's those... Proprietary commercial, yeah. Exactly,
20:58
proprietary commercial. And then
21:00
I think there's interaction data.
21:02
When you talk to the model, there's potential
21:04
for that to be used for training. So I
21:06
think there's a lot of different directions and
21:09
I don't necessarily think that's going to be a constraint. And
21:11
then I think the other direction that is quite promising
21:13
is being able to train these models with less
21:16
data that is higher quality. I
21:18
think there's lots of different research directions to explore
21:20
here. And what about the physical infrastructure,
21:22
the chips at the moment, your
21:24
company, others are using the best possible chips, these
21:27
things will get updated and Nvidia of course
21:29
make some of these enormously powerful graphical
21:32
processing units. Are those chips going to keep getting
21:34
better and better or do you think that we're reaching a
21:36
choke point there?
21:37
I think that the chips are on
21:39
a trajectory to keep getting better. I mean
21:41
the difference between today's cutting
21:44
edge Nvidia H100, which
21:46
is the latest version, and
21:49
the Nvidia A100 from
21:51
three years ago gives you 3x more
21:55
flops or kind of units of computation
21:58
per dollar. Floating-point operations.
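Taking the quoted figure at face value, a 3x gain in computation per dollar over roughly three years works out to a little under 1.5x per year; a rough calculation only:

```python
# 3x more computation per dollar over roughly three years,
# expressed as an average annual improvement rate.
improvement = 3.0
years = 3
annual = improvement ** (1 / years)
print(round(annual, 2))  # ~1.44x better price-performance per year
```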
23:59
in on any single method.
24:02
Isn't that what's happening there right now? That's
24:04
naturally what happens when there's any excitement.
24:07
I mean, the reason why everyone's going all in at the moment is because
24:09
it's working. The
24:11
entire field is not stupid. Everybody
24:14
is betting huge on this and everybody is
24:16
delivering incredible experiences,
24:18
totally magical mind-blowing experiences
24:21
that were basically not even close
24:24
to being on the table three or four years ago. But
24:26
that doesn't mean to say that we shouldn't also pursue
24:28
other methods, absolutely. I mean, we want to have
24:31
diversity of bets, we want to have a whole
24:33
range of different engineering efforts, we want to
24:35
do everything possible to mitigate the weaknesses
24:38
and downsides of these models. You know,
24:40
hallucinations is a fundamental challenge. I
24:43
believe they're going to be largely eliminated
24:45
by scale, but they may not
24:47
be and we may need other
24:49
tricks. So
24:50
no question that we should be doing research
24:53
in all other areas as well.
24:54
Let's talk about some of the downsides then. The
24:57
AI-dominated society you've been describing does
24:59
come with a lot of risks and you're very honest about
25:01
them in the book. Outline to me what
25:03
you are most concerned about in terms
25:05
of the things that might go wrong if
25:08
we're not cognizant and careful in
25:10
how we introduce these technologies to the world.
25:13
I think most of the people who have been
25:15
concerned about AI over the last sort
25:17
of five years or so have
25:20
framed the danger as
25:22
one of a potential existential threat
25:25
of a superintelligence. An
25:27
AI that could somehow update its own
25:29
code and just recursively
25:32
self-improve itself and ultimately
25:36
deceive us and trick us to
25:38
let it out of the box and then cause some
25:40
sort of chaotic, unintended
25:43
harm because it suddenly develops
25:46
intelligence powers way beyond any
25:48
human and therefore it would be impossible
25:50
to keep it in the box. And
25:53
I think this is actually a really unhelpful
25:55
framing of the problem because this
25:58
speculates on a possible trajectory
26:01
that is, in my opinion, and
26:03
some people disagree with this, 20 or 30 years away,
26:06
I mean, decades away. So it's not to say
26:08
that, you know, it's irrelevant and we
26:11
should completely ignore it. It's to say
26:13
that in the next five years, we
26:15
have a long list of very practical
26:17
risks that we have to mitigate. I
26:20
mean, you know, we need to make these models as accurate
26:22
and reliable as possible. That's a huge
26:24
engineering challenge. The second thing
26:26
is, these models are going to get really
26:28
good at generating new forms of misinformation.
26:31
People are going to use them to continue
26:33
what they've been doing in social media
26:36
for the last 10 years, which is amplifying
26:38
polarization, destabilizing
26:41
what is true and what is not, and
26:43
generally spreading fear, right?
26:45
And it's going to now get easier
26:48
to do that because anybody with
26:50
fewer technical skills, with less financial
26:53
resources, is going to have an easier
26:55
time of producing made up stuff.
26:57
Now, that's definitely not an existential
26:59
risk, but it's a very practical thing that we
27:02
do need to work on. In the elections, for example,
27:04
or just general information, vaccines, etc.
27:07
Exactly, in both, right? I mean, I've actually
27:09
come out in an op-ed saying that we
27:11
should call for a ban on the use of AI
27:14
generated tools for electioneering
27:16
and election campaigning. So these chatbots shouldn't
27:19
engage in any kind of explanation about
27:21
the election stuff. It's just not reliable enough,
27:24
and they certainly shouldn't engage in any kind of persuading
27:26
or campaigning. How do you manage those
27:28
sorts of risks in the near term and the medium term, especially
27:31
if all companies who are developing these
27:33
models, there are only a few of them really, including
27:36
yours, that power is concentrated in a few hands,
27:38
essentially. So how do you mitigate
27:40
the sort of risks of all of that that we've
27:42
just discussed, given that companies also
27:44
want to make money and be the first to do these
27:46
amazing things and get to that world that you
27:49
described earlier? It's a very fair question.
27:51
I think it's another one of the big risks that we
27:53
have to contend with as these models get bigger
27:55
and bigger. They cost hundreds
27:57
of millions of dollars to train, via
28:00
deep technical expertise that is very
28:02
expensive and scarce. And
28:05
so there are really only ever going to be a small
28:07
number of very large model developers.
28:10
And that represents a kind of centralized power
28:13
that we have to hold accountable. On
28:15
the flip side, there's going to be a mass proliferation
28:17
of these models. The open source movement is
28:20
not too far behind the cutting edge. You
28:22
know, there's very, very talented open
28:24
source developers who are
28:27
building models which aren't quite as good as what
28:29
you get at the big companies, but they're good enough to
28:31
experiment with. And so I think that's the
28:34
flip side of the challenge, which is like, how do you handle
28:36
this huge proliferation of power, you
28:39
know, which is very easy to copy
28:41
and replicate? I mean, you could store the
28:43
model weights from an open source model on
28:46
a small hard disk that can be moved around
28:48
very easily. What's your opinion on
28:50
this sort of open source movement in AI? I
28:53
think that for the time being,
28:55
probably for a good number of years to come, it's
28:58
a great thing that these are available in open
29:00
source. You know, the best way to reduce
29:03
fear is to give people the chance
29:05
to wrap their heads around the strengths and weaknesses
29:07
of these models. So there's huge upside
29:10
there. Having said that, the
29:12
fact that these models are going to be available
29:14
in open source does mean that they'll be much,
29:16
much harder to regulate, right? And
29:19
there are capabilities which we
29:21
would want to suppress, right?
29:23
So if there is an open source model on which it
29:26
is very easy to run a
29:28
recursive self-improvement loop, then
29:31
I think that would be a dangerous capability
29:33
to have out in the wild. If there's
29:35
an open source model that has been optimized for
29:38
cyber attacks, or for coaching
29:40
somebody to develop a biological weapon, both
29:43
of those capabilities we've seen in these large
29:46
models, and we've certainly removed them from our
29:48
model, and OpenAI and Anthropic
29:50
have done the same, those would be
29:52
very dangerous capabilities to have out in the open
29:54
source. And so that raises the question of,
29:56
like, how do we actually prevent the
29:58
proliferation of those sorts of capabilities?
30:01
But I think we should keep the anti-proliferation
30:04
efforts very restricted for the
30:06
time being, right? They should really only be
30:08
to the things which we already know to be illegal
30:11
and we should be wary of people who are sort of trying to do illegal
30:14
things. We shouldn't be going further than that anytime
30:16
soon.
30:17
Are you worried about it?
30:18
Yeah, I'm very worried about it because I think
30:20
this lowers the barrier to entry to a
30:23
bad actor wanting to take some
30:25
kind of action to destabilise our world.
30:30
We'll be back with Mustafa in just a moment.
30:33
First though, just a quick reminder that you can keep
30:36
up to date with the ever-evolving story
30:38
of generative AI by taking out
30:40
a subscription to The Economist. Our
30:42
most recent coverage explored the role that AI
30:44
might play in the many elections of 2024.
30:48
That was a recent cover story and briefing.
30:51
This week, our editor-in-chief Zanny
30:53
Minton Beddoes also sat down with Mustafa
30:55
Suleyman and also the author and historian
30:58
Yuval Noah Harari. They
31:00
explored whether AI could ever have agency.
31:03
You can watch that debate in full on
31:05
our website. There's much more on
31:07
AI to come as well. Babbage
31:10
listeners can try out a month of a digital
31:12
subscription for free by heading to economist.com
31:15
slash podcast offer. The link is in the
31:17
show notes. Coming up, I'll
31:19
ask Mustafa how AI is
31:21
influencing the enterprise of science
31:24
and what to do to manage those risks
31:26
that he's so worried about.
31:35
Today on Babbage, I'm in conversation with Mustafa
31:37
Suleyman, one of the co-founders of AI
31:40
companies DeepMind and more recently
31:42
Inflection AI. Now
31:45
the flip side to the
31:47
AI will destroy us hypothesis and
31:49
theory that some people are trying to scare everyone
31:51
with is that AI is going to save us and
31:54
create solutions for climate change and new
31:56
battery materials, new drugs, all that sort of
31:59
exciting stuff, and accelerate scientific
32:01
productivity essentially. And it's a large part
32:03
of your book too. You also talk about biotechnology
32:06
and synthetic biology. Talk to me about, what is it about
32:08
those technologies that you put on a pedestal
32:11
that's similar to the AI sort of
32:13
wave? Well, so with synthetic biology,
32:15
we're essentially turning an
32:18
exercise that previously required experimentation
32:21
in wetware. So you had to physically
32:23
handle a compound and
32:26
manipulate it and test it with
32:29
some specific experiments in the
32:31
real world. We're increasingly,
32:33
not entirely, but increasingly turning that
32:35
process into a computational effort,
32:38
right? So the more we represent
32:41
things that were previously in atoms,
32:43
as bits, and the more we move into
32:46
information space, the faster
32:48
the process of evolution, right? And
32:51
so that is the trajectory that we're
32:53
seeing more over the last 30 years in biology,
32:56
right? Not only have we been able to read
32:59
DNA, but we can increasingly synthesize
33:02
DNA. That is like print new compounds.
33:05
And that is likely to enable us to increase
33:08
the speed of innovation and
33:10
discovery, create new types of materials
33:14
which suit the specific purposes that
33:16
we care about. Maybe they are more resistant
33:19
to disease. Maybe they are more robust in
33:21
some way. Maybe they're cheaper. They're more efficient. They
33:23
have higher protein content, et cetera,
33:25
et cetera. So we're sort of turning
33:28
the quest to find new
33:30
compounds into a search process,
33:33
right? And that's what we've obviously been doing in
33:36
agriculture or in food for centuries,
33:38
right? I mean, that is the process of natural selection.
33:40
We can now produce
33:42
the same kilo of grain by
33:46
using only 2% of the
33:48
human labor that was required 100 years
33:50
ago. And that's an amazing
33:52
achievement for productivity, and
33:55
it's a result of this constant
33:57
process of iteration and
33:59
evolution, natural selection essentially, and
34:02
now we're sort of turning that into a computational
34:04
search space.
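A minimal sketch of what turning discovery into a computational search can look like: propose candidates in silico, score them with a predictive model, and only take the best few into the wet lab. The candidate encoding and the predict_fitness function below are hypothetical stand-ins, not any real tool:

```python
import random

random.seed(0)

def predict_fitness(candidate: str) -> float:
    """Hypothetical stand-in for a learned model that scores a design
    in silico (e.g. predicted stability or yield). Here: a toy score."""
    return (sum(ord(c) for c in candidate) % 97) / 96

def random_candidate(length: int = 12) -> str:
    # Toy 'sequence' over four letters, loosely evoking DNA bases.
    return "".join(random.choice("ACGT") for _ in range(length))

# Search in information space first; only the best few candidates
# would ever be synthesised and tested in the wet lab.
candidates = [random_candidate() for _ in range(1000)]
best = max(candidates, key=predict_fitness)
print(best, round(predict_fitness(best), 3))
```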
34:06
So that tells me that with scientific progress you could
34:08
accelerate large parts of it by having
34:11
these sorts of artificial cognitions. You can have
34:13
lots more artificial scientists working
34:16
on problems, whether it's searching for new
34:18
materials or optimising ways of growing
34:20
food, for example, using synthetic biology
34:22
techniques. What are the hurdles towards that?
34:25
I don't necessarily see any obstacles.
34:28
I mean, we're genuinely at
34:30
the very, very beginning of this revolution.
34:32
And if you just look at what the tools are doing
34:35
in their abstract form, they're
34:37
helping you to do the
34:39
entire scientific process, right?
34:42
You are, first of all, trying to creatively
34:45
establish a hypothesis.
34:47
What combination of elements do you think is
34:49
likely to cause some effect? You're
34:51
then asking a model to help search
34:54
over some past number of ideas
34:56
or papers or prior literature, which
34:59
provides some evidence for
35:01
that case. And then you're going to want to get
35:03
it to help you design an experiment
35:05
that allows you to test and validate that. So
35:08
each of the stages of invention and
35:10
discovery are now made just a
35:12
little bit more efficient, making you as an
35:14
individual scientist a little bit more productive.
35:17
And I think that the volume of
35:20
team discovery is about to scale up
35:22
massively. And I think that in itself,
35:25
rather than fundamentally reimagining the scientific
35:27
process, I think that in itself is going
35:29
to lead to huge gains. Do you think that
35:32
AIs at some point will be making their own hypotheses
35:34
and doing everything in a closed loop and then
35:36
just coming up with ideas,
35:38
thoughts and things that can later be checked by humans?
35:41
I'm just wondering if you can take humans out of the loop at any
35:43
point. Yeah, I mean, I'm not
35:46
desperate to take humans out of the loop
35:48
on any front. It's
35:50
a good question. I think it's the right question. And
35:53
I think many people are going to be asking it. So I
35:55
think you're spot on. But the first part of what you said
35:58
is absolutely true. AIs
36:00
are going to generate new hypotheses. Why
36:02
should they not? Essentially, these models are
36:05
very good at interpolation. They're
36:07
really trying to find a point
36:10
between multiple different
36:12
conceptual ideas, just like the crocodile
36:14
example I gave earlier. But now imagine
36:16
it for tens of thousands of points,
36:19
this massively multi-dimensional
36:21
intersection. That's essentially a hypothesis.
36:24
It's a new idea. Many
36:27
of those might be awful, and the human
36:29
might work through a series of
36:32
iterative steps with the model to craft a
36:34
new hypothesis.
36:35
I can certainly see that being the way that
36:38
science gets done in the future.
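One way to picture that iterative propose-and-review loop, with clearly hypothetical stubs standing in for both the model and the human reviewer:

```python
import itertools
import random

random.seed(0)

# Hypothetical concept pool and stub functions; a real workflow would pair
# a language model with a domain scientist, not these toy stand-ins.
CONCEPTS = ["enzyme", "low temperature", "copper catalyst", "membrane", "pH shift"]

def propose_hypotheses(n=5):
    """Model step (stub): combine existing concepts into candidate ideas."""
    pairs = list(itertools.combinations(CONCEPTS, 2))
    return [f"Does {a} combined with {b} change the outcome?"
            for a, b in random.sample(pairs, n)]

def human_review(hypotheses):
    """Human step (stub): discard most candidates, keep a couple to refine."""
    return hypotheses[:2]

# One propose/review round; in practice the cycle repeats, with surviving
# ideas fed back to the model for refinement.
shortlist = human_review(propose_hypotheses())
print(shortlist)
```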
36:40
Now, before we finish up,
36:42
Pi wants us to ask you about how to better manage
36:44
the risks of AI. So in your book, you tell
36:46
people about an idea you have called containment.
36:48
It's an idea that goes back to the Cold War,
36:51
to nuclear proliferation.
36:53
You say that it's not possible in
36:55
the coming waves of technology. So what
36:57
can we do then to contain the
36:59
risks that are out there? Well,
37:01
I think I framed it as not
37:03
being possible as a provocation to
37:06
encourage people to help me refute
37:09
the idea that it is not possible. Containment
37:12
is a pretty simple
37:14
and intuitive principle that
37:17
I think will resonate with most people,
37:19
which is to say that technologies
37:21
that we develop should always in
37:24
the future be accountable
37:26
to us as humans, controllable
37:29
by us. We should be able to impose
37:31
real constraints at any
37:33
point in the development or deployment
37:36
process to make sure that they are
37:38
accountable to our democratic institutions
37:41
so that we can make the decisions collectively
37:43
as a species about how much of a free
37:46
rein we end up giving these kinds
37:48
of AIs over time and these kinds of biological
37:51
tools. Because the time will come when
37:54
synthetic life
37:55
is a
37:56
genuinely capable
37:58
option.
37:59
We have to confront that reality,
38:02
that both AIs and synthetic
38:04
life is coming over the next 30 years,
38:08
and we'll have to make some very
38:10
major decisions around what we
38:12
leave on the tree. Which fruit do we
38:14
not pick? Because we should be making
38:17
that moral and political choice as a species.
38:20
That sounds to me like a very honest embrace
38:23
of the problems that might be ahead, but as
38:25
an entrepreneur, how can you leave things
38:28
on the tree?
38:29
How would you sort of justify that to
38:31
shareholders, investors, etc? I think people
38:33
do that all the time. For example,
38:36
we don't get to fly drones around wherever
38:38
we like at the moment. That's a new technology,
38:41
it's very capable, it's actually pretty safe,
38:43
it can be used in lots of practical ways, but it's regulated,
38:46
and that's a good thing. We've made the decision for
38:48
various reasons, invasion of privacy,
38:51
noise, all kinds of reasons that we
38:54
don't want those to be buzzing around our
38:56
neighborhoods 24-7, even though they could well be.
38:59
So, in my opinion, that's a containment strategy.
39:02
We've decided as a
39:05
set of nation states to leave that for
39:07
the time being until it's more provable.
39:10
I think the other thing I would say is that the incentives
39:13
of a company obviously really do matter.
39:15
We've founded the company as a public benefit
39:18
corporation, which is a new type
39:20
of company, which means that
39:22
we have a legal obligation to hold the
39:25
requirement to return value to our investors
39:28
as an equal motivation to
39:30
the obligation to do good for the world.
39:33
We have to account for the impact
39:35
of our activities on people
39:38
that are not our customers.
39:40
Let me ask that incentive question in a slightly different way
39:42
then. You have called for
39:44
things like regulation and discussions about the
39:47
harms and risks of these amazing technologies.
39:49
OpenAI's chief executive, Google,
39:52
everyone is talking to members
39:54
of governments around the world about regulation in a much
39:56
more forward thinking way than previous
39:59
generations of technology.
39:59
And that's a good thing, but on
40:02
the flip side of that, all of that regulation will
40:04
cost money. It will mean that the rate
40:06
of progress will be slightly slower necessarily.
40:09
How does that fit with the sort of raw
40:11
red meat of capitalism and making as much
40:13
money as possible, which the tech companies are mainly known
40:16
for? I don't see why we can't do
40:18
both. I don't really even see those
40:20
as an obvious tension. I think we
40:23
want to create a healthy and peaceful
40:25
world, and that's how we'll
40:27
create profitable opportunities. And
40:30
I think having a constitution which
40:32
doesn't just require us not to cause harm, but actually
40:35
directly
40:35
incentivizes us to do good is
40:38
a great thing. I feel like
40:40
this is the evolution of incentive
40:42
structures and the corporate structure itself. And
40:45
this is experimental, we don't know that it's going to
40:47
work, but I definitely think it's headed in the right
40:49
direction.
40:50
What's one thing that companies like yours
40:52
can do to make sure that as
40:54
you go forward, the problems within
40:57
AIs and the sort of implementations
40:59
of them are managed?
41:01
Well, I think the first thing that we've already
41:03
committed to doing in the voluntary commitments
41:05
that we signed up to at the White House
41:08
two months ago, along with the other companies,
41:10
is that we will red team our own models,
41:13
find their weaknesses, and then
41:15
share those safety best practices,
41:18
not just with our competitors, but with everyone
41:20
in the world. And I
41:22
think that's a really significant step forward. And
41:25
something that other companies should do. Yeah, most definitely.
41:27
And I think the open source movement is starting to
41:29
do this as well now. So I expect
41:32
this to become the default culture in years
41:34
to come. Okay, writing and talking about the
41:36
risks of all these global challenges in
41:38
your book, I mean, it must have taken quite an emotional
41:40
toll on you. I'm just wondering how optimistic
41:43
you are about the future. I don't like to think
41:45
of myself as an optimist or a pessimist.
41:47
I think each of those are biased
41:49
in their own way. I mean, I think we need a clear-eyed
41:52
view of the threats. And we need
41:54
to be very focused on
41:56
the benefits because this is going to be a
41:58
radically beneficial transformation that
42:01
lifts hundreds of millions of
42:03
people out of poverty that improves our lives
42:05
and wellbeing for many, many people. And
42:08
I think there's a huge win
42:10
at the end of this if we can just manage some
42:12
of the turmoil that might arise along
42:14
the way. Every new technology
42:16
that has ever arisen has that characteristic.
42:19
A truly novel fear is
42:22
what we face with each wave. In
42:24
the book, I found this story about the
42:27
first railway trip that was taken in
42:29
Liverpool in the early 1800s. The Prime Minister
42:31
and the Member of Parliament at the time
42:33
were standing on the tracks celebrating the arrival
42:36
of this train, and they had
42:38
no conception of the movement of
42:40
this beast. And
42:43
it ran through the celebration party and
42:45
killed a bunch of people. I mean, this is just remarkable.
42:48
Isn't that like an incredible realization
42:50
that actually things can feel so
42:53
alien and unfamiliar? And
42:54
then in a snap,
42:56
they feel readily part
42:58
of the fabric of our lives. And we
43:00
know how to deal with the downsides and get most
43:02
of the upsides. And I think we've demonstrated
43:04
our ability to do that in so many
43:06
different ways over the centuries. And
43:09
I think we have to have confidence in our
43:11
ability to be resilient and adaptive
43:14
and not sort of characterize one
43:17
another as like techno-optimistic or
43:19
catastrophizing and pessimistic. I think we
43:22
just have to basically hold both in tension
43:24
simultaneously. And that's where maybe we'll
43:26
get a little bit of wisdom out of it. Okay,
43:28
Mustafa, that was fascinating. Thank you very
43:31
much for your time. Thanks a lot. That was fun.
43:37
And thank you for listening to Babbage. Next
43:40
week, we'll be diving even deeper on the topic
43:42
of how AI could accelerate
43:44
and perhaps even completely transform the
43:47
practice of scientific research. We'll
43:49
also meet some fascinating lab robots
43:51
that promise to automate at least some of the
43:54
drudgy parts of the scientific process.
43:57
Babbage is produced by Jason Hoskin and
43:59
Canal Pada. Sound engineer
44:01
is Jane Stickland and the executive
44:03
producer is Marguerite Powell. I'm
44:07
Alok Jha and in London, this is
44:09
The Economist.