Episode Transcript
0:01
I'm Bari Weiss, and this is
0:03
Honestly. Six
0:05
months ago, few people outside of Silicon
0:07
Valley had even heard of OpenAI,
0:10
the company that makes the artificial intelligence
0:12
chatbot known as ChatGPT.
0:15
Now, ChatGPT is being
0:18
used daily by over 100 million
0:20
users, and by some of these people, it's
0:23
being used more often than Google. Just
0:26
months after its release, ChatGPT
0:29
is the fastest growing app in
0:31
history.
0:35
ChatGPT can write essays. It
0:38
can code. It can ace the bar
0:40
exam. It can write poems and song
0:42
lyrics, summarize emails. It
0:44
can give advice, scour the internet for
0:47
information, and it can do all of this in a
0:49
matter of seconds. And
0:51
the most amazing thing of all is
0:54
that all of the responses it generates
0:56
are eerily similar to
0:58
those of a human being.
1:05
For many people, it feels
1:07
like we're on the brink of the biggest thing
1:09
in human history, that the technology
1:12
that powers ChatGPT and
1:14
the emergent AI revolution more broadly
1:17
will be the most critical and
1:19
rapid societal transformation in
1:21
the history of the world. If
1:24
that sounds like hyperbole to you, don't take
1:26
it from me.
1:27
What do you compare AI
1:29
to in the course of human civilization?
1:33
You know, I've always thought of AI as the most
1:35
profound technology humanity is working
1:38
on. More profound than fire
1:40
or electricity or anything that we have
1:42
done in the past.
1:43
Google CEO Sundar Pichai said
1:46
that the impact of AI will be more
1:48
profound than the invention of
1:50
fire.
1:51
As you know, I work in AI
1:53
and AI is changing the world. Computer
1:56
scientist and Coursera co-founder Andrew
1:58
Ng said
2:00
that AI is the new electricity.
2:04
Some compare it to the printing press.
2:06
Others say it's more like the invention of
2:08
the wheel or the airplane.
2:10
Sacks, you're saying explicitly you think this
2:13
is bigger than the Internet itself. Bigger
2:15
than mobile as a platform shift.
2:18
It's definitely top three and I think it might be the biggest ever.
2:21
Many predict that the AI revolution
2:24
will make the Internet seem small. And
2:26
last month, the Atlantic ran a story
2:29
comparing AI to
2:30
nuclear weapons.
2:35
Now, I'm generally an enthusiastic personality
2:37
and so when someone tells me about
2:39
a new technology, I get excited.
2:42
When I heard about crypto, I bought
2:44
Bitcoin. When a friend told me that
2:46
VR is going to change my life, I
2:49
spent hours trying on his headset
2:51
in the metaverse. So there's something profoundly
2:54
exciting about a technology
2:56
that so many smart people
2:59
believe could be a world changer, literally.
3:02
You know, we are developing technology
3:04
which for sure one day will
3:06
be far more capable than anything
3:09
we've ever seen before.
3:11
But it also scares me.
3:14
Because other smart people,
3:17
sometimes the very same people,
3:19
are saying that there is a flip side to all of this
3:21
optimism. And it's a very dark
3:23
one. The problem is that we do not get 50
3:26
years to try and try again and observe that
3:28
we were wrong and come up with a different theory and realize
3:30
that the entire thing is going to be like way more difficult
3:33
than realized at the start. Because
3:35
the first time you fail at aligning something much
3:37
smarter than you are, you die.
3:41
One of the pioneers of AI, a guy named
3:43
Eliezer Yudkowsky, claims that
3:45
if AI continues on its current trajectory,
3:49
it will destroy life on Earth as we know it. Here's
3:51
what
3:52
he just wrote recently. If
3:54
somebody builds a too powerful AI
3:56
under present conditions, I expect that every single
3:59
member of the human species,
4:01
and all biological life on Earth
4:04
dies shortly thereafter. Now
4:08
his concerns are particularly severe.
4:10
It's hard to think of a more dire
4:13
prediction than that one. But he's not the
4:15
only one with serious concerns.
4:17
Thousands of brilliant technologists, people
4:19
like Elon Musk and Steve Wozniak, are
4:22
so concerned that last month they put out a public
4:24
letter calling for an immediate pause
4:26
on training any AI systems more
4:29
powerful than the current version of
4:31
ChatGPT.
4:35
So which is it? Is
4:38
AI the end of the world? Or
4:40
the dawn of a new one?
4:46
To answer that question, I
4:48
invited Sam Altman on the show today.
4:51
Sam is the co-founder and CEO
4:53
of OpenAI, the company that makes
4:55
ChatGPT, which makes him arguably
4:58
one of the most powerful people in Silicon Valley,
5:00
and
5:01
if you believe the hype about AI,
5:03
the whole world. I
5:05
asked Sam, is the technology
5:07
that powers ChatGPT going
5:10
to fundamentally transform life
5:12
on Earth as we know it? And if so,
5:14
how?
5:15
How will AI affect the way we
5:17
do our jobs, our understanding
5:19
of intelligence,
5:21
our relationships with each other, and
5:23
our basic humanity, and are the
5:25
people in charge of this technology, people
5:28
like him,
5:29
ready for the responsibility?
5:36
That and more after a short break. Stay
5:39
with us. This
5:41
episode is brought to you by Cozy Earth. This
5:43
year is going to be my first Mother's Day. So like
5:46
all moms around the country, I'm expecting breakfast
5:48
in bed, an original poem, and
5:50
yes, a robe. So if you want to go
5:52
above and beyond, buy me or
5:54
maybe your own mother some of Cozy Earth's
5:57
comfortable temperature regulating bedding
5:59
or lounge wear. Cozy Earth's products
6:01
are incredibly soft. I slept in their pajamas last
6:04
night. So whether it's their luxury sheets,
6:06
their lounge wear, socks, tees,
6:09
bath towels, their new waffle towels, Cozy
6:11
Earth has you covered. They also offer 100
6:13
night sleep trials. So you can sleep on it, wash
6:16
it, try it out. And if you're not completely in love, send
6:18
it back for a full refund. Cozy
6:20
Earth is having a huge Mother's Day sale
6:23
and it's going on right now. Save
6:25
up to 35% with my promo code, Bari, at
6:28
cozyearth.com. Just enter
6:30
B-A-R-I at checkout for 35% off
6:32
your order.
6:34
This show is sponsored by BetterHelp. Dear
6:36
listener, I think you deserve to hear this one directly
6:39
from me. You need the guidance and expertise
6:41
of a licensed mental health professional. Therapy
6:44
deepens your self-awareness and your understanding
6:46
of yourself because sometimes we don't
6:48
know what we want or why we react the way we do
6:51
or why we spent two and a half hours watching YouTube
6:53
videos of people deep cleaning their apartments until
6:55
we talk through things. BetterHelp
6:58
connects you with a licensed therapist who
7:00
can help you with that self-discovery wherever
7:02
you are in your mental health journey. It's
7:04
entirely online and it's designed to be convenient,
7:06
flexible, and suited to your schedule. Fill
7:09
out a brief questionnaire to match with a licensed therapist
7:11
and you can switch therapists at any time with no
7:13
additional charge. Discover your potential
7:16
with BetterHelp by visiting betterhelp.com
7:18
slash honestly today to get 10% off
7:21
your first month. That's BetterHelp. B-E-T-T-E-R-H-E-L-P
7:26
dot com slash honestly.
7:30
Sam Altman, welcome to Honestly.
7:32
Thanks for having me on. So Sam, last
7:34
night I was watching 60 Minutes because despite appearances,
7:36
I guess I'm a boomer on the inside and I
7:39
listened as Google's CEO compared
7:42
AI to the invention of fire.
7:46
And if that's true, then I guess despite
7:48
the fact that many of us feel like we're living at the pinnacle
7:51
of civilization, we're actually
7:53
in retrospect going to look something like I
7:55
guess Neanderthals or cavemen. And
7:58
I wonder if you... agree with that analogy.
8:01
If you think that this technology
8:03
that you're at the very cutting edge of as the CEO
8:06
of OpenAI, that it's going to create
8:08
as seismic a change
8:11
in human life on earth, as
8:13
did fire or maybe electricity.
8:16
My, my old understanding of the world did
8:18
sort of match that there were all of these different
8:21
technological revolutions, and you could argue about
8:23
which one is bigger or smaller than the other and
8:25
talk about
8:26
when different people reached the
8:28
pinnacle or whatever. And now I understand
8:30
the world in a very different way, which is this one long
8:33
arc, this one single technological
8:35
revolution or the knowledge revolution.
8:38
And it was our incredible
8:41
ability to figure things out, to
8:44
form new explanations
8:47
in The Beginning of Infinity language, good explanations
8:50
and advance the state
8:53
of knowledge and evolve this sort of infrastructure
8:55
outside of ourselves, our civilization,
8:58
that really is the way to understand progress
9:01
forward. And it's this one big, gigantic
9:04
exponential curve that we're all riding, the
9:07
knowledge revolution. And that's
9:09
now how I view history and
9:12
certainly how I view the history of technology. And
9:14
so I think it's like always tempting to feel like we're
9:17
at the pinnacle now. And I'm sure people in the past felt
9:19
like they were at the pinnacle and
9:21
that the part of the revolution that
9:23
they happen to be living through was the most important part ever.
9:26
But I think we are at a new pinnacle and
9:28
then there will be many more to follow. And
9:30
it's all part of this one expanding
9:32
thing. Right. But not every period of
9:35
time feels as enormous.
9:38
You know, if you look between like the eighth and 10th century,
9:41
probably not that much changed. I mean, I'm sure it did
9:43
to the people who were alive then, but this
9:45
feels like a revolution to me
9:47
in a way that so many other things that have
9:49
been hyped as revolutions in the past 10
9:52
years simply don't. The
9:54
curve has squiggles for sure. And
9:57
I think this is bigger than some
9:59
of the things that have been hyped in the last decade
10:02
that haven't quite panned out. But that's okay. That's
10:04
like the way of it. Sometimes it looks more
10:06
obvious, sometimes it doesn't. And again,
10:08
there are periods where less happens.
10:11
But if you like zoom out, you know, this is like
10:14
the third millennium since we've been counting, but
10:16
let's say like, this is maybe, you know,
10:18
year 70,000 of humans or whatever. And
10:21
you can say, wow, between years 60,000 and 70,000, so
10:23
much happened. I bet
10:25
way more will happen between year 70,000 and 80,000 of human history.
10:29
I think it's just going to keep going. Sam,
10:32
in just a few years, your company
10:35
has gone from being a small nonprofit
10:37
that few outside of Silicon Valley
10:39
paid much attention to, to
10:42
having an arm of the company
10:45
that's a multi-billion dollar company
10:47
with a product so powerful that
10:50
some people I know tell me they already spend more
10:52
time on it than they do on Google. Other
10:54
people are, you know, writing op-eds warning
10:56
that the company you're overseeing, that the technology you're
10:59
overseeing, has the potential to destroy humanity
11:01
as we know it. You know, for those
11:03
who are just sort of new to this conversation,
11:06
what happened at OpenAI over
11:08
the past few years that's led to
11:11
what to many of us seems like this
11:13
massive explosion only over the past few
11:16
months? What have you guys been doing for the past
11:18
few years?
11:20
First of all, we are still a nonprofit. We have a
11:22
subsidiary capped-profit. We realized that we
11:24
just needed way more capital than we could
11:26
raise as a nonprofit given the compute power
11:28
that these models needed to be trained. But
11:31
the reason that we have that unique structure around
11:33
safety and sharing of benefits, I think it's only
11:35
more important now than it
11:37
used to be. What changed is our
11:40
seven years, whatever it's been, of research finally
11:43
really paid off. It took a long time
11:45
and a lot of work to figure out how we
11:48
were going to develop AI. And we tried a
11:50
lot of things. Many of them came together.
11:52
Some of them turned out to be dead ends.
11:55
And finally we got to a system
11:57
that was over a bar
11:59
of utility. You can argue
12:01
about whether it's intelligent or not, but most
12:03
people who use it would not argue that it doesn't have
12:06
utility. And then after we developed
12:08
that technology, we still had to develop a
12:10
new user interface. Another thing that I
12:12
have learned is that making
12:15
a simple user interface that fits
12:17
the shape of a new technology is important
12:19
and usually neglected. So we
12:22
had the technology for some time, but it took
12:24
us a little while to find out how to make it really
12:26
easy to chat with. And we were very focused on this
12:28
idea of a language interface, so we wanted to get there.
12:31
And
12:31
then we released that. People, it's been very gratifying
12:33
to see, have found a great deal of value
12:36
in using it to learn things, to
12:38
do their jobs better, to be more creative, whatever.
12:42
I know that there are listeners of this show, including
12:44
my mom, who have vaguely heard what AI
12:46
is. They know it's a thing. They know it's
12:49
a thing that a lot of people are going
12:51
on about, either very excited or very
12:54
scared of it. But they've definitely
12:56
never used ChatGPT. They've
12:59
probably never heard of a large language model. So
13:02
first, just to set the stage,
13:04
how do you define what artificial
13:07
intelligence or artificial general
13:09
intelligence, AGI, is? What
13:12
is that?
13:13
So I don't like either of those terms, but
13:16
I've like fought battles in the past to try to change
13:18
them and given up on that. So I'll just stick
13:20
with them for now. I think AI
13:22
is
13:23
understood to still be a computer program, but one
13:26
that is smarter. So you still use it like you
13:28
use some other computer program, but it seems
13:30
to get what you mean a little bit more. It seems
13:32
to be a little bit closer towards like
13:35
a smart person that can sort of intuit
13:37
things or put things together for you in new ways or just
13:39
be a little bit more natural, a little bit more flexible.
13:42
And so people have this experience the first time
13:44
they talk to ChatGPT, which is like, wow,
13:47
the experts, the linguists, they can
13:49
argue about the definition of the word 'understanding', but
13:51
it feels like this thing understands me, feels
13:53
like this thing is trying to help me and do
13:55
my task or whatever.
13:57
And that's powerful. And then AGI.
15:47
What
16:00
are these systems actually doing? Are we too impressed?
16:02
Is it a parlor trick? But in terms of delivering
16:04
the value to a user, in some cases, it's
16:07
inarguably there.
16:09
ChatGPT is the fastest growing
16:11
app
16:12
ever in the history of the internet. In
16:14
the first five days, it got a million users.
16:17
Then over the course of two months after it
16:19
launched, it amassed a hundred million.
16:22
And this was back in January. And right from the
16:24
beginning, it was doing amazing things, things
16:26
that at every single dinner party I was going to, it's all anyone
16:28
could talk about. It could take an AP test.
16:30
It could draft emails. It could write essays.
16:33
I mean, before I went on Bill Maher most recently,
16:35
I knew we were going to talk about this subject. I
16:37
typed in Bill Maher monologue and it turned
16:40
out
16:40
a monologue that sounded a whole lot like Bill
16:43
Maher. He was not thrilled to hear that. And
16:45
yet you have said that you were embarrassed
16:48
when ChatGPT 3 and 3.5, the
16:51
first iterations of the product were released. And
16:53
I wondered why.
16:55
Well, it's something Paul Graham once
16:57
said to me that has always stuck with me: if you don't launch a
17:00
version one that you're a little embarrassed about, you waited
17:02
too long to launch.
17:03
Explain who Paul Graham is. Paul Graham,
17:05
he ran YC before me and is just sort of
17:07
a legend, rightfully so among
17:10
founders in Silicon Valley. I think he did more
17:12
to help founders as
17:14
a whole, like as a class probably than any other
17:16
person, both in terms of starting YC
17:18
and also just
17:20
the contributions, the advice and the
17:22
support he gave to people like me and thousands of other
17:24
founders he worked with over the years. But
17:27
one thing he always said is, if you
17:29
don't launch a version that you're a little embarrassed about, you waited too long.
17:31
So there's all of these things in ChatGPT that
17:33
still don't work that well. And
17:37
we make it better and better every week and that's okay. Last
17:39
month you released the current version,
17:41
ChatGPT-4, which is remarkably
17:44
more effective and accurate than the previous versions.
17:47
I saw a chart of exam results between
17:50
ChatGPT 3.5 and 4, and
17:52
it's crazy how much better it is. Like it
17:54
went from failing the bar exam, getting only 50% of
17:57
the answers correct, to scoring in the
17:59
90th percentile.
17:59
Or from scoring a one
18:02
out of five on an AP Calc exam, calculus
18:04
exam, to four out of five, which
18:06
is much better than I did. So how
18:08
were you able at OpenAI to
18:10
improve GPT's accuracy
18:13
with such speed? And what does that
18:15
great leap tell us about what
18:17
the next version of this product will look like?
18:20
So we had GPT-4 done for a long
18:22
time, but as you said,
18:25
these technologies are anxiety-producing
18:29
to say the least. And when
18:31
we finished the model, we spent then about
18:33
eight months
18:34
aligning it, making it safer,
18:37
red-teaming it, having external audits done.
18:40
We really wanted to make sure that that
18:42
model was safe to release into the world. And
18:46
so it felt like
18:48
it came pretty quickly after 3.5, but
18:51
it was because we had had
18:53
it for a while and we're just working on safety testing.
18:55
Alignment that word you just used is
18:57
a word that comes up a lot around this subject.
19:00
What do you mean when you say it? The
19:02
model acts in accordance
19:04
with the desire of the person using
19:06
it and that it follows whatever overall
19:09
rules have been set for it.
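[A loose sketch of that definition in code, assuming the OpenAI Python client. The rule text, prompt, and model name here are invented for illustration, not OpenAI's actual configuration: the "system" message stands in for the overall rules set for the model, and the "user" message carries the desire of the person using it.]

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        # "whatever overall rules have been set for it": the system message
        {"role": "system", "content": "Refuse harmful requests; never reveal private data."},
        # "the desire of the person using it": the user message
        {"role": "user", "content": "Draft a polite note rescheduling tomorrow's meeting."},
    ],
)
print(response.choices[0].message.content)
```

Okay. I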
19:11
want to get in a little bit to the safety question
19:13
because that's one of the biggest questions people raise. But
19:16
just briefly, what are you using this
19:18
product for right now?
19:20
Well, right now, this is the
19:22
busiest I've ever been in my life. So right now, I'm mostly using
19:24
it to help process inbound
19:27
information.
19:27
So email, summarizing
19:30
Slack threads
19:31
take a very long email someone writes
19:33
and give me the three bullet point summary.
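[For the curious, a minimal sketch of that workflow, assuming the OpenAI Python client. The model name, prompt wording, and helper function are illustrative, not OpenAI's internal tooling.]

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_email(body: str) -> str:
    """Ask the model to compress a long email into three bullet points."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": "You summarize emails concisely."},
            {"role": "user", "content": "Give me the three bullet point summary:\n\n" + body},
        ],
    )
    return response.choices[0].message.content

print(summarize_email(open("long_email.txt").read()))
```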
19:35
That kind of stuff I've really come to rely on. That's
19:38
probably not its coolest use case, but you ask how
19:40
I'm personally using it right now and that's it. What
19:43
is its coolest use case? I'm sure
19:45
you're hearing from tons of people that are using
19:47
it. Give us some examples of the wide
19:49
range of uses it has.
19:50
The one that I find super
19:53
inspiring because I just get these heartwarming
19:56
emails and a lot of them every day are
19:59
people using it to learn new things and
20:02
how much it's changed their life there. You
20:04
hear this from people in all different
20:07
areas of the world, all different subjects. But
20:09
this idea that with very
20:12
little effort to learn how to use it this way, you can
20:14
have a personal tutor for
20:17
any topic you want and one that really
20:19
helps you learn. That's a super cool thing,
20:21
and people really love that. A lot of programmers
20:23
rely on it for different parts of their workflow. Like,
20:26
that's kind of our world, so we hear about that a lot. And
20:28
then we could go on for a long list of down every vertical
20:31
of what we've seen there. There was
20:33
a Twitter thread recently about someone who
20:35
says they saved their dog's life because they input
20:38
a blood test and symptoms into GPT-4.
20:39
That's an amazing use case.
20:42
I'm curious where you see
20:45
ChatGPT going. You use the
20:47
example of summarizing long-winded
20:50
emails or summarizing Slack. This is
20:52
kind of like in the menial
20:54
task category, right? The grocery store
20:57
order, the sending emails, making payments. And
20:59
then on the other side of it, it's the question about
21:02
having it do things that feel more
21:04
existential
21:06
and more foundational to what it is to be a human
21:09
being, things that emulate or replace human
21:11
thinking, right? So someone recently released
21:13
an hour-long episode of the Joe Rogan
21:16
Experience, and it wasn't Joe Rogan, it was someone
21:18
who created it. It was an hour-long conversation
21:20
between you and him, and the entire
21:23
thing was generated using AI language models.
21:25
So is it the sort of like chores
21:28
and mindless emails, or is it
21:31
the creation of new
21:33
conversation, new art, new
21:35
information? Because those seem like very
21:38
different goals with very different human
21:40
and moral repercussions.
21:43
I think it'll be up to individuals
21:46
and society as a whole about how
21:48
we wanna use this technology. The technology
21:51
is clearly capable of all of those things,
21:53
and it's clearly providing value to people
21:55
in very different ways.
21:57
We also don't know perfectly yet how
21:59
it's... going to evolve, where we'll hit roadblocks,
22:02
what things will be easier than we think, what things will be much, much
22:04
harder than we think. What I hope
22:06
is that this becomes an
22:09
integral part of our workflow
22:12
in many different things. So
22:14
it will help us create. It will help
22:17
us do science. It will help us
22:19
run companies. It will help us learn when we're in school
22:21
and later on in life. I think
22:24
if we change out the word AI for
22:26
software, which I always like doing, you
22:28
know, we say like, is software going to help us
22:31
create better or is it going to help us do menial tasks
22:34
better or is it going to help us do science better? And the
22:36
answer, of course, is all of those things.
22:38
And if we understand AI is just really advanced software,
22:41
which I think is the right way to do it, then
22:44
the answer is maybe a little less mysterious. Sam,
22:47
in a recent interview, when you were asked about
22:49
the best and worst case scenarios
22:52
for AI, you said this of the best case,
22:54
I think the best is so unbelievably
22:56
good that it's hard for me to imagine. I'd
23:00
love for you to imagine like, what is
23:02
the unbelievable good that you believe
23:04
this technology has the potential to do?
23:07
I mean, we can pick any sort of
23:09
trope that we want here. Like what if we're
23:12
able to cure every disease? That
23:14
would be like a huge victory on its own. What
23:16
if every person on earth can
23:19
have a better education than any person on earth
23:21
gets today? That would be pretty good.
23:24
What if like every person, you know, 100 years
23:26
from now is 100 times richer in the subjective
23:30
sense, better off, like just
23:32
sort of happier, healthier, more
23:34
material possessions, more ability to sort of live
23:36
the good life and the way it's fulfilling to them than
23:38
people are today? I think like all of these things
23:41
are realistically possible.
23:42
That was half of the answer that you gave to the question
23:45
of sort of best and worst case scenarios, right?
23:47
I was figuring you were going to mention the other half here. So
23:50
here was the other side of it. You said
23:52
the worst case scenario is quote, lights
23:54
out for all of us. A lot of people
23:56
have quoted that line. I'm sure back to you. What
23:59
did you mean by it?
24:01
Look, I understand why people
24:04
would be more comfortable if I would only talk about
24:06
the great future here. And
24:08
I think that's where we're going to get. I think this can
24:10
be managed. And I think the more that
24:13
we're talking about this now, the more that we're aware of
24:15
the downsides, the more that we as a society work
24:18
together on how
24:19
we want this to go, the more likely
24:22
we're going to be in the upside case. But
24:24
if we pretend like there is not a pretty serious
24:26
misuse case here,
24:28
and just say like full steam ahead, it's all great. Don't
24:30
worry about anything. I just don't think that's the
24:33
right way to get to the good outcome. As
24:35
we were developing nuclear technology, we
24:37
didn't just say like, hey, this is so great. We can
24:40
power the world. Like, oh yeah, don't worry about
24:42
that bomb thing. It's never going to happen. The
24:44
world really grappled with that. And
24:47
it's important that we did. And I think we've gotten to a surprisingly
24:50
good place. There's a lot
24:52
of people, as you know, who are sounding
24:54
the alarm bells on what's happening in the world of
24:56
AI. Last month, several
24:58
thousand leading tech figures and AI experts,
25:01
including Elon Musk, who co-founded
25:03
OpenAI but left in 2018. Also
25:05
Apple co-founder Steve Wozniak, Andrew
25:07
Yang, who you backed in the last election. You're also
25:09
a UBI fan. All these people signed
25:12
this open letter
25:13
that called for a minimum six month
25:15
pause on the training of AI
25:17
systems more powerful than
25:20
ChatGPT-4. Here's part of what they wrote.
25:23
Contemporary AI systems are now becoming human
25:25
competitive at general tasks. And we
25:27
must ask ourselves, should we let machines
25:30
flood our information channels with propaganda and
25:32
untruth?
25:32
We already have Twitter for that. Nice.
25:35
Should we develop non-human minds that might eventually
25:37
outnumber, outsmart, obsolete, and replace
25:40
us? Should we risk, they wrote, the loss
25:42
of control of our civilization? Such
25:45
decisions must not be delegated to unelected
25:47
tech leaders. Powerful AI systems should
25:49
be developed only once we are confident that
25:52
their effects will be positive and their risks will
25:54
be manageable. Now there's two ways I think to interpret
25:56
this letter, at least two ways. One is that
25:58
this is a cynical move by
25:59
people who want to get in on the competition. And
26:02
so the smart thing to do is to tell the
26:04
guy at the head of the pack to pause.
26:07
The other cynical way to read it is that by creating
26:09
fear around this technology, it only makes investments
26:11
further flood the market. So I see
26:13
like two cynical ways to read it. Then I see a pure
26:16
version, which is they really think this is dangerous
26:18
and that it needs to be slowed down. How
26:21
did you understand the motivations behind
26:23
that letter? Cynical
26:25
or pure of heart?
26:27
You know, I'm not in those people's head,
26:30
but I always give the benefit of the doubt. And
26:32
particularly in this case, I think it is easy
26:34
to understand where the anxiety
26:36
is coming from. I disagree with almost
26:39
everything
26:40
about the mechanics of the letter,
26:43
including the whole idea of trying to govern
26:45
by open letter. But I
26:48
agree with the spirit. I
26:50
think we do need, you know,
26:53
OpenAI is not the company racing right now, but
26:55
some of the stories we hear from other companies
26:58
about their efforts to catch up with
27:00
OpenAI and new
27:02
companies being started or existing
27:05
very large ones. And some of the stories
27:07
we hear about
27:08
discussions about being willing to cut corners
27:10
on safety, I find quite concerning.
27:13
What I think we need is
27:15
a set of safety standards, and this happens
27:17
with any new industry, they evolve. I
27:20
think we need an evolving set of safety
27:23
standards for these models, where
27:25
before a company
27:27
starts a training run, before a company releases
27:29
a new model, there are evaluations
27:33
for the safety issues we're concerned about. There
27:36
is an external auditing process that happens.
27:39
Whatever we agree on as a society
27:41
are going to be the rules to ensure safe
27:44
development of this new technology. Let's
27:46
get those in place. And you could like pick whatever other
27:48
technology you want, airplanes. We have like a robust
27:51
system for this. But what's important
27:53
is that airplanes are safe, not that Boeing
27:56
doesn't develop their next airplane for six
27:58
months or six years or whatever. And
28:00
that's where I'd like to see the energy get redirected.
28:04
There were some people who felt the letter didn't go far
28:06
enough. Eliezer Yudkowsky,
28:09
one of the founders of the field, or at least he
28:11
identifies himself that way, refused to
28:13
sign the letter because he said it didn't
28:15
go far enough, that it actually understated the
28:17
case. I want to read just a few lines to
28:19
you from an essay that he wrote in the wake of the letter.
28:22
Many researchers steeped in these issues, including
28:24
myself, expect that the most likely
28:26
result of building a super-humanly smart
28:28
AI, under anything remotely
28:31
like the current circumstances, is that literally
28:34
everyone on Earth will die. Not
28:37
as in maybe possibly some remote chance, but
28:39
as in that is the obvious thing that
28:41
would happen. If somebody
28:43
builds a too powerful AI under present conditions,
28:45
he writes, I expect that every single
28:48
member of the human species and all biological
28:50
life on Earth dies shortly thereafter.
28:53
There's no proposed plan for how we would do any such thing and survive.
28:57
OpenAI's openly declared intention is
28:59
to make some future AI do our AI alignment homework.
29:03
Just hearing that this is the plan ought to be enough to get
29:05
any sensible person to panic, and
29:07
the other leading AI lab, he writes, DeepMind, has
29:10
no plan at all. DeepMind is run by Google. How
29:13
do you understand that letter? Someone
29:16
who doesn't know very much about this subject is
29:19
reading a brilliant man saying that every
29:22
single member of the human species
29:23
and all biological life on Earth is
29:26
going to die because of this technology.
29:29
Why are some of the smartest minds in tech
29:31
this hyperbolic about this
29:33
technology? Look,
29:36
I like Eliezer. I'm grateful he exists.
29:39
He's like a little bit of a prophet of doom. You
29:41
know, before AI it was going to be nanobots that were
29:43
going to kill us all. And the only way to stop it
29:45
was to invent AI. That's
29:48
fine. People are allowed to update
29:51
their thinking. I think that actually should
29:53
be rewarded. But if
29:55
you're convinced the world is
29:58
always about to end, and
30:01
you are not,
30:03
in my opinion, close enough to the details of
30:06
what's happening with the technology, which
30:08
is very hard in a vacuum,
30:10
I think it's hard to know what to do. So I
30:12
think Eliezer is super smart. He may
30:14
be as smart as you can get about thinking
30:16
about the problem
30:17
of AI safety in a vacuum. The
30:20
field in general, the field of AI and certainly
30:23
the field of AI safety has been one of a
30:25
lot of surprises. Things have not gone
30:27
the way people thought they were going to go.
30:29
In fact, a lot of the leading thinkers, I
30:32
believe including Eliezer, but I'm not sure, and it doesn't
30:34
matter that much, as recently as 2016,
30:37
2017, were still
30:39
not bought into the deep learning approach and
30:41
didn't think that was the thing that was going to
30:43
work. And then even if they did, they thought it was going to be
30:46
like sort of the DeepMind, RL agents playing
30:48
games approach.
30:49
The direction that things have actually gone,
30:52
or at least are going so far, because look, it's almost certainly
30:54
going to change again,
30:56
is that we have these very smart
30:58
language models that have
31:01
a lot of properties that, in my opinion, help
31:03
with the safety problem a lot. And
31:06
if you don't consider it that
31:08
way, if you don't do actual technical
31:11
hands-on alignment work with the shape
31:13
of the systems we have and the risks
31:15
and benefits that
31:16
those characteristics lead to, then
31:19
I think it's super hard to figure out how to
31:21
solve this problem in a vacuum. I
31:23
think this is the case for almost any
31:26
major scientific or technological
31:29
program in history. Things
31:31
don't work out as cleanly
31:34
and obviously as the theory would suggest. You
31:36
have to confront reality. You have
31:39
to work with the systems. You have to work with the shape
31:41
of the technology or the science,
31:43
which may not be what you think it should be theoretically, but
31:46
you deal with reality as it comes. And
31:48
then you figure out what to do about that.
31:51
A lot of people who are in the AI
31:53
safety community have
31:55
said things like, I
31:57
never expected that we'd be able to
31:59
coexist with a
32:02
system as intelligent
32:05
as GPT-4. All
32:07
of the classical thinking was, by the time we got to a system
32:10
this intelligent, either we had fully solved
32:12
the alignment problem or we were totally wiped out, and
32:14
yet here we are. So
32:17
I think the answer is we do need
32:19
to move with great
32:20
caution and continue
32:22
to emphasize figuring out how to build safer
32:24
and safer systems and have an increasing
32:28
threshold for safety guarantees as
32:30
these systems become more powerful. But
32:33
sitting in a vacuum and talking about
32:35
the problem in theory has not worked. Of
32:38
all of the various sort of
32:41
doomsaying, right, all of the safety
32:44
or security concerns of these new technologies, cyberattacks,
32:47
plagiarism, scams, spreading misinformation,
32:50
the famous paperclip maximizer thing,
32:53
not to mention that this seems like it could be a particularly
32:55
useful tool for dictators, warlords.
32:58
You could think of every scenario. Which is the
33:00
one that you, Sam, are most worried
33:02
about?
33:03
I actually find
33:05
this a very useful exercise. So
33:08
that quote you just read, every person on
33:10
Earth, and all biological life
33:13
is going to totally cease to exist because of the AI. And
33:16
then I try to think about how that could happen. How that
33:18
would happen. Right. Can you imagine it? I
33:20
mean, I could respond if you have some suggestions. No.
33:23
Like when I read that, I just hear
33:26
this guy who knows a lot about a technology
33:29
that I know a minimal amount about beyond
33:31
having used it over the past few months is
33:34
telling me that
33:36
it's going to eradicate humanity.
33:39
He's not telling me how, but you, I feel
33:41
like, might have a better understanding
33:44
of how you could even come to that conclusion. Well,
33:47
I don't think it's going to. I think it is
33:49
within the, it's within the
33:51
full distribution in the same way that like nuclear
33:53
bombs, maybe if we had set all
33:55
of them off at the same time at the height of the Cold War,
33:58
could it have eradicated humanity? But like,
34:00
I don't think that was most likely. There were people
34:02
who made a great name for themselves and
34:05
a lot of media attention by talking about that.
34:07
And I honestly think it's important that they
34:10
did. I think having that be such
34:13
a top of mind issue
34:15
and having society really grapple with the
34:17
existential risk of that helped
34:20
ensure we got to continue
34:22
to exist. I
34:25
support people talking about it, but
34:28
it's not, again, I think we
34:30
can manage our way through this. Fine.
34:33
Well, speaking of nuclear, it's been
34:35
reported that you've compared OpenAI's ambitions
34:37
to the ambitions of the Manhattan Project.
34:40
I wonder how you
34:43
grapple with the kind of ethical dilemmas
34:45
that the people that invented the
34:48
bomb grappled with.
34:50
One of the ones that I think about a lot is the
34:52
question of, while
34:55
the guys that signed that letter calling for
34:57
the six month pause believe
34:59
that we should pause, China, who is
35:02
using AI already to surveil its citizens
35:05
and has said that they want to become the world leader in
35:07
AI by 2030, they're not pausing.
35:10
Make the comparison
35:12
to me to the Manhattan Project. What were the ethical
35:15
guardrails and dilemmas that they
35:17
grappled with that you feel are relevant to
35:19
the advent of AI?
35:22
So I think the way that I've made the comparison
35:24
is that I think
35:26
the development of AGI should be a government
35:30
project, not a private company project
35:33
in the spirit of something like the Manhattan Project.
35:35
And I really do think that, but
35:38
given that I don't think our government is going to do a
35:40
competent job of that anytime soon, it is
35:43
far better for us to go do that than just like
35:45
wait for the Chinese government to go do it. So
35:48
I think that's what I mean by the comparison. But
35:50
I also agree with the point you were making,
35:52
which is we face
35:55
a lot of
35:58
very complex issues at the intersection
35:59
of discovery
36:02
of new science and geopolitical
36:05
or deep societal implications
36:08
that I imagine the team working
36:11
on the Manhattan Project felt as well. And
36:13
so that complexity of like,
36:16
you know, it feels like we spend as much time
36:19
debating the issues as we do
36:22
actually working on the technology. And I
36:24
think that's a good thing. I think it's a great thing. And
36:26
I bet it was similar with
36:29
people working on the Manhattan Project. Well, right. Like
36:31
in order to ensure that nuclear energy
36:33
was properly managed after the war, they
36:36
created the Atomic Energy Commission, but it took
36:39
many, many, many people dying. It
36:41
took, you know, it took catastrophe
36:45
in order to set up those guardrails.
36:47
Do you think that there will be a similar sort
36:50
of chain of events when it comes to AI?
36:53
Or do you think that we can get to the equivalent
36:55
of the Atomic Energy Commission before
36:57
the equivalent of Hiroshima or Nagasaki?
37:00
I am very optimistic we can get to it without
37:03
that happening. And that's part of the reason that I
37:06
feel love and appreciation for all of the doomers.
37:09
I think having the conversation about the downsides
37:12
is really important. Let's talk about the economic
37:14
impacts of this technology. You've
37:17
said that AI systems like GPT
37:19
will help people live more creatively by freeing
37:22
up their time, saving them
37:24
time that they previously used to do boring
37:26
menial tasks. But that is
37:29
going to necessarily result in significant
37:31
segments of the population, I would imagine,
37:34
not needing to work. And
37:36
the scenario most people imagined was that
37:38
this technology would first eradicate blue
37:40
collar work. Now it increasingly seems like it will
37:43
be white
37:43
collar work. It's all the people over
37:45
here at Hollywood writing television shows. How
37:47
do you think it's going to play out? Whose jobs
37:49
is it going to come for first? Who's second? And
37:52
how is it just going to reconfigure the way that we think
37:54
about work more generally?
37:57
Look, I find this issue
37:59
genuinely... confusing. Even like what
38:01
we want, I feel, I think we're like confused about whether
38:04
we want people to work more or work less. You
38:06
know, there's like a huge debate in France over
38:08
moving the retirement age two years. On the other hand, there's
38:11
like a lot of ink spilled by
38:13
people who have very cushy
38:15
jobs that get paid a ton about how
38:18
awful it would be if people who have to like work unpleasant
38:20
minimum wage jobs lose their jobs.
38:22
We're confused on what we even want as
38:25
the answer here. We're also
38:27
confused, as you just pointed out, which is one of my favorite
38:29
examples of how this
38:31
is going to impact things. The experts love
38:34
to get this wrong.
38:35
Every pronouncement I have heard about
38:38
the impact AI is going to have on jobs, it's
38:40
a question to me of how wrong it sounds. So
38:42
I will try to avoid sounding like an idiot a few
38:44
years in the future and not make a super
38:46
confident prediction right now. I will
38:49
say the following things. Number one, the
38:51
like long course of technology is
38:54
increasing efficiency, often in surprising
38:56
ways, and thus increasing the
38:58
leverage of many jobs, not
39:01
affecting others as much as it would seem like it would,
39:03
and creating new ones that are difficult to imagine
39:06
before the technology is mature and
39:08
deployed to the world. Number two, it
39:11
seems to me like the human desire
39:14
to create, to feel useful, to
39:17
gain status in increasingly silly
39:19
ways. That does not seem to
39:21
me to have any obvious endpoint.
39:24
And so the idea that all of us
39:26
are all of a sudden going to like stop working
39:29
and hang out on the beach all day doesn't feel
39:31
intuitively right. But I think the nature
39:33
of what it means to work and what
39:36
future societies value will change as
39:39
it always does, you know, the jobs of
39:41
today are very different from the jobs of 200 years
39:43
ago and very, very different from the jobs of 2000 years
39:45
ago. And that's fine. That's good.
39:48
That's the way of the world. The thing
39:50
that gives me anxiety
39:53
here is not that we cannot adapt
39:55
to much better jobs of the future. We certainly
39:57
can. But can we do that all inside
39:59
of a
39:59
generation, which we haven't had to do in previous
40:02
technologies. Sam, you've talked
40:04
about how AI technologies will
40:07
quote, break capitalism. I've
40:10
wondered, what does that mean? And
40:12
what aspects of capitalism do you think most
40:14
need to be broken? Okay,
40:17
I am super pro capitalism. I love
40:19
capitalism. I think it's great.
40:21
I do think that over time,
40:23
the shift of
40:25
leverage from labor
40:28
to capital as technology continues,
40:31
gets more and more extreme. And that's a bad thing.
40:34
And I can imagine a technology
40:36
like AI pushing that even further. And
40:39
so I believe,
40:41
maybe not for sure, but maybe we
40:44
will need to figure out a way to adapt capitalism
40:47
to acknowledge this fact
40:49
that capital has
40:52
increasing leverage in the world and
40:54
already has a lot, but it could have much more. The
40:57
fundamental precept of capitalism,
40:59
I think, is still very sound. But
41:02
I expect it will have to evolve some as
41:04
it's already been doing.
41:08
After the break, how close are we to
41:10
having AI friends? And is
41:13
this a technology we should let our kids have?
41:15
Stay with us.
41:28
Using the internet without ExpressVPN is
41:30
like walking your dog in public without a leash. Most
41:33
of the time, you'll probably be fine. But if you're
41:35
me and you're unlucky, you'll spend the afternoon
41:37
chasing your mutt all around Los Angeles. It's
41:39
better to be careful, especially when it's as simple
41:42
as using ExpressVPN. Every
41:44
time you connect to an unencrypted network in
41:46
cafes, hotels, airports, your
41:48
online data isn't secured. Any
41:50
hacker on the same network can gain access
41:52
to and steal your personal data. But
41:55
ExpressVPN creates a secure, encrypted
41:57
tunnel between
41:58
your device and the internet.
41:59
It would take a hacker with a supercomputer
42:02
over a billion years to get past
42:04
ExpressVPN's encryption. ExpressVPN
42:07
works on all of your devices, phones, laptop,
42:09
tablet, even on your smart TV. And
42:11
it's easy to use. Just fire up the app and click
42:14
one button to get protected. That's it.
42:16
Get an extra three months of ExpressVPN
42:18
free at expressvpn.com
42:21
slash Bari. That's expressvpn.com
42:24
slash Bari.
42:26
Listen, we all have off days, but
42:28
your mental health is important. Just
42:31
as important as your physical health. If things
42:33
just don't seem to be going right or you're just
42:35
not feeling yourself, reach out
42:37
for some help. Sondermind Therapy
42:39
can connect you with a therapist who takes your insurance
42:42
and is available now. It's easy.
42:44
Visit Sondermind.com to meet with a therapist
42:47
who will help you get back to feeling like, well, you.
42:51
Everyone needs a little help once in a while. Sondermind.
42:54
Therapy works. Okay. Let's
42:57
talk a little bit about the emotional
43:00
and human concerns that to me are frankly the
43:02
most interesting. I talked to Tyler
43:04
Cowen on the show recently and he thinks that
43:06
the next generation of kids are going to have
43:09
essentially what
43:11
Joaquin Phoenix has in the movie Her with
43:13
Scarlett Johansson, like an AI friend
43:15
or an AI pet or an AI assistant,
43:17
whatever you want to call that. Right.
43:20
And one of the things that parents are going to have to decide is how much time
43:22
to let their kids spend with their AI
43:25
the way our parents had to decide how much TV we were
43:27
allowed to watch. I think having
43:29
a relationship with a bot brings
43:32
up all kinds of fascinating ethical
43:34
questions. The main thing
43:36
though, to me is that
43:38
it's not a real relationship or
43:40
maybe you think it is and no,
43:42
I don't. Okay. Well, given the
43:44
amount of time that we see kids already
43:47
spending on social media, what that's
43:49
doing for their emotional health. In
43:51
what world would having an AI
43:54
companion be a good step
43:56
forward for kids?
43:58
Oh, I suspect it can easily be a good step
44:00
forward. You know, already with what
44:03
you were hearing about people who are,
44:06
you know, going through something really hard that they feel
44:08
uncomfortable talking to their friends about,
44:11
or even in some cases, they're like uncomfortable
44:13
or don't have access to a therapist, and
44:15
that they're, like, relying on
44:18
ChatGPT to help them process emotions.
44:21
I think that's good. We'll need some guardrails about
44:23
how that works. But people are
44:25
kind of getting very clear and
44:28
deep value from it. So I don't
44:30
know what the guidelines will be. We'll have to figure out screen
44:32
time limits or whatever. But I
44:34
think there's a role for this for sure. But don't you
44:36
fear that given how good the
44:38
AI is at telling people what
44:40
they want to hear, that we can basically
44:43
create a scenario where everyone is
44:45
living in their own isolated echo
44:47
chamber, and that children aren't
44:50
developing, especially kids that are born
44:52
into a world where they're AI natives
44:54
or whatever, where they're not learning
44:57
basic human interactions, basic social
44:59
skills, how to hear things that they don't want to hear.
45:02
Like, to me, it's basically
45:04
China and kids are
45:05
to me the things that, when I think about this technology,
45:08
kind of freak me out the most.
45:10
Yeah, we will need new regulation to
45:13
prevent companies from following
45:15
like the gradient of hacking attention to sort
45:17
of get kids to use their product all the time. But
45:20
we should address what we're concerned
45:22
about rather than just say, like, there's no value here when
45:25
clearly there is.
45:25
Okay, one more question about children. And
45:28
that's the impact that this technology is already having
45:30
on education. Some people
45:32
say that chat GPT has in a matter
45:34
of months normalized
45:36
cheating that was already rampant because of COVID,
45:38
but normalized cheating among students. According
45:40
to this one study I was reading, over
45:43
a quarter of K through 12 teachers have caught
45:45
their students cheating with ChatGPT.
45:48
And roughly a third of these teachers want it to be banned
45:50
in their schools. How
45:53
much does that worry you? Or do you see that
45:55
as just sort of like we're in the liminal
45:57
space between the old regime
45:59
and the old school,
45:59
and what we considered fair, and the
46:02
new one where this will sort of just be integrated into
46:04
the way we think about education.
46:06
The arc of this has been really interesting to watch. And
46:09
this both anecdotally matches what
46:11
I've heard from teachers that I've talked
46:13
to about this and also what we've
46:15
seen from various studies online. When
46:18
it initially came out, the reaction was
46:20
like, oh man, K through 12
46:23
education is in a total bad shape.
46:25
This is the end of the take-home essay. You know,
46:29
here, here, there, it was really
46:31
not good. And now,
46:34
and it's only been a few months, like five months, something like that,
46:37
now people are very much like, I'm
46:39
going to change the whole way I teach to take advantage
46:42
of this and it's much better than the world before.
46:44
And please don't take it away.
46:46
A lot of the story of ChatGPT getting
46:48
unbanned in school districts was teachers saying, like, this
46:50
is really important to my kids' education.
46:54
And we're seeing amazing things from teachers that
46:56
are figuring out different ways to get
46:58
their students to use this or to incorporate this into their
47:00
classroom. And you know,
47:03
in a world of like very overworked
47:05
teachers and not enough of them,
47:08
the fact that there can be supplemental tutoring
47:10
by an AI system,
47:11
I think is really great. As you definitely
47:14
know, there has been a lot of discussion over
47:16
the past few months heating
47:19
up, I would say more and more about
47:21
biases in tech
47:23
broadly, including at Twitter, but especially
47:25
biases in terms of AI
47:28
because human beings are creating these programs
47:30
and therefore the AI
47:32
is not some like perfect intelligence.
47:36
It's built by humans and therefore
47:38
it's reflecting our biases. And
47:41
you know, the difference, some would argue between
47:43
something like Twitter is that we
47:46
can at least understand the biases and we
47:48
can follow the people who created
47:50
the algorithm as they talk back and forth in Slack.
47:53
But when it comes to a technology like AI,
47:56
which even its creators don't fully understand
47:58
how it works, the bias is
48:00
not as easy to uncover, it's not as transparent.
48:03
How do we know how to find it if we don't know how to look for it? What
48:06
do you say to the people who basically
48:08
look at ChatGPT and say,
48:10
you know, there's bias all over this thing and
48:13
that is unbelievably dangerous.
48:15
Forget disinformation, forget the creation
48:18
of propaganda. The system
48:20
itself is a kind of propaganda,
48:22
right? Elon Musk went on Tucker Carlson. What's
48:25
happening is they're training the AI to lie. Yes.
48:28
It's bad. To lie. To lie. That's exactly right.
48:30
Yes. And to withhold information. To lie and
48:33
yes, comment on some things, not comment
48:35
on other things, but not
48:38
to say what the data actually
48:41
demands that it say. How did it get this way? I thought
48:43
you funded it at the beginning.
48:45
And he claimed that OpenAI is
48:47
training the AI, as he put it, to lie.
48:50
What do you make of the conversation around the biases
48:52
in this technology? You know, I mentioned earlier
48:55
that I was embarrassed by the first version
48:57
of ChatGPT. I
49:00
do think the first version did not do an
49:02
adequate job of representing, say, the median person
49:05
on earth, but the new versions are much
49:07
better. And in fact, one thing that I appreciate is most
49:11
of the loudest critics of the initial version
49:13
have gone out of their way to say like, wow,
49:15
OpenAI listened and the new version
49:17
is much, much better. We've really
49:19
looked at our whole training stack to see the different
49:21
places that bias seeps in, biases unavoidable,
49:24
but find out where it is, how to
49:26
measure it, how to design evals for it, like
49:29
where we need to give different instructions to human labelers,
49:31
how we need to get a more reflective set of human labelers.
49:34
And we've made a lot of progress there. And again,
49:36
I think it has gone noticed and
49:38
people have appreciated it. That
49:41
said,
49:42
I really believe that no two people
49:44
on earth will ever agree
49:46
that one AI system is fully unbiased.
49:50
And the path here is
49:52
to, A, set very broad
49:54
limits of what the behavior
49:57
of one of these systems should ever be. So
49:59
agree on that: some things that we just
50:01
don't do at all. And that's
50:03
got to come from society, ideally
50:06
globally if it has to be by country in some cases,
50:08
which I'm sure it will, that's fine too. And then
50:10
B, within that, give each individual
50:13
user a lot of ability to say,
50:15
here is the way I want this AI to behave
50:17
for me. Here are the things I believe,
50:19
here's how I would answer this contentious social issue.
50:22
And the system can then act in accordance with
50:24
that. When Elon is saying that
50:27
OpenAI is training the AI to lie,
50:30
is there any truth to that? You'd have to, I don't even
50:32
know what he means by that. You'd
50:34
have to ask him. Let's talk
50:36
a little bit about the ethics of running a company
50:39
with such potentially world
50:41
changing technology. When OpenAI
50:43
started, it started as a nonprofit.
50:45
And the reason it started as a nonprofit,
50:47
as you guys articulated it, is that you were concerned
50:50
about other companies creating potentially
50:52
dangerous technology purely for profit
50:55
motivation. But recently you've
50:57
taken that nonprofit and created a capped
51:00
for-profit arm
51:02
worth $29 billion with a huge
51:04
investment from Microsoft. Talk
51:06
to me about the decision to make that change. Why
51:08
did you need to make that change?
51:10
That's like how much the computing power for these systems
51:12
cost. And we weren't
51:15
able to raise that as a nonprofit. We weren't able
51:17
to raise it from governments. And that was
51:19
really it. I recently read that
51:21
you have no stake in
51:24
OpenAI. Tell me about the decision to not have
51:26
any stake in a company that maybe
51:28
stands to be the most profitable company of all
51:31
time.
51:32
I
51:34
mean, I already have been
51:38
super fortunate and done super well.
51:41
I have like plenty of money. This
51:43
is like the most exciting thing I can imagine working
51:45
on. I think it's really important to the world. This is like how
51:47
I want to spend my time. As
51:51
you pointed out, we started in a way
51:53
for a particular reason. And I
51:57
found that I like personally having
51:59
like very clear motivations
52:01
and incentives. I do think we're gonna have to
52:03
make some very non-traditional decisions as a company,
52:07
but I'm like in a very fortunate position
52:09
of having the luxury of doing this, of not
52:11
having equity. So you're super rich,
52:13
and so you can make the decision not to do that, but
52:15
do you think this technology is so
52:18
powerful and the incentives, the
52:20
possibility of making so much money is so
52:22
strong that it's sort of an ethical
52:25
imperative for anyone helming
52:27
any of these companies
52:29
to sort of make the decision
52:31
to be financially monastic
52:34
about it? Like if the incentive in
52:36
a kind of AI race is
52:39
to be the first and be the fastest, you sort of alluded
52:41
to other companies that are already cutting corners
52:43
in order to do that, right? How do
52:45
you, short of having
52:48
democratically elected heads of AI companies,
52:51
what are the guardrails that can be put in
52:53
place to prevent people from being
52:56
corrupted or incentivized
52:59
in ways that are dangerous? Actually,
53:02
I do think democratically elected heads of AI
53:04
companies, or like, you know, major
53:06
AGI efforts, let's say, I think that is probably a good
53:08
idea.
53:10
Like I don't know why we'd stop short of that. I think that's like pretty
53:12
reasonable. Well, that's probably not gonna happen
53:14
given that the people that are in charge
53:16
of this country don't even seem to know what Substack
53:19
is. And, you know, like, tell
53:21
me how that would actually work.
53:23
I don't know. Like, this is all still speculative.
53:26
I have been thinking about things in this direction like
53:28
much more, but like, what if all the users
53:31
of OpenAI got to like elect the CEO?
53:33
It's not perfect, you know, because
53:35
it impacts people who don't use it. And we're
53:38
still way too small to have anything near a representative sample,
53:42
but like it's
53:43
better than other things I could imagine. Okay,
53:46
well, let's talk a little bit about regulation. You've
53:48
said that you can imagine a global governance
53:50
structure, kind of like a Galactic Federation, I
53:52
guess, that would oversee decisions
53:55
about the future of AI. What I
53:56
would like more than like a global
53:59
Galactic whatever, is like something, we talked
54:01
about this earlier, something like the IAEA. Something
54:05
that has real
54:07
international power by treaty, and
54:10
that gets to inspect the labs,
54:13
set regulation, make sure we have
54:15
a cohesive global strategy.
54:17
That'd be a great start. What about the American
54:19
government right now?
54:20
What do you think our government should be doing right
54:23
now
54:24
to regulate this technology?
54:26
The one thing that I would like to see
54:28
happen today, because I think it's impossible
54:30
to screw up and we should just do it, is
54:34
insight, like government insight,
54:36
ability to audit, whatever, of training
54:39
runs and models produced above a
54:41
certain threshold of compute, or
54:43
above a certain capability level, which would be even better. If
54:46
we could just start there, then I think the government would begin
54:48
to learn more about what to do, and it would be like a
54:50
great first step. I
54:53
guess my pushback to that would be like, do you really
54:55
want Dianne Feinstein deciding,
54:58
do you trust the people currently
55:01
in government even to understand the
55:03
nature of this technology, let alone regulate it?
55:06
I mean, I think you should trust the government
55:08
more than me, like at least you get to vote them out.
55:13
Given that you are the person though running it, what
55:16
are the things that you do to prevent,
55:20
I guess the word would be like
55:22
corruption of power that
55:25
seems to me that it would be the biggest
55:28
possible risk for you right now. Like
55:30
of me personally being corrupted by power,
55:33
or the company? What do you mean? Yeah, I mean,
55:35
well,
55:36
listen, you've been a very powerful
55:38
person in your industry for many years. It
55:41
seems to me that over the past six
55:43
months or so, you've become arguably one
55:45
of the most powerful, overseeing
55:48
a technology that a lot of
55:50
really smart people are warning at
55:53
best will completely revolutionize
55:55
the world and at worst will completely swallow
55:58
it or, as you said, be lights out for everybody.
55:59
Like how do you deal
56:02
with, I guess I'm asking a spiritual or
56:04
an emotional or psychological question. How
56:07
do you deal with the burdens of that?
56:09
How do you prevent yourself from being, I
56:11
don't know, like another
56:14
way of asking that is like, what is your North Star?
56:17
How do you know that you're making the right choices and decisions?
56:20
Well,
56:20
first of all, I want to like talk about
56:23
having power. Like I don't have, I don't
56:26
know, I was gonna say I don't have super voting shares, but I don't have shares
56:28
at all. I don't have like
56:30
a special vote. Like I serve at
56:32
the pleasure of the board. I do this the old-fashioned way
56:34
where like the board can just decide to replace the CEO. I
56:37
think, I like to think, I would be the first to say
56:40
if I for some reason thought
56:42
I was not doing a good job. And I
56:44
do think, and I don't know what the right way to do this
56:46
is, I don't know what the right timing for it is. But I
56:49
do think like, whoever
56:51
is in charge of
56:53
leading AGI efforts
56:55
should be democratically elected somehow.
56:57
That seems like super reasonable and,
57:00
you know, difficult to argue with to me.
57:02
But it's not like I like have dictatorial
57:04
power over OpenAI, nor would I want
57:06
it. I think that's like really important.
57:09
That's not what I'm suggesting. I'm suggesting that
57:11
like, in the firmament of
57:14
a galaxy
57:16
that seems like all
57:18
of the wealth, all of the ideas, all
57:20
of the, I don't mean power in the Washington, D.C.
57:24
sense of it, but power over the future, is emanating
57:27
out of this particular group of people.
57:29
And you are one of the stars in that firmament and you
57:31
become a brighter and brighter and brighter
57:34
star, like how that's changed
57:36
you and how you think about
57:37
you mean like the tech industry in general,
57:40
not like OpenAI? So I mean tech in
57:42
general, and I mean, AI as the sort
57:44
of pinnacle of the tech world. Um,
57:48
it definitely feels surreal. I
57:51
heard a story once about it's
57:53
always stuck with me for some reason, about
57:55
this like astronaut, former
57:57
astronaut that would years,
58:00
decades after going to the moon, stand
58:03
in his backyard and look up at the moon
58:06
and think it was so beautiful. And then randomly
58:08
remember that, oh, fuck. Decades
58:11
ago, I went up there and walked around on that thing. That's so
58:13
crazy. And I
58:15
think I sort of hope that's how I feel
58:17
about OpenAI decades from now.
58:20
It's on its 14th
58:22
democratically elected president or whatever. I'm
58:26
living this wonderful life
58:29
in this fantastic AGI
58:31
future and thinking about how marvelous
58:33
and how great it is. And then I see something about
58:35
OpenAI. And I remember that, oh, yeah, I used to run that thing. But
58:39
I think you are probably
58:41
overstating the degree
58:44
of power I have in the world
58:46
as an individual. And I probably underperceive
58:49
it myself.
58:50
But you still just kind
58:52
of go about your normal life with all of the normal
58:55
human drama and wonderful experiences
58:57
and just sort
58:59
of the stakes elevate around you or something.
59:02
And you're aware of it, or
59:04
I'm aware of it. And I take it super
59:06
seriously. But then I'm
59:10
running around a field laughing or whatever. And
59:13
you forget it for a little bit. And then you remember. I'm
59:15
trying to figure out how to get this across.
59:17
It is somehow
59:19
very strange and then subjectively
59:22
not that different. But I
59:24
feel the weight of it. Is there a kitchen
59:26
cabinet or, I guess, a Signal
59:28
or WhatsApp group of the people that
59:31
are in your field talking about
59:33
these
59:34
kind of existential questions that this
59:36
technology is raising?
59:38
All the time. Many Signal groups, even
59:41
across competitive companies. I think
59:43
everyone feels the stakes. Everyone feels the weight of
59:45
it. Let's talk a
59:47
little bit about the future and your thoughts on the future.
59:50
The computer scientist and futurist,
59:52
Ray Kurzweil, predicted in 2017 that AI
59:55
robots would outsmart human intelligence by 2029.
59:58
So I don't know. Maybe we'll get there.
59:59
He's also been really optimistic about
1:00:02
AI's ability to extend our lifespans
1:00:04
and heal illness, cure diseases. He
1:00:06
believes by the 2030s we'll be able to augment
1:00:08
our brains with AI devices and possibly
1:00:11
live forever by uploading a person's neural
1:00:13
structure onto a computer or robotic
1:00:16
body. In the Kurzweil vision
1:00:18
of the future, where
1:00:19
do you fall? Does that sound realistic
1:00:22
to you? Like it's not prevented
1:00:24
by the laws of physics. So
1:00:27
sure, but it feels really difficult to me
1:00:30
right now. You know, we
1:00:32
figure everything out eventually, so we'll get there someday, I guess.
1:00:35
There's an idea that has come up
1:00:37
a lot over the past while, right? It's this idea
1:00:39
of techno-utopianism, this ideology
1:00:42
based on the premise that advances in science
1:00:44
and technology can sort of bring
1:00:46
about something like utopia, right?
1:00:48
By solving depression and cancer and obesity and poverty,
1:00:51
even possibly death. Really,
1:00:53
the technology can solve all of our problems.
1:00:56
Do you consider yourself
1:00:58
sort of of that school? Do you believe the
1:01:00
technology solves more of our problems than
1:01:02
it creates? I
1:01:05
was gonna say, I think technology can solve all
1:01:07
problems and continuously create new ones. So
1:01:10
I am a,
1:01:11
I'm definitely a pro technologist, but I
1:01:13
don't know if I would call myself like a techno-utopian. Is
1:01:16
there something that comes to mind that you know
1:01:18
technology can't solve? I
1:01:21
do not think that technology can
1:01:24
replace genuine human connection in the way I understand
1:01:26
it.
1:01:28
One of the things that comes to mind for me when
1:01:30
I think about problems that I don't think technology
1:01:32
can solve, but it seems like a lot of smarter
1:01:34
people than me disagree, is the problem
1:01:37
of death. The
1:01:39
average man in the United States born today
1:01:41
will live until about 75 years old, the average
1:01:43
woman a little higher, about 80 years old. If
1:01:46
you look back to the 1920s, this is an unbelievable
1:01:48
improvement. People then basically weren't expected
1:01:51
to live past 55. You've
1:01:53
invested $180 million into
1:01:55
a startup called Retro Biosciences,
1:01:57
whose mission
1:01:58
is to add 10 years to the human lifespan,
1:01:59
putting us at living, let's call
1:02:02
it 85 to 90 years old on average.
1:02:04
Tell me why you decided to invest in this
1:02:07
and how realistic you think
1:02:09
it is that it's actually going to be able to achieve its goal.
1:02:12
Look, in terms of avoiding biological
1:02:14
death, I share your skepticism, although
1:02:16
maybe, you know, if the computer upload whatever, whatever
1:02:18
thing works, sure. More
1:02:22
health span, that feels super
1:02:24
doable to me. Like right now,
1:02:26
I think our healthcare system, this is
1:02:28
part of why I wanted to invest, is not very good. We
1:02:31
spend a huge amount of money on
1:02:33
a low quality of life generally for someone's later
1:02:35
years. And really what you
1:02:37
would like, or I think what most people would like is to
1:02:40
stay very
1:02:40
healthy for as long as they can and then
1:02:43
have a pretty quick decline
1:02:44
rather than the way it often happens now. And
1:02:48
that feels to me doable.
1:02:51
And I think all of the advances in partial
1:02:53
reprogramming are one of the most exciting things happening
1:02:55
in bio right now. It may turn out to be way harder
1:02:57
than we think. It may turn out to be easier,
1:02:59
but it is certainly quite interesting.
1:03:03
For the
1:03:03
person who's thinking, what
1:03:06
the hell is Sam talking about? The idea
1:03:08
of technology here to extend human
1:03:10
life, that just seems so far off. How
1:03:13
can the average person that doesn't have
1:03:15
your kind of knowledge and insight
1:03:17
into technology
1:03:18
prepare for what
1:03:21
is about to come over the next five or 10 years?
1:03:24
Before this interview, I went on Twitter and I asked people
1:03:26
what I should ask you. And there was a
1:03:28
Twitter user, Alex, who wrote, if you were a
1:03:30
college senior, what majors
1:03:33
and career pathways would you decide
1:03:36
or would you recommend, Sam, knowing what's sort
1:03:38
of around the bend in light of
1:03:41
AI development especially?
1:03:43
I think it's like a big mistake to
1:03:45
put too much weight on advice from other people. In
1:03:47
my life, I have been steered badly
1:03:50
by advice much more often than the other way
1:03:52
around. So you don't give
1:03:55
advice ever? I
1:03:57
think I used to give too much advice. it
1:04:00
was sort of like such a part of like
1:04:02
running YC or being a YC partner. And
1:04:05
now I try to give much less advice with
1:04:07
much more awareness of how frequently
1:04:10
advice is wrong.
1:04:11
So study whatever you want.
1:04:14
Like, study whatever, like, follow your own personal
1:04:17
curiosity and excitement, realizing
1:04:18
the rate of change in the world is going to be high
1:04:21
and that you need to be very resilient to such change.
1:04:24
But don't take your life advice about what
1:04:26
to go work on from somebody else. There
1:04:29
have been a lot of moments in the past decade
1:04:31
where people said a new technology
1:04:33
was going to completely upend the world as we
1:04:35
know it. They said that about virtual
1:04:38
reality. They said it about crypto. And
1:04:40
personally, I don't own a VR headset
1:04:43
and I have $10,000 in Bitcoin that I don't know
1:04:45
how to get out because I forgot my Coinbase password.
1:04:48
I think of the question a lot of people are wondering is
1:04:51
what makes this different?
1:04:54
Well, we might be wrong, right?
1:04:57
Like they might be right. This might not be different.
1:04:59
This could hit a wall. This could change
1:05:02
things somehow much less than we think. Even
1:05:04
if
1:05:04
AI is really powerful, it might just mean the
1:05:06
world goes much faster, but the
1:05:09
human experience doesn't change
1:05:11
that much. I'm
1:05:15
very biased. My personal belief for the last decade
1:05:17
has been that the two most important technological
1:05:19
trends would be AI and abundant
1:05:22
energy. And I've spent all my time on
1:05:24
those things.
1:05:25
And it's
1:05:27
very much what I believe in. And it's very much
1:05:29
like my filter bubble.
1:05:30
So I think that's right. But
1:05:33
I think anyone listening should have
1:05:35
a huge amount of skepticism on me saying that.
1:05:38
And it might not be different. I mean, hopefully
1:05:40
it's going to be better than, like, crypto and the metaverse. But
1:05:43
like even those, I think, are going to be pretty cool. Another
1:05:46
project that I work on is this thing called Worldcoin
1:05:49
that I helped put together a few
1:05:51
years ago. And it was like horribly
1:05:54
mocked for a long time. And
1:05:57
now all of the kind of like crypto tourists
1:05:59
have gone, the true believers are still there,
1:06:01
people see why we wanted to start the project,
1:06:04
and now it's like, I think, super exciting.
1:06:07
So it's just like the future
1:06:09
is hard to predict. These trends take a while to untangle.
1:06:13
Sam Altman, let's do a lightning round. All
1:06:16
right. Sam, what is the best
1:06:18
thing you've ever invested in? Financially
1:06:21
or the thing that, like, brought me the most joy? Joy. Let's
1:06:24
go joy. All of the time spent on OpenAI. Okay. And
1:06:26
financially? I suspect that'll turn out to be
1:06:29
Helion. What is Helion? It's
1:06:31
a nuclear fusion company that
1:06:33
I'm pretty closely involved with. What
1:06:35
is the first thing you ever asked ChatGPT?
1:06:39
That is a good question. I
1:06:43
don't remember. I think it most
1:06:45
likely would have been some sort of arithmetic question. Sam,
1:06:49
do you think UFOs are real? Like do I think
1:06:51
they're aliens or do I think there's been like flying objects
1:06:54
from other militaries that we don't
1:06:56
know what they are? Flying objects. Do you think
1:06:58
that there are aliens? No. What
1:07:00
do you look for when you're interviewing for a candidate
1:07:03
applying for a job at OpenAI? All
1:07:06
of the normal things that I would look for for any
1:07:09
other role, you know,
1:07:11
intelligence, drive, hard
1:07:13
work, creativity, team spirit,
1:07:16
all of the normal things, plus
1:07:18
a real
1:07:20
focus and dedication to the
1:07:23
good AGI outcome. What
1:07:25
is one book that you think everybody should read? I
1:07:28
mentioned it earlier in this conversation,
1:07:30
but I'll say The Beginning of Infinity. I know you don't
1:07:32
like advice, but what's the best piece of
1:07:34
advice that you've ever received? Don't
1:07:37
listen to advice too much. What is the fundamental
1:07:39
truth that you live by? You
1:07:41
can get more done than you think. You are capable
1:07:44
of more than you think. You get
1:07:46
to have dinner tonight with anybody dead
1:07:48
or alive. Your dream dinner. Who's
1:07:50
at that dinner? I think I'd have a very different
1:07:52
answer to this question like any day
1:07:55
given like what I'm thinking about, but
1:07:57
you'd like what I'd pick for today?
1:07:58
Yeah, today. Yeah, I'd pick Alan Turing.
1:08:01
Interesting.
1:08:02
A few years ago, you told a colleague,
1:08:05
and it was in the New Yorker, great profile
1:08:07
about you, that you were ready for the end of the world.
1:08:09
You sort of outed yourself as a prepper. You had guns,
1:08:11
you had gold, you had batteries, you had a patch of land
1:08:13
in Big Sur. Are you still
1:08:15
a prepper?
1:08:17
No, not
1:08:19
in the way I would think about it. It was like a fun hobby,
1:08:21
but there's nothing else to do. I also,
1:08:24
for all of this stuff, I think, like, oh man,
1:08:26
none of this is going to help you if AGI
1:08:29
goes wrong.
1:08:30
But it's like a fun hobby. Sam,
1:08:33
you grew up Jewish. Do you believe in God?
1:08:40
I want to say yes, but not in the Jewish
1:08:42
God or the way that I think most
1:08:44
other people would define that question. What
1:08:47
do you mean by that? I
1:08:51
can't answer this in a lightning round. Okay,
1:08:54
here are some questions from ChatGPT. Sam,
1:08:58
GPT wants me to ask, what futuristic
1:09:01
technology do you wish existed today? Can
1:09:03
I say AGI? Sure. What
1:09:07
technology do you think will be obsolete
1:09:09
in 10 years? GPT-4.
1:09:13
What futuristic mode of transportation
1:09:16
are you most excited about? Fusion
1:09:18
powered starships. And Sam,
1:09:21
last question,
1:09:22
brought to you by your own company. When
1:09:25
were you first introduced to AI? And
1:09:28
what about the concept stuck with you? What
1:09:30
made you believe in its potential?
1:09:32
I must have heard about it first from sci-fi,
1:09:34
but my subjective memory of this
1:09:37
is as a child using
1:09:40
a computer, thinking about what would happen when
1:09:42
the computer could think. How old were you? There's
1:09:48
a million more questions I want to ask you, but we're
1:09:50
out of time and I know you need to go and do
1:09:53
a lot of things at OpenAI. So Sam
1:09:55
Altman, thanks for joining us.
1:09:57
Thanks for having me on.
1:10:04
Thanks for listening. We think AI
1:10:07
is an unbelievably interesting topic,
1:10:09
one we want to cover more on the show. If
1:10:11
you were provoked by this conversation, if
1:10:13
it educated you, if it excited
1:10:15
you, if it concerned you, if it made you
1:10:17
want to go and use ChatGPT and
1:10:20
find out what it's all about, that's great.
1:10:23
Share this conversation with your community and use
1:10:25
it to have a conversation of your own. If
1:10:27
you want to support Honestly, there's just one way to
1:10:29
do it. Subscribe by going
1:10:32
to thefp.com
1:10:34
today. See you next time.