Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
Use Ctrl + F to search
0:00
Support for this show comes from E*TRADE from
0:02
Morgan Stanley. When the markets are
0:05
a mess, you need a team that can help you invest
0:07
with confidence. With E*TRADE's easy
0:09
to use platform plus access to Morgan
0:11
Stanley's in-depth research and guidance, you'll
0:13
have the information you need to make informed
0:16
investment decisions, no matter the market
0:18
conditions. Get started today at e*TRADE
0:20
dot com slash
0:21
vox. E*TRADE Securities LLC
0:23
member SIPC, a subsidiary of Morgan
0:25
Stanley. Support
0:28
for On with Kara Swisher comes from Polestar.
0:31
At Polestar, every inch of every vehicle
0:33
is thoughtfully designed. They're made
0:35
to transform auto performance accelerating
0:37
from zero to sixty in less than four point
0:39
two seconds with fully electric all
0:41
wheel drive. They're made to elevate the driving
0:44
experience with LED lights and a panoramic
0:46
glass roof, and they're made to uphold a greater
0:48
responsibility to the planet using sustainable
0:50
materials and energy saving systems. The
0:53
result is a car that combines the best of
0:55
today with the technology of tomorrow, pure
0:57
performance, pure design, Polestar.
1:00
Design yours and book a test drive today
1:02
at polestar dot com.
1:11
Hi, everyone from New York Magazine and the
1:13
Vox Media Podcast Network. This is
1:15
nonprofit OpenAI, which is
1:17
now very much for profit and a hundred
1:20
percent scarier. Just
1:21
kidding. Actually, I'm not kidding. This is
1:23
On with Kara Swisher, and I'm Kara Swisher. And
1:26
I'm Nayeema Raza. It's amazing how an open
1:28
source nonprofit has moved to being a closed
1:30
source private company with the big deal
1:33
with Microsoft. Are you shocked? No.
1:35
Not even slightly. It's a huge opportunity. I'm
1:38
in San Francisco now, and it's really jumping with
1:40
AI. Crypto didn't quite work out, and
1:42
even those people moved to Miami. And
1:45
so it's very AI oriented right now.
1:47
But everybody's thinking about a startup in
1:49
AI.
1:49
Are you more bullish on AI than web three?
1:52
Well, that's kind of a low bar. So
1:54
Yeah. Yeah. And I've always been bullish on AI. I've talked
1:56
about it a lot over the years and, you know, this
1:58
is just a version of it as it becomes more
2:01
and more sophisticated and useful to people. So,
2:03
I've always thought it was important, and I think most
2:05
of the key technologists in Silicon Valley
2:07
have always thought it was
2:08
important. Agreed. I was talking to a VC yesterday,
2:11
though, about how so many things that are not AI
2:13
are being billed as AI tech companies now,
2:15
and they're really not AI. They might
2:17
have, like, a large language model, but they're not quite
2:19
AI. Yeah. But last episode, we had Reid Hoffman
2:21
on talking about what was possible.
2:24
Mhmm. With AI. And now
2:26
we have one of Reid's many mentees,
2:28
Sam Altman. Sam is the CEO
2:30
of OpenAI and leads the team that
2:32
has given us ChatGPT and
2:35
GPT-4. He actually burst onto
2:37
the scene as a young Stanford dropout, I think, in
2:39
two thousand five with the startup
2:41
Loopt. Right? Is that when you met him? Yes. Mhmm. When
2:43
he had Loopt, I visited him when it was small. It was a little startup,
2:45
and it didn't do very well. It
2:47
was a location based kind of
2:48
thing. I don't even remember. Social
2:50
network, basically a geo social network. You
2:52
know, it was not Facebook, let's just say. So
2:55
he was one of these many, many startup people
2:57
that sort of were all over the
2:58
valley. Very smart, but the company didn't
3:00
quite work out. Yeah. They kind of went bust,
3:03
I think, not many years later, but he became super important
3:05
in the valley. Especially in my generation. He's
3:07
about my age. It's because of Y Combinator.
3:09
He has led the startup accelerator that has
3:12
incubated and launched
3:13
Stripe, Airbnb, Coinbase. Yeah.
3:15
He got there later. It was working before he got
3:17
there, but he
3:18
actually led it to new heights, I think, in
3:20
a lot of ways. It was very He came
3:21
in in twenty fourteen, I don't remember.
3:23
I remember when he took over, but he really invigorated
3:26
it and was very involved in the startup
3:28
scene. It was a great role for him. He was a great cheerleader.
3:30
And you know, he's good at eyeing good
3:32
startups.
3:33
Do you see
3:34
him as, like, kind of one of the Elon Musk
3:36
Peter Thiel, Reid Hoffman of his generation?
3:40
Kind of yeah. There's a lot of really smart
3:42
people. But, yeah, he's definitely special and he really
3:44
did, you know, he had a bigger mentality,
3:47
more like Reid than the others,
3:49
although they had it initially, not Peter Thiel,
3:51
but he was thinking of big things
3:53
with Altman. I I really
3:55
like him. I've gotten to know him pretty well over the
3:57
years. And so I've always enjoyed talking
3:59
to him. He's very thoughtful. He's got a lot of interesting
4:02
takes on things. And this is a really
4:04
big deal now that he's sort of landed on
4:06
taking OpenAI to these
4:08
heights? Yeah, he has. He, like you,
4:10
wanted to entertain the notion of running for office
4:12
in California. He thought about running for governor,
4:14
something that you've talked to him about. Yeah.
4:16
We talked about it. But he went on to revolutionize
4:19
AI. So you think that's better or worse for
4:20
humanity? I don't know. We'll see. You know,
4:22
California is probably easier to fix than what
4:25
we're gonna do about AI once it gets fully
4:27
deployed. Although, you know, the whole issue is there's lots
4:29
of great things and there's lots of bad things. And
4:31
so we wanna focus on both because it's
4:33
like I
4:33
say, it's like when the Internet started. We didn't know what it
4:35
was gonna be. I think a lot of people are being very
4:38
creative around what this could be and what problems
4:40
it could solve and at the same time problems
4:42
it could create. Do you think that the fear
4:44
is overblown? Like, our jobs
4:46
are at risk, AI is gonna, you
4:47
know. On those stories, yes. I Yes.
4:50
It's like saying what has, you know, the car done
4:52
for us or lights or something like that.
4:55
You know, things will change as they always do.
4:57
And so I've always thought
4:59
most of the fears were bull, but as I
5:01
say in the book I'm working on right now, which is why
5:03
I'm in San Francisco, is everything that can
5:05
be digitized will be digitized. That's just inevitable,
5:08
and that's where it's going. So this will soon
5:10
be two bots talking to each other? No.
5:12
No. But search is so antiquated when you think about
5:14
typing words into a box. It's really Neanderthal
5:17
anyways. And this is this is an upright
5:20
Homo sapiens. Well, it's been interesting because
5:22
critics kind of swarmed on ChatGPT
5:24
early on, and Sam was coming back on Twitter
5:27
saying just wait for the next iteration. Right? We now
5:29
have it in GPT-4. We couldn't book the interview
5:31
with him until GPT-4 was out. But
5:34
the model still has many issues,
5:37
and he himself has noted this. He
5:39
tweeted that it's still
5:40
flawed. Still limited, and it
5:42
still seems more impressive on first use
5:44
than it does after you spend more time with
5:46
it. This was about GPT four. Yeah. I
5:48
would agree. But that's a very interesting thing because
5:50
the fact that it's more impressive on first blush than it
5:52
is after you use it is part of the problem because
5:55
I've been using my ChatGPT
5:57
plus and it pulls up all kinds of
5:59
interesting things, like, write me a research paper, and then it
6:01
will look really good.
6:03
Mhmm. And it will have a bunch of
6:05
false information on
6:06
it. So this can
6:07
compound the misinformation problem when something
6:09
looks slick but isn't informed.
6:11
Right? Well,
6:12
data in and data out. Crap in, crap
6:14
out. I mean, it's that's a very simplistic
6:16
way of saying it. But I think, you know, it's like the
6:19
early Internet really sucked too. And
6:21
now it kinda doesn't and sort of does
6:23
and there's great things about it. But if you looked
6:25
at early Yahoo or Google or Google
6:27
was much later, but early Yahoo and others,
6:29
it was a lot of bubble gum and
6:31
bailing wire.
6:32
Alright. Well, let's see what Sam Altman has to say.
6:34
And see if he feels confident in having done Open
6:37
AI versus running for governor of California.
6:39
We'll take a quick break and we'll be back with the interview.
6:48
Vox Creative. This is
6:50
advertiser content brought to you by Lenovo.
6:52
Lenovo is sending people off to a desert
6:55
island. Hi.
6:58
I'm Mark Hurst. I'm a solutions manager
7:01
for hybrid and cloud for Lenovo workstations. I
7:03
get to work with lots of awesome
7:05
companies everywhere from like Formula
7:07
One to
7:09
healthcare, to
7:11
media entertainment. The
7:13
goal for Mark and Lenovo workstations is
7:15
to help power users stay connected with
7:17
their work from
7:18
anywhere. We have to be able to spot
7:20
the people in the cities, but also the people
7:22
in remote locations. So the question
7:25
then becomes, how do you work
7:27
from a desert island? First,
7:29
Identify the challenges. One
7:32
is power. Another
7:34
one is cooling. And
7:38
big one is network connectivity. Right
7:41
now, you've got this workstation; the one
7:43
thing it lacks is flexibility. The
7:47
alternative to that is if you were able
7:49
to connect to that system just using
7:52
a satellite and connect back to
7:54
have all your data somewhere
7:55
else. It's gonna be a lot easier. By
7:58
using Lenovo's remote workstation solutions
8:00
with TGX software, Teams can connect
8:03
from
8:03
anywhere. The Lenovo p
8:05
six twenty includes the AMD
8:08
Ryzen Threadripper PRO processor.
8:11
You've got this amazing power that can be accessed
8:13
from anywhere and everywhere on the planet.
8:15
To learn more visit lenovo dot com slash
8:18
thinkstation p six twenty.
8:22
Support for on with Cara Swisher comes from
8:24
Polestar. Polestar is an electric vehicle
8:26
company driven by sustainable design.
8:28
Every inch of their vehicles is built to maximize
8:31
form, function, and our future. Designed
8:33
to accelerate from zero to sixty in
8:35
less than four point two seconds, with a fully
8:37
electric all wheel drive system designed
8:39
with a sleek exterior using frameless
8:42
mirrors and a panoramic glass roof,
8:44
and designed with a carefully crafted cabin utilizing
8:46
completely sustainable materials. This
8:48
is an electric vehicle unlike any other,
8:51
pure performance, pure design, Polestar.
8:54
Design yours and book a test drive today
8:56
at polestar dot com.
9:03
Sam, it's great to be in San Francisco, rainy
9:05
San Francisco to talk to you in person. We
9:07
need the rain. It's great.
9:08
I know. This atmospheric river is not kidding.
9:10
A lot of them.
9:11
Soaked on the way here. I miss San
9:13
Francisco. I'm here for a couple of months. I'm
9:15
going to I'm trying to convince
9:16
you that we're having a moment here. I agree. It's time
9:18
to come back. I love San Francisco. I've never really left
9:21
in my heart. So you started
9:23
Loopt. That's when I met you. Explain
9:25
what it was.
9:26
It was a location based social app
9:28
for mobile phones.
9:29
Right. So what happened?
9:31
The market wasn't there, I'd say, is the number
9:33
one thing. Yeah. Because
9:36
Well, I think, like, you can't force
9:38
a market. Like, you can have an idea about what something
9:40
or what people are gonna like -- Mhmm. -- as a startup
9:42
part of your job is to be ahead of it.
9:45
And sometimes you're right about that and sometimes
9:47
you're
9:47
not. You know, sometimes you make Loopt, sometimes you make
9:49
open AI. Yes. Right. Right. Exactly. Right.
9:51
But you started OpenAI in two thousand fifteen after
9:53
being at Y Combinator. And
9:56
late last year, you launched ChatGPT.
9:58
Talk about that transition. You had been you
10:00
reinvigorated Y Combinator in a lot
10:02
of
10:02
ways. I was handed such an
10:04
easy task with Y Combinator. I mean, like,
10:07
I don't know if I reinvigorated. It was sort of
10:09
a super great thing by the time
10:11
I took
10:11
over. Well, what I mean is I think it was it
10:13
it got more prominence. You changed things around.
10:15
I don't mean to say it was
10:16
failing. Yeah. Yeah. Not at all. Not at all. I
10:19
think I scaled it more --
10:21
Mhmm. -- and we sort of took on longer
10:24
term more ambitious projects. OpenAI
10:27
actually was, like,
10:29
something I just helped start while at
10:31
YC. Mhmm. And we funded other
10:33
companies, some of which I'm very closely involved with, like,
10:35
Helion, the nuclear fusion company. They were gonna take a
10:37
long time. So I I definitely, like, had
10:39
a thing that I was passionate about, and we did more of
10:41
it. But I kind of just
10:43
tried to, like, keep PG and Jessica's vision
10:45
going there.
10:46
That's Paul Graham and Jessica
10:48
Livingston. But you shifted to OpenAI. Why was that?
10:50
When you're in this position, which is a high profile position
10:53
in Silicon Valley, sort of king of startups
10:55
essentially. Why go
10:57
off? Is it you want it to be an entrepreneur again?
10:59
No. I
11:01
don't I am not a natural fit
11:03
for CEO. Like, being an investor
11:06
really, I think, suits me very well. Mhmm.
11:08
I got convinced that AGI was
11:10
gonna happen and be the most important thing I
11:12
could ever work on. I think it is gonna, like,
11:15
transform our society in many ways.
11:18
And, you know, I won't pretend
11:20
that as soon as we started OpenAI, I was sure it was gonna
11:22
work. But it became clear
11:24
over the intervening years and certainly by
11:27
twenty
11:27
eighteen, twenty nineteen that we
11:29
had a real chance here.
11:30
What was it that made you think that? A
11:33
number of things. It'd be hard to point to just a single
11:35
one, but by the time we made GPT
11:37
two, which was still weak
11:39
in a lot of ways, but you could look at scaling
11:41
laws and see what was gonna happen. Mhmm. I
11:43
was like, this can go very, very
11:45
far. And I got super excited about
11:47
it. I've never never stopped being super excited about it.
11:49
Was there something you saw that did it just
11:51
scaled? Or what was the Yeah.
11:53
It was the like, looking at the data of
11:55
how predictably better we could make
11:57
the system with more compute with more
11:59
data. Mhmm. There'd already been
12:01
a lot of stuff going on at Google with
12:03
DeepMind. They had bought that earlier, right, around
12:05
that. Yeah. There had been a bunch of stuff, but
12:08
somehow, like, It wasn't
12:10
quite the trajectory that has turned
12:12
out to be the one that really
12:14
works.
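He's referring to the scaling-law observation above: plot the validation loss of smaller training runs against compute, and the trend is regular enough to extrapolate. A minimal sketch of that idea, with invented numbers; real fits (the Kaplan-style power laws) track more variables, but the log-log regression is the core of "you could look at scaling laws and see what was gonna happen."

```python
import numpy as np

# Invented (compute, loss) pairs standing in for small training runs.
# Scaling-law work fits curves like loss ~= a * compute^(-b) and uses
# them to predict how much better a bigger run will be.
compute = np.array([1e15, 1e16, 1e17, 1e18])  # training FLOPs (made up)
loss = np.array([4.2, 3.5, 2.9, 2.4])         # validation loss (made up)

# Fit the power law by linear regression in log-log space.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), slope  # b is negative: loss falls as compute grows

def predicted_loss(c: float) -> float:
    """Extrapolate the fitted power law to a larger compute budget."""
    return a * c ** b

print(predicted_loss(1e21))  # predicted loss at 1,000x the largest run
```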
12:14
But in two thousand fifteen, you wrote that superhuman
12:17
machine intelligence is probably the greatest threat to
12:19
the continued
12:19
existence
12:20
of humanity. Explain.
12:22
I still think so. Okay. Alright. We're gonna get into
12:24
that. Why did you write that then? And
12:26
yet you also called it the greatest technology
12:29
ever. I
12:30
still believe both of those things. I think
12:31
at this point more of the world would agree on that. At the time,
12:33
that was considered an extremely, like,
12:36
crazy position. So
12:37
explain, roll it out. You wrote it was probably
12:39
the greatest threat to the continued existence of humanity and
12:41
also one of the greatest technologies that could
12:43
improve humanity
12:45
Roll
12:45
those two things out? Well,
12:47
I think we're seeing finally little previews
12:49
of this with ChatGPT especially,
12:51
now that we've put GPT-4 out, and
12:54
people can see this vision where
12:56
just to pick one example out of the thousands could
12:58
talk about, everyone in the world can have an
13:00
amazing AI tutor on their phone with
13:02
them all the time for anything they wanna learn.
13:05
That's really we need that. I mean, that's that's
13:07
wonderful. That'll make the world much better. The
13:09
creative enhancement that people are able to
13:11
get from using these tools to do whatever their creative
13:14
work is, that's fantastic. The economic
13:16
empowerment, all of these things. And
13:18
again, we're seeing this only in the
13:20
most
13:20
limited, primitive, larval way.
13:23
But at some point, it's like, well, no, we can use these things
13:25
to cure disease. So what is the threat? Because
13:28
when I try to explain it to regular people
13:30
who don't quite
13:31
No. You're not. You're not a regular person.
13:33
I'm so offended. I'm not a regular person, but
13:35
When the Internet started, nobody knew what it was
13:37
gonna do. When you thought superhuman
13:39
machine intelligence is probably the greatest
13:40
threat, what did you mean by that? I think
13:42
there's levels of threats. Mhmm. So today,
13:44
we can we can look at these systems and
13:46
say, alright, no imagination
13:49
required. We can see how this can contribute
13:51
to computer
13:53
security exploits or disinformation
13:57
or other things that can destabilize society.
13:59
Certainly, there's gonna be economic transition.
14:03
And those
14:05
those are not in the future. Those are things we can look
14:07
at now. Mhmm. In the medium
14:09
term, I think we can imagine if these systems
14:11
get much, much more powerful. Now what happens
14:14
if a really bad actor gets to use them
14:16
and tries to like figure out how much
14:18
havoc they can wreak on the world or harm they can
14:20
inflict. Yeah. And then we can go further
14:22
to all of the sort of
14:25
traditional sci fi, what happens
14:27
with the kind of runaway AGI scenarios
14:29
or anything like that? Now, the
14:31
reason we're doing this work is
14:33
because we want to
14:35
minimize those downsides while still letting society
14:38
get the big upsides. And we think it's very
14:40
possible to do that, but it requires
14:43
in our belief, this continual
14:45
deployment in the world where you let people
14:48
gradually get used to this technology, where you
14:50
give institutions, regulators,
14:53
policymakers time to react to it, where
14:55
you let people feel it,
14:58
find the exploits. The
15:00
creative energy of the world will come up with
15:02
use cases we and all the red teamers we could
15:04
hire would never imagine. And so
15:07
we wanna see all of the good and
15:09
the bad and figure out how to continually minimize
15:11
the bad and improve the benefits. And
15:14
you can't do that in the lab. And
15:16
this idea that we have that
15:18
we have an obligation and society will be better
15:20
off for us to build in public
15:23
even if it means making some mistakes along
15:25
the
15:25
way. Right. I think that's really important.
15:27
When people critiqued ChatGPT, you
15:29
said, wait for GPT-4. Now that
15:31
it's out, has it met expectations? A
15:34
lot of people seem really happy with it. Mhmm. There's
15:36
plenty of
15:36
things. But has it met your expectations?
15:38
Yeah. I'm proud of it. Mhmm. Again,
15:40
very long way to go, but as a step forward, I'm
15:42
proud of it. So you tweeted that at first glance
15:44
GPT-4 seems more impressive
15:46
than it actually is. Why is that?
15:48
Well, I think that's been an issue with every version of
15:50
these systems, not particularly GPT-4.
15:52
You find these, like,
15:54
flashes of brilliance before you find
15:56
the problems. Mhmm. And so
15:59
I think what someone used to say about GPT-3 that
16:01
has really stuck with me is it is the world's greatest
16:04
demo creator because
16:06
you can tolerate a lot of mistakes there. But
16:08
if you need a lot of reliability for a production
16:10
system, it wasn't as good at that. Now,
16:13
GPT-4 makes fewer mistakes. It's more reliable, more
16:15
robust. But still a long way to
16:16
go. One of the issues is hallucinations.
16:18
I thought
16:19
it was just kind of a creepy word, I have to say,
16:21
that you used. What did you wish to call it instead? Mistakes.
16:24
Mistakes or something. Hallucinations feels like
16:26
it's
16:26
sentient. It's interesting. Hallucinations, that one
16:28
doesn't trigger for me as sentient, but
16:31
I really try to make sure we're picking words
16:33
that are in the tools
16:34
camp, not the creatures camp because
16:36
I think
16:36
it's tempting to anthropomorphize this
16:38
That's correct. That way. That's correct. And as you
16:40
know, there were a series of reporters wanting to date
16:43
Sydney. But anyway, sometimes a bot just makes
16:45
things up out of thin air, and that's hallucination.
16:47
It'll cite research papers or news
16:50
articles that don't exist. You said GPT-4
16:52
does this less than GPT
16:54
three. Why don't you give them actual names? But
16:56
it still happens.
16:56
Oh, that would be anthropomorphizing. I think it's good that it's letters
16:59
plus a number.
16:59
Not like Bard. Anyway. But it just it still
17:01
happens. Why is that? So the these
17:05
systems are trained to do something
17:07
which is predict the next word
17:09
in a sequence. Right. And so it's trying
17:11
to just complete a pattern and given
17:14
its training set, this is the most likely completion.
17:17
That said, the decrease from
17:19
three to three point five to four, I think, is very promising.
17:21
Mhmm. We have we track this internally. And
17:23
every week, we're able to get the number lower
17:26
and lower and
17:26
lower. I think it'll require combinations
17:28
of model scale, new ideas,
17:30
a lot of user feedback. Model scale is more data?
17:33
Not necessarily more data, but more compute. Data's not the
17:35
problem. Human
17:37
feedback, people, like, flagging the errors for us, developing
17:39
new techniques so the model can tell when
17:41
it's about to go off the rails and just say,
17:43
this is a mistake. Yeah. One of the issues is
17:45
that it obviously compounds a very serious misinformation
17:48
problem. Yes. So we don't we
17:50
we pay experts to flag
17:52
to go through and label data for us. Mhmm.
17:55
Not just flag. But we employ people. We have contractors.
17:57
We work with external firms. We say we need
18:00
experts in this area to help us go
18:02
through and improve things. You don't just wanna
18:04
rely totally on, you know, random users
18:06
doing whatever trying to troll you or anything
18:08
like that.
18:08
Mhmm. So humans more compute,
18:11
what else?
18:12
To reduce the errors? Yeah. I think that
18:15
there is gonna be a big new algorithm kind
18:17
of idea, a
18:20
different way that we train or use or
18:22
tweak these models,
18:25
a different architecture perhaps. So I think
18:27
we'll find that at some
18:28
point. Meaning what, for the non-
18:30
techie? The different architecture. Oh,
18:32
well, it could be a lot of things, but you could
18:34
say, like, different algorithm, but just some different idea
18:36
of the way that we create or use these models --
18:38
Mhmm. -- that encourages during
18:41
training or inference time
18:43
when you're when you're using it that encourages the
18:47
the models to really ground themselves in truth
18:49
-- Mhmm. -- be able to cite sources. Microsoft
18:51
has done some good work
18:52
there. We're working on some things. Mhmm.
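To make the mechanics he described above concrete: "predict the next word in a sequence" means repeatedly sampling from a learned probability distribution over possible next tokens. A toy sketch, with an invented vocabulary and made-up probabilities; it also shows why hallucination falls out of the objective, since the sampler picks what is likely given the pattern, not what is true.

```python
import random

# Toy next-word model: for each word, a distribution over what comes next.
# All words and probabilities are invented; a real LLM learns distributions
# over tens of thousands of tokens from its training set.
NEXT_WORD_PROBS = {
    "the":   {"study": 0.5, "paper": 0.3, "model": 0.2},
    "study": {"(Smith,": 0.6, "shows": 0.4},  # a plausible-looking citation
    "paper": {"shows": 1.0},
}

def generate(first_word: str, steps: int) -> str:
    words = [first_word]
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break
        # Sample the next word in proportion to its learned probability.
        # The objective is "likely given the pattern," not "true" -- which
        # is why a fluent, confident completion can be entirely made up.
        choice = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(choice)
    return " ".join(words)

print(generate("the", steps=2))  # e.g. "the study (Smith," -- fluent, unverified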
18:54
So talk about the next steps. How does
18:57
this move forward? I
18:59
think we're sort of on this very
19:02
long term exponential.
19:04
And that's
19:06
I don't mean that just for AI. Although AI too,
19:08
I mean that is like cumulative human
19:11
technological progress. And
19:14
it's very hard to calibrate on that, and we
19:16
keep adjusting our expectations. I
19:18
think if we told you five
19:20
years ago, we'd have GPT-4 today.
19:23
You'd maybe be impressed. Mhmm. But
19:25
if we told you four months
19:27
ago after you used ChatGPT, we'd have GPT-4
19:29
today, probably not that impressed. And
19:32
yet it's the same continued exponential. So
19:34
maybe where we get to, a year from
19:36
now. You're like, yeah, you know, it's better, but sort
19:39
of the new iPhone's always a little better
19:41
too. Right. But
19:43
if you look at where we'll be in ten years,
19:45
then
19:45
I think you'd be pretty impressed.
19:47
Right. Right. Actually, the old iPhones were not as impressive
19:49
as the new. For
19:50
sure, but it's been such a gradual process.
19:52
That's correct.
19:52
unless you hold that original one and this one back
19:54
to back. Right. Right. I had I just found mine the
19:56
other day, actually. Interestingly enough, that's a very good
19:58
comparison. You're getting criticism
20:01
for being secretive and you said competition and
20:03
safety require that you do
20:04
that. Critics say that's a
20:06
cop out. It's just about competition. What's
20:08
your response? I
20:10
mean, it's clearly not.
20:13
We make no secret of, like, we would
20:15
like to be a successful effort. Yeah.
20:18
And I think that's fine and good, and we try to
20:20
be clear. But also, We
20:22
have made many decisions over the years in
20:25
the name of safety that have been widely ridiculed
20:27
at the time that people later
20:31
come to appreciate. Even
20:33
in the early versions of GPT when
20:35
we talked about not releasing model
20:37
weights or releasing them gradually so that
20:39
people have time to adapt, we got really criticized
20:41
for that, and I totally stand by that decision.
20:44
Would you like us to, like, push a button and open source
20:47
four and drop those weights into the
20:48
world? Probably not. Probably not. One
20:50
of the excuses that Tech always uses is you don't
20:52
understand it. We need to keep it back. It's often
20:55
about
20:55
competition. Well, for us, it's the opposite.
20:57
I mean, we've said all along, and this is different than
20:59
what most other AGI efforts
21:01
have thought is everybody
21:04
needs to know about this. Like,
21:07
AGI should not go be built in
21:09
a secret lab -- Mhmm. -- with only the people
21:12
who are, like, privileged and smart enough to
21:14
understand it. Part of the reason that we
21:16
deploy this is I think we need
21:18
the input of the world and
21:20
the world needs familiarity
21:23
with what is in the process of happening, the
21:25
ability to weigh in to shape this together. Like,
21:28
we want that, we need that input and people
21:30
deserve it. So I think we're, like,
21:32
not the secretive OpenAI. We're we're quite the opposite.
21:35
Like, we put this we put the most
21:37
advanced AI in
21:39
the world in an API that anybody
21:41
can use. I don't think
21:43
that if we hadn't started doing that
21:45
a few years ago, Google or anybody
21:47
else would be doing it
21:48
now. They would just be using it secretly to make
21:50
Secretly, to themselves. So
21:51
you think you're forcing it out.
21:53
Well, you're but you are in competition.
21:55
And let me let me go back to someone
21:57
who was one of your original funders. Elon
21:59
Musk, he's been openly critical of OpenAI,
22:01
especially as it's gone to profits. He
22:03
said OpenAI was created as an open
22:06
source, which is why I named it OpenAI,
22:08
nonprofit company to serve as a counterweight
22:10
to Google, but now has become a closed
22:12
maximum profit company effectively controlled
22:14
by Microsoft, not what I intended at
22:17
all. We're talking about open source versus
22:19
closed, but what about his
22:21
critique that you're too close
22:23
to the big
22:23
guys?
22:24
I mean, most of that is not true. Okay.
22:26
And I think let's go through those. We're
22:29
not controlled by Microsoft. Mhmm. Microsoft
22:31
doesn't even have a board seat on us. We are
22:33
an independent company. We have an unusual
22:35
structure where we can make very
22:37
different decisions than what most companies
22:40
do. Mhmm. I think a fair part
22:42
of that is we don't open source everything anymore.
22:44
We've been clear about why we think we were wrong
22:46
there originally. We still do open source
22:49
a lot of stuff. You know, open sourcing
22:51
CLIP was something that kicked off
22:53
this whole generative image world. We
22:56
recently open sourced Whisper. We open source
22:58
tools. We'll open source more stuff in the future.
23:01
But I don't
23:03
think it would be good right now for
23:05
us to open source GPT-4, for example.
23:07
I think that would cause some degree
23:09
of havoc in the world or at least there's
23:11
a chance of that. We can't be certain that it wouldn't.
23:15
And by putting it out behind an API,
23:17
we are able to
23:19
get many, not all, many of the
23:21
benefits we want of broad access
23:23
to this society being
23:24
able to understand it, update, and think about it.
23:27
When we find some of the
23:29
scarier downsides,
23:31
we're able to then fix them. Mhmm.
23:33
How do you respond when he's saying you're a closed
23:35
source, maximum-profit company? I'll leave out
23:37
the controlled by Microsoft, but you are in a
23:39
strong partnership with Microsoft. Which is
23:41
against what he said. I remember years ago when he
23:44
talked about
23:44
this. This was something he talked about a
23:46
lot.
23:46
What was his worry? Oh, we don't want these big
23:49
companies to run it. If they run it, we're doomed.
23:51
You know, he was much more dramatic than most
23:53
people. So we're a capped profit company.
23:55
Yeah. We we invented this new thing -- Mhmm.
23:58
-- where we we started as a nonprofit.
24:00
Explain that. Explain what a capped profit is. We
24:02
our shareholders, which are
24:05
our employees and our investors, can make a
24:07
certain return, like their their shares have
24:09
a certain price that they can get to. But
24:11
if OpenAI goes and becomes a
24:13
multi trillion dollar company, whatever, almost
24:16
all of that flows to the nonprofit
24:18
that controls us.
24:19
So, like, people hit a cap and then
24:21
they don't get any more. What is the
24:22
cap? It continues to vary as we have
24:24
to raise more money, but it's, like, much,
24:26
much, much, and we'll remain much smaller
24:28
than, like, an exec OpenAI. What?
24:31
In terms of, like, a number, I truly don't know. But
24:33
it's not insignificant. The nonprofit gets
24:36
a significant chunk of the upside? Oh,
24:38
well, it gets no. It gets everything over a
24:40
certain amount. So if we're not very successful,
24:42
the nonprofit might not well, it gets a little bit along
24:45
the way, but it won't get any appreciable amount. Mhmm.
24:47
The the goal of the cap profit is in in the
24:49
world where we do succeed at making AGI.
24:51
Mhmm. And we have a significantly different structure than
24:54
everybody else and, you know, that could become much
24:56
more valuable, I think, than maybe
24:58
any company out there today. Mhmm. That's
25:01
when you want almost all of it to flow to a
25:02
nonprofit. Right.
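The capped-profit mechanics he's describing reduce to a few lines of arithmetic: investor returns stop at some multiple of the investment, and everything above that flows to the controlling nonprofit. A sketch with a placeholder cap multiple, since Altman says the real number varies and doesn't give one.

```python
def split_proceeds(investment: float, cap_multiple: float, proceeds: float):
    """Split hypothetical proceeds between capped investors and the nonprofit.

    cap_multiple is a placeholder -- Altman says the cap varies as they
    raise more money; no official figure is given in this interview.
    """
    investor_cap = investment * cap_multiple
    to_investors = min(proceeds, investor_cap)  # returns stop at the cap
    to_nonprofit = proceeds - to_investors      # everything above flows over
    return to_investors, to_nonprofit

# Modest outcome: nothing appreciable reaches the nonprofit.
print(split_proceeds(1e9, cap_multiple=100, proceeds=5e10))
# "Multi trillion dollar company, whatever": almost all flows to the nonprofit.
print(split_proceeds(1e9, cap_multiple=100, proceeds=2e12))
```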
25:05
I wanna get back to what Elon was talking about. He was very adamant
25:07
at the time. And again, overly dramatic
25:09
that Google and Microsoft
25:12
and Amazon were gonna kill us. I think
25:14
he had those kinds of words. There
25:16
needed to be an alternative. What
25:20
changed in your estimation? To
25:23
do that, to change from that
25:25
idea. Oh, it was very simple. Like,
25:27
when when we realized the level
25:29
of capital we were going to need to do
25:31
this. Scaling turned out to be far more
25:33
important than we thought, and we already thought it was gonna be
25:35
very important. And
25:37
we tried for a while to raise, to
25:40
find a path to that level of capital as
25:42
a nonprofit. Mhmm. There was no one that was willing
25:44
to do it. So we didn't wanna become a
25:46
fully for profit company. We wanted
25:49
to find something that would let us get the
25:52
access to and the power
25:54
of capitalism to finance
25:57
what we needed to do. But
25:59
still be able to
26:02
fulfill and be governed by the nonprofit
26:04
mission. So having this nonprofit that
26:06
governs this cap the profit LLC
26:09
given the playing field
26:12
that we saw at the time and I still think that
26:14
we see now was the way to get
26:16
to the best of all worlds we could
26:17
see. In a really well functioning society,
26:20
I think this would have been a government project. That's
26:22
correct. I was just gonna make that
26:24
point. And the
26:24
government would have been your funder.
26:27
We talked to them. That
26:30
was not It wouldn't have
26:32
just been that they would have been our
26:33
funder; they would have started
26:35
the project. We've done things like
26:37
this before in this
26:38
country. Right. Sure. But the
26:40
answer is not to just say, oh, well, the government
26:43
doesn't do stuff like this anymore. So
26:45
we're just gonna sit around and, you know, let other
26:48
countries run right by us and get an AGI
26:50
and do whatever they want to
26:51
us. Mhmm. It's we're gonna, like,
26:53
look at what's possible on this playing
26:55
field. Right. So Elon used to be the
26:57
cochair and you have a lot of respect for him. So you
26:59
thought deeply about his critiques. Have you spoken
27:02
to him
27:02
directly? Was there a break? Or what? You
27:05
two were very close, as I recall.
27:06
Have you spoken directly recently? Yeah.
27:08
And what do you make of the critiques?
27:11
When you hear them from him, I mean,
27:13
he can be quite
27:14
in your face about this. He's got his style.
27:17
Yeah. And to say a
27:19
positive thing about Elon -- Yeah. -- I think he
27:21
he really does care
27:24
about a good
27:25
future.
27:25
He does. With AGI? That is correct. And
27:28
he's I mean, he's a jerk, whatever else
27:30
you wanna say about him. He has a style that
27:33
is not a style that I'd wanna have for myself. It's
27:35
changed. But I think he
27:38
he does really care and
27:40
he is feeling very stressed about
27:43
what the future is gonna look like? For humanity.
27:45
For humanity. Yeah. He he did apply that both to
27:47
when we did an interview at Tesla. He's like, if
27:49
this doesn't work, we're all doomed, which was sort
27:52
of centered on his car. But nonetheless.
27:54
And the same thing with AI, and
27:56
this was something he talked about almost
27:58
incessantly. The idea of either
28:01
AI taking over and killing
28:03
us or maybe it doesn't really
28:05
care. Then he decided we were like anthills.
28:07
Do you remember that?
28:07
I don't know the anthills part. He said we're like,
28:10
you know how we think when we're building a highway,
28:12
anthills are there and we just go over them without thinking
28:14
about it. So they don't it doesn't really care. And then
28:16
he said we're like a house cat. Maybe they'll feed us
28:19
and bell
28:19
us, but they don't really care about us. It
28:22
went on and on. It went
28:22
-- Yeah. -- changed and iterated over time. But I
28:25
think the critique that I would most
28:27
agree with him on is that these big companies would
28:29
control this and there couldn't be
28:31
innovation in the
28:32
space.
28:32
Well, I think we are evidence against
28:35
that.
28:35
Except Microsoft, obviously. That's right. They're like
28:37
big investor, but again -- Yeah. -- not even a
28:39
board member.
28:40
So when you think
28:40
Like, true, full independence from them.
28:42
So you think you are a startup
28:45
in comparison with a giant partner?
28:47
Yeah. I think we're a startup next to a giant, I know. We're a
28:49
big startup at this point. Mhmm.
28:50
So and there was no way to be a nonprofit
28:53
that would work.
28:54
I mean, if, you know, someone wants to give us tens
28:56
of billions of dollars of nonprofit capital, we can't go
28:58
make that work. Or
28:59
the government, which they're not.
29:00
We try to
29:01
you know, he and others are are
29:03
working on different things. He
29:04
has an anti woke AI play.
29:07
Greg Brockman also said you
29:09
guys made a mistake by creating AI with
29:12
a left leaning political
29:13
bias. How do you what do you
29:15
think of the substance of those critiques? Well,
29:18
I think
29:18
that This
29:19
was your cofounder. Yeah. Yeah. I think that the
29:22
reinforcement learning from human feedback
29:24
on the first version
29:26
of ChatGPT was pretty
29:29
left biased, but that is
29:32
now no longer true. It's just become
29:34
an Internet meme. Mhmm. There are people some people
29:36
who are intellectually honest about this. If you go
29:38
look at, like, GPT-4 and
29:40
test it on
29:40
this, it's relatively neutral. Not to say we don't
29:43
have more work to do. The main thing though is I don't
29:45
think you ever get to two people agreeing
29:47
that any one system is unbiased on
29:49
every topic. And so giving
29:52
users more control and also teaching people
29:54
about, like, how these systems work
29:56
that there is some randomness in responses, and the worst
29:58
screenshot you see on Twitter is not representative
30:00
of what these things
30:01
do. I think it's important. So when
30:03
you said it had a left leaning bias, what did
30:05
that mean to you? And of course,
30:07
they will run with
30:08
that. They'll run with that quite far.
30:10
People would give it these tests that
30:13
score you on, you know, the
30:15
political spectrum in America or whatever. Mhmm.
30:17
And, like, one would be all the way on the right, and one would
30:19
be all the way on the left. It would get, like, a
30:21
ten on all of those tests -- Mhmm. -- the first version.
30:24
Why? Because
30:26
of what was a number of
30:27
reasons, but largely because
30:29
of the reinforcement learning
30:32
from human feedback step.
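For the non-techie: that "reinforcement learning from human feedback step" starts with a reward model trained on human choices between pairs of answers, typically with a pairwise Bradley-Terry-style loss. A minimal sketch with stand-in scores; the point is that whatever the labelers collectively prefer, politically or otherwise, becomes the signal the model is tuned toward.

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise loss used to train an RLHF reward model (Bradley-Terry style).

    score_* stand in for a reward network's outputs on two candidate answers.
    Minimizing this pushes the model to rank whichever answer the human
    labelers preferred higher -- so the labelers' collective leanings become
    the reward signal the chatbot is then tuned against.
    """
    # -log sigmoid(chosen - rejected): small when chosen >> rejected.
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# If raters consistently prefer one framing of a political question, the
# tuned model drifts that way -- the "left biased" first version Altman
# describes, later adjusted with different feedback and techniques.
print(preference_loss(2.0, -1.0))  # model agrees with labelers: low loss
print(preference_loss(-1.0, 2.0))  # model disagrees with labelers: high loss
```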
30:34
We'll be back in a minute.
30:48
Support for On with Kara Swisher comes
30:50
from NerdWallet. It feels like
30:52
the moment you start thinking about signing up for a
30:55
new credit card, your mail becomes about ninety
30:57
five percent junk offers. There's so
30:59
many options, it can be hard to find the cash back
31:01
credit card that's right for you. NerdWallet
31:03
can help you make smart decisions by comparing
31:06
top financial products side by side to find
31:08
a winner. NerdWallet's team of nerds
31:10
use their expertise to help you make smart
31:12
financial decisions. NerdWallet can
31:14
help you turn that infinite array of offers
31:17
into a few top options with objective
31:19
reviews and side by side comparisons. NerdWallet
31:22
can help you find a cashback card with bonus
31:24
percentages in categories you spend the
31:26
most in like gas or groceries. All
31:29
that cashback would be perfect for anyone
31:31
looking to plan a few more road trips next
31:33
year. Ready to make a financial decision,
31:35
compare and find top credit cards, savings
31:38
accounts, and more at nerd wallet dot
31:40
com. NerdWallet, the smartest decision
31:42
for all your financial decisions.
31:46
Is it possible to be an optimist anymore?
31:49
With so much difficult news, some might
31:51
say optimism requires a set of rose
31:53
colored glasses. Going around pretending
31:56
everything's just fine. Technology
31:58
leader Barbara Hampton, she's CEO of
32:00
Siemens USA, offers a different
32:02
perspective. Optimism, she
32:04
says, isn't about looking away from problems.
32:07
It's about looking right at them, while believing
32:09
you can find solutions. Which is exactly
32:12
what Barbara does in the optimistic
32:14
outlook podcast. Take the
32:16
climate crisis. The podcast details
32:18
technologies we can use today to decarbonize
32:21
industry and infrastructure, addressing
32:23
three quarters of all global carbon emissions.
32:26
That's optimistic outlook. Subscribe
32:28
wherever you listen to podcasts.
32:34
What do you think is the most viable threat
32:36
to OpenAI? I hear you're watching Claude
32:38
very carefully. It's the bot from Anthropic,
32:41
a company that's founded by OpenAI folks
32:43
and backed by Alphabet. Is
32:46
that it? We're recording this on Tuesday, Bard
32:48
launched today. I'm sure you've been discussing it internally.
32:50
Talk about those two to start.
32:52
Honestly, I mean, I try to pay some attention
32:54
to what's happening with all these other
32:56
things. It's gonna be an unbelievable
32:59
competitive space. I think this is the first
33:01
new technological platform in
33:03
a long period of time. The
33:05
thing I worry about the most is not any of
33:07
those. Because I think we can, you
33:10
know, there's room for a lot of
33:12
people and also I think we'll just continue to offer
33:14
the best the best product. The
33:16
thing I worry about the most is that we're somehow
33:19
missing a better approach -- Mhmm. --
33:21
and that this idea, like everyone's chasing us
33:23
right now on large language models to kind of train in the
33:25
same way. I don't worry about them. I worry
33:27
about the person that has some very
33:30
different idea about how to make a more useful system.
33:33
Like a Facebook 2.0. Probably
33:35
not Facebook, to be honest. No, like a Facebook
33:37
2.0. No. Not like Facebook. Not Facebook.
33:39
No. Facebook's not gonna come up with anything. Unless Snapchat
33:42
does, and then they'll copy it. I'm teasing
33:44
sort of. But you don't feel like these other
33:46
efforts that they're sort of in your same
33:48
lane. You're all competing. So it's the one
33:50
that is not That's what I would worry about
33:52
more. Like, the people that
33:54
are trying to do exactly what
33:56
we're doing
33:57
but, you know, scrambling to
33:59
catch up to it. But is there one that
34:01
you're watching more carefully? Not
34:04
especially. Really?
34:06
I am.
34:06
I kind of don't believe you, but really I mean,
34:09
no, the things that I was gonna say, the things that I pay
34:11
the most attention to are not like language
34:13
model startup number two hundred and
34:15
seventeen. Mhmm. It's when I hear about
34:17
someone, it's like, these are like
34:20
three smart people in a garage. With
34:22
some very different theory of how to build AGI.
34:24
Mhmm. And that's when I pay
34:26
attention. Is there one that you're paying attention to
34:28
now?
34:31
There is one, but I don't wanna say.
34:32
Okay. You really don't wanna say? I really don't wanna say. Okay.
34:35
What's the plan for making money? So we're sort
34:37
of like we have a platform, which is this API
34:39
that people can use to access the model. And then we have, like,
34:41
a consumer product on top of it -- Right. -- and
34:43
the consumer product is twenty bucks a month for
34:45
the sort of premium version and the
34:47
API you just pay us per token like
34:50
basically like a meter.
34:51
Businesses would do that depending on what they're using
34:53
it for.
34:53
If they're, say, an employee in a hotel or wherever.
34:55
The more you use it, the more you pay. The more you use it, the more
34:57
you pay.
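The meter works like any metered utility: a per-token price times usage. A small sketch with placeholder prices, not OpenAI's actual rates.

```python
# Hypothetical per-token rates -- placeholders, not OpenAI's actual prices.
PRICE_PER_1K_TOKENS = {
    "small-model": 0.002,  # dollars per 1,000 tokens
    "large-model": 0.06,
}

def monthly_api_cost(model: str, tokens_used: int) -> float:
    """Metered billing: the more you use it, the more you pay."""
    return PRICE_PER_1K_TOKENS[model] * tokens_used / 1000

# A business running, say, 50 million tokens a month through the API:
print(f"${monthly_api_cost('small-model', 50_000_000):,.2f}")  # $100.00
print(f"${monthly_api_cost('large-model', 50_000_000):,.2f}")  # $3,000.00
```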
34:59
One of the things that someone said to me, and I thought was very smart, is if the original
35:01
Internet had started on a more pay
35:04
subscriber basis rather than an advertising
35:06
basis. It wouldn't be quite so evil. I
35:08
am excited to see if we
35:10
can really do a mass scale
35:13
subscription
35:13
funded, not ad funded business here. Mhmm.
35:16
Do you see ads funding this? That to
35:18
me is
35:18
the original sin of the Internet. We've
35:19
made the bet not to do that. Right.
35:22
I'm not opposed to it. Maybe we
35:23
will,
35:23
maybe we won't. I don't
35:24
know. We haven't needed to. Like, it's going great with
35:26
our current model. We're happy about it. You've been
35:28
also competing against Microsoft for clients. They're
35:30
trying to sell your software through their Azure cloud
35:33
business as an add
35:34
on. Actually,
35:34
that, I don't Like, that's
35:36
fine. I don't care. That's fine. But you're also trying
35:38
to sell directly, sometimes to the same clients. I don't
35:40
care about that. They don't care. You don't care. How does
35:42
it
35:43
work? Does it affect your bottom line that way?
35:47
Again, we're like an unusual company here.
35:49
Mhmm. We're not like, we don't need to squeeze
35:51
out every dollar. Former Googler Tristan
35:53
Harris, who's become a critic of how tech is
35:55
sloppily developed, presented to a group in
35:57
DC of regulators. I was there. Among
35:59
the points he made is that you've essentially
36:02
kicked off an AI arms race. I think
36:04
that's what struck me the most. Meta,
36:06
Microsoft, Google, Baidu, are rushing to ship
36:08
generative AI bots when the tech industry
36:10
is shedding jobs. Microsoft recently laid
36:12
off the ethics and society team within its
36:14
AI org. That's not your issue. But
36:16
are you worried about a profit driven arms
36:18
race? I do think
36:21
we need regulation and
36:23
we need industry norms about this.
36:25
I am disappointed to see people
36:27
like we spent many,
36:30
many months. And actually, really,
36:32
the years that it's taken us to get good at making
36:34
these models, getting them ready before we put them up.
36:36
You know, people it obviously became somewhat
36:38
of an open secret in Silicon Valley that we had GPT-4
36:41
done for a long time. Mhmm. And there were a lot
36:43
of people who were like, you gotta release this now,
36:45
you're holding this back from society, you know, existing
36:47
your closing eye, whatever. But, like, we
36:49
just wanted to take the time to get it right. Mhmm.
36:51
And there's a lot to learn
36:54
here and it's hard. And in fact, we try to
36:56
release things to help people get it
36:58
right even competitors. I
37:00
am nervous about the
37:02
shortcuts that other companies now seem
37:04
like they wanna
37:05
take. Such as Oh, just rushing
37:07
out these models without all the safety features built -- Mhmm. --
37:09
without safeguards. So this is an arms race:
37:11
they wanna get in here and get ahead of you because
37:14
you've had the front seat.
37:16
Maybe they do, maybe they don't. They're certainly
37:18
making some noise like, you
37:20
know, they're gonna So
37:21
when you say worried, what can you do about it?
37:24
Nothing. Well, we
37:26
can and we do try to talk to them
37:28
and explain, hey, here's some Altman. You
37:30
know, here's some things we think you need get right. Yeah.
37:32
We can continue to push for regulation. We can
37:34
try to set industry norms. We can
37:36
release things that we think help other people get
37:38
towards safer systems
37:39
faster. Can you prevent that? Let
37:41
me read you this passage from the story about Stanford
37:44
doing it. They did one of their own
37:45
models. Six hundred dollars, I think it cost
37:47
them to put it together. They trained
37:48
the model for six hundred dollars. Yeah. Yeah. Yeah. They
37:50
did. It's called Stanford Alpaca just so you
37:52
know. It's a cute name. It is. It's a cute
37:55
name. I'll send you the story, but so what's to
37:57
stop, basically, anyone from creating their own
37:59
pet AI now for a hundred bucks or so
38:01
and training it however they choose? Well,
38:03
OpenAI's terms of service say you may not
38:05
use output from services to develop models that
38:07
compete with OpenAI, and Meta says it's
38:09
only letting academic researchers use it under
38:12
a non commercial license at this stage, although
38:14
that's a moot point since the entire Llama model
38:16
was leaked onto
38:17
4chan. Within
38:18
hours or
38:18
so. Yeah. And this is a six hundred dollar version
38:20
of yours. One
38:22
of the other reasons that we want to talk
38:24
to the world about these things now is this is coming.
38:27
This is totally unstoppable. Yep.
38:29
And there are gonna be a lot
38:31
of very good open source versions of
38:33
this in coming years, and it's gonna come with
38:36
you know, wonderful benefits and some problems.
38:39
By getting people used to this now,
38:41
by getting regulators to begin
38:43
to take this seriously and think about it now.
38:46
I think that's our best path forward. Alright.
38:48
Two things I wanna talk about, societal impact and regulation.
38:51
You've said, I I told you this will be the greatest
38:53
technology humanity has ever developed. You know,
38:55
almost every interview you do, you're asked about the dangers
38:58
of releasing AI products. And you say it's
39:00
better to test it gradually in the open, quote,
39:02
while the stakes are relatively low.
39:04
Can you expand on that? Why are the stakes low
39:06
now? Why aren't they high right now?
39:08
Relatively is the keyword.
39:09
Right. Okay. What happens to the stakes?
39:12
If it's not controlled now? Well,
39:14
these systems are now much
39:17
more powerful than they were a few years ago, and
39:19
we are much more cautious than we were a
39:21
few years ago in terms of how we deploy them.
39:24
We've tried to learn what we can learn.
39:26
We've made some improvements. We found ways that people
39:28
wanna use this. I, you know, in
39:30
this interview, and I totally get why. And in many
39:32
of these topics, I think we're mostly talking about
39:34
all of the
39:35
downsides. But No. I'm
39:35
gonna ask you about the upside. Okay. But
39:38
we've also found ways to, like, improve the upsides
39:40
by learning too. So mitigate downsides, maximize
39:43
upsides. That sounds good. And
39:45
it's not that the stakes are
39:48
that low anymore. In fact, I think
39:50
we're in a different world than we were a few years ago.
39:52
Mhmm. I still think they are relatively
39:55
low to where we'll be a few years from now.
39:58
These systems are still they
40:00
have classes of problems, but there's things that are
40:02
totally out of reach now that we
40:05
know they'll be capable of. And
40:08
the learnings we have now, the feedback we get
40:10
now, seeing the ways people
40:12
hack
40:12
jailbreak, whatever, that's super valuable. I'm curious
40:15
how you think we're doing. I know you're
40:16
I think you're saying the right things. We're
40:18
absolutely not just saying. Like, I think we're doing this.
40:20
You look at the trajectory of our releases. I think the
40:22
reason people are so worried, and I think it's
40:25
legitimate worry is because the
40:27
way the early Internet rolled out, it was
40:29
gee whiz almost the whole time. Yeah. Almost
40:31
up and to the right. Gee whiz, look at these rich
40:33
guys. Isn't this great?
40:36
Doesn't this help you? And they missed every
40:38
single consequence. Never thought of them.
40:40
I remember seeing Facebook
40:43
live. And I mentioned I said, what
40:45
about, you know, people who kill each other
40:47
on it? What about, you know, murders? What about
40:49
suicides? What about and they called me a bummer,
40:52
a bummer in the room. And I'm like,
40:54
yeah, I'm a bummer. I'm like, I don't know. I just
40:56
noticed that when people get a hold of tools, they
40:58
tend And, you know, this is Brad Smith's thing. Is it a
41:00
tool or a weapon? Weapons seemed to
41:02
come up a lot. And so I always think
41:05
same thing happened with the Google founders and they were trying
41:07
to buy Yahoo many years ago. And I
41:09
said at least Microsoft knew they were thugs
41:11
and they they called
41:13
me and they said, that's really hurtful, we're really nice. I
41:15
said, I'm not worried about you. I'm worried about
41:17
the next guy, like, I don't know who runs your
41:19
company in twenty years with all that information on
41:21
everybody. And so I think, you
41:24
know, I am a bummer. And so if you don't
41:26
know what it's gonna be, well, you can
41:28
think of all the amazing things it's gonna do,
41:30
and it'd probably be a net positive for society.
41:33
Net positive isn't so great either sometimes.
41:35
Right? It's a net positive. The Internet's
41:37
a net positive, like electricity is a net positive.
41:40
But it's a famous quote: every time
41:42
you when you invent electricity, you invent the electric
41:45
chair, when you invent this and that. And so
41:47
that's so what would be the thing here that
41:49
would be the greatest
41:50
thing. Does it outweigh some
41:52
of the dangers? I think that's gonna be
41:54
the fundamental tension that we face that we have to wrestle
41:56
with, that the field as a whole has to wrestle with, society
41:58
has to wrestle with.
41:59
Especially in this world we live in now, which I
42:02
think we can all agree, has not gone
42:04
forward. It's spinning backwards a little bit
42:06
in terms of authoritarians using this
42:08
stuff,
42:09
you know. I am super nervous about Yeah. What
42:11
is the greatest thing you can think of? Now, you're not
42:13
you and I are not creative enough to think of all the
42:15
things. We are not. Not even close. But what, from
42:17
your perspective and you know, don't
42:19
do term papers. Don't do dad jokes.
42:22
What do you
42:22
think? That's fine. Is
42:23
that what you thought I would say? No. Not at all, but I'm
42:25
getting tired of that. I don't care that it can write
42:28
a press release. I don't care. Fine. Sounds fantastic.
42:31
I hate I don't want it to write my, whatever. What I'm personally
42:33
most excited about is helping
42:36
us greatly expand our scientific
42:38
knowledge. Okay. I am a believer that
42:40
a lot of our forward progress comes from
42:43
increasing scientific discovery over
42:45
a long period of
42:45
time. In
42:46
any area? All all the areas. I think
42:48
that's just what's driven humanity forward. Mhmm.
42:50
And if these
42:52
systems can help us, in
42:54
many different ways, greatly increase
42:57
the rate of scientific understanding.
42:58
Mhmm. You know, curing diseases is an obvious
43:00
example. There's so many other things we can do
43:02
with better
43:03
knowledge and better understanding.
43:05
Yeah. It's already
43:05
moved
43:05
in that area folding proteins and things like that. So
43:07
that's the one that I'm personally most excited
43:09
about. Science.
43:10
Yeah. But there will be many other wonderful things
43:12
too. You just you asked me what my one was and
43:14
Is there one unusual thing that you think will
43:16
be great? That you've seen already
43:18
that you're like, that's pretty cool. Using
43:21
some of these new AI tutor
43:23
like applications -- -- it's
43:25
like, I wish I had this when I was
43:27
growing up. I could have learned so much and so much
43:29
better and faster. And when I think about
43:31
what kids today will be
43:33
like, by the time they're finished with
43:35
their formal education and how much smarter
43:38
and more capable and better educated and
43:40
they can be than than us
43:41
today. I'm I'm excited for using these tools
43:43
Yeah. I would say health
43:45
information to people who can't afford it is
43:47
probably one I think is most important.
43:49
That's gonna be transformative. We we've seen
43:51
Even for people who can afford it, this in
43:53
some ways will just be much
43:54
better. Yeah. Exactly. It's
43:56
a hundred percent
43:57
better. And the work we're seeing there from a
43:59
bunch of early companies on the platform, I
44:01
think it's remarkable. So the last thing is regulation
44:03
because one of the things that's happened is
44:05
that Internet was never regulated by anybody,
44:08
really. Except maybe in Europe. But in this country,
44:10
absolutely not. There's not a privacy bill. There's not an
44:12
antitrust bill, etcetera. It goes on and on.
44:14
They did nothing. But the EU is considering labeling
44:17
ChatGPT high
44:18
risk. If it
44:19
happens, it will lead to significant restrictions on
44:21
its use, and Microsoft and Google are lobbying against
44:23
it. What do you think should happen? With
44:26
AI regulation in general, with the
44:27
EU one, the high risk one? I
44:30
have followed the development of
44:32
the EU's AI Act, but it is
44:35
changed. It's, you know, obviously, still in development.
44:38
I don't know enough about the current version of
44:40
it to say if I think this way, like, this definition
44:42
of what high risk is and this way of classifying it, this
44:44
is what you have to do. I don't know if I
44:47
would say that's like good or bad.
44:50
I think, like, totally banning this stuff is not
44:52
the right answer. Mhmm. And they did not regulate
44:54
this. I mean, you're not TikTok, but go ahead.
44:58
And I think not regulating this stuff at all is not the right
45:00
answer either. Mhmm. And so the question is,
45:02
like, is that gonna end in the right balance?
45:05
Like, if the EU is saying, you know, no one
45:07
in Europe gets to use ChatGPT -- probably not
45:09
what I would do. Mhmm. But if the EU is saying,
45:11
here's the restrictions on ChatGPT and any service
45:14
like it -- there's plenty of versions of that I could
45:15
imagine that seem, alright, super sensible. Alright. So
45:18
after the Silicon Valley Bank non-bailout
45:20
bailout, you tweeted we need more regulation on
45:22
banks -- but what sort of regulation, who
45:24
knows. And then someone tweeted at you, now he's gonna
45:26
say we need it on AI, and you said we need it on
45:28
AI. But I mean, I do think that
45:30
SVB was an unusually bad case.
45:33
But also,
45:34
if the regulators aren't
45:37
catching that, what are they doing? They
45:39
did catch it, actually. They were giving
45:41
warnings.
45:41
They were giving warnings, but, like, there's often
45:43
an audit, you know, this thing is not quite right -- that's
45:45
different.
45:46
That's -- it was pretty significant. It's not that they didn't
45:48
need to do something. They just didn't do anything.
45:50
Well, they could have I mean, the regulators could have taken over
45:52
-- Mhmm. -- like six months ago. So this is
45:54
what happens a lot of the time, even in well regulated
45:56
areas, which banks are compared to the Internet.
45:58
What sort of regulations does AI need in America?
46:01
Lay them out. I know you've been meeting with regulators
46:03
and lawmakers. I haven't done that many. Well,
46:05
they call me when you do. They wanna
46:07
say they knew, I guess. What
46:08
did they say? Well,
46:09
you're like the guy now. So they like to say, I
46:11
was with Altman.
46:12
I did one. He's
46:13
nice. Like, he's nice. I don't know what to
46:15
tell you.
46:16
I did like a three day trip to D.C. earlier
46:18
this year. So tell me what you think the regulations
46:20
should be. What are you telling them? And do
46:23
you find them savvy as a
46:24
group? I think they're savvier than people think. Some
46:26
of them are quite exceptional. Yeah. Mhmm.
46:30
I think the thing that I would like to see
46:32
happen immediately is just
46:34
much more insight into what companies
46:37
like ours are doing. Mhmm. You know, companies
46:39
that are training above a certain level of capability.
46:42
At a minimum, like a thing that I think could
46:44
happen now, is the government should
46:46
just have insight into the
46:48
capabilities of our latest stuff, released or
46:50
not. What our internal audit
46:53
procedures and external audits we use look
46:55
like, how we collect our data,
46:57
how we're red teaming these systems. Mhmm.
46:59
What we expect to happen, which we may be totally wrong
47:01
about. We get it wrong all the time. But, like, our internal
47:03
road map documents, when we start a big
47:05
training run, I think there could be government insight
47:08
into that. And then if that can
47:10
start now, I do
47:12
think good regulation takes a long time to
47:14
develop. It's a real process. Mhmm. They
47:16
can figure out how they wanna have oversight?
47:19
Reid -- we had Reid Hoffman on -- suggested a blue ribbon
47:21
panel so they'd learn up on this stuff, which I
47:23
mean, panels are fine. We could do that
47:25
too, but what I mean is like government
47:28
auditors sitting in our buildings.
47:29
Congressman Ted Lieu said there needs to be an
47:31
agency dedicated specifically to regulating
47:34
AI. Is that a good idea?
47:36
I think there's two things you wanna do.
47:38
This is way out of my area of expertise,
47:41
but you're asking, so I'll try. I
47:43
think people like us that are creating these
47:46
very powerful systems that
47:48
could become something properly called
47:50
AGI at some
47:51
point. Mhmm. Can you
47:52
explain what that is? Artificial general intelligence.
47:54
But what people mean is just, like, above
47:56
some threshold where it's
47:58
really good. Right. Those efforts probably
48:00
do need a new regulatory effort, and
48:02
I think it needs to be a global body, a
48:06
new regulatory body. And
48:08
then people that are using AI,
48:10
like we talked about, the medical adviser. I
48:12
think the FDA can probably give very
48:14
great medical regulation. Mhmm.
48:16
But they'll have to update it for the inclusion
48:19
of AI. But I would say, like, creation
48:21
of the systems and having
48:23
something like an IAEA that regulates that,
48:26
is one thing and then having existing
48:28
industry regulators
48:30
still do their regulations. So people do
48:32
react badly to that, because new bureaucracies --
48:35
that's always been a real problem in
48:37
Washington.
48:38
Yeah, not everyone.
48:38
So who should head that agency
48:41
in the US?
48:41
I don't know. Okay. Alright. So
48:43
one of the things that's gonna happen though is the
48:45
less intelligent ones, of which there
48:48
are many, are gonna
48:50
seize on things like they've done with TikTok
48:53
possibly deservedly, but other things,
48:55
like Snap released a chatbot powered by
48:57
GPT that reportedly told a fifteen year old
48:59
how to mask the smell of weed and alcohol, and a thirteen
49:01
year old how to set the mood for sex with an adult.
49:03
They're gonna seize on this stuff. And the
49:05
question is, who's liable? If this is
49:08
true, when a teen uses
49:10
those
49:10
instructions. And section two thirty doesn't seem
49:12
to cover generative AI. Is that
49:14
a problem? I think we will need a new law
49:16
for use of this stuff. Mhmm. I
49:18
think the liability will need
49:21
to have a few different frameworks. If someone's
49:23
tweaking the models themselves, I think
49:25
it's gonna have to be the last person that touches
49:27
it has the
49:28
liability. Mhmm. That's --
49:30
But there'll be liability. It's not the full
49:32
immunity that the platforms are getting. I
49:34
don't think we should have full immunity. No. That
49:36
said, I understand why you want limits
49:38
on it, why you do want companies to be able to
49:40
experiment with
49:40
this. You want users to be able to get the experience they
49:42
want. Mhmm.
49:43
But the idea of, like, no one
49:45
having any limits for generative AI, for
49:47
AI in general, that feels super
49:49
wrong. Last thing, trying to quantify
49:51
the impact you personally will have
49:53
on society as one of the leading developers of
49:55
this
49:56
technology. Do you think about
49:58
that? Do you think about your impact? Do
50:00
you
50:00
Like, me? OpenAI?
50:01
Or
50:01
me, Sam? You, Sam.
50:05
I mean, hopefully, I'll have a positive impact.
50:08
Do you think about the impact on humanity
50:10
and the level of power that also comes with it?
50:14
Yeah, I don't. I I think about, like, what OpenAI is
50:16
gonna do a lot and the impact OpenAI will
50:18
have. Do
50:20
you think it's out of your hands? No.
50:23
No. But it is very much, like --
50:25
the responsibility is with me at some level,
50:27
but it's very much a team effort.
50:29
Mhmm. And so when you think
50:31
about the impact, what is your
50:33
greatest hope for OpenAI and what's your greatest
50:35
worry? My greatest
50:37
hope is that we -- we create
50:39
this thing. We are one of many people
50:41
that is gonna contribute to
50:43
this movement. We will create an AI. Other people
50:45
will create an AI. And that
50:47
we will be a participant in
50:50
this technological revolution that I
50:52
believe will be far greater in terms
50:54
of impact and benefit than any before.
50:57
Mhmm. My my view of the world is it's this,
50:59
like, one big, long technological
51:02
revolution, not a bunch of smaller ones, but
51:04
we'll play our part. We will be one of several
51:06
in this, at OpenAI. And that
51:09
this is gonna be really wonderful. This is going
51:11
to elevate humanity in ways we still
51:14
can't fully envision. And
51:17
our children, our children's children are gonna
51:19
be far better off than, you know, the best
51:22
of anyone from this time. And we're
51:24
just gonna be in a radically
51:26
improved world. We will live healthier,
51:28
more interesting, more fulfilling lives. We'll have
51:31
material abundance for people. And,
51:33
you know, we will be a
51:35
contributor.
51:37
And, you know, we'll put in our -- Your
51:39
part. Our part of that. You do sound alarmingly like
51:41
the people I met twenty five years ago, I
51:43
have to say. If you --
51:44
I don't know how old you are, but you
51:46
were young then. You were probably very
51:48
young. Thirty seven. So -- Yeah. You were twelve.
51:50
And they did talk like this. Many of them did, and
51:53
some of them continued to be that way. A lot of
51:55
them didn't, unfortunately. And
51:57
then the greed seeped in, the
51:59
money seeped in, the power seeped in, and it got
52:01
it got a little more complex, I would say, not
52:03
not totally. And again, because
52:05
net, it's better. But I wanna focus
52:07
on you in my last question. There seem to be two characterizations
52:10
of you. One that I've seen in the press is
52:12
a boyish genius who will help defeat Google and
52:14
usher in
52:14
utopia. The other is that you're an irresponsible
52:17
woke tech overlord Icarus that will lead us
52:19
to our demise. I
52:21
have to pick one? Is
52:23
it? No. I don't -- How
52:24
old do I have to be before I can, like, drop the
52:26
boyish qualifier?
52:27
Oh, you can be boyish. Tom Hanks is still
52:29
boyish. Yeah.
52:30
And what was the second one?
52:32
You know, Icarus, overlord, tech overlord,
52:34
woke something?
52:35
Yeah. Yeah. Woke whatever.
52:36
The Icarus part
52:37
is the highlight. So boyish to you. But I'm --
52:40
I think we feel like adults now.
52:41
You may be adults, but boyish always gets
52:44
put on you. I don't ever call you boys. I think
52:46
you're adults.
52:47
Icarus meaning, like, I'm messing -- we're -- we
52:49
are messing around with something that we
52:50
don't fully understand.
52:51
Yes. Well, we are messing around with something we don't fully
52:53
understand. Yeah. And
52:54
we are trying to do our part in
52:57
contributing to the responsible path through it.
52:59
Alright.
53:00
Yeah. But I don't think either of those
53:01
Yeah. Either of those. I mean, describe yourself --
53:04
describe what you are.
53:08
Technology brother. Oh, wow.
53:10
You're gonna go for that. I just think that's such
53:12
a funny meme. I don't know how to describe
53:14
myself. I think that's what you would call
53:16
me. No. I wouldn't. No. One hundred percent.
53:18
Alright. Because it's an insult now.
53:19
It's become
53:20
an insult. I'll call you a technology
53:22
sister. I'll
53:23
take that. I'll leave it on that now. Let's leave
53:25
it on that now. I do have one more quick question.
53:27
We -- last time we talked, you were thinking of running for governor.
53:29
I was thinking of running for
53:30
mayor. I'm not gonna be running for mayor. Are you
53:32
gonna still run for governor?
53:33
No. No. I -- I
53:35
think I am doing, like, the most, I mean,
53:38
the most I can imagine. I really don't
53:40
wanna do anything else. It's tiring, but I love it.
53:42
Yeah. Okay. Sam Altman. Thank you
53:44
so much. Thank
53:44
you.
53:51
You said he sounded a lot like a lot of
53:53
founders of the generation before him. Yes. What are
53:55
the lessons you would impart to Sam
53:58
as someone who has so much impact on
53:59
humanity? You know, I think what I said is that they
54:01
were hopeful and they had
54:04
great ideas. And one of the things that I think
54:06
people get wrong is that to be a tech critic
54:08
means you love tech. Like, you know, you
54:10
really love it. You do. Yeah. Of course. And
54:12
you don't want it to fail. You want it
54:14
to create betterment for humanity. And
54:17
that if that's your goal, when
54:19
you see it being warped and misused, it's
54:21
really sad and
54:23
disappointing. And I think one of the things is early
54:25
Internet people had all these amazing ideas --
54:27
the world talking to each other. We'll get along with
54:29
Russia. We'll be able to communicate over
54:32
vast distances. And again, just like I talked
54:34
about with Reid Hoffman, it's a Star Trek
54:36
vision of the universe. And that's what it was.
54:38
And, boy, the money and the
54:40
power and the bad people
54:43
that came in really
54:45
significantly shifted it, not completely.
54:48
By any
54:48
means. I love my Netflix, you know, I
54:50
just do. But the unintended
54:52
or intended consequences, ultimately
54:56
are very hard to bear even if it's
54:58
a net positive. So it's just the money and
55:00
the power that's corrupting is what you're
55:01
saying. It's inevitable. No.
55:03
Not inevitable. But often.
55:05
Often. Often. Yeah. Well, not
55:08
him. Not a lot of people. But
55:10
let's see -- standing the test of
55:12
time. Right? You're saying Reid Hoffman and
55:14
Max Levchin versus, say, Peter Thiel
55:16
and Elon Musk. Well, I think Peter was always
55:18
like that. You know, I don't think he's changed
55:20
one bit. Or not in my
55:22
estimation, anyway. He's been very consistent
55:24
in how he looks at the world, which is not
55:27
a particularly positive light.
55:30
I think that a lot of them do stay the
55:32
same and they do stay true
55:34
to what they're like. And I don't know why that
55:36
is for certain people, and others get
55:39
sucked into it in a way that's really -- I'm
55:41
thinking about this a lot because that's what my book's about. Yeah.
55:43
Of course. How
55:45
people change and why and whether
55:48
that's a good thing or bad, I think, because, you know, one of the
55:50
things about tech is it's constantly changing.
55:52
Mhmm. One of the poems I'm using in
55:54
the book is a poem by Maggie Smith
55:56
called Good Bones. And I'll
55:58
just read you the last part: life is
56:00
short and the world is at least half terrible.
56:02
For every kind stranger, there is
56:04
one who would break you, though I keep this from
56:06
my children. I am trying to sell them the
56:08
world. Any decent realtor, walking
56:10
you through a real shithole, chirps on about
56:13
good bones. This place could be beautiful.
56:15
Right? You could make this place beautiful. And
56:18
that's how I feel about this. They could make this
56:20
place beautiful. And I think Sam thinks that too.
56:22
Yeah. It's not just a lie you tell your children.
56:24
Right? Well,
56:25
no. But it is. You can't tell
56:27
them terrible
56:28
things all the time. They would be like just lying
56:30
on the ground. And yeah. But
56:32
sometimes it's so idealistic, like, when he said global
56:34
regulatory body to regulate AI,
56:36
I'm like,
56:37
oh, man, we're fucked. That's never gonna happen. Well, like,
56:39
when was the last good global regulatory
56:41
body that worked? It could work.
56:43
There has to be -- this has to be global.
56:45
This
56:45
has to be global. But how? There's
56:47
no infrastructure to set up something sustainable. In
56:49
life science and medicine there
56:51
is. You think the World Health Organization
56:53
has been effective?
56:54
And I think there's stuff around cloning,
56:56
around all kinds of stuff. It's never gonna be perfect,
56:58
but boy, there's a lot of people that hewed
57:00
to those
57:01
ethics. I mean, I think it depends how bought in
57:03
the governments are, including China. But
57:05
the regulation thing is particularly tricky because
57:07
it can also become a moat. Right? That's right. Incumbents,
57:10
like Facebook, are like, regulate us -- and it's like, well,
57:12
you can afford the regulation in a way that new competitors
57:15
maybe can't.
57:16
I think the governments can play a lot of roles
57:18
here. They do it in nuclear nonproliferation. It's
57:20
never perfect, but we still haven't set one
57:22
off. Have we? I think that's largely the deterrent
57:25
power and not because of any effective regulation.
57:28
I am a great believer in nuclear
57:30
nonproliferation. And I think
57:32
there's lots of examples of it working. And I think
57:34
the most significant thing that he
57:36
said here was about the government's
57:39
role, the US government's role. It shouldn't
57:41
give this all over to the private sector. It should
57:43
have been the one to give them money. And
57:45
to fund them. And that is a hundred percent.
57:48
We've talked to Mariana Mazzucato about
57:50
that. Yeah. And many people, that to me
57:52
is the big shame: the government abrogating
57:54
its role in really important things
57:57
that are important globally and important for
57:59
the US. But even when the government
58:01
has played that kind of
58:02
like, let's call it kindling role for industry,
58:04
whether it be Elon Musk's loan for Tesla,
58:06
whether it be what DARPA was doing
58:09
that became you
58:09
know, parts of Siri and --
58:11
Mhmm. -- and Echo and whatnot. The government
58:13
here is bad at retaining like
58:15
a windfall from
58:16
that, that would be reinvested. But it used
58:19
to. It used to just do it because it was the right thing
58:21
to do -- research and
58:23
investment by the government. You know, highway
58:25
system seems to have worked out pretty good. The
58:28
telephone system seems good. You know,
58:30
I mean, we always tend to, like, talk about what
58:32
they do wrong, but there's so much stuff that the government
58:34
contributed
58:35
to.
58:35
That matters
58:36
today. It used to be a culture also that
58:38
people would wanna go into government and civil service.
58:40
My father was in that generation, like,
58:43
you know, and I think that it's interesting
58:45
to hear Sam say, no, he won't run for governor.
58:47
In
58:47
fact, you think sometimes, well, it would be so great
58:49
if some of these bright minds, you know,
58:51
except he is more effective where he is. Why
58:53
would he do that when he's more effective where
58:55
he is? Arguably the right regulator for
58:57
this is a person who could have built it -- Yeah. -- or
58:59
conceived of building it, maybe. Did you find
59:01
his answers to the moderation questions
59:04
and this idea of hallucination, and it being overly
59:06
impressive at first
59:07
glance? Did you find those satisfying? Yeah. I thought,
59:09
if he doesn't have answers -- I think one of the things I
59:12
like about Sam is, if he doesn't have an answer, I don't think
59:14
he's hiding it. I don't think he knows, and I
59:16
think one of the strengths of
59:18
certain entrepreneurs is
59:20
saying, I don't really know. And I think that in
59:22
a lot around AI right now, anyone that's
59:24
gonna give you certainty is lying to
59:26
you. Well, they had experimented with using these,
59:28
you know, these low wage workers
59:30
in Africa, outsourced through Sama. Well, it's
59:32
not -- I think it was that. There was an exposé. They were
59:34
paying them less than two dollars an hour and training them to
59:36
build up, like, what was reportedly
59:38
a content moderation AI layer -- Mhmm.
59:40
-- which is ironic when you think about it. So there
59:42
were workers in Africa being paid less than two dollars
59:45
an hour to train machines to
59:47
replace them for that
59:48
job. Well, have you been to an Amazon warehouse
59:50
lately? There's a lot of machines doing everything.
59:53
That's the way it's going. It doesn't -- that's like you're
59:55
telling me something that happens in every other
59:57
industry.
59:57
Yeah. I know. And yet, everyone's gonna grow smarter. Do you
59:59
think that's true? With AI, everyone's gonna be
1:00:01
smarter? I do. I think we do a lot of rote, idiotic
1:00:04
work that we shouldn't be doing. And we have
1:00:06
to be more creative about what our greatest
1:00:08
use of our time
1:00:09
is.
1:00:09
My great hope for AI is actually that it takes
1:00:11
out the rote bits and all of a sudden creative industry
1:00:14
flourishes,
1:00:14
because those are the parts that can't be replicated. And
1:00:17
Yeah. -- though I think, you know, a sad reality
1:00:19
of technology in the last generation has been that
1:00:21
kids maybe don't read as well
1:00:23
or as much or as fast or as
1:00:25
early as they used
1:00:26
to, but they make video. Right. What if they're
1:00:28
spoken to smarter? Like the idea
1:00:30
of education on these things or information
1:00:32
or healthcare in an easy way --
1:00:35
really, these phones are just
1:00:37
getting started, and they will not just
1:00:39
be phones. They will be wrapped around
1:00:41
us. And the more good information you get
1:00:43
and the more communication you
1:00:44
get, that's a good thing. They might just be
1:00:46
getting started, but we are
1:00:47
ending. Do you wanna read us our credits today? Yes.
1:00:50
Remember, we make this place beautiful or
1:00:52
ugly. Good
1:00:54
bones. Good bones. Good bones.
1:00:56
Good bones. Today's
1:00:59
show was produced by Nayeema Raza, Blakeney
1:01:01
Schick, Cristian Castro Rossel, and Rafaela
1:01:04
Siewert. Special thanks to Haley Milliken.
1:01:06
Our engineers are Fernando Arruda and
1:01:08
Rick Kwan. Our theme music is by
1:01:10
Trackademicks. If you're already following
1:01:12
the show, you get the red pill.
1:01:14
If not, Rick Deckard is coming
1:01:16
after you. Go wherever you listen to
1:01:18
podcasts, search for On with Kara Swisher
1:01:20
and hit follow. Thanks for listening to On
1:01:22
with Kara Swisher from New York Magazine, the
1:01:24
Vox Media Podcast Network, and us.
1:01:27
We'll be back on Friday. That's tomorrow,
1:01:29
with a special bonus episode.