Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
This episode is supported
0:02
by OFX Money Transfers. Currency
0:04
exchange can feel complicated. At OFX,
0:07
they keep it simple. With over 20
0:09
years in business, OFX provides fast,
0:11
secure, and reliable money transfers. They're
0:14
OFX-perts, and stay on top of global
0:16
markets 24/7 so you don't have to,
0:18
giving you more time to focus on your business and
0:20
not on exchange rates. OFX
0:23
is a U.S. licensed money transmitter and
0:25
registered money service business. Reduce
0:27
your FX costs today. Go to OFX.com
0:29
to get started.
0:35
Support
0:38
for On with Kara Swisher comes from another
0:41
podcast. It's called The Science of Scaling,
0:43
hosted by Mark Roberge, a senior
0:45
lecturer at Harvard Business School.
0:47
Each week, he interviews some of the most successful
0:50
sales leaders in tech. They get tactical
0:52
about how to grow a business. Because Roberge
0:54
will tell you whether a company is valued at 1 million, 10
0:56
million, or a hundred million dollars,
0:59
the failure rate is the same. The
1:01
Science of Scaling wants to change that. Search
1:03
for The Science of Scaling wherever you get
1:05
your podcasts.
1:07
That's The Science of Scaling.
1:26
Hi everyone, from New York Magazine and the Vox
1:28
Media Podcast Network, this is Kendall
1:30
Roy pitching you the X Everything
1:32
app. Just kidding, this is On with Kara Swisher
1:35
and I'm Kara Swisher. And I'm Nayeema Raza
1:37
and I'm back. Yes, you are. So much has
1:39
happened in the tech world. There's a lot. There's always something
1:41
happening. No cage match. No cage... well,
1:44
that was never gonna happen. Besides you and Vinod
1:46
Khosla, which we'll get to later in the episode today.
1:48
Yeah. The rebranding
1:50
of Twitter to X. Yeah. A rebrand
1:52
more mocked than Meta's. The announcements
1:56
around
1:56
Elon's separate company XAI. And also
1:58
increasing regulatory
2:00
scrutiny from Washington on a couple of fronts, on monopolies
2:03
and on AI. Yeah, absolutely. Washington's
2:06
getting it together on AI more
2:08
quickly than it has on anything else before, and there is
2:10
a series of legislation happening around
2:13
tech. We'll see if it goes anywhere, it never has before,
2:15
but it's encouraging that it's early, and
2:18
there's a series of lawsuits, whether
2:20
it's Sarah Silverman, and there's more to come. People
2:23
like Barry Diller, I think, threatened to sue AI companies.
2:26
So I think people are thinking about not letting
2:28
it happen again like it did
2:29
with the first round of tech taking
2:32
over everything. Yes, and the Sarah Silverman case is really
2:34
interesting because her argument is basically like, look,
2:36
my jokes are being ripped off. Copyright. It's a copyright
2:39
case, and it's not dissimilar to
2:41
when Viacom sued YouTube, but
2:44
in this case, it's more significant because they're mashing
2:46
up other people's... It's plagiarism. I don't
2:48
know how else to put it. I keep saying that. Remember, I was talking
2:50
about copyright being the most important legal
2:52
thing, and I think that there'll be a lot of that, and these companies,
2:56
OpenAI just licensed AP stuff.
2:58
So there's a lot going
2:59
to be happening here. This is the very interesting thing
3:02
because OpenAI, having gotten out ahead of
3:04
the gate and having a first mover advantage, means that they
3:06
have actual customers, revenues, et
3:08
cetera, that can support their paying
3:10
licenses. Yeah, subscription. Yes.
3:13
As a result, they're able to build a bit of a moat
3:15
around them. They're able to afford these license fees. It'll
3:17
become very hard for smaller companies to
3:19
do so. And that, the ability of a company
3:21
like OpenAI to get so big begs the
3:23
question of bigness, which is this unlikely
3:26
alliance of Elizabeth Warren and Lindsey Graham.
3:28
Yeah, I know. The pair
3:29
you want to go out to dinner with to have
3:32
some fun. No, they're
3:34
getting together to try to do a number of things,
3:36
including create an agency around tech, which
3:38
I think is an interesting idea. It's something
3:41
I've thought of a lot. Every other industry,
3:43
whether it's Wall Street or whoever has agencies
3:46
around them, FDA, FTC,
3:49
every other major industry has agencies
3:51
attached to it, and tech does not. So the question is, should
3:53
there be a separate agency dealing with algorithms
3:56
and AI and privacy and
3:58
things like that? Right now the FTC
4:01
does that and the Justice Department, to an extent,
4:03
but it's all based on enforcement
4:06
and lawsuits. And so maybe there
4:08
should be someone who's more regulatory
4:10
on a day-to-day basis.
4:12
We'll see if it passes. I don't think there's-
4:14
Tech is really, in many ways, like when I think about your reporting
4:16
over decades and you talk about the
4:18
bigness and the power and the
4:21
lobbying power, the power over consumers
4:23
that companies have, strikes me that it's
4:26
a history of capitalism in America. But do
4:28
you think it's something very endemic to tech that
4:30
it requires something
4:31
to- No, they're just like regular capitalists
4:33
and they deserve regular regulators, that's all. They're
4:36
not special by any means and they're just always
4:38
trying to make money and they have
4:40
more power than, well, I
4:42
mean, the railroads had power, right, and then they didn't. You
4:45
know, the television industry
4:47
was a very, they're all consolidated.
4:50
So there's all kinds of versions of this, but
4:53
they're more powerful. And we live in an age where
4:55
just the distribution mechanism is that they are celebrated
4:58
so much. We've lionized a lot of these
5:01
founders and also a lot of these venture capitalists
5:03
over the years. And Twitter has been
5:05
a big part of that or X, whatever it is, but
5:08
has been a big part of it, boosting the voices of
5:10
these people. Yeah, Senators Warren and
5:12
Graham did have a piece in the New York Times last
5:14
week saying, when it comes to Big Tech, enough
5:16
is enough. I think that's probably a right thing. I just
5:18
don't think it's gonna pass. That's all. You know,
5:21
there's not a taste for new agencies, but
5:24
I think they're right in this one line. Nobody
5:27
elected big tech executives to govern anything,
5:29
let alone the entire digital world. If democracy means
5:31
anything, it means that leaders on both sides of the aisle must
5:33
take responsibility for protecting the American people from the ever-changing
5:36
whims of these powerful
5:37
companies and their unaccountable CEOs. I think that's
5:39
absolutely right. And Barack Obama said
5:41
a similar thing. Tech is not gonna solve our
5:43
problems or the really hard ones. They just leave
5:45
us the hard ones and then they take all the good stuff.
5:48
But with President Obama, one of the critiques that
5:50
he's had for many people, including yourself and
5:52
myself, it's like, well, we
5:54
let it get too big. Well, at the end, he
5:57
changed his tune quite a bit. He's changed his tune. So
5:59
let him do it.
5:59
So our guest today is a longtime venture
6:02
capitalist, Vinod Khosla, someone who's been in the
6:05
Valley forever, I don't know, 40 years.
6:07
I don't know, a long time. He was a co-founder of Sun
6:10
Microsystems in the 1980s, a
6:12
former Kleiner Perkins partner in the 1990s,
6:14
and then started Khosla Ventures, his own fund
6:16
in this millennium. And he's also one
6:18
of your favorite people to battle
6:21
with on Twitter, Kara. Explain. Yeah,
6:23
no, I don't battle with him as much. I battle with much different
6:25
people. But he's very, you know, I've known him for years,
6:27
you know, he's been at the forefront of a lot of stuff.
6:29
He's been very early. He's a genuine techie
6:32
and a genuine business person and really
6:34
does think way ahead. He is also
6:37
willing to debate, which many of these people, they get to a
6:39
certain point and they just don't want to be bothered by irritating
6:42
reporters who may know as much as
6:44
they do. He's recently been doing a
6:46
lot more. And we have been talking about doing this interview for a
6:48
while, but you were a little reticent. I really
6:50
wanted to do it. You did not want to do it. Explain.
6:53
I have come around. I thought it was a really great interview.
6:56
And I think he's... I wanted
6:58
to do it, sort of. I
6:59
would rather do a startup person, a new startup
7:02
person in AI and lean forward. But that's just
7:04
my tendency. You know, I just think he's
7:06
he's a legendary venture capitalist. If we
7:08
did John Doerr, he's in that range of
7:11
John Doerr and him and several others.
7:13
Reid Hoffman, I'd say, so we might as well do
7:15
Vinod. Yeah, I thought he was important for two reasons.
7:18
And one is his early work on climate and AI,
7:20
which he started writing about AI as
7:22
early as 2011. He did. And
7:24
he was writing about it as the next platform after mobile, which
7:26
was early to that game. And then the
7:28
second reason I think he's a really
7:29
interesting character is because of his
7:32
global view. Having spent time in Silicon
7:34
Valley, at Stanford,
7:36
on Sand Hill Road, I would say that
7:39
there's less
7:41
globalism in Silicon Valley than I would expect
7:44
for what has been the frontier
7:46
creator of technologies.
7:49
And there's been this kind of like build it and they will
7:51
come philosophy. Someone
7:53
like Vinod has an appreciation, I
7:55
think of the geopolitics and the role
7:58
of China and India,
7:59
giants, and the competition from them as
8:02
well. Yep, he does. He definitely has much more
8:04
of a global, most people in Silicon Valley just are
8:06
American, American, American. So, you know, and
8:08
he is too, he's invested in mostly
8:11
US companies, but he certainly has
8:13
a global perspective. All right, we'll hear that global
8:15
perspective when we get back. Let's take a quick
8:17
break and we'll be back with Vinod Khosla.
8:21
Support for ON with Kara Swisher comes from Unisys.
8:24
Sometimes you hear people say, get your head out
8:26
of the clouds. Well, Unisys disagrees.
8:28
They say, keep your head in the clouds,
8:30
keep dreaming and breaking through by getting
8:33
your business the best cloud you can. Unisys
8:35
offers app modernization that helps companies move
8:37
their apps to the cloud, build new
8:40
cloud-native apps, and unlock powerful
8:51
insights and performance. With Unisys
8:53
cloud solutions, the sky is not the limit. It's
8:55
just the beginning of unlimited potential. That's
8:58
the place where one breakthrough leads to another.
9:00
Unisys, keep breaking through.
9:06
This episode is brought to you by Shopify.
9:09
That's the sound of switching your business to Shopify,
9:12
the global commerce platform that supercharges
9:14
your selling. Harness the best converting checkout
9:16
and the same intuitive features, trusted
9:19
apps, and powerful analytics used by
9:21
the world's leading brands. Stop
9:23
leaving sales on the table. Discover
9:25
why millions trust Shopify to build,
9:27
grow, and run their business. Sign
9:29
up today for your $1 per month trial period
9:32
at shopify.com slash tech23.
9:36
It is on. All
9:38
right. How long have we known
9:40
each other? Long, long time. Many
9:43
decades, more than- 25 years. Yeah,
9:46
I think I met you in the 90s when I got out here
9:48
in 1996. I think you're one of the first people
9:50
I called because you were obviously well-known for
9:52
creating all kinds of companies, but then you were in venture
9:55
capital at the time. I visited two people, you and
9:57
John Doerr here on Sand Hill Road, where you
9:59
still are.
9:59
Still on Sand Hill Road. Yeah, we are still on Sand Hill Road.
10:02
Where were you at the time? What firm were you at? I
10:04
was at Kleiner. You were at Kleiner. I was
10:06
at Kleiner from after Sun, when I joined
10:08
them. Sun Microsystems. Till 2004 when
10:10
we started Khosla Ventures. But
10:13
actually we started it within Kleiner Perkins.
10:15
So the first two years we stayed
10:17
inside their building and then got
10:20
to where.
10:20
Which was just down the road and I visited the two of you. I
10:22
remember that. A couple of different people
10:24
I visited, but you were among the first. But I'm
10:26
going to start talking about, we're going to talk about a lot of things. We're going
10:29
to talk about AI, geopolitics, climate
10:31
tech, your tweets. I'm very interested in your latest
10:33
tweets if you know it. I'm not sure what's going on. I'd like to hear
10:35
about it. But we'll get to that at the end. But
10:37
you were very early to the topic writing about AI
10:40
for TechCrunch in 2012. Mustafa
10:43
Suleiman told me in a recent interview
10:45
that if you look at the pace of change in
10:47
AI, it's actually been quite incremental. We've been
10:49
working on this for
10:50
decades. It's data and computing power that have grown
10:52
exponentially in the last few years. I'd love
10:55
your thoughts about why you started
10:57
to write about it so early.
11:00
Are we about to see exponential improvement in
11:02
AI in the years to come? Let
11:05
me start with an analogy. When we first
11:07
met in 1996, that's the year
11:09
we started Juniper.
11:10
Explain what Juniper does for those who don't know.
11:12
Juniper is a TCP/IP router
11:15
company or a router for the internet. It
11:18
was a time when nobody in the world
11:20
believed that TCP/IP would be the
11:22
protocol for the internet. Cisco
11:24
had gotten essentially all
11:26
TCP/IP plans off their docket and
11:29
bought a company called Stratacom. The
11:32
reason why that's relevant, though, is
11:36
Sun adopted TCP/IP in 1982. It
11:39
was puttering along at a slow pace like
11:42
AI was puttering around for a while. Then
11:45
in 1996, when we started Juniper, it
11:47
saw an inflection. When
11:50
we started Juniper, not a single customer
11:52
said they would buy it. We built it anyway,
11:55
very much in the Field of Dreams kind
11:57
of vision of things. It was
11:59
clear then that
12:01
this puttering low end of
12:04
the exponential was
12:06
already happening.
12:07
The reason I bring that up and
12:10
why Juniper was such
12:12
a large payback for Kleiner, about
12:15
$7 billion in distributed proceeds on a few
12:17
million dollar investment, was
12:20
we guessed right on the exponential.
12:24
For 10 years, I've watched
12:26
AI do the same thing. Right. Putter
12:29
around incrementally. When
12:31
I first wrote my piece on Do
12:33
We Need Doctors? Do We Need Teachers? in 2011.
12:37
It was December 2011 when I
12:39
wrote it, January 2012 that I published
12:42
it. It was clear
12:44
AI was going to be large
12:47
to me. To you. And at
12:49
some point, capability would explode
12:51
for two reasons. One, there was effort
12:54
being put in, but the best minds were moving
12:56
to AI. And the potential
12:58
existed for very large progress.
13:01
Right.
13:01
So let's talk about that, the future impact.
13:03
First, health care. In 10 years, you
13:05
have talked about this and you caused
13:08
quite a bit of controversy when you said it. And I think you
13:10
were correct at the time, of how AI will change the experience
13:12
of medical care.
13:13
Well, it's very,
13:15
very clear that expertise
13:17
will be very close to free in
13:20
a GPT-5, GPT-6
13:23
world.
13:24
And if you imagine that, then
13:26
a physician's expertise will
13:29
be in an AI. Definitely in the
13:31
next five years, whether
13:33
you're talking about a primary care doctor,
13:36
an oncologist, so specialists,
13:39
or a mental health, a therapist,
13:42
a psychiatrist, those will
13:44
be in AI. Whether we allow them
13:46
to practice or not is a different regulatory
13:49
question. We can get to that. But
13:51
the capability will be there and many
13:54
countries around the world will be using that capability.
13:56
So a doctor that's not a person. A
13:59
doctor that's not a person, having
14:01
more knowledge than the median
14:03
doctor or 90%
14:06
of doctors, probably where I would set the benchmark,
14:08
in five years. Wow. So
14:12
everybody can get a quality doctor. And
14:15
it'll be in conjunction with a doctor, possibly
14:17
for the human element of care.
14:19
Well, she's looking in your eyes and telling you that things. Looking
14:21
in your eyes, giving you a hug when they
14:24
have to tell you you have cancer. But
14:27
otherwise, an AI will know more and have
14:29
better intuition or whatever
14:32
you want to call it. It will have a broader range
14:34
of knowledge, better knowledge of state
14:36
of the art, research, and
14:38
outcome. The best therapies,
14:40
every possible interaction. Do you know
14:42
drug interactions kill more people than breast
14:44
cancer in this country this year? And
14:47
so you can avoid those, those avoidable
14:50
errors. And that should happen
14:53
if the medical establishment lets
14:55
it happen.
14:55
But 90% of doctors, meaning almost
14:58
all of them. Almost all of
15:00
their expertise. Right. So
15:02
what's their use, if 90% of
15:05
their expertise is useless? The number I've used
15:07
in the past is 80%
15:08
of what doctors do will
15:10
be done by an AI. But
15:13
the 20% will provide much more of the human
15:15
element of care, which they don't have time for today.
15:18
Right. You know, to make you feel like they care. So
15:20
they're still useful. In other words, they're still useful.
15:22
I think most AI systems will be used in
15:24
conjunction with humans and
15:26
humans who care about what they're doing.
15:28
Okay. One of the things you
15:30
said before: in 25 years, 80% of all
15:33
jobs are capable of being done by AI. You
15:35
called this quote, the opportunity to free humanity
15:38
from the need to work, though you admit
15:40
it is, quote, terrible to be the disrupted
15:42
one on the way to utopia. Talk
15:45
about that. How is that level of job loss utopia?
15:48
Now I have said a similar thing. Like there's
15:50
no reason for many people to be doing
15:52
what they're doing.
15:53
So let me be precise. What I said is 80%
15:55
of 80% of all jobs. So 64%
15:58
of all jobs. OK,
16:00
yeah, still a lot, you know. Yeah, a
16:02
lot of jobs will be displaced. Now,
16:05
we've gone through that displacement before. You
16:07
know, in the 1900s, agriculture was 50% or
16:10
so of all jobs. By 1980,
16:12
it was 4% of all jobs.
16:15
But I do think this time it's different because
16:17
we've exceeded the capability of
16:20
human beings, if I'm right, in the next
16:22
five years. And most
16:24
of these jobs will not need to be done
16:26
by a
16:27
human.
16:29
And humans will have other jobs
16:31
to do or other things to do. More
16:34
importantly, there'll be enough abundance. And
16:36
I wrote a piece around 2014, saying
16:39
AI will cause great abundance,
16:42
great productivity growth, great GDP
16:44
growth, almost everything economists
16:47
measure, and increasing
16:49
income disparity.
16:50
Because people don't have jobs, right? People
16:53
don't have jobs. So what do we do with all those people?
16:55
Now, the good news is the abundance
16:57
will allow for redistribution. If
17:00
you take a simple metric like GDP growth for
17:02
the next 50 years in this country, and you
17:04
assume 2% a year, GDP
17:07
will go from $70,000 or $75,000 to $175,000. That's 2%
17:11
per year real growth over 50 years. If
17:15
AI accelerates that, turbocharges
17:17
it to 4%,
17:19
the GDP will be $475,000.
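The compounding arithmetic here can be sanity-checked in a few lines. A minimal sketch, assuming the figures refer to per-capita GDP in real dollars (an assumption; the round numbers quoted on air are approximate, and exact compounding comes out somewhat higher than the cited figures):

```python
# Compound-growth check of the figures discussed above.
# Assumption: the ~$75,000 base is per-capita GDP in real dollars.
def compound(value: float, rate: float, years: int) -> float:
    """Value after `years` of compounding at `rate` per year."""
    return value * (1 + rate) ** years

base = 75_000
print(round(compound(base, 0.02, 50)))  # ~202,000 at 2%/yr, in the ballpark of the ~$175,000 cited
print(round(compound(base, 0.04, 50)))  # ~533,000 at 4%/yr, in the ballpark of the ~$475,000 cited
```

The point survives the rough numbers: doubling the real growth rate more than doubles the end state, which is the "ample room for redistribution" claim.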
17:23
If that's the case, there's ample room
17:25
for redistribution, something
17:27
the Republicans hate. But I do
17:29
think it'll become essential. And there
17:32
will be enough to afford the
17:34
minimum standard of living for everybody
17:37
by then. To pay them.
17:39
Pay them to live. Yeah, pay them to live
17:42
or do things that are useful, but not in
17:44
today's jobs. And so when you think about
17:47
it that way, that's a profound shift
17:49
in the way we live, which the
17:51
US particularly has not been open to. How
17:54
do you make that shift without it being... The
17:57
farming change was very, we
17:59
don't remember it,
17:59
but it was quite difficult from a social perspective,
18:02
and people were very dissatisfied,
18:05
to say the least. The same thing's gonna happen here. We're
18:07
already in a politically polarized moment.
18:09
You know, it's romantic to talk about disruption.
18:12
Like I said before, it's not fun to
18:14
be disrupted. Right,
18:15
so what has to happen then
18:17
if people, like you don't need to exist
18:20
anymore in the work sense
18:22
at least. You know, most people
18:24
don't want the work they do. Do you wanna work on
18:26
the same assembly line, assembling the same
18:28
car for 30 years in a row? No.
18:31
Nobody wants those jobs. They have
18:34
a need to work and I
18:36
think the need part will go away. I'd
18:39
be quite happy maintaining a hiking trail
18:41
out there. That's a great
18:43
job for me.
18:44
Right, which AI cannot do. Which
18:46
AI probably can't do. A robot could.
18:48
Or I may still prefer to do it even if an AI
18:51
can do it. There are jobs people will want
18:53
to do.
18:54
I love my job. You love
18:56
your job. Most people don't work
18:58
in a job they would do for free if they
19:00
didn't need to.
19:02
So would AI replace venture capitalists?
19:04
Could they figure out investments better than you? You
19:07
know, can AI do investments? Possibly.
19:12
You're pausing. Let's get rid of the doctors.
19:14
But venture capitalists are a special group of hummingbirds.
19:16
No, I don't subscribe to this.
19:19
My job is special. Right. You
19:21
know, a sufficiently advanced intelligence
19:24
will make good investments. Even
19:26
if we don't need to make investments. The
19:29
pause was in the world
19:31
of great abundance,
19:33
capitalism needs to be redefined. Capitalism,
19:36
as we've seen it, is great
19:39
for economic efficiency.
19:42
People compete, they have to get better,
19:44
things become more efficient.
19:47
Capitalism works
19:50
when there is no abundance. When resources
19:52
are scarce and you have to make the most of them.
19:56
When in fact, abundance becomes easy.
19:59
Then you have to...
19:59
rethink the role of capitalism.
20:01
I'm not
20:02
saying we need to go to socialism, I'm
20:05
a total capitalist. But
20:07
how we think about the role of capitalism
20:10
will transition over the next 30, 50 years, probably
20:13
within our lifetime.
20:14
Within our lifetimes. All right, let's talk about OpenAI.
20:16
You were the first VC to invest. People don't realize
20:19
when the company switched to its so-called capped profit
20:21
model in 2019, Elon
20:24
Musk and Reid Hoffman were
20:26
already in there, having invested in it as a nonprofit.
20:29
Why did you wait to invest in it when
20:31
you were writing papers about it as early as 2012? What
20:35
was the wait for your perspective?
20:37
Well, let me be clear. We were
20:39
investing in the best AI we knew
20:42
how to invest. The picture of how it emerges
20:44
wasn't clear. It was pretty fuzzy. I
20:46
don't believe in big vision.
20:48
I'm sort of like you engage
20:51
and you learn.
20:52
Now, we invested in a deep learning company called
20:54
Jetpac, which failed, which would
20:56
classify Instagram images and
20:59
do a pretty good job of saying, I need
21:01
X, give me some pictures of that. The
21:03
company failed. It was acquired by
21:05
Google and that's fine.
21:08
We invested in Caption Health that started
21:10
to read, essentially guide ultrasound.
21:14
And we invested in a company that was
21:16
self-driving for MRI
21:18
machines. So you could do a cardiac MRI
21:21
without an MRI technician. So
21:23
we were investing for a while
21:25
in these areas. When
21:27
OpenAI came around, there really wasn't
21:30
a mechanism to invest in sort
21:32
of the core platform. It didn't exist.
21:35
But I remember Sam
21:37
calling and asking if we were interested,
21:40
and Sam wasn't gonna take venture capital firms
21:43
because it wasn't clear where this was
21:45
going or what would emerge. He wanted individuals
21:47
who cared about AI and
21:50
its trajectory. Right.
21:52
So explain how the capped structure
21:54
works very briefly. To me, that's a
21:56
technical detail. It's a 100X, you
21:59
get up to...
21:59
100x your investment, not more, and
22:02
the rest goes for societal good.
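The capped-return split described here is simple to sketch. A minimal, hypothetical illustration of a 100x cap, with everything above the cap flowing to the nonprofit (the function and the numbers are illustrative assumptions, not OpenAI's actual legal mechanics):

```python
# Hypothetical illustration of a capped-profit payout: the investor's
# return is capped at a multiple of the investment; the excess goes
# to the nonprofit parent.
def capped_payout(invested: float, gross_return: float, cap_multiple: float = 100.0):
    """Split a gross return between investor (capped) and nonprofit (excess)."""
    cap = invested * cap_multiple
    investor = min(gross_return, cap)
    nonprofit = max(gross_return - cap, 0.0)
    return investor, nonprofit

# A $1M stake that grows to $250M: the investor keeps $100M (100x),
# and the remaining $150M goes toward the societal mission.
print(capped_payout(1_000_000, 250_000_000))  # (100000000.0, 150000000.0)
```

Below the cap the structure behaves like an ordinary investment, which is why a 100x ceiling is, as he says, one an investor can be happy with any day.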
22:04
And Sam called us because he thought we
22:06
cared about societal good as
22:08
much as we cared about sort of the profits.
22:11
100x is pretty good. 100x I'd
22:13
be happy with any day. What
22:15
really was the risk was where would
22:17
AI go? Would it ever generate any
22:20
profits? What would be the capability?
22:23
And would this be the way to get to
22:25
scale in AGI? In
22:27
order to have the money to do so. Now, Elon,
22:29
who was an early investor too, in the nonprofit,
22:31
tweeted that OpenAI had become quote, a closed
22:33
source maximum profit company effectively
22:36
controlled by Microsoft. Not what I intended
22:38
at all. Basically saying it was a bait and switch.
22:41
He's now today started his own competitor. He's
22:43
talked about it and made a lot of noise about it. Why
22:46
is it not a bait and switch to go from nonprofit to
22:48
profit, even if capped? And
22:51
what do you think of his new effort?
22:53
Well,
22:54
so I haven't asked Elon
22:56
about his effort. You're in a better position to ask
22:59
him.
22:59
No, he's not speaking to me, but go ahead. But
23:03
at one level, he's calling for a pause
23:05
in research. And the other he's accelerating
23:07
his efforts. It
23:11
speaks volumes on the importance of AI. Whether
23:15
it's his self-driving cars or
23:18
Twitter or other mechanisms. I
23:20
think he believes just like I do,
23:23
like Sam does, everybody does, in the
23:25
strong power of AI to be a powerful
23:27
economic force.
23:29
And so, you know, it behooves him and
23:32
others to do the most they can.
23:34
What I would say is when there's such a strong
23:36
economic force, lots of people will play and
23:38
that's a healthy thing. And so when he
23:40
says that, how do you respond? Well, too bad.
23:43
Now it's too big, right? Is that essentially now
23:45
it's a thing? Well,
23:46
I don't think he quite imagined
23:48
it would be what it is. So in some
23:50
ways, these are, I think, sour
23:53
grapes, you know, and
23:55
he's trying to catch up.
23:56
He's trying to catch up. I was interested
23:58
as someone asked me, I said, well, he was... early and then he's late.
24:01
Now he's late, which is kind of an interesting thing. What's
24:04
your relationship with OpenAI now?
24:06
You're just one of their investors,
24:08
correct? Well, I've always had a great
24:10
relationship with Sam. With Sam Altman. We
24:13
share a similar vision on
24:15
the role of technology in
24:17
society. I think that's
24:19
why he invited us and probably the
24:22
only venture firm he invited to invest in
24:24
OpenAI because we had
24:26
aligned goals
24:27
with what was the original purpose of
24:30
OpenAI, which is maximum
24:32
good, societal good. He says he's
24:34
earning, and he told me in an interview, an immaterial
24:36
amount from his initial investment through Y Combinator.
24:39
Is that a fair statement? Is he not going to be
24:41
benefiting from this upsurge?
24:44
I think Sam is very direct and very honest,
24:47
so you can take his word for what he's saying. At
24:49
least I believe him when he
24:51
says something. He's
24:53
not benefiting from the success
24:55
of OpenAI, so he can be true to the original
24:58
objective.
24:58
Let's talk about government's role in
25:01
regulating AI. It's something that's been much discussed
25:03
around the world. There's efforts in China,
25:05
everywhere else. The US is a little slower, as always.
25:08
Sam told me there needs to be a global regulatory
25:11
body like the International Atomic
25:13
Energy Agency. Reid Hoffman
25:15
told me he would do the equivalent of the Blue Ribbon Commission
25:17
to declare a set of outcomes to prioritize
25:19
and a set to avoid. What's your idea
25:21
for regulation?
25:23
I think any international treaty
25:25
is a really bad idea. Okay, why?
25:28
It would be the equivalent of saying I'll trust
25:31
Xi and Putin with the future
25:33
of this planet.
25:34
Nuclear was very verifiable
25:37
when used. Biowarfare was
25:39
verifiable when used. These
25:41
things are verifiable. AI is not
25:44
verifiable. I'd be
25:46
shocked if China doesn't
25:48
have an effort to have bots talk
25:50
to every US voter in
25:53
the next election to the best
25:55
of its ability, one-on-one conversations
25:58
manipulating their views.
26:01
They'll definitely do it in Taiwan.
26:02
Certainly. So,
26:05
you can't track it. Yeah. So
26:07
the question you have to ask with regulation is what
26:09
danger are you trying to eliminate?
26:11
You know, there's sentient AI, which some people
26:14
talked about. I think... Killer
26:16
robots. Killer robots.
26:19
You know, it's silly to imagine today
26:21
a robot could get away from us
26:24
and really kill all humanity,
26:26
and all those things.
26:27
That's the current plot of Mission: Impossible, in case you're
26:29
interested? It's great for fiction. Yeah.
26:33
But
26:34
that danger, I would venture to guess,
26:36
is a lower probability than the risk
26:38
of an asteroid hitting planet Earth in
26:40
the next 50 years or 100 years. So
26:45
it's a risk. I'm not saying it's not a risk.
26:48
It's evolving rapidly. So we have to keep
26:51
very close eyes on it.
26:53
But far greater risk than sentient
26:56
AI is
26:59
the risk that China
27:02
uses it to influence global
27:04
economics and hence global politics. They've
27:07
done a lot to
27:11
build influence globally outside
27:13
of China, which is new under President Xi. Yeah.
27:17
The Belt and Road Initiative. Minerals, everything.
27:19
Minerals. They
27:21
have a trillion dollars in loans outstanding
27:24
to nations.
27:25
They own the place. They own the place.
27:28
So they have the physical resources, which are essential
27:30
to economic development. And then they offer
27:33
AI globally
27:34
for good.
27:37
But it'll come with the Chinese style
27:39
of political system. And I want Western values
27:42
to win over the Chinese political system.
27:44
And by the way, it's a reasonable assertion
27:47
for them to say most people will be much better
27:49
off under China's rule, under China's
27:51
political system. And they genuinely
27:54
believe it's a better system for society,
27:56
not just for President Xi.
28:00
They believe it's a superior system. I happen
28:02
to prefer our system.
28:03
So that's the doomsday scenario, that it's
28:05
China's century, in other words. Yeah.
28:08
I'm not as bullish on China long-term, but
28:11
the fact that they get ahead of us
28:13
in the AI race,
28:14
use it for cyber warfare,
28:17
real warfare in the field, intelligent
28:20
drones, all that stuff.
28:23
But more importantly,
28:25
in the social sphere, like influencing
28:27
elections and opinions,
28:30
but also real true economic
28:33
development. So you've said that that's
28:35
why AI should not take a pause, as Musk and others
28:37
have advocated,
28:39
because of the quote, we need to win the race against China.
28:41
You've also worried about, as you've noted,
28:43
the existential threat. What does an
28:45
arms race with China
28:47
mean that we have to do? Obviously,
28:50
we have to do it. We have to fight with them
28:53
on this issue. There's no question
28:55
we are in a techno-economic war with China.
28:58
And by the way, there's a great book
29:00
I read recently called Danger Zone,
29:03
which argues that in the next 10 years, say by 2033,
29:06
China will peak.
29:09
And it argues that authoritarian
29:13
regimes get extremely dangerous
29:16
when they're reaching their peak power. Because
29:19
of demographics, because of slowing GDP
29:21
growth, this trillion dollars
29:24
in debt all comes due and will
29:26
not be repaid. So
29:28
it causes real consternation
29:31
internally in China.
29:33
And that makes them much more dangerous than
29:35
they would be if they were just happily growing
29:38
at 5% a year for the next 30
29:40
years.
29:41
So I do think they're particularly dangerous because
29:44
they will be peaking over the next decade
29:46
if I buy this book's thesis, which I do. And
29:50
because of that, they'll deploy their AI
29:52
to their benefit, both
29:55
for soft power and hard
29:57
power. So what does the US do then? You
29:59
say democracy and freedom of speech are at stake, but
30:01
you're also gonna make a lot of money. Is democracy
30:04
at stake here or what should the US do?
30:06
What should the US policy be right now
30:08
on this? And I assume you're working
30:11
in concert with
30:12
government.
30:14
Well, I talk to government a lot, trying
30:16
to influence against
30:19
those who are calling for rapid
30:21
regulation or pauses in development
30:23
because the biggest danger, it's
30:25
not saying that we don't have a danger in AI.
30:28
We have a much bigger danger
30:31
if we fall behind China, especially
30:33
because of the broad economic
30:35
power it conveys to the nation that wins.
30:38
And there isn't this notion of one winner
30:41
takes all. It may not be, it might
30:43
be. I just wanna change the probability
30:45
that they win or get ahead of us.
30:48
We'll be back in a minute.
31:02
Hi, it's Kara. I know you're a curious person
31:05
who deserves answers to life's most fascinating
31:07
questions. So I have a podcast rec for
31:09
you. On Freakonomics Radio, journalist
31:11
Steven Dubner explores well everything.
31:14
It dives into the hidden side of business, history,
31:17
culture and economics with the help of CEOs,
31:19
artists and Nobel laureates to reveal just
31:21
how interconnected these topics are.
31:24
Like how the CEO of Microsoft is tackling
31:26
the AI revolution or the recent
31:28
episode about ESG investors and
31:30
if they're actually helping the environment
31:32
and a new series on the economics of
31:35
whaling. That's a new fresh topic. New
31:37
episodes of Freakonomics Radio are available
31:39
every week wherever you get your podcasts.
31:50
Listen, even if you don't have
31:52
student loans yourself, you probably
31:54
know someone that does, which means you
31:57
also know the system is broken. But...
31:59
Have you ever wondered how
32:02
exactly it got this way?
32:05
There's a high demand for college because
32:07
of this huge economic shift in the
32:09
U.S. economy and the global economy. And
32:12
then you have this, like, you know,
32:14
free-flowing access to credit. And
32:17
then you had uninformed consumers. I mean,
32:19
this is just a recipe for
32:22
disaster.
32:24
Your student loan debt and then some explained.
32:27
That's This Week on the Weeds. Listen
32:29
wherever you get your podcasts.
32:34
In March, just before TikTok's CEO
32:37
testified before Congress, you hosted a dinner with Peter
32:39
Thiel in D.C. for investors and lawmakers
32:41
to press China as a techno-economic threat.
32:44
House Speaker Kevin McCarthy was there. What
32:47
was your goal? Because at that dinner,
32:49
Thiel compared TikTok to homelessness
32:51
in his speech. And he said that, quote, both are a
32:53
really obvious problem. I'd
32:55
love you to talk a little bit about that dinner and then
32:58
TikTok itself, because it's emblematic. TikTok
33:01
is sort of the shiny object. Here's
33:04
the broader perspective I
33:06
would give.
33:07
President Xi in the 14th five-year plan
33:09
in China specifically
33:11
called for winning the AI race and
33:13
the 5G race. Those
33:16
were two things he really called for. Now
33:19
ask yourself why? Because AI will
33:22
exert great economic power globally,
33:25
which is very much in their interest.
33:27
Winning the 5G race, they
33:30
already have their telecom
33:32
equipment in a
33:34
hundred countries, and that lets
33:37
them spy on nations and
33:39
individuals within nations the way they can
33:41
do it with impunity within China.
33:45
And so I think
33:46
the 5G race is about surveillance
33:49
of citizens everywhere. The AI
33:50
race is about economic power.
33:53
And TikTok is an extension of the
33:55
surveillance capability. And that may
33:57
not happen, but it's not a risk
33:59
we can afford to take. So what do you think they should do? I'm
34:02
gonna point to TikTok only because I do think
34:04
there's a lot more important things happening like 5G
34:07
and everything else than that. But what's to be
34:09
done about something like TikTok because it's so symbolic
34:12
and emblematic of the issue. Here's
34:14
a very popular app. A couple of years
34:16
ago, about four or five years ago, I wrote a piece that
34:18
said, best product I've seen in a long
34:20
time, I'm using it on a burner phone, we gotta
34:22
get China out of here. And I got a lot of pushback
34:24
that I was anti-Chinese and anti-Asian and I
34:26
was like, no, I'm anti-Chinese Communist Party. But
34:30
now everybody's like,
34:30
oh, we should ban them. What is
34:32
your feeling of what to do about that particular
34:35
issue? It wouldn't even be a question,
34:37
we should ban TikTok. Okay.
34:40
Just
34:41
because of potential danger of
34:43
it being controlled by the CCP,
34:46
the Chinese Communist Party. We have to do
34:48
everything we have to do for national security
34:51
and protecting our citizens and we are very
34:54
influenceable. If you look
34:56
at China,
34:57
the Chinese Communist Party controls companies.
35:01
In US, it works the other way.
35:03
The companies control government. People
35:07
haven't realized how different those systems
35:09
are and what purpose these
35:11
influence vectors serve. And
35:13
we have to realize that.
35:15
You know, it's not about, I'm not saying any
35:17
of this will happen. I'm saying
35:19
there's a risk of it happening and it's
35:22
about adjusting the probability of
35:24
these things happening.
35:25
Yeah, you've criticized Apple and
35:27
Tesla's business in China. Explain
35:29
why it's a problem and what they should do differently. Well,
35:32
to be clear, I haven't criticized
35:34
Apple's business in China or Tesla's business
35:36
in China.
35:38
What I have said is they can't
35:40
be independent views on
35:43
what is the right thing for Western world
35:46
because their interests are conflicted and
35:49
Elon Musk would get shut down in China
35:51
or in Russia.
35:52
So he can't say what he thinks. So he
35:54
can't say what he thinks. Neither can Tim Cook.
35:57
I think that's a very different statement. In
36:00
their own self-interest as capitalist
36:02
companies, they have to do
36:04
what's good for their shareholders. And that means
36:07
they can't be trusted to represent- American
36:10
interests. American interests. Should
36:12
they be there at all? Should they be there at all?
36:15
Or should they move somewhere else? Look,
36:17
in the current system, it's for every company
36:19
to decide what they do, not my choice.
36:22
Now, we don't have a China operation, and
36:25
there's a reason we've never had a China operation.
36:27
When I was at Kleiner, I'd resisted
36:30
Kleiner having a China operation. It
36:32
was only after I left in 2004 they
36:34
started the China effort. So I've
36:37
never been a fan because the Chinese
36:39
government and the Communist Party
36:41
will not let an American company win
36:44
long-term.
36:45
They will let them strategically win short-term,
36:49
but not long-term. That's correct. Okay. So
36:52
let's move to India and its role in tech
36:54
over the next couple of decades. Sam Altman
36:56
met with the Prime Minister Modi during
36:59
a recent visit to DC and reportedly discussed
37:01
collaborating on AI. I want to know what collaboration
37:04
would look like. His visit was controversial given
37:06
the human rights record there. What do you
37:08
make of him?
37:10
Well, I think Modi has
37:12
done a very good job on the economy
37:15
and getting government aligned.
37:17
I completely disagree with his
37:19
Hindutva
37:21
movement bias. I
37:24
think the biggest danger for a country
37:26
like India and continued economic development
37:29
and GDP growth is leaving a population
37:31
behind. So leaving the Muslim population behind.
37:34
Even if you were only interested in the
37:37
self-interest of the Hindu population
37:39
or his constituency, I'd
37:41
make sure they're not left behind. 200 million
37:43
people left behind is a recipe
37:46
for social unrest and disaster which
37:48
could set back the economy. So
37:50
my view is on
37:52
that front, he's being
37:54
politically opportune but causing
37:57
a long-term problem.
37:59
But of course, politicians
38:02
are okay to postpone problems by a decade
38:04
or two. But it is the single biggest risk
38:07
in India's economic development, I believe, leaving
38:10
one population behind.
38:11
And so what's to be done? Because this
38:13
is not a solvable problem. Would you think it'd be
38:15
good, since you're so well known in India, to
38:17
take a stronger stand there?
38:19
Well,
38:20
I don't think it's an unsolvable
38:22
problem, especially
38:24
with the power it confers on us. And
38:27
coming full circle to what
38:29
India can do with OpenAI. First,
38:32
India has the talent base to develop, on OpenAI,
38:35
applications that could
38:37
do a lot of good. Free doctors,
38:39
free teachers, free oncologists,
38:41
free robots. I've
38:44
had a number of conversations along that line.
38:48
So there is a lot of good to be done, and you
38:50
don't need to impoverish
38:52
one population to give to another.
38:55
We will have enough abundance
38:57
to solve the income disparity problem
39:00
or the social distribution problem. I
39:02
think the minimum standards on the planet by 2050
39:05
could be awesome, the minimum
39:07
standard.
39:08
So is it important for you to take
39:10
a political stand in somewhere promising like
39:12
India if you want them to be on this journey?
39:15
Yeah. You know, where I spend my
39:17
time is a whole different question
39:20
we can get to. You know, I have to
39:22
trade off between working with Bob Mumgaard
39:25
on fusion. We're working on a
39:27
public transit system that should be in every
39:30
one of the 4,000 cities around the planet
39:32
instead of just the 200 cities that have
39:34
public transit today. There's
39:37
enough important things I'm working
39:39
on. So it's a question of
39:41
how effective can I be?
39:43
By getting into a beef with Modi. You
39:45
want him to develop AI versus focusing
39:48
on the problems you see. Well, I
39:50
think those are independent vectors.
39:53
And I think on the economic development
39:56
front and the technology development front, they'll
39:58
probably do a reasonable job.
40:00
I could help but probably
40:02
not materially. So
40:06
I stay in touch a little bit,
40:08
but I don't allocate a huge amount
40:10
of my time to that.
40:12
To that, doing that, okay. You just
40:14
mentioned climate. Let's talk about climate. You
40:16
mentioned a number of projects, public transportation,
40:19
fusion. At the Breakthrough Energy Summit
40:21
in October last year, you said, quote, if we try
40:23
and reduce carbon by 2030, we'll be much
40:25
worse off than if we set a reduction target of 2040.
40:29
Slow down, explain.
40:31
I don't mean slow down in
40:33
the research and development science, but
40:35
I do mean forcing
40:39
un-economic solutions
40:43
is the wrong approach to solving the climate
40:45
problem.
40:46
I wrote a paper about 10 years
40:48
ago, maybe 15 years ago,
40:51
in which I defined what I call the Chindia
40:53
price.
40:55
It's the price for any technology
40:57
at which it would be
40:59
broadly adopted in India and China
41:02
against its fossil competitors. I think
41:04
this was in 2010. I think
41:05
Gates calls it the green premium, right? When
41:07
does it end? Like at
41:10
that price,
41:12
it gets broadly adopted. Above
41:14
that price
41:16
is going to be just showcase,
41:18
you know, dressings on the cake, not the cake
41:21
itself. Right, and so you think doing
41:23
that too quickly is a problem. You've had this
41:25
experience, you said in 2007, the
41:28
clean tech field was ripe for VCs and you said there are easy
41:30
pluckings. That was a tough time for you, those
41:32
investments. And I know you and I have talked
41:34
about that. You had investments in biofuel,
41:37
you had solar investments. It
41:39
didn't work out at the time. Was that what you're
41:41
talking about too early? No,
41:42
that's not what I'm talking about. Solar was actually
41:45
quite profitable for us. Each
41:47
had a different trajectory. Biofuels,
41:51
some things worked out, some things didn't.
41:53
And I think for aviation, long distance
41:55
aviation, for example, biofuels
41:58
is still the right answer.
41:59
And LanzaTech is a pretty popular public
42:02
company developing it.
42:03
Were you there too early? Do you feel
42:05
regretful of these investments? Or do you?
42:08
Well, actually the climate investments
42:10
for us worked out fine. Some
42:12
of the solar things worked out fine. Some
42:15
didn't, some did. Enough did to
42:17
make it profitable for us.
42:19
LanzaTech is a very successful company
42:22
in biofuels. Yes, we lost
42:24
some companies, but we made more money than we
42:26
lost,
42:27
which is the key metric in venture capital. No,
42:30
I
42:30
know, but when I interviewed Doerr, he was
42:32
like, it wasn't the hit we wanted
42:35
it to be. He was also early in a lot of
42:37
those things and felt it was too early. Now,
42:39
Kleiner's strategy was different
42:42
than ours, right?
42:44
And I'd left Kleiner by 2004
42:47
and was working on our own. Right, but he was doing it
42:49
separately. Yeah. QuantumScape
42:52
has been a successful investment in batteries,
42:54
and we bet on batteries. At the same time,
42:56
we bet on biofuels for cars and
42:59
aviation. Biofuels for cars,
43:01
lost to batteries.
43:03
Biofuels and aviation looks like the
43:05
lead horse still.
43:07
All right, so when the biofuel company you did back,
43:09
it's KiOR, is it KiOR, K-I? K-I-O-R,
43:12
KiOR. It went bankrupt.
43:15
Fortune actually asked, and I'd love you to comment
43:18
on this, if it was, quote, evidence that fast-moving
43:20
venture capital investors are ill-suited
43:22
to tackle such technically demanding time-consuming
43:25
endeavors.
43:26
I disagree with that. Tell me why.
43:29
You know, QuantumScape, for example,
43:31
slow-moving, long time to develop,
43:34
but in a very, very large market,
43:37
has been very successful. LanzaTech,
43:40
if you measure profits today, has
43:42
been very, very profitable,
43:44
makes up for many, many KiORs in
43:46
terms of our losses, which
43:49
is the nature of venture capital. High-risk bets,
43:51
small percentage,
43:53
win.
43:54
They make up for all the losses. And
43:56
I think biofuels will work
43:58
out for us, as it worked out for us
43:59
with LanzaTech, batteries
44:02
worked out for us. We bet on both. It's
44:04
Impossible Foods.
44:05
Impossible Foods has been very, very profitable.
44:08
Remember, we bought 50% of the company
44:10
for $3 million. So it's been, even
44:12
at depressed prices, it's a very
44:15
profitable effort for us. And
44:17
that's the nature of venture capital.
44:18
So you don't feel that you
44:20
were early to those or is this the time now
44:22
to really invest? One of the things, you brought up nuclear
44:25
fusion. This is the tech you've invested
44:27
in. Sam Altman just announced he's backing it. Bezos,
44:30
Peter Thiel, Bill Gates. Talk about
44:32
fusion, for example, which Marc
44:34
Benioff has called the holy grail. Many
44:37
people think we're decades away from functioning fusion
44:39
power in plants. There are efforts to make small
44:41
modular nuclear devices. People
44:44
are thinking about nuclear, nuclear energy
44:46
plants to restart them. Where are
44:48
you
44:48
looking at in climate? So my
44:50
view is, and
44:53
I tweeted about this recently,
44:55
that fission will take longer to permit
44:58
than fusion will take to develop
45:00
from scratch. So
45:03
I connected with Bob Mumgaard
45:06
on starting Commonwealth Fusion when
45:08
he was a postdoc at the MIT
45:11
Plasma Science and Fusion Center.
45:12
So you think that's faster than any political?
45:15
My bet is that technology will be developed
45:18
faster than you could permit one
45:20
nuclear fission plant here
45:23
in the US
45:24
with all the risks associated
45:27
with it. So
45:29
I do believe that's a large solution.
45:32
It's not the only solution we are working in.
45:35
We have some investments in solar
45:37
still, but for dispatchable
45:39
power, which is reliable power when the
45:41
sun isn't shining and the wind isn't blowing.
45:43
Right, no, renewables aren't enough. The wind and solar
45:45
are great. Wind and solar are great. Wind and water are not enough.
45:47
We've invested in those, not a lot in wind.
45:50
Right. You need reliable dispatchable
45:52
power and for which there's two
45:55
technologies. One is fusion
45:57
and the other is geothermal,
45:59
often ignored, but
46:02
we've invested in both and both seem to be
46:04
working out. And I
46:06
think either path would provide a great
46:09
amount of power economically. And
46:11
you're leaving out nuclear, small modular
46:14
devices, all these. So we've not invested.
46:17
I used to be bullish on nuclear. I'm
46:20
less bullish now
46:21
because of the permitting cycles, the
46:24
social objections, the extensions
46:26
in permitting. I think you could develop fusion
46:29
faster than you can permit one new
46:31
nuclear
46:32
plant. And there's no way to overcome that, because many
46:34
people think it should be. I don't think we need
46:36
it. I do think some of the risks are
46:38
real.
46:39
I do think we can address risks,
46:41
and we were investors in TerraPower
46:43
too. But I think fusion is close
46:46
enough
46:47
that it will far exceed
46:50
the scaling capacity of any
46:53
other technology. All right. Let's
46:55
talk, you did talk about tweet. That was a funny tweet, but
46:58
you're kind of active on Twitter these days. You're
47:00
getting a little trolly there, Vinod. What's happening?
47:03
What tweet are you talking about? I'm not even asking about your beach controversy.
47:06
How is the beach, by the way? What tweet are you talking
47:08
about? Oh, you have a lot. Let me read. You
47:10
want me to read you a couple? I really enjoyed your English major tweet.
47:13
You said most English majors lack a goal
47:15
or purpose. Why else would you major in your native
47:17
language? Justine Musk, ex-wife
47:20
of Elon Musk, said most English majors are women.
47:22
So tech bros are basically saying that women who go
47:24
to college and study literature are lacking in personal life
47:27
goals. No wonder we love
47:28
them so. Why
47:31
do this, Vinod? I'm just curious. It's been
47:33
a curious development. It was half
47:35
in jest, but not really. Yeah, I know. I
47:37
do think when
47:38
you spend $100,000 or $200,000 for college in the US, you'd better get an
47:43
employable skill. Okay.
47:46
Now it won't be important once we...
47:48
Yeah, but you were saying we're not going to be working. Yeah,
47:50
if we aren't working and reduce the need for
47:53
employment, then we'll be fine. In
47:55
fact, I've said that more recently.
47:59
I've also... As I said, by the way, I'm
48:01
not against English graduate degrees.
48:05
Just undergrad, purposeless,
48:07
people end
48:09
up with a lot of debt and a job waiting tables
48:11
at a restaurant.
48:12
So pointless. So pointless jobs. You
48:15
don't like them just because... You know, English majors are not about grammar.
48:17
It's about literature, but okay. All right. Why?
48:21
You know, and I'm not saying nobody should
48:23
read literature. And it's not everybody.
48:27
Half the people who do English will do great
48:29
because of their knowledge, education,
48:32
diploma, Stanford
48:35
stamp, whatever. But
48:37
I'm talking about the downside case of
48:39
the people who won't get jobs because they don't
48:41
have a skill. Now,
48:43
this will become less important in the age of
48:45
abundance and AI if you buy my thesis.
48:48
But... So go
48:50
ahead and read Dickens. Go for it. It
48:53
just doesn't matter what you enjoy if humanity
48:56
has the leisure of not needing to
48:58
work. Need to do it. But I was curious why you did
49:00
that. I'm fascinated by it. It was
49:02
unusual for me to see it because you
49:05
don't strike me as a dunker and
49:07
a meme lord or anything like that, a
49:09
dank meme lord. You don't have that reputation.
49:12
I'm
49:12
trying to encourage more kids
49:14
to do... Think about the right things. Okay. All
49:17
right. So it's not sort of the current trend of venture capitalists
49:19
opining on everything from Ukraine
49:22
to whatever they feel like talking
49:24
about that day. Look,
49:26
you can tweet because you're trying
49:28
to get the most number of followers. I don't try
49:30
and do that. When I have
49:32
time, I will tweet
49:35
to transmit some insight
49:37
that I have. Right.
49:38
Right. So you find it useful.
49:41
You find it useful. I find it useful. Sometimes
49:43
it's funny and that's okay. I think I
49:45
had a tweet around Elon and Mark.
49:48
Oh, fight. What's your feeling
49:51
on that? Seems juvenile. It's
49:53
funny. Okay. You know, I don't think
49:55
it's important enough for us to worry about it. If
49:57
it makes a few people laugh, then that's a good
49:59
time.
49:59
A few smiles is a good contribution.
50:02
All right,
50:02
okay. I think it's slightly toxic,
50:04
a lot of it. I don't spend a whole lot of
50:06
time on Twitter. No, I know, you recently said you'd probably join Threads.
50:08
Have you done it? What do you think about the competition?
50:11
You know, to be honest, I did join Threads because
50:14
I think it could be an interesting
50:16
platform. I haven't done much
50:18
on Threads.
50:18
Okay, so my last question, you've been in venture
50:20
capital for so long. How do you look
50:22
at venture capitalists now and their
50:25
use? Here we are in Sand Hill Road, the famous place.
50:27
It's changed drastically. Has
50:30
it changed for the good or bad? Do
50:32
you see innovation in your own industry?
50:35
Look, as industries mature,
50:37
they diversify. Venture capital isn't one
50:39
thing. My view is at different
50:42
stages in your life, you worry about different
50:44
things.
50:45
I'm most worried about maximum impact
50:48
and a better society.
50:51
And I think venture capital is huge
50:53
leverage on that.
50:55
Much better climate technologies, much better
50:57
transportation, much better medicine,
51:00
much better education. All those are really,
51:02
really exciting things to do. They
51:04
won't happen on their own. If you leave
51:07
the world to electric cars, then
51:10
it'll be on the GM schedule by 2035. I
51:13
think the forecast was we'd have 50,000 vehicles
51:16
or some ridiculous number. Then an
51:19
instigator of change like Elon Musk
51:21
comes along and causes everybody
51:23
to rethink.
51:24
I think that's possible in every area, from
51:27
housing to cars to transportation
51:30
to medicine, entertainment in every area.
51:32
And I'm very, very excited. Frankly,
51:35
it's much more fun than playing golf for-
51:37
So you're not retiring? I'm not retiring.
51:40
I, as long as health permits with
51:43
that one caveat, I'm not playing
51:45
golf, I'm not going sailing. I have
51:47
much more fun uses of my time
51:50
that really excite me.
51:51
Okay. All right, Vinod, thank you
51:53
so much. I really appreciate it. Thanks.
52:05
So you and Vinod Khosla won't be going golfing
52:07
or sailing anytime soon, it seems.
52:10
We never would have, anyway. It doesn't matter. So
52:12
we're just continuing to do what we don't do together.
52:14
That's true. Maybe I'll do another interview 20 years from
52:16
now, since both of you will still be working. No,
52:18
I shall not. You will not be in the 80% of the 80%
52:20
whose jobs are taken. No,
52:23
never actually. I thought it was interesting
52:26
that he attributed the jump in AI not
52:28
to the transformer paper and not
52:30
to the increasing scale of computer
52:32
power, but to the idea that people are moving there.
52:35
That was a leading indicator for him, was that humans were moving
52:38
there. Yeah, I think he's right. It's a very VC
52:40
way to look at it. Well, that's how they watch
52:42
things, like where people are going, whether it's crypto
52:44
or anything else. And I do think people
52:46
do things in their self-interest, and so they're watching
52:48
self-interest happening. And that's one of the things that
52:50
drives Silicon Valley. Everyone rushes to the internet
52:53
or rushes to crypto or rushes to whatever. But
52:55
Kara, it's not all
52:56
benevolence? No, nothing. None
52:58
of it. Shocking. Self-aggrandizement
53:01
fuels everything in Silicon Valley. So
53:04
that makes sense for him to say
53:06
that. And of course, those underlying trends, the transformer paper,
53:08
the scaling of power is what also those people are watching,
53:10
not just their self-interest, but where are the
53:13
jobs? What can they build? Yeah,
53:15
but I don't even think they read. You don't think they read?
53:17
No, I don't. Where can
53:20
they get money? Oh my gosh. I don't know. I
53:22
think engineers are... So I've met so many
53:24
earnest engineers who are just excited about fixing the next
53:27
problem. Well, they get bored. They get bored and they
53:29
move on to other things. They have very hummingbird-like
53:32
personalities in a lot of ways. They shift to the next
53:34
thing. They do. He outlined this vision
53:37
of AI doctors. It's all
53:39
kind of dystopian. And in a way, it kind
53:41
of makes sense, like this 80% of 80%, because I've heard doctor friends
53:44
say to me, look, I should be able to look
53:46
at numbers for
53:48
most of my patients and then spend more time
53:50
with a smaller set of patients that need
53:52
intervention. So
53:55
that makes sense.
53:56
But don't you think that's a bit dystopian or you think that is
53:58
what the world will look like? I don't know if it's
54:00
dystopian. It's happened to farmers.
54:03
It's happened to manufacturing people. It's
54:05
just because it's happening to richer people.
54:07
I think, look, doctors should
54:09
spend more time with patients and not doing forms and
54:12
other things that they're not as good at. I don't think there's
54:14
anything wrong with replacing rote work,
54:16
wherever it may be, with digital
54:19
versions that are better. There's not
54:21
any great talent
54:23
in reading a radiology scan like an X-ray. I
54:27
think there are, of course, jobs that become
54:29
moot with technology.
54:32
That's definitely true. But I think the challenge
54:34
for me was actually, he said, one, humans
54:36
will have other jobs to do. And you've said this
54:39
before with drivers and cars.
54:41
But my big question is, what jobs? I
54:43
don't know. I don't know. I don't know. Nobody
54:45
imagined the internet would create Uber. Nobody
54:48
imagined all kinds of things.
54:50
I don't know something else, but none of us are
54:52
felling trees with axes anymore. We're
54:54
not doing it. Yeah. I think
54:56
the question, the consideration of like, okay, well, what are
54:58
those jobs? And the second problem is, well, he
55:00
says, abundance will allow for redistribution.
55:03
Sure,
55:03
abundance will allow for redistribution of the math,
55:06
but people's politics and mindset
55:08
in this country of, well, we don't really want to
55:10
pay for people who don't work to have health insurance
55:12
or basic income. Those things are hard
55:15
to change. No, they're not. That's
55:17
how life happens. I mean, they didn't have gay marriage 30
55:20
years ago, and now they do. One out of five
55:22
children in the US live in poverty and they haven't found
55:25
a way to fix that. So it's not like just by people
55:27
suffering, you're going to change something. What
55:29
has happened is that, more and more, we've got to come
55:32
up with new jobs. And the challenge,
55:33
rather than saying, oh no, oh no, to everything,
55:36
is to get together with smart people
55:38
working together to figure out what the
55:40
next jobs are, how we should reform things. Maybe
55:43
we won't be able to do it, but saying
55:45
it's not going to happen is not really particularly
55:47
helpful. I think neither thing is helpful: saying it's not going
55:50
to happen is not helpful, but saying that it's assuredly
55:52
going to happen is also not helpful. It is a very
55:54
big problem. But it is. It is. It's like
55:56
saying cars aren't coming. It's like saying that redistribution
55:59
will happen, or
55:59
humans will have other jobs. Well, yeah, but that's his, it's his
56:02
opinion. It's not easy, no one says
56:04
it's easy. I think it's just, this is, he's just
56:06
stating what I think is probably accurate
56:08
is that this is gonna happen and so this
56:11
is my idea. Yeah, and which isn't dissimilar
56:13
to most of Sam Altman's ideas, I think the
56:15
question is, is it going to be an idea that
56:17
countries can get behind? In Europe, I think
56:20
there's a lot more inkling and desire
56:22
for that than in the United States, which is just built
56:24
on a more, a different ethos. Well,
56:26
that's an old, that's an old idea
56:28
of the US.
56:29
That has been dying for years and so I
56:32
do think we've changed a lot more than you realize and
56:35
I think there is an openness to
56:37
a lot more creative solutions
56:40
for people. I think people have reached a limit where
56:42
selfishness is not necessarily helping
56:44
us. It's not quite as powerful. It's not whether or
56:46
not people believe in big government or people believe,
56:49
I think actually government's being gutted more and more and
56:52
people, whether people wanna get behind social
56:54
security programs and other programs and
56:56
there's healthcare, all kinds of things, free education,
56:59
look at the pushback Biden's getting on loan forgiveness,
57:01
which by the way, has problems. Okay,
57:04
we suck, so I don't know what to say. We either have to
57:06
not suck, or we will have to do something. I
57:08
think that this is like the hopeful way to look at
57:10
this and the way that I'm trying to look at it is,
57:12
okay, this is going to be the push that we need to
57:14
get to a more
57:17
collective society and a society
57:20
that addresses some of these huge underlying
57:22
problems, chasms that we haven't been able to cross,
57:24
but we were able to cross, for example, in the pandemic,
57:26
there was a coming together and I'm hopeful
57:29
with AI that there will be a coming together to deal
57:31
with some of these things.
57:33
Well, we'll have to deal, again, I think
57:35
there's, most things spin forward,
57:38
not everything, and we go backwards, but kids
57:40
used to work in factories, just like things
57:42
change and we didn't think they were gonna change and
57:44
they do. And so this is an opportunity
57:46
to change in a good way and hopefully, I
57:49
am heartened by government's movement
57:51
so far, I don't know if it's gonna go anywhere, but to say, oh,
57:54
they'll do nothing again. I hope they
57:56
don't, they have an opportunity.