Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Reboot your credit card
0:02
with Apple Card, the credit card created by
0:05
Apple. It gives you unlimited daily
0:07
cash back that you can now choose
0:09
to grow in a high-yield savings
0:11
account that's built right into the Wallet app.
0:13
Apply for Apple Card now in the Wallet
0:16
app on iPhone and start growing
0:18
your daily cash with savings today.
0:21
Apple Card subject to credit approval,
0:23
savings is available to Apple Card
0:25
owners subject to eligibility
0:28
requirements. Savings accounts provided
0:30
by Goldman Sachs Bank USA, member
0:33
FDIC, terms apply.
0:59
Are you having fun in your job?
1:11
I'm loving every day of it, Stephen. Most
1:16
CEOs of big technology firms are
1:18
not loving every day right now. They've
1:20
been facing all sorts of headwinds and
1:22
backlash. But you can see why Satya
1:24
Nadella might be the exception. He's
1:27
worked at Microsoft for more than 30 years, nearly 10
1:29
as CEO. At
1:31
the start of the personal computer era,
1:34
Bill Gates' Microsoft was a behemoth,
1:37
eager to win every competition and
1:40
crush every rival. But the Internet
1:42
era put the company on its heels. Newer
1:44
firms like Google,
1:45
Facebook, and Amazon were more
1:47
nimble, more innovative, and maybe
1:50
hungrier. Jeff Bezos of Amazon
1:52
would reportedly refer to Microsoft as a
1:54
country club. But under
1:57
Nadella, Microsoft has come roaring
1:59
back.
1:59
He invested heavily in what turned out to be big
2:02
growth areas like cloud computing. Microsoft
2:05
has always been in the business of acquiring other
2:07
companies, more than 250 over its history, but
2:10
some of the biggest acquisitions have been Nadella's,
2:13
LinkedIn, Nuance Communications,
2:16
and if regulators allow, the gaming
2:18
firm Activision Blizzard. And
2:20
there have been many more key acquisitions like
2:23
GitHub, where computer programmers store
2:25
and share their code. Once again,
2:28
Microsoft is a behemoth, the
2:30
second most valuable company in the world, trailing
2:32
only Apple. Its stock price is up
2:35
nearly 50% since the start of 2023. But
2:39
that's not even the reason why Microsoft
2:41
has been all over the news lately. They're
2:43
in the news because of their very splashy
2:46
push into artificial intelligence
2:48
in the form of ChatGPT, the
2:50
next-level chatbot created by
2:52
a firm called OpenAI. Microsoft
2:55
has invested $13 billion
2:57
in OpenAI for a reported 49%
2:59
stake in the company, and
3:02
they quickly integrated OpenAI's tech
3:04
into many of their products, including
3:06
the Microsoft search engine Bing.
3:09
For years, Bing was thought
3:11
of as something between a footnote and
3:14
a joke, running a very distant
3:16
second to Google. But suddenly,
3:18
Bing with ChatGPT is
3:21
on the move, and Google is trying to
3:23
play catch-up with its own chatbot called
3:26
Bard. So how exactly
3:28
did Satya Nadella turn the country
3:30
club into a bleeding edge
3:32
tech firm with a valuation of
3:35
more than two and a half trillion dollars?
3:37
Our mission, Stephen, is to empower
3:40
every person and every organization
3:42
on the planet to achieve more. And so
3:45
as the world around us achieves
3:47
more, we make money.
3:48
I like that. I mean, I assume
3:51
you actually believe that. You're not just saying that, are you? No, 100%.
3:54
You have to have a business model that is
3:57
aligned with the world around you doing well.
3:59
Today
4:02
on Freakonomics Radio, we speak with
4:04
Satya Nadella about the blessings
4:06
and perils of AI.
4:09
We talk about Google and Heidegger,
4:12
about living with pain, and
4:14
about Microsoft's succession
4:16
plan.
4:20
No, it'll be nothing like that, Nadella
4:23
promises. We will take succession
4:25
seriously.
4:38
This is Freakonomics Radio, the
4:40
podcast that explores the hidden side
4:42
of everything with your host,
4:45
Stephen Dubner.
4:53
I spoke with Satya Nadella one afternoon
4:55
earlier this month. I was in New York and he
4:58
was in his office at Microsoft's headquarters
5:00
near Seattle. It's fantastic to
5:03
have a conversation again. We first
5:05
interviewed Nadella in 2017 for
5:07
a series called The Secret Life of
5:09
a CEO. Even then, he
5:12
was extremely excited about
5:14
AI. At the time, Microsoft
5:16
was high on a virtual reality headset
5:19
called the HoloLens. Think
5:21
about it. The field of view, what
5:23
you see is a blend of the
5:25
analog and digital. The
5:27
ability to blend analog
5:30
and digital is what we
5:32
describe as mixed reality. There are times when
5:34
it'll be fully immersive. That's called virtual
5:36
reality. Sometimes when you can
5:38
see both the real world and the artificial
5:41
world, that's what is augmented reality.
5:44
But to me, that's just a dial that you set.
5:47
Just imagine if your hologram was right
5:49
here interviewing me as
5:51
opposed to just on the phone.
5:53
Back then, Nadella cautioned that there
5:55
was still a lot of work to do. Ultimately,
5:58
I believe in order to bring about some of these
6:00
magical experiences and AI capability,
6:03
we will have to break free of some of the limits we're
6:05
hitting of physics, really.
6:07
The limits of physics haven't been
6:09
broken yet, and the HoloLens has
6:11
not been the hit that Microsoft was
6:13
hoping for. But
6:15
Nadella's devotion to AI is
6:17
paying off big time in the form of
6:20
ChatGPT, which quickly captured
6:22
the imagination of millions. GPT
6:25
stands for Generative Pre-trained
6:27
Transformer, and ChatGPT
6:29
is what is known as a Large Language Model,
6:32
or LLM. It takes in vast
6:34
amounts of data from all over the internet so
6:36
it can learn how to read and
6:38
answer questions very much like a
6:41
human, but
6:42
a really, really smart human,
6:44
or perhaps a million smart humans.
6:47
And the more we ask ChatGPT
6:49
to answer questions, or summarize
6:52
arguments, or plan itineraries,
6:55
the more finely tuned it gets, which
6:57
proves at the very least that we humans are
6:59
still good for something. The current
7:02
iteration is called GPT-4, and what's
7:05
the relationship between ChatGPT
7:08
and Bing?
7:08
Basically, Bing is part of ChatGPT
7:11
and chat is part of Bing, so either way,
7:13
it doesn't matter which entry point you come to, you
7:15
will have Bing.
7:16
So Satya, I asked ChatGPT
7:18
for some help in this interview. I said
7:21
I'm a journalist interviewing Satya Nadella and I want
7:23
to get candid and forthright answers.
7:25
You know, I just didn't want corporate boilerplate,
7:28
and what chat told me was to
7:30
do my homework, which you know, I did, I usually
7:32
do that, to ask open-ended questions,
7:35
which I typically try to do. But one
7:37
that hung me up a little bit was I need to build
7:39
rapport. Now, we have a relatively
7:42
short time together today. Are there
7:44
any shortcuts to building rapport?
7:46
Yeah, what's your knowledge of cricket?
7:50
Oh, I blew it. I
7:52
knew that you're a big cricketer, you
7:54
played as a kid, I knew you cared more about
7:57
cricket than schoolwork as a kid, but no,
7:59
I blew it.
7:59
That's too bad because there's a World Test
8:02
Championship starting tomorrow. I
8:04
was going to ask you about it, but hey, look,
8:06
your love for economics builds me an
8:09
instant rapport.
8:10
I'd like you to walk us through Microsoft's
8:12
decision to bet big on OpenAI,
8:15
the firm behind ChatGPT. There was an early
8:17
investment of a billion dollars, but then much, much
8:19
more since then. I've
8:22
read that you were pretty upset
8:24
when the Microsoft research team came to
8:26
you with their findings about OpenAI's
8:28
LLM, its large language model. They said that they were
8:31
blown away at how good
8:33
it was and that it had surpassed
8:36
Microsoft's internal AI research
8:38
project with a much smaller
8:40
research team in much less
8:42
time. Let's start there. I'd like you to describe
8:45
that meeting. Tell me if what I've read first of all is true.
8:47
Were you surprised and upset with your
8:49
internal AI development? Yeah, I
8:51
think that this was all very recent. This
8:55
was after GPT-4 was very much
8:57
there. And then that was just mostly me pushing
9:00
some of our teams as to, hey, what did we miss?
9:02
You've got to learn.
9:03
You know, there were a lot of people at Microsoft who got
9:06
it and did a great job of, for example,
9:08
betting on OpenAI and partnering
9:10
with OpenAI. And to me four
9:12
years ago, that
9:14
was the idea. And then as we went
9:16
down that journey, I started saying, okay, let's
9:18
apply these models for product building,
9:20
right? Models are not products. Models can
9:22
be part of products. The first real
9:25
product effort, which we started, was GitHub
9:27
Copilot. And, you
9:29
know, quite frankly, the first attempts on
9:31
GitHub Copilot were hard because, you know, the
9:33
model was not that capable, but it is only
9:36
once we got to GPT-3, when
9:38
it started to learn to code that
9:40
we said, oh wow, these emergent phenomena,
9:43
the scaling effects
9:44
of these transformer models are
9:46
really showing promise.
9:49
Nadella may be underplaying the tension between
9:52
Microsoft and OpenAI, at least
9:54
according to a recent Wall Street Journal article called
9:56
"The Awkward Partnership Leading the
9:58
AI Boom."
9:59
It describes, quote, conflict and
10:02
confusion behind the scenes. And
10:05
because the OpenAI deal is
10:07
a partnership and not an acquisition, the
10:09
journal piece makes the argument that Microsoft
10:11
has influence without control
10:14
as OpenAI is allowed to partner
10:16
with Microsoft rivals. Still,
10:19
you get the sense that Nadella is excited
10:21
about the competitive momentum ChatGPT
10:24
has given Microsoft, as you can tell from
10:26
this next part of our conversation.
10:30
Google still handles about 90% of
10:33
online global search activity.
10:36
An AI-enabled search model
10:38
is plainly a different kind of search
10:40
than what Google's been doing. Google is trying
10:43
to catch up to you now. How
10:45
do you see market share in search
10:47
playing out via Bing, via ChatGPT,
10:49
in the next five to 10 years? And
10:52
I'm curious to know how significant that might be
10:54
to the Microsoft business plan overall.
10:56
This is a very general purpose technology,
10:59
right? So beyond the specific use
11:01
cases of Bing Chat or ChatGPT,
11:03
what we have are reasoning
11:06
engines that will be part of every
11:08
product. In our case, they're part of Bing and
11:11
ChatGPT. They're part of Microsoft 365. They're
11:14
part of Dynamics 365. And so in
11:16
that context, I'm very excited about what
11:19
it means for search. After all, Google,
11:21
as you said, rightfully, they're dominant
11:23
in search by a country mile. And
11:26
we've hung in there over the decade. We've
11:28
been at it to sort of say, hey, look,
11:30
our time will come where there will be a real
11:33
inflection point in how search
11:35
will change. We welcome Bing versus
11:38
Bard
11:38
as competition. It'll
11:40
be like anything else, which is so
11:42
dominant in terms of share and also
11:45
so dominant in terms of user habit,
11:47
right? We also know that defaults
11:49
matter. And obviously, Google controls the
11:51
default on Android, default on iOS,
11:54
default on Chrome. And so they have a great
11:56
structural position. But at
11:58
the same time, whenever there is a
12:00
change in the game, it is all up
12:02
for grabs again to some degree, and I know
12:05
it'll come down to users and user choice.
12:08
We finally have a competitive angle
12:10
here, and so we're going to push it super hard.
12:13
What are some of your favorite uses,
12:15
personal or professional, for ChatGPT?
12:18
The thing that I've talked about,
12:20
which I love is the cross-lingual
12:23
understanding. That's kind of my term for
12:26
it. You can go from Hindi
12:28
to English or English to Arabic
12:30
or what have you, and they've done a good job. If
12:32
you take any poetry in any one language
12:36
and translate it into another language.
12:38
In fact, if you even do multiple languages,
12:41
so my favorite query was, I said, I always,
12:43
as a kid growing up in Hyderabad, India,
12:45
said, I want to read Rumi,
12:48
translated into Urdu and translated
12:50
into English. And one
12:52
shot, it does it. But the most interesting
12:55
thing about that is it captures
12:57
the depth of poetry. So it
13:00
finds somehow in that latent
13:02
space, meaning that's
13:04
beyond just the words and their
13:07
translation. That I
13:09
find is just phenomenal.
13:11
This amazes me. You're saying, you, the CEO
13:14
of a big tech firm, are saying
13:16
that one of the highest callings of ChatGPT
13:18
or a large language model is the translation of poetry.
13:21
I love it. I mean, I know you love
13:23
poetry, but what excites
13:25
you more about that than
13:28
more typical
13:30
business, societal, political, economic
13:32
applications?
13:33
I love a lot of things. I
13:36
remember my father trying to read
13:38
Heidegger in his forties
13:41
and struggling with it, and I've attempted
13:43
it a thousand times and failed. And,
13:45
you know, he's written this essay. Somebody pointed
13:47
me to it and said, oh, you've got to read that
13:49
because after all, there's a lot of talk about AI
13:52
and what it means to humanity. And
13:54
I said, let me read it. But I must say, you
13:56
know, going and asking ChatGPT
13:58
or Bing Chat to summarize
13:59
Heidegger is the best way to read
14:02
Heidegger.
14:05
According to ChatGPT, Heidegger
14:08
himself would not have been a fan
14:10
of AI. In Heidegger's
14:13
view, Chat tells us, technology,
14:16
including AI, can contribute to what
14:18
he called the forgetting of being.
14:22
And Heidegger is hardly alone. After
14:25
all, philosophy and poetry will
14:27
likely not be the main use
14:29
cases for AI. So after
14:32
the break, we talk about potential downsides
14:34
of an AI
14:35
revolution and the degree to
14:37
which Microsoft cares. I want
14:39
all 200,000 people at Microsoft working
14:42
on products to think of AI safety. I'm
14:44
Stephen Dubner. This is Freakonomics Radio.
14:46
We'll be right back.
14:57
Freakonomics Radio is sponsored by Capital
15:00
One. Capital One offers commercial solutions
15:02
you can bank on. Now more than ever, your
15:05
business faces specific challenges
15:07
and unique opportunities. That's
15:09
why Capital One offers a comprehensive
15:12
suite of financial services, custom
15:14
tailored to your short and long-term
15:16
goals. Backed by the expertise,
15:19
strategy and resources of a top 10
15:21
commercial bank, a dedicated team
15:23
works with you to support your success
15:26
and help achieve your goals.
15:27
Explore the possibilities at
15:30
CapitalOne.com slash
15:32
commercial.
15:33
Hey there, Stephen again. Communication
15:36
is key to success in business and in
15:38
life, but things like speaking off the cuff,
15:41
negotiating a raise and giving feedback
15:43
can take specific skills and a lot of
15:45
practice to do well. Think Fast,
15:48
Talk Smart is a podcast that teaches
15:50
these skills in a fun, engaging way. Hosted
15:54
by Stanford University lecturer Matt Abrahams,
15:56
each episode covers all aspects of
15:58
communication, including actionable
16:00
tools. Hone your communication
16:03
skills and learn valuable career insights.
16:05
Listen to Think Fast, Talk Smart
16:08
wherever you get your podcasts.
16:18
Last month, a group of leaders from across
16:21
the tech industry issued a terse one-sentence
16:24
warning. Understanding the risk
16:26
of extinction from AI should
16:29
be a global priority alongside
16:31
other societal-scale risks such
16:33
as pandemics and nuclear war. The
16:36
extinction they are talking about is
16:39
human extinction. Among
16:41
the signatories were Sam Altman, the
16:43
CEO of OpenAI, and
16:45
two senior Microsoft executives.
16:48
Altman, Satya Nadella, and
16:50
other executives from firms working on AI
16:53
recently met with President Biden to
16:55
talk about how the new technology should
16:57
be regulated. I asked Nadella
16:59
where he stands on that issue. I
17:02
think the fact that
17:04
we are having the conversation
17:07
simultaneously about both the
17:10
potential good that can come
17:12
from this technology in terms of
17:14
economic growth that is more
17:16
equitable and what have
17:18
you. At the same time that
17:20
we are having the conversation on
17:22
all the risks, both here and now
17:24
and the future risk, I think it's a super
17:27
healthy thing. Somebody gave me this analogy,
17:29
which I love. Just imagine when the steam engine
17:31
first came out if we had a conversation
17:34
both about all the things that the steam engine
17:36
can do for the world and the industrial production
17:38
and the industrial revolution and how it will change
17:41
livelihoods. At the same time, we were
17:43
talking about pollution and factory
17:46
filth and child labor. We
17:48
would have avoided whatever, 100 years plus
17:50
of terrible history. So
17:52
then it's best to be grounded on
17:55
what does the risk framework look like. If
17:57
AI is used to create more and
17:59
more disinformation, that's a problem
18:02
for our democracy and democratic institutions.
18:05
Second, if AI is being
18:07
used to create cyber attacks or bioterrorism
18:10
attacks, that's a risk. If there is
18:12
real world harms around bias, that's
18:14
a risk. Or employment displacement,
18:16
that's a risk. So let's just take those four.
18:19
In fact, those were the four even the White
18:21
House was upfront on and saying, hey,
18:23
look, how do we really then have
18:25
real answers to all these four risks?
18:28
So in terms of, for example, take
18:29
disinformation, can we have techniques
18:32
around watermarking that help verify
18:35
where the content comes from? When
18:37
it comes to cyber, what can we do to
18:39
ensure that there is some regime
18:42
around how these frontier models are being
18:44
developed? Maybe there is licensing,
18:46
I don't know, this is for regulators to decide.
18:49
Microsoft
18:51
itself has been working on provisions to
18:53
best govern AI. For instance, safety
18:56
breaks for AI systems that control
18:58
infrastructure like electricity or
19:01
transportation. Also a certain
19:03
level of transparency so that academic
19:05
researchers can study AI systems.
19:08
But what about the big question?
19:11
What about the doomsday scenario wherein
19:13
an AI system gets beyond
19:16
the control of its human inventors?
19:18
Essentially, the biggest unsolved
19:21
problem is how do you ensure
19:23
both at sort of a scientific understanding
19:26
level and then the practical engineering level
19:29
that you can make sure that
19:31
the AI never goes out of control? And
19:33
that's where I think there needs to be a
19:35
CERN-like project where both
19:38
the academics, along with
19:40
corporations and governments all
19:42
come together to perhaps solve
19:44
that alignment problem and accelerate the
19:46
solution to the alignment problem.
19:48
But even a CERN-like project after
19:50
the fact, once it's been made available
19:52
to the world, especially without watermarks
19:55
and so on, does it seem a little
19:57
backwards? Do you ever think that your
20:00
excitement over the technology led you
20:02
and others to release it publicly too early?
20:04
No, I actually think first of all, we're in very early
20:07
days and there has been a lot of work. See,
20:09
there's no way you can do all of this
20:12
just as a research project. And we
20:14
spent a lot of time, right? In fact, if anything,
20:16
that, for example, all the work we did in launching
20:19
Bing Chat and the lessons learned in
20:21
launching Bing Chat is now all available
20:24
as a safety service, which, by the way, can
20:26
be used with any open source model. So
20:29
that's, I think, how the industry and the
20:31
ecosystem gets better at AI safety.
20:34
But at any point in time, anyone
20:36
who's a responsible actor does need
20:39
to think about everything that they can do for
20:41
safety. In fact, my sort of mantra internally
20:43
is the best feature of AI is AI
20:46
safety.
20:47
I did read, though, Satya, that as part
20:49
of a broader, a much broader, layoff earlier
20:51
this year that Microsoft laid off its entire
20:53
ethics and society team, which presumably
20:56
would help build these various guardrails
20:58
for AI, from the outside,
21:00
that doesn't look good. Can you explain that?
21:02
Yeah, I saw that article too. At the same time,
21:04
I saw all the headcount that was increasing
21:07
at Microsoft because it's kind of like saying,
21:09
hey, should we have a test organization
21:12
that is somewhere on the side? I think
21:14
the point is that the work that AI
21:16
safety teams are doing has now become
21:19
so mainstream, such a critical
21:21
part of all product making, that
21:24
we have actually, if anything, doubled down
21:26
on it. So I'm sure there was
21:28
some amount of reorganization and any reorganization
21:31
nowadays seems to get written
21:32
about, and that's fantastic. We love that.
21:35
But
21:35
to me, AI safety is
21:37
like saying performance or quality
21:40
of any software project. You can't separate
21:42
it out. I want all 200,000 people
21:44
at Microsoft working on products to think
21:46
of AI safety.
21:49
One particular
21:49
concern about the future of AI is
21:52
how intensely concentrated the technology
21:54
is within the walls of a relatively
21:57
few firms and institutions.
21:59
Economists Daron Acemoglu and
22:02
Simon Johnson recently published a book on
22:04
this theme called Power and Progress,
22:06
Our 1,000-Year Struggle Over
22:09
Technology and Prosperity. And
22:11
here's what they wrote in a recent New York Times
22:14
op-ed. Tech giants Microsoft
22:17
and Alphabet/Google have seized a large
22:19
lead in shaping our potentially AI-dominated
22:22
future. This is not
22:24
good news. History has shown
22:27
us that when the distribution of information
22:29
is left
22:29
in the hands of a few, the result
22:32
is political and economic oppression.
22:34
Without intervention, this history
22:37
will repeat itself. Their
22:39
piece was called Big Tech is Bad,
22:42
Big AI Will Be Worse.
22:45
You could argue we are fortunate to have
22:47
a CEO as measured as Satya
22:50
Nadella leading the way at Microsoft. But
22:52
of course he won't be there forever. After
22:55
the break, what does a Microsoft
22:58
succession look like? I'm
22:59
Stephen Dubner. This is Freakonomics Radio.
23:02
We'll be right back.
23:06
No matter how strong
23:08
your portfolio is or how well you budget
23:10
your spending, the truth is life can
23:12
be unpredictable and anything could happen.
23:14
So how can you protect yourself from anything happening?
23:17
The answer might surprise you. An annuity or
23:19
life insurance policy from Lincoln Financial
23:21
might be what you need. Their annuities offer
23:23
options to help you safeguard your money against unpredictable
23:26
market swings, to help you manage sudden dips,
23:28
and can provide you with a steady income during
23:30
retirement. Their life insurance policies have
23:32
a death benefit for your loved ones. Plus, they
23:35
offer some access to the cash in your policy
23:37
in case of an emergency. Because anything happening
23:39
sometimes happens. Go to LincolnFinancial.com
23:42
slash Get Started Now to learn how to plan,
23:44
protect, and retire today. Lincoln
23:47
annuities and life insurance are issued by the Lincoln National Life
23:49
Insurance Company, Fort Wayne, Indiana. Products sold
23:51
in New York are issued by Lincoln Life & Annuity Company
23:53
of New York, Syracuse, New York. Distributed
23:55
by Lincoln Financial Distributors, Inc., a broker-dealer.
23:58
Freakonomics Radio
23:59
is sponsored by Whole Foods Market. Nothing
24:02
says summer fun like grilling outside
24:05
with help from Whole Foods Market. From
24:08
family reunions to hanging with friends,
24:10
you will find the perfect spread, like
24:12
animal welfare certified bone-in
24:15
ribeye, top sirloin, and chicken
24:17
kebabs. You can just imagine those
24:19
perfect grill marks. Keep your
24:21
wallet happy with sides and fixings from 365
24:24
by Whole Foods Market, plus beer,
24:27
pie, fresh produce, the list goes
24:29
on.
24:29
Can't take the heat? Let them cater
24:32
your event. Make summer happen at
24:34
Whole Foods Market.
25:59
Rather than try to come
26:02
up with long lists of ways
26:04
of vilifying predecessors,
26:07
what Nadella did is he
26:09
was able to be on a frontier
26:12
at this exact same moment as the early investors
26:14
in OpenAI as well as in
26:17
reinventing their own artificial
26:19
intelligence opportunities so
26:22
that Bing, to the surprise of all, might
26:24
soar past everybody. He got
26:26
people excited about building a new
26:28
future, investing $25 billion in R&D each year.
26:33
That's perhaps twice as much as the average pharma company
26:35
invests, and that's amazing for an
26:37
IT company to do that.
26:39
A big part of Nadella's success
26:41
came from expanding Microsoft's
26:43
footprint in cloud computing with their
26:46
Azure platform.
26:47
Their footprint across the board at Enterprise
26:49
Software was flourishing, where he knew
26:52
how to invest in Azure and a commercial
26:54
cloud business whose revenues
26:56
grew 42% over the past year.
26:58
I asked Nadella himself if he
27:01
had been surprised by how valuable
27:03
cloud computing has become for Microsoft.
27:05
Both surprised and not surprised in
27:08
the following sense. We were leaders in client
27:10
server, but while we were leaders
27:12
in client server, Oracle did
27:14
well, IBM did well, and
27:17
so in fact it shaped even my
27:19
thinking of how the cloud may sort
27:21
of emerge, which is that it will have
27:23
a similar structure. There will be at least two
27:26
to three players who will be at scale,
27:28
and there will still be many other smaller
27:30
niche players perhaps. So in that
27:33
sense, it is not that surprising. What
27:35
has been surprising
27:35
is
27:37
how big and expansive
27:40
the market is. Let's think about
27:42
it. We sold a few servers in India,
27:44
but oh my god, did I think that
27:47
cloud computing in India would be this
27:49
big? No. The market is
27:51
much bigger than I originally thought.
27:54
I have a fairly long and pretentious
27:56
question to ask you. There are economists
27:59
and philosophers
27:59
and psychologists who argue that most
28:02
of us still operate under a scarcity
28:04
mindset that might have been appropriate on
28:06
the savanna a million years ago, but now
28:09
we live in an era of abundance. So
28:12
rather than competing for scarce resources,
28:15
we should collaborate more to grow
28:17
the overall resource pool. From
28:19
what I know about your time
28:22
as CEO at Microsoft, it seems you
28:24
have embraced the collaborative model over the
28:26
competitive model. One example being how nicely
28:28
Microsoft now plays with Apple devices,
28:30
whereas the previous administration didn't even
28:33
want Microsoft employees owning Apple
28:35
devices. So I'd like to hear your
28:38
thoughts generally on
28:40
this idea of collaboration versus
28:43
competition and scarcity versus
28:45
abundance.
28:47
That's
28:48
a very deep question. I
28:50
mean, at the macro level, Stephen,
28:53
I actually do believe that
28:55
the best technique humanity
28:58
has come up with to create,
29:00
I would say, economic
29:03
growth and growth in our well-being
29:05
as humanity is through cooperation.
29:08
So let's start there, right? So the more
29:11
countries cooperate with countries, people
29:13
cooperate with people, corporations cooperate
29:16
with other corporations, the better off we are.
29:19
And then at a micro level,
29:21
I think you want to be very careful
29:24
in
29:24
how you think about zero-sum
29:27
games, right? I think we overstate
29:30
the number of zero-sum games that
29:32
we play. In many cases,
29:34
I think growing your overall
29:37
share of the pie is probably
29:40
even more possible when the pie itself
29:42
is becoming bigger. So I've always approached it
29:44
that way. That's kind of how I grew up, actually,
29:46
at Microsoft. And so
29:49
all of what we have done in the last whatever,
29:51
close to 10 years, has been to look
29:54
at the
29:54
opportunity set first
29:57
as something that expands the opportunity
29:59
for all players, and then being competitive.
30:02
Were there people within the firm though who
30:05
said or felt, wait a minute, I
30:07
know you're the new CEO and I know you have
30:09
a new way of doing things, but Google is our enemy.
30:12
Apple is our enemy. We can't do that.
30:14
Did you have pushback?
30:15
Yeah, I mean, look, it's a very fierce
30:17
competitive industry. And even
30:19
if we didn't think of them as our competitors,
30:22
our competitors probably think of us
30:24
as competitors. But I think at the end of the day, I think
30:27
it helps to step back and say, you
30:29
know, it doesn't mean that you back away from some
30:31
real zero sum competitive
30:34
battles, because after all, that's kind of what fosters
30:36
innovation. And that's what creates consumer surplus
30:38
and opportunity. And so that's all fine.
30:41
But at the same time, leaders in
30:43
positions like mine have
30:45
to also be questioning
30:50
what's the way to create economic
30:52
opportunity. And sometimes,
30:55
you know, construing it as zero sum is probably
30:57
the right approach. But sometimes it's not.
30:59
So
31:00
Microsoft is a huge company and huge
31:02
companies get bigger by acquisition,
31:05
typically. Let's go through a couple. I know you
31:07
tried a few times to buy Zoom. You
31:09
haven't succeeded yet. You're
31:11
still in the middle of trying to acquire Activision. That's
31:14
tied up in the US at least in an FTC
31:16
lawsuit. A few years
31:18
ago, I read you tried to buy TikTok.
31:20
You called those negotiations the
31:22
strangest thing I've ever worked on. What
31:25
was so strange about that?
31:26
At least let me talk to all the acquisitions
31:29
that we did that actually have succeeded.
31:31
And we feel thrilled about it, right? Whether it's
31:33
LinkedIn or GitHub or Nuance
31:36
or ZeniMax or Minecraft.
31:39
These are all things that we bought.
31:41
I feel that these properties are better
31:44
off after we acquired them because
31:46
we were able to innovate and then make
31:48
sure that we stayed true to the core
31:51
mission of those products and those customers
31:53
who depended on those products.
31:55
What about TikTok though? What was so strange
31:57
about that negotiation or those conversations?
32:00
Everything. First of
32:02
all, I mean, just to be straight
32:04
about it, TikTok came
32:06
to us because they at that time
32:09
sort of said, hey, we need some help
32:11
in thinking about our structure
32:14
and given what at that time at least
32:16
was perceived by them as some
32:18
kind of a restructuring that the
32:20
United States government was asking.
32:22
They needed a US partner, in other words,
32:24
yes. Yeah, so at that point we
32:27
said, look, if that is the case that you want
32:29
to separate out your US operations or worldwide
32:31
operations, we would be interested in being engaged
32:34
in a dialogue. And it was, let's
32:36
just say, an interesting
32:38
summer that I spent on it. Okay,
32:41
so not long ago Satya, you became the chair
32:43
of the Microsoft board in addition to CEO.
32:46
Now, a lot of corporate governance people
32:48
hate the idea of one person having
32:51
both jobs. I
32:52
asked ChatGPT about it. What's
32:54
the downside? One potential
32:56
conflict of interest, ChatGPT told me,
32:58
is that the roles of CEO and board chair can
33:01
sometimes be at odds. The CEO is typically focused
33:03
on the day to day yada yada, but there can be potential
33:06
conflicts of interest. Can you give
33:09
an example of one
33:11
conflict that you've had or maybe you haven't,
33:14
which would give the corporate governance people even
33:16
more headache?
33:17
The reality is we have a lead independent
33:20
director, a fantastic lead independent
33:22
director in Sandy Peterson. She
33:24
has the ultimate responsibility of hiring
33:26
and firing me. That said, I think
33:28
the chair role as I see it is
33:30
more about me being able to sort of,
33:33
you know, having been close to 10 years in my role
33:35
to use my knowledge of what it is that
33:37
Microsoft's getting done in the short and the long
33:40
run to be able to coordinate the board
33:42
agendas and make sure that the topics
33:44
that we're discussing are most helpful
33:47
for both the board
33:47
and the management team. And so
33:50
it's kind of as much about, you know, program
33:52
managing the board versus being responsible
33:54
for the governance of the board. And the governance
33:57
of the board is definitely with the independent directors.
33:59
Can you name a time when the board
34:01
voted down a big idea of yours?
34:04
I don't know that there is a particular vote
34:06
where they voted me down, but I take all of
34:08
the board feedback on any idea
34:11
that I or my management team
34:13
has. We have a good format where
34:15
every time we get together, we kind of do a left
34:17
to right, I'll call it overview
34:20
of our business. And we have a written doc,
34:23
which basically is a living document
34:25
which captures our strategy and performance
34:27
and having that rich discussion where
34:29
you can benefit from the
34:32
perspective of the board and
34:34
then change course based on that perspective
34:36
is something that I look forward to and I welcome.
34:39
Now the last time we spoke, which was several
34:41
years ago, you talked about how the birth of
34:44
your son Zane changed you
34:46
a great deal. He was born with cerebral palsy
34:48
and you said that empathy didn't
34:51
come naturally to you, certainly
34:53
not compared to your wife, but
34:55
that over time being a parent to a
34:57
child with a severe handicap was a powerful
35:00
experience for you on many levels. I
35:03
was so sorry to read
35:05
that Zane died not long ago in
35:07
just his mid 20s. So my
35:09
deepest condolences on
35:12
that Satya. I'm also
35:14
curious to know if or how
35:17
his death has changed
35:19
you as well. No,
35:20
I appreciate that, Stephen. It's
35:23
probably it's hard, Stephen, for me
35:25
to even reflect on it
35:28
that much. It's been for both
35:30
my wife and me.
35:31
In some sense, he was the one sort of
35:35
constant that gave us a lot of purpose,
35:37
I would say, in his short life.
35:39
And so I think, you know, I think we're still
35:42
getting through it and it'll, I think,
35:44
take time. But I just
35:47
say the thing that I perhaps
35:49
have been more struck by is
35:52
what
35:53
an unbelievable support
35:56
system that got built
35:58
around us, even
35:59
in the local community around Seattle.
36:02
At his memorial, I look back at it,
36:04
all the people who came, right? All
36:06
the therapists, the doctors, the
36:08
friends, the family, the colleagues
36:11
at work. I even was thinking about it, right?
36:13
After all, Zane was born when I was working at
36:15
Microsoft and he passed when I was working at Microsoft
36:19
and everything, even from the benefits
36:21
programs of Microsoft to the managers
36:23
who gave me the flexibility. I think that sort
36:25
of was a big reminder to me that all of us
36:27
have things happen in our lives. Sometimes
36:30
things like pandemics or the passing of a
36:32
loved one or the health issues of elderly
36:35
parents. And we get by
36:38
because of the kindness of people
36:40
around us and the support of communities
36:42
around us. And so if anything, both
36:44
my wife and I have been super, super thankful
36:47
to all the people and
36:49
the institutions that were very much part
36:52
of his life and thereby part of our lives.
36:56
You
36:56
are a young man, still 55 years
36:58
old, but you've been at Microsoft
37:00
a long time now, been CEO almost 10 years.
37:02
I'm curious about a succession plan, especially,
37:06
I don't know if you watched the HBO show, Succession.
37:09
Do you watch Succession, Satya or no? I
37:13
watched, I think, the first season a bit
37:15
and I was never able to get back to it. Okay,
37:18
so I'll give you a small spoiler. It doesn't go
37:20
well. And their succession plan
37:23
turns out to be, I think the technical term
37:25
is total show, okay?
37:26
So I am curious if your succession
37:29
plan will be somewhat more orderly than
37:31
the succession plan on succession. Obviously,
37:34
the next CEO of Microsoft is going to
37:36
be appointed by the lead independent directors of
37:39
Microsoft and not by me. But
37:42
to your point, it's a board topic when
37:45
we have a real update on it every year
37:47
as it should be. And I take that as a serious
37:50
job of mine. Like one of
37:52
the things that I always say is long
37:55
after I'm gone
37:56
from Microsoft, if Microsoft's doing
37:58
well,
37:59
then maybe I did a decent
38:02
job, because I always think about the strength
38:04
of the institution long
38:06
after the person is gone is the
38:08
only way to measure the leader. I'm
38:11
very, very suspicious of people who
38:13
come in and say, before me, it was horrible,
38:15
and during my time, it was great, and after
38:17
me, it is horrible. I mean, that's, first of all,
38:19
means you didn't do anything to build institutional
38:21
strength. So yes, I take that
38:24
job that I have in terms of surfacing
38:26
the talent and having the conversation
38:28
with the board of directors
38:29
seriously. And when the time
38:32
comes, I'm pretty positive that they will
38:34
have a lot of candidates internally, and
38:36
they'll look outside as well. And so
38:38
yes, we will take succession seriously.
38:41
That was Satya Nadella, CEO of Microsoft.
38:46
His
38:50
intelligence, I think you will agree, doesn't
38:52
feel artificial at all.
38:57
Coming up next time on the show. Most
38:59
people, when they think about marriage, they think
39:02
about it in terms of preferences
39:04
and in terms of love. But economists
39:08
aren't most people. So this
39:10
idea is what encapsulates
39:12
the idea of the marriage market. Is
39:14
marriage really a market?
39:16
I think people truly misunderstand
39:19
these dating services. Why
39:22
did you marry that person? That's
39:24
next time on the show. Until then, take
39:26
care of yourself. And if you can,
39:29
someone else too. Freakonomics
39:32
Radio is produced by Stitcher and Renbud Radio.
39:34
You can find our entire archive on any
39:37
podcast app or at freakonomics.com,
39:39
where we also publish transcripts and show
39:42
notes. This episode was produced
39:44
by Zach Lipinski with research
39:46
help from Daniel Moritz-Rabson. It
39:48
was mixed by Greg Rippin with help from
39:50
Jeremy Johnston. Our staff also
39:52
includes Alina Kullman, Eleanor Osborne,
39:59
Catherine Moncure, Lyric Bowditch, Morgan
40:02
Levy, Neil Caruth, Rebecca Lee Douglas,
40:04
Ryan Kelly and Sarah Lilly. Our
40:06
theme song is Mr. Fortune by the
40:08
Hitchhikers. All the other music was composed
40:11
by Luis Guerra. As always,
40:13
thanks for listening.
40:15
I blew an opportunity here. I need to
40:17
ask ChatGPT how to get over intense
40:20
disappointment at myself.
40:28
The Freakonomics Radio Network.
40:31
The hidden side of everything.
40:40
Old Dominion Freightline was built
40:42
on keeping promises. With an industry leading
40:44
on-time delivery record and low claims rate,
40:47
we keep promises better than any other LTL
40:49
freight carrier because we treat every shipment
40:51
like it's our most important one, which means
40:54
we do the little things right so that we can
40:56
keep our promises and you can keep yours
40:58
too. That's what drives us. To
41:00
learn how OD can help your business keep
41:02
its promises, visit odfl.com.
41:05
Old Dominion, helping the world keep promises.
41:10
As the weather warms, Ohioans
41:13
are finding more ticks. Avoiding
41:15
tall grass and wearing light-colored,
41:17
long pants and long-sleeved shirts
41:20
are just a few things you can do to protect
41:22
yourself. After being outdoors, be
41:24
sure to check for ticks and if you find
41:27
any, remove them and watch
41:29
for any symptoms if you find any bites.
41:32
For a full list of ways to prevent tick
41:34
bites, visit ohio.gov
41:37
slash ticks.