Episode Transcript
0:00
Support for this show comes from Kraken.
0:03
Crypto is like the financial system, but
0:05
different. It doesn't care where you come from,
0:07
what you look like, your credit score, or your
0:09
outrageous food delivery habits. Crypto
0:12
is finance for everyone, everywhere,
0:14
all the time. Kraken, see
0:16
what crypto can be, not investment
0:19
advice. Crypto trading involves risk of loss.
0:21
Cryptocurrency services are provided to US and
0:23
US territory customers by Payward Ventures Incorporated.
0:26
View PVI's disclosures at kraken.com slash
0:28
legal slash disclosures.
0:30
Ever wonder what's on a tech startup's Mac?
0:33
Multiple apps and endless presentations
0:36
to make the startup stand out. Mail
0:38
to connect with big name investors. And
0:40
Keynote to craft the big pitch to lock
0:42
them in. Adobe Acrobat to
0:45
sign contracts that secure development funding.
0:47
Xcode to build the app that solves a problem.
0:50
And Todoist to keep the team on track
0:53
for delivery. All from one
0:55
powerful machine. Launch
0:57
a startup on Mac.
1:01
This is Recode Media, Peter Kafka. That is
1:03
me. This is not the episode we were gonna
1:05
bring you this week. We taped that one last week. It
1:07
was a big look back at the year in AI. Kevin
1:09
Roose from the New York Times really helped us sort of walk
1:12
through the hype cycle, the reality. It was great.
1:14
And now we can't use it because of
1:16
the events of the last week. And so Kevin has graciously
1:19
agreed to come back again. He's on no sleep. He may
1:21
collapse during the podcast. It
1:23
could be a good episode. Welcome back, Kevin. That'd
1:26
be great content. Yeah, if I just start snoring
1:29
during the podcast, just wake
1:30
me up, okay? It's a good clip at least.
1:33
We're recording this at 1:34 p.m. on Monday, November 20.
1:36
Things could change by the time you hear this. But Kevin,
1:39
let's do some really basic stuff. Who is the CEO
1:41
of OpenAI right now?
1:43
As of this moment, and as
1:46
you said, things can change by the minute, it
1:49
seems like. So as of this moment, the
1:52
CEO of OpenAI is Emmett Shear. He
1:55
is the former CEO
1:57
of Twitch, and he was appointed
1:59
last night by the board
2:02
of OpenAI on an interim basis. And
2:07
prior to Friday afternoon, if you just
2:09
turned off your computers and didn't check your phones Friday
2:12
afternoon, you might have thought the CEO of OpenAI
2:14
was Sam Altman. Where is Sam Altman
2:17
now? Well, I
2:19
don't know physically where he is in the world.
2:22
I assume he's at home in San Francisco,
2:25
but spiritually and sort
2:27
of organizationally, he is now
2:29
at least purportedly
2:32
an employee of Microsoft. He and
2:35
Greg Brockman, the president and co-founder
2:37
of OpenAI are
2:40
leading a new advanced
2:43
AI research team at Microsoft. Again,
2:46
this is all as of announcements
2:49
made late last night in sort of the fog
2:51
of war, but it appears that he
2:53
and Greg and hundreds
2:56
of other OpenAI employees
2:58
are prepared to decamp to
3:00
Microsoft. So to back up before
3:03
we go any further, last week OpenAI
3:05
was the breakout startup of the AI
3:07
boom. It was worth a theoretical $86 billion. Now
3:12
Microsoft has essentially acqui-hired most
3:14
of the talent from there and it's now a unit of
3:17
Microsoft or there's a version of OpenAI that's now
3:19
a unit of Microsoft.
3:21
We still don't know the exact details because again,
3:23
all this is shifting so quickly, but
3:26
what we know is that Microsoft
3:29
CEO Satya Nadella announced on
3:32
Sunday night that Sam
3:35
and Greg Brockman were going to be
3:37
the heads of this
3:40
new AI research
3:42
team, essentially kind of a mini
3:44
OpenAI, inside Microsoft
3:47
and that they
3:51
were doing what they could to bring
3:53
over as many OpenAI employees
3:56
as wanted to come and the
3:58
sort of plan is that these
4:01
people, unless by
4:03
some 11th hour miracle, Sam Altman
4:05
and Greg Brockman are reinstalled
4:08
at OpenAI and the board of OpenAI
4:11
resigns, the plan as
4:13
of right now is for those people to all go
4:15
work for Microsoft. So this is an amazing
4:18
story in tech. It's the biggest story
4:20
of many, many months. You had to spend the whole
4:22
weekend reporting on it. Again, thank you for coming.
4:25
So we're gonna dig into how all this happened, what it means.
4:28
But if you're just a regular person who's used
4:30
ChatGPT, that's the product OpenAI
4:32
makes, and played with it,
4:34
and maybe it did a homework assignment for you, or
4:37
that's kind of your interest in this, why
4:39
should you care about what happens to
4:41
OpenAI and the fact that OpenAI now
4:43
seems to have basically moved to Microsoft
4:46
in the middle of the weekend? Well, it matters because
4:48
OpenAI is a massively
4:51
important company in Silicon
4:53
Valley. Not only do they
4:56
make ChatGPT, which
4:58
is the most popular AI product
5:00
on the market, it's got 100 million weekly
5:03
users, but it's also the
5:05
company that's sort of at the front edge,
5:08
the leading edge of the AI boom.
5:10
They are the people
5:13
with the most advanced models, the
5:15
biggest sort of bench of top
5:18
AI talent, and they have been
5:20
kind of setting the agenda for the entire
5:22
tech world over the past year. And
5:25
so it is really a stunning turn
5:27
of events that Sam Altman, who
5:31
has been leading this company through a period of intense
5:33
growth, was pushed
5:35
out by the board in this dramatic, sort
5:38
of mysterious coup led
5:40
by his chief scientist,
5:43
and a number of board members who are also
5:45
affiliated with this movement, effective
5:48
altruism that thinks that AI could take
5:50
over the world and kill us all. It is
5:52
just an insane story and so
5:55
dramatic and cinematic and it's
5:57
almost certainly the most unusual
6:00
story that I've ever covered in my career
6:02
as a tech reporter. And I want to get into all those
6:04
details, but I just want to channel one of my coworkers
6:07
in Slack today who's not covering this stuff
6:09
but cares about the world. He says, this just
6:11
seems like the same old, same old. There was a company,
6:13
a big important startup. Now
6:15
Microsoft owns a big important startup. It's
6:18
just the same old, same old. That's
6:20
a jaded Monday response, but also I
6:22
get it. Why should you care who
6:25
owns this stuff? Well I think
6:27
it's just intrinsically interesting because
6:29
of all the drama and the back
6:31
and forth. I mean this is not how
6:33
tech business as usual gets
6:36
conducted over a weekend, before
6:38
a holiday, in this kind
6:40
of cloak and dagger way. But
6:43
you've probably seen the social network and
6:45
that looks like guys screwing each other over
6:47
for money and it's a sad story. Maybe you've heard
6:49
about Twitter firing its CEOs one
6:51
after another. It seems like
6:54
if you want to be really reductive about
6:56
it, this is just a bunch of rich white guys messing
6:58
around with each other again. I
7:01
mean if that's how you want to think about it, I
7:03
don't want to dissuade anyone. This is my straw man.
7:05
I'm trying to get you to do it. Right. I
7:07
mean no one has to care about this story. I'm just saying why I care
7:09
about this story and why I think it's actually quite
7:12
important. I also think the
7:14
stakes here are much different than
7:16
the traditional kind of business
7:19
dispute. These are not people
7:21
squabbling over money or control
7:24
of some company. This
7:27
is really a fight between two visions
7:30
of the future. In one of them,
7:32
AI is an exciting
7:35
product and an opportunity to build
7:38
new apps and new systems
7:41
to transform the global economy
7:43
and to sort of lead us into a glorious utopia.
7:46
In the other one, it is a menace
7:49
and a kind of looming
7:51
threat to humanity that is getting
7:54
ever more powerful and that could someday wipe
7:56
us out. I mean it sounds like the plot
7:58
of a science fiction movie. But
8:00
it is really happening. These are really
8:02
the stakes that the people involved think are
8:05
involved here. So for that
8:07
reason, I think it's of interest to a lot of people. So
8:09
let's talk about that fight because
8:11
I think some of this, if you cared to
8:13
look, was evident but most people weren't paying attention
8:16
to this stuff. When OpenAI
8:18
put out its press release on Friday afternoon,
8:21
it seemed like this might have been a good old-fashioned
8:23
business dispute. They said, Sam
8:25
Altman's leaving and then they normally
8:28
in a press release like that would say he's going
8:30
to pursue other interests, spend more time with his
8:32
family, he's going to stay on as a consultant.
8:34
But they made it very clear that they were firing him and
8:36
they said they were firing him because of a lack of candor.
8:39
So there was a brief period where my phone was blowing
8:42
up with rumors about what Sam Altman
8:44
had done to get himself fired
8:46
by the OpenAI board because it had to be something
8:49
really bad to get fired as
8:51
the head of... You're the face of AI,
8:53
you're the face and founder, you're
8:55
the guy running this $86 billion company,
8:58
you must have done something terrible. But
9:01
we haven't heard any evidence that that's
9:03
actually the case, and it looks like there's actually this sort
9:05
of theological schism you were just talking
9:07
about. And so you're hearing terms like effective
9:10
altruism, doomerism. Can
9:13
you just get a little deeper into
9:15
how there were two different camps at
9:17
this same company and how they split? Yeah,
9:20
so it's a long and very convoluted
9:22
story and it would take maybe an hour to
9:25
explain the whole thing but I'll just do the... And a flowchart,
9:27
flowcharts tend to help. But basically
9:30
here's the deal, there is this movement called
9:32
effective altruism and effective altruism if
9:34
you've heard of it, you've probably heard of it during the Sam
9:36
Bankman-Fried FTX scandal.
9:39
It's all coming together, it's just that Sam and effective altruism
9:41
means something bad. Yeah, so this is a group
9:43
of people that started
9:46
maybe a decade
9:48
ago in earnest, although the ideas
9:50
are somewhat older than that. They're basically
9:52
trying to use rationality
9:55
and logic to determine how to do the
9:57
most good in the world. And several...
10:00
Several years ago, a lot of people in this community
10:02
got worried about AI risk,
10:04
and specifically what they call x-risk,
10:06
which is the risk that AI
10:08
could cause us to go extinct
10:11
as a species. And
10:14
so for various reasons, they decided
10:16
this was a top priority for them. And so
10:18
a lot of effective altruists went
10:20
to work in the field of AI
10:23
and AI safety, especially at
10:25
companies like OpenAI. A
10:27
lot of their early researchers
10:29
were either affiliated with effective altruism
10:31
or sort of sympathetic to their views.
10:35
And these people
10:37
have sort of fallen out of favor with
10:39
OpenAI in more recent months,
10:42
in part because they are seen
10:45
as being overly negative, overly
10:47
pessimistic. Sometimes you hear people
10:49
call them doomers. And so
10:52
there is another crowd inside of OpenAI
10:55
that is more excited about AI
10:57
and less worried about existential risk
11:00
and more commercial in how they're pursuing
11:03
it. They want to build AI into a profitable
11:05
business and build it into popular products.
11:08
So those are the kind of two warring camps
11:10
that have been inside OpenAI
11:12
this whole time and really came to a head last
11:15
week. But Sam Altman has been doing this tour where
11:17
he says, I'm building this new technology. It's really great.
11:20
It could be really dangerous. Is he both
11:22
an enthusiast and a doomer? It seems like
11:25
he's pro enthusiast. He has
11:27
been trying to walk a very careful
11:29
line. So funnily
11:31
enough, you are not the only podcaster who
11:33
had an episode become suddenly
11:36
obsolete last week. My
11:38
co-host on Hard Fork, Casey Newton, and I
11:40
actually interviewed Sam Altman
11:43
two days before he was fired as
11:46
the CEO. And
11:48
he had no idea any of this was about to happen to him.
11:50
We obviously had no idea. So we had this long
11:53
conversation. But one of the things we talked about
11:55
was this question of whether he is more
11:58
of a doomer or more of a – what they would call
12:00
an accelerationist, someone who thinks that
12:02
AI is good and should move faster
12:05
and we should kind of get out of the way. And he
12:07
said basically he thinks AI is
12:09
going to be amazing, but he also
12:11
thinks that if we don't get it right, it
12:14
could be sort of disastrous. And
12:16
so he's sort of trying to play both
12:18
sides of this debate and really
12:20
listen to and express the concerns of
12:22
both sides. And the other thing that is confusing
12:25
and confounding to a lot of people, including me
12:28
is that OpenAI is a nonprofit
12:31
that also has a for-profit arm.
12:33
So when we talk about this thing theoretically
12:35
being worth $86 billion, that's like a subsidiary of
12:39
the nonprofit org and the nonprofit
12:41
board is the one that fired Sam Altman. Yeah,
12:44
it's a very confusing and convoluted
12:47
structure. But basically, you know, OpenAI
12:49
was started in 2015 as a nonprofit. A
12:53
couple of years after that in 2019, they
12:55
decide, hey, we're building these AI models.
12:58
They're expensive. We need a lot of hardware. We're
13:00
going to need to bring on a lot of investments from
13:03
other companies. They started
13:05
a for-profit subsidiary, technically
13:07
it's a capped-profit subsidiary. And
13:10
that is sort of what people think of as OpenAI.
13:13
But there is still this nonprofit board
13:15
that is in charge of the
13:18
for-profit subsidiary and
13:20
gets to make decisions like whether or not to
13:23
fire the CEO. So now you have
13:25
this structure that is very unusual
13:27
in the tech world. Usually the board is
13:30
composed of VCs,
13:32
investors, early
13:34
employees, top executives, things like that.
13:37
And they don't generally do much. In theory,
13:39
the one thing they can do is fire the CEO. And
13:42
in theory, what they're supposed to do is look out for the
13:44
good of the company and its investors. The
13:47
idea is they're supposed to sort of help the company be
13:49
better and more profitable. That's not the charge here.
13:51
Exactly. So at most companies, the
13:54
board has a fiduciary duty to
13:56
shareholders. At OpenAI,
13:58
since they have this weird nonprofit
14:00
structure, that board, the nonprofit
14:03
board that controls the company, is
14:05
actually not beholden to shareholders at all.
14:07
And in fact, many of the people on the board don't
14:09
own any shares. And they set it
14:12
up that way on purpose, so
14:14
that in the event that the
14:17
sort of humanitarian mission of
14:19
OpenAI to build safe and
14:21
beneficial AGI, were it ever
14:24
to come into conflict with more commercial
14:26
priorities, the nonprofit
14:28
would essentially be able to shut it down or at
14:31
least make some big changes. And so there
14:33
is an argument. I mean, most people in my Twitter
14:35
feed are very pro Sam Altman. I think that's
14:37
not surprising. They may actually like him. He's also very
14:40
powerful, very rich, very well connected,
14:42
running this powerful startup. He's the kind of guy,
14:45
the valley rallies behind. But there are folks
14:47
saying, hey, if the nonprofit
14:49
board thinks that Sam Altman is
14:51
behaving irresponsibly and against
14:53
what they're trying to get done, which is responsible
14:56
safe AI, maybe they made the
14:59
right call. Can you make
15:02
a case on behalf of the board along those
15:04
lines? I mean, I think
15:06
if we're trying to steel man the board's
15:09
position, you could say that, yeah,
15:11
this is things working as they
15:13
were designed. The nonprofit
15:16
board got worried about
15:18
the direction of the company, and they made
15:20
a move and took out the CEO.
15:23
I think the thing that's hard to
15:26
reconcile there is two things actually.
15:28
One is that they still haven't explained what the initial
15:31
dispute was over. There was clearly
15:33
something that convinced them that
15:36
this was an urgent threat to
15:38
the humanitarian mission of open AI,
15:40
but they have not communicated what that was
15:43
at all. And so people are just kind of scratching their
15:45
heads. And the second thing is
15:47
that immediately after they made this decision,
15:50
they appeared to start trying
15:52
to backtrack, start trying to think
15:56
about bringing Sam Altman back.
16:00
It looked like that was gonna happen. They
16:02
have not been consistent in their messaging
16:05
at all about this decision, and
16:07
so that's part of what is making people so
16:09
frustrated with them. Including
16:11
one of the people saying, oh, I regret this, I'm
16:13
gonna butcher his name, but it's Ilya Sutskever?
16:16
Sutskever, yes. Ilya Sutskever is the
16:19
chief scientist of OpenAI, and
16:22
was basically the driving
16:25
force behind the decision to
16:27
fire Sam Altman, according to
16:29
our reporting and the reporting of others.
16:32
He was the person who sort of brought these
16:34
concerns to the board, and
16:36
now he is backtracking and saying,
16:39
I made a mistake and I regret doing
16:41
that. He put out a tweet
16:43
this morning, said, yeah, I never meant for this to happen. I
16:45
shouldn't have signed that as if it was just sort of a document
16:48
put in front of him. He's also signed
16:50
a document saying, we want Sam Altman back
16:52
or we're gonna quit. So it's
16:54
very confusing. One of the takeaways,
16:57
again, this can change in
16:59
my Twitter feed, is, hey,
17:02
you know what? This kind of shows you that
17:04
the business model that everyone complains
17:06
about in the Valley, where investors,
17:09
VCs, and big companies have a lot
17:11
to say about what happens to these important technologies
17:14
and all in the service of capitalism. Maybe
17:16
that's a pretty good model. Maybe it's a better
17:19
model than having a non-profit in
17:21
charge of this important stuff,
17:23
and that actually, yay
17:25
capitalism, essentially. Ben
17:28
Thompson, the very smart analyst, had a note out
17:30
basically saying that this morning. Do you
17:32
think this is going to, what effect
17:34
do you think this is gonna have on sort of this kind
17:37
of model? I mean, it's a
17:39
very unique model, so maybe it's one of one. Will
17:42
we see something like this again? I
17:44
think it'll be a while before
17:46
we see something like this again with
17:49
this kind of outside investment.
17:51
I think if you are Microsoft, and
17:53
you have just spent $13 billion, if
17:57
you've just invested that much money into a company
17:59
with a company.
19:28
so
20:00
many trends, but it's also
20:02
something that everyone uses, a
20:05
shared, universal part of
20:07
life today. And it's about time
20:09
that it gets a proper tribute. Technology
20:12
will always continue to accelerate. Culture
20:15
will always continue to evolve. And
20:17
it's with that in mind that this groundbreaking
20:19
exhibition asks a bigger question. What's
20:22
the future of email? And
20:24
how will email shape our future? Visit
20:28
MailChimp.com slash emailisdead
20:31
to learn more.
20:34
And we're back. Winners and losers, you
20:36
did a column on that. Let's break down a couple of
20:38
them. Again, things
20:41
can change, but as of now at 1:51 p.m. on Monday,
20:43
November 20th, what's
20:46
the status of OpenAI? Can it be a company
20:49
if, I'm seeing numbers now, 650 of
20:51
its 700 employees say
20:54
they're gonna leave. Is there a company
20:56
if they leave? I mean, I guess
20:59
there is like on paper a company. It's
21:02
not the same company, certainly, if the
21:05
vast majority of employees leave. And
21:07
it's not even clear what the
21:09
board would be the board
21:12
of. I mean, it's got a commercial
21:14
deal with Microsoft that's still in place, right? That's
21:16
a thing, right? Right, but you know, who would keep ChatGPT
21:19
up and running? Who is going to build
21:22
the next generation of these models?
21:25
And who would want to go
21:27
work for them now? So I think there's a real
21:30
existential threat to OpenAI. Yeah,
21:34
I don't imagine things
21:36
are looking good for them if they don't
21:38
manage to bring Sam Altman
21:40
back and stop this employee exodus.
21:43
And assuming that Sam Altman stays
21:45
at Microsoft, and then again, that was in the report, that
21:48
was Satya Nadella saying, I've hired these guys,
21:50
we're making this happen. He's the CEO of Microsoft.
21:52
So it appears to be happening. Again,
21:54
could change. On the one hand, he sunk $13
21:56
billion into this weird company
21:59
and it all kind of blew apart. On the
22:01
other hand, if you said you could buy
22:03
OpenAI for $13 billion, by today's standards,
22:09
that would be the deal of the century. Did Microsoft do really
22:11
well here in the end? Absolutely. I mean,
22:13
they are the biggest winner in
22:15
this whole thing. They managed to turn
22:19
what could have been a real threat to them
22:22
into a real opportunity. Now, they not
22:24
only have this
22:27
existing deal with OpenAI,
22:29
and they can continue to use OpenAI's
22:32
models to power their products. But
22:34
they also, if the
22:37
status quo holds, and
22:39
they do end up getting a big infusion
22:41
of people from OpenAI
22:43
on this new team led by
22:45
Sam Altman and Greg Brockman, they
22:47
will effectively have a new
22:50
OpenAI that they own 100% of, and whose
22:52
IP will belong to
22:55
them, and they will have control
22:58
over this new team in a way that they did not
23:00
have control over
23:02
OpenAI under the old arrangement. Why would
23:04
Sam Altman agree to that? In theory, he could
23:06
have gone and opened up shop today
23:09
and had tens of billions of dollars in investment
23:11
capital, in the same way that Elon Musk
23:14
had people just over the transom throwing a billion
23:16
dollars here and there. I imagine Sam Altman
23:19
would have the exact same offer available
23:21
to him right now. Yeah, I've been calling
23:23
around on this question over
23:26
the past like 12 hours or so.
23:28
And I would say there are
23:31
a couple things that I've heard. So Sam
23:33
Altman and Greg Brockman, they could have
23:35
gone out, they could have raised a ton of money. Any
23:38
investor in Silicon Valley would have written them a
23:41
giant check to start a new AI company.
23:44
But starting a new AI company has
23:46
a cost associated with it. You
23:48
know, you've got to form the structure
23:51
and write the articles of incorporation.
23:53
And you have to do all this
23:55
sort of paperwork that comes with starting a company.
23:58
But you also have to do things like acquire
24:00
a bunch of compute. You have to go out and buy
24:03
a bunch of GPUs from somewhere, and
24:05
that's not easy to do right now. And
24:07
so with Microsoft jumping
24:10
in, they can sort of hit the ground running.
24:13
Microsoft already has all of
24:16
these data centers filled with GPUs
24:19
that they can use. And so it's just going
24:21
to be a much easier place for them to get
24:23
up and running. And then Sam, now we're
24:25
full on speculation, but one Sam Altman
24:27
famously in tech did not own
24:29
any part of OpenAI. Maybe there's
24:32
indirect ownership. I imagine if
24:34
he's going to go to work for Microsoft, he's not going to do
24:36
that out of altruism. He's going to say, actually,
24:38
I would like a significant piece of whatever
24:40
this $86 billion, whatever, whatever
24:43
we think this is going to be worth. I'd like I'd like to own
24:45
part of that. Do you assume that Microsoft
24:47
will have to write him some version
24:50
of a very big check to make this happen? I
24:52
don't know. I mean, that remains
24:54
to be seen. I can
24:57
speculate that I don't think money
24:59
is all that important to Sam Altman
25:01
at this point in his career. I mean, he's
25:03
already fabulously wealthy. He
25:06
was running
25:08
OpenAI with no real stake
25:10
in the company, and nothing financial
25:14
to really gain from its success.
25:16
So maybe he doesn't care about
25:18
money, even if he is working for Microsoft,
25:20
or maybe this is sort of his chance
25:23
to, I don't know, get a bigger
25:25
stake in something that he's been working really hard
25:27
on. I don't know the answer to that, but we'll see. Back
25:30
to the bigger picture takeaway, the
25:32
argument that AI is this powerful
25:35
technology that could also doom us has
25:37
been floating out there for a while. It's got
25:40
skeptics sort of across the spectrum. They're
25:42
like, there's sort of leftish leaning people
25:44
who say, oh, when you make that argument, you're actually saying
25:46
that AI is more powerful than it is. There's people like Marc
25:49
Andreessen, who say this is doomerism. Right
25:51
now, that argument seems to be associated
25:53
with real corporate ineptitude.
25:57
Again, the story may evolve
25:59
over time, but right now our understanding is the
26:01
board really sort of handled this the
26:03
worst possible way. Does
26:05
this make it harder for anyone to sort of realistically
26:08
argue that AI really is a threat
26:10
that we need to be worried about? It sounds like
26:13
these are sort of like the weird hippies
26:16
complaining about some sort of
26:18
thing that never came to pass and they're stuck in the 60s. These
26:21
guys seem out of touch and clueless. Is
26:23
this going to set that argument back? It could.
26:27
This is all speculation since all this is unfolding
26:29
so quickly. I have talked to people
26:32
who are sort of in the effective altruist,
26:34
what you could call the doomer
26:36
camp. And they're worried
26:39
that this will sort of set back the movement because
26:41
now when people think of effective
26:43
altruism, they might think, oh,
26:45
that's the sort of ideology
26:48
that took down OpenAI or
26:50
the... And FTX. And FTX, right. So
26:54
I think there is an argument. But I also think
26:56
there's an argument that this was actually a success
26:58
for the movement. That in a very
27:01
real way, this is a movement that has been concerned
27:04
that AI was moving too quickly and
27:06
they wanted a pause. They wanted for
27:09
there to be some interruption to
27:11
the acceleration. And
27:13
this is a pause. It's going
27:16
to be months
27:18
before this new... They've
27:21
thrown their bodies on the gears of the machinery.
27:23
Yeah. And whether you think it was effective
27:25
or not, they certainly made
27:28
an impact and they were
27:30
able to make a strong statement in
27:32
favor of caution and safety
27:35
around AI. One last, and I'm going to
27:37
let you go and you can nap or report or whatever you want to do,
27:39
do both things at the same time. I originally had you on
27:41
because we're basically at the one year anniversary
27:43
of open AI rolling out chat
27:45
GPT, became an immediate sensation. There
27:48
was an enormous amount of consumer popular
27:51
interest in AI for a bit that
27:53
has faded. Pretty considerably. And
27:56
there's a little bit, there had been some sourness about
27:58
sort of maybe what AI means for the vast... If
28:00
we step out of all that, we're now a year
28:03
in the sort of AI being mainstream.
28:06
What are you most excited about as someone who covers
28:08
this stuff all the time? Aside from
28:11
this soap opera on steroids, what
28:13
are you most excited about when you think about AI? Oh,
28:15
man. It's hard to
28:17
pull my brain out of the soap opera. It's
28:20
been all weekend. I mean, you
28:23
know, I think I would say I'm really
28:26
interested in just how AI
28:28
is working its
28:33
way into normal people's lives.
28:36
I've talked to doctors
28:39
and lawyers and teachers and musicians
28:41
and filmmakers and all kinds of people who are
28:44
using this stuff, grappling with the
28:46
ethics of using this stuff. And
28:49
it's just been really fun to
28:51
kind of see how this technology that
28:54
kind of arrived unannounced
28:56
and with very little fanfare has
28:59
just totally upended
29:01
some portion of the world.
29:05
And yeah, it's actually like something that I was
29:07
excited to ask Sam
29:09
Altman about before all of this went down.
29:12
What was his answer, by the way? If I can plug
29:14
my podcast. Because
29:16
we're going to be also releasing the
29:19
interview that we did with Sam Altman last
29:21
week on our show Hard Fork today.
29:24
So you can go listen to that if you want. But
29:27
he was just sort of like, look, we
29:29
didn't know how big this was going to be, but
29:31
it's been really gratifying to see all of the
29:34
interesting and useful and heartwarming
29:37
and weird ways that people are using
29:40
this. And people argue about...
29:42
what he said was he doesn't really like
29:44
this sort of argument about AGI
29:47
and what counts as being superhuman
29:49
intelligence. His basic message was, look,
29:51
we built a thing and people
29:53
find it useful. And that's all that matters. Kevin
29:56
Roose, it is 2:01. We're going to end the
29:58
conversation here. If this thing is obviated
30:00
by the time we publish, we tried our best. We
30:03
did it twice. Hopefully the second time
30:05
around works. Kevin Roose at the New York Times
30:08
writes the shift column there. Does great work there. Also
30:10
does, as he said, Hard Fork, the great podcast
30:12
with him and Casey Newton. Thank you, Kevin. Thanks.
30:14
And I'm not coming back a third time. See
30:16
ya. Thanks
30:19
again to Travis and Jelani and Jolyne
30:21
Myers as well. Thanks to our sponsors
30:23
and thanks to our listeners. This is Recode
30:26
Media. We'll see you soon. Support
30:30
for this show comes from Kraken. Crypto
30:32
is like finance, but different.
30:35
It doesn't care when you invest, trade
30:37
or save. Do it on weekends or
30:40
at 5 a.m. or on vacation
30:42
at 5 a.m. Crypto is finance
30:44
for everyone, everywhere, all
30:46
the time. Kraken. See what crypto
30:49
can be, not investment advice. Crypto
30:51
trading involves risk of loss. Cryptocurrency
30:53
services are provided to US and US territory
30:55
customers by Payward Ventures Incorporated. View
30:57
PVI's disclosures at Kraken.com slash
31:00
legal slash disclosures. As
31:02
a marketer, you're speaking to a vast
31:05
audience. Some people need to be converted
31:07
into customers. Some need to be reunited
31:10
with their carts. And others have
31:12
just made a purchase. But when
31:14
you fail to segment your audience and
31:16
personalize your messaging, you can get
31:18
what's called a clustomer. One
31:21
big cluster of customers who
31:23
may seem alike, but actually all have
31:25
different behaviors. So how do
31:27
you turn those clustomers back
31:29
into customers? With Intuit
31:32
Mailchimp, you can use personalization
31:34
tools that segment customers into groups,
31:37
break them up into like-minded target audiences
31:39
and send them personalized marketing. Intuit
31:42
Mailchimp, the number one email marketing
31:45
and automations brand. Based
31:47
on competitor brands' publicly available data
31:50
on worldwide numbers of customers in 2021 and 2022. Availability
31:55
of features and functionality varies by plan,
31:57
which are subject to change.