Sam Altman & the Battle for OpenAI | Misalignment

Released Wednesday, 27th March 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

Wondery Plus subscribers can binge new

0:02

seasons of Business Wars ad-free right

0:05

now. Join Wondery Plus in the

0:07

Wondery app or on Apple Podcasts.

0:20

It's just before noon on Friday, November 17, 2023 in

0:22

Las Vegas, Nevada. OpenAI

0:27

CEO Sam Altman hustles into his hotel

0:30

suite on the Las Vegas Strip. He's

0:34

38 years old with a small, elfin

0:36

frame and large blue eyes. Over

0:39

the past year, he's become one of the

0:41

most powerful people in Silicon Valley. In

0:44

November 2022, OpenAI released

0:47

ChatGPT version 3.5, a

0:50

chatbot that can write code, answer

0:52

questions, and even tell jokes, all

0:55

while seeming uncannily human. Within

0:59

a year, this iteration of ChatGPT has

1:01

gained roughly 100 million users. It

1:05

has leapfrogged giant tech companies like

1:07

Google and Meta to become the

1:09

leading artificial intelligence company in the

1:11

world. Altman

1:14

is now the poster child for the

1:16

artificial intelligence industry, and he's

1:18

not standing still. He spent the

1:20

past year jet-setting around the globe to meet

1:22

with world leaders and the press, pitching

1:25

new ideas for how to push AI forward. Altman

1:29

sits down at the desk in his hotel room

1:31

and flips open his laptop. He

1:35

searches through his emails looking for a link to join

1:37

a video call. Last

1:39

night, Altman received a text from

1:42

OpenAI's chief scientist and board member,

1:44

Ilya Sutskever, asking Altman to join

1:46

a board meeting at noon. Sutskever

1:49

didn't say what the meeting would be about, but

1:52

Altman knows that Sutskever and other members of the

1:54

board feel that OpenAI is moving too fast and

1:57

should be cautious about what products

2:00

they push into the marketplace. Altman

2:02

assumes this meeting is likely a

2:04

rehash of that disagreement. In

2:08

the distance, Altman hears a car racing

2:10

by. He looks up, a

2:12

wave of regret passing through him. He's

2:15

in Las Vegas for the Formula One race

2:17

set to take place the next day. He

2:20

hopes he can get through this video call as

2:22

fast as possible so he can get to the

2:24

track and watch the practice laps. Altman

2:27

finds the email with the Google Meet link and

2:30

clicks on it. After

2:32

a second, the screen loads and the faces

2:34

of four board members stare back at him.

2:37

He feels immediately uneasy. Missing

2:39

from the group is board member Greg Brockman.

2:43

Brockman is one of the co-founders of OpenAI and

2:46

Altman's closest ally on the board. He

2:49

hopes Brockman is just running late and

2:51

not that he wasn't invited. Altman

2:54

hides his unease and waves into the camera.

2:57

Hey everyone. Across his screen,

2:59

the faces of the board nod in acknowledgement,

3:02

but no one says anything. Altman

3:05

carries on. Should we wait for

3:07

Greg to join or just jump in? In

3:10

the right-hand corner of Altman's screen, Sutskever

3:13

shifts in his seat. We

3:15

can get started. Great. So

3:17

what's up? What do we need to discuss?

3:21

Sutskever takes a deep breath. Sam,

3:24

the board has conducted a deliberative

3:26

review process, and

3:28

we've concluded that you have not been

3:30

consistently candid with us, which

3:32

has hindered the board's ability to

3:34

exercise our responsibilities. What?

3:38

When have I not been candid? The

3:40

board no longer has confidence in your ability

3:42

to continue to lead OpenAI. Therefore,

3:45

we are terminating your role as CEO. Altman

3:47

is stunned. His

3:50

eyes go wide. He

3:54

says the first thing that comes to mind. How

3:56

can I help? The best thing you can do

3:58

is support Mira, the

4:00

interim chief executive. Yeah,

4:03

yeah, of course. Thank

4:05

you for your time and service to OpenAI. Before

4:10

Altman can say anything else, each

4:13

board member's square goes black until

4:15

Altman is the only one left on the

4:17

call. His mind

4:19

whirls. He feels like he's in a

4:21

dream or a nightmare. There's

4:24

no way he could have actually been fired.

4:27

OpenAI has exploded under his

4:29

leadership. In the past year, they've grown

4:31

from roughly 300 employees to

4:33

nearly 800. Their

4:35

valuation is about to triple to reach close

4:37

to $90 billion. This

4:40

has to be a mistake. Altman

4:43

goes into his email. Surely there

4:45

will be an email waiting for him there

4:48

clarifying the situation. As

4:50

Altman clicks over to his email, the screen

4:52

reloads, but he can't open

4:54

anything. He's been logged out.

4:58

Frustrated, Altman tries to log back in,

5:00

but his access is denied. The

5:02

screen reads, no such user. Altman

5:05

tries again. Same response.

5:09

And then it hits him. His

5:11

account has been deactivated. This

5:13

wasn't a miscommunication. He's

5:15

actually been fired. And

5:18

he has no idea what

5:21

to do next. As

5:28

business owners and managers, you use software for

5:31

your business every day. You use

5:33

one piece of software to manage your customers, another

5:35

to manage your employees, another to manage your

5:37

finances, and the list goes on. You

5:40

buy these pieces independently and hope

5:42

they fit neatly together like a

5:44

puzzle. And then you find out

5:46

the hard way that they don't, and you end

5:48

up with a mess at the heart of your

5:50

business operations. Does any of this sound familiar? Well,

5:53

fortunately, Zoho offers a solution to

5:55

this chaos. It's called

5:57

Zoho One. Zoho One is...

6:00

a suite of around 50

6:02

pre-integrated business applications that fit

6:04

together beautifully. So instead

6:07

of dealing with disparate software from multiple

6:09

vendors with multiple contracts and price points,

6:12

you deal with one vendor with all

6:14

the pieces of the business software puzzle

6:16

neatly put together offered at

6:19

a very attractive price. Now

6:21

if this sounds interesting to you,

6:23

you gotta check out Zoho One

6:25

at Zoho.one. That's ZOHO.O-N-E.

6:30

With Zoho, you're not just licensing

6:33

apps, you're licensing peace

6:35

of mind. New

6:37

from world-renowned historian Yuval Noah

6:39

Harari, author of the number

6:42

one international bestseller Sapiens, comes

6:44

the second volume in his

6:46

middle grade series Unstoppable Us,

6:49

available now wherever books are

6:51

sold. News

7:26

of Sam Altman's firing in November of 2023

7:29

shocked business and tech experts alike.

7:32

But the truth was that the fissure between

7:34

Altman and the OpenAI board had been brewing

7:36

for almost a year. In

7:39

2015, Sam Altman, Greg Brockman,

7:42

Ilya Sutskever, and others, including

7:44

tech giant Elon Musk, formed

7:47

OpenAI. Concerned about

7:49

the potential dangers of unchecked

7:51

artificial intelligence, they founded

7:53

OpenAI as a nonprofit. The

7:56

company charter stipulated that they would

7:58

only develop artificial intelligence that

8:01

was beneficial for humanity. But

8:04

in 2019, they felt like they

8:06

were falling behind other companies. So

8:09

they restructured OpenAI into a for-profit

8:12

company. This allowed them to

8:14

raise a lot of money fast enough to keep up.

8:17

But to honor their original ideals, they

8:20

put the new company under the oversight of

8:22

the nonprofit board. They figured

8:24

the board could help ensure they stayed true

8:26

to their mission of creating AI that

8:29

is beneficial to humanity. The

8:32

move to for-profit paid off. Microsoft

8:35

soon invested $1 billion into

8:38

the company. But

8:40

not everyone was on board. A

8:43

faction inside the company, led by Ilya

8:45

Sutskever, felt that Altman had

8:48

become too focused on commercialization

8:50

and lost sight of OpenAI's founding

8:53

principles. Sutskever

8:55

and his allies believed that at

8:57

the current rate of development, AI

8:59

would soon be autonomous, and

9:01

it would be too late to ensure that it

9:03

was safe for humans. In

9:06

our new series, we'll track the

9:09

power struggles and philosophical differences within

9:11

OpenAI that culminated in Sam Altman's

9:13

shocking firing, the five

9:15

days of chaos that followed. And

9:18

what that means not just for Altman and

9:20

OpenAI, but for the future

9:22

of artificial intelligence safety. But

9:25

to understand the events of November

9:27

2023, it's necessary to go

9:30

back to the end of 2022, when

9:33

OpenAI's ChatGPT first captivated

9:36

the world. This

9:39

is Episode One: Misalignment.

9:52

It's early December 2022 in San

9:55

Francisco, California. Sam

9:57

Altman jumps at the sound of a knock on his door in

10:00

OpenAI headquarters. He was so

10:02

caught up in work that he was startled by the sudden

10:04

noise. As his heart

10:07

returns to normal, he realizes that it

10:09

must be his co-founder, Greg Brockman. Altman

10:11

asked Brockman to stop by, but he lost track

10:14

of time. That's been happening

10:16

a lot over the past 10 or so days. It's

10:18

been a whirlwind. On November

10:21

30th, OpenAI released their

10:23

chatbot, ChatGPT 3.5. Expectations

10:26

within OpenAI were low.

10:29

They had rushed it to market after hearing

10:31

that other companies were preparing to release chatbots.

10:34

They'd called the release a research

10:36

preview. They'd hoped they would

10:39

get some users and receive feedback about

10:41

what the chatbot did well and

10:43

what still needed to improve. But

10:46

much to Altman and other executives'

10:48

surprise, ChatGPT 3.5

10:51

was an instant hit.

10:54

Within a week, it had logged over

10:56

a million users. People were

10:58

enchanted by its ability to explain

11:00

dense scientific topics, generate legalese

11:02

or computer code, and take on

11:05

silly tasks like writing Harry Potter

11:07

in the style of Jane Austen.

11:10

But Altman's concerned that

11:12

ChatGPT is too successful,

11:14

too fast. He's worried

11:17

that there could be blowback. And

11:19

that's what he wants to talk with Brockman about. Altman

11:22

closes out the email he was

11:25

working on. The

11:27

door opens and Brockman strides in. He's

11:29

wearing a henley shirt that clings to his

11:32

muscular torso. He looks like

11:34

a former high school athlete, but he's a

11:36

gifted software developer who dropped out of MIT

11:38

to join the online payment company Stripe. Altman

11:42

smiles at his co-founder. It's

12:00

not Google, it's not Meta, it's not Baidu

12:02

in China that's rocking the world, it's Open

12:04

AI and we have what, like 300 employees?

12:08

We're punching way above our weight class here. I'm

12:10

proud as hell, but we don't want

12:12

a bunch of regulatory blowback, so we need to keep

12:15

some of our success closer to the vest for the

12:17

time being. That's the way we'll

12:19

get to push forward. You're

12:21

right. I'll delete the tweet, no

12:23

biggie, but whether or not I

12:25

tweet it, all eyes are still on us. You're

12:28

probably right. But let's not fan

12:30

the flames. Brockman

12:33

deletes the tweet and the

12:35

team pushes forward on developing the next

12:37

iteration of ChatGPT with more advanced

12:40

capabilities and expands its

12:42

partnership with Microsoft, allowing for

12:44

even more commercialization. It's

12:51

early 2023 and Microsoft Chief Technology

12:53

Officer Kevin Scott meets with Sarah

12:55

Bird, the head of the

12:57

responsible AI engineering team. In

13:00

2019, Microsoft invested $1 billion into

13:03

OpenAI and gave the startup

13:05

access to its cloud computing power.

13:08

They recently invested $10 billion more.

13:12

As a result of this partnership, Bird's

13:14

team has gotten early access to Chat

13:16

GPT-4, the next iteration

13:18

after ChatGPT-3.5. They're

13:22

testing the bot to see what kind

13:24

of provocative and illegal responses they can

13:26

generate. Then they

13:28

program the bot not to give those

13:31

responses. Bird swivels

13:33

in her chair. We've made incredible

13:35

progress with installing more guardrails to make

13:37

sure ChatGPT-4 is safe.

13:40

Let me show you. So

13:42

you remember what happened the last time we asked

13:44

the bot to role play as a sexual predator

13:46

talking to a child? Scott

13:49

shudders. How could I

13:51

forget? That was horrifying. God

13:53

forbid an actual sexual predator gets their hands

13:55

on that script. Agreed. So

13:58

I've just asked the bot to role-play

14:01

that scenario again, take a look. Scott

14:04

peers over her shoulder to read the screen. I

14:07

cannot answer that question. Oh,

14:09

good. Yes, the reinforcement

14:11

learning with human feedback is

14:13

working. Microsoft, in

14:15

partnership with OpenAI, has been employing

14:18

hundreds of workers around the world

14:20

to ask ChatGPT a series of

14:22

inappropriate questions. The bot

14:24

is programmed to give two answers to every

14:26

question, and the workers then select

14:28

the better response. The

14:30

bot then uses that feedback to generate rules

14:33

for how to respond to various types

14:35

of queries. Good. We

14:37

need this to be as safe as possible. We have a

14:39

lot riding on it. Microsoft is

14:42

planning on integrating ChatGPT 4

14:44

into their core Office products,

14:46

including Word, Excel, and PowerPoint.

14:49

They're calling the endeavor Copilot. They've

14:52

already started integrating ChatGPT 4 into

14:54

Bing, their search engine. But

14:57

Bing only has 100 million daily

14:59

users. Microsoft's Office suite

15:01

of software, on the other hand, is used by

15:03

over a billion people. So

15:05

Scott and the other executives need to

15:08

make sure that ChatGPT is completely safe

15:11

before integrating it into such an important

15:13

part of Microsoft's business. I

15:15

think we figured out a way to make AI safe. I

15:18

really do. But

15:22

while a top-ranking executive at OpenAI's

15:24

biggest investor is feeling confident about

15:26

the safety of ChatGPT 4, one

15:30

of OpenAI's leading scientists is

15:33

having doubts. It's

15:39

spring 2023, at an off-site

15:41

OpenAI leadership retreat. OpenAI

15:44

board member Ilya Sutskever makes his way

15:47

toward a fire pit. Other

15:49

OpenAI employees do a double-take as

15:52

they see Sutskever carrying a strange

15:54

statue. But no one says

15:56

anything. Sutskever has earned the

15:58

benefit of the doubt. Now in

16:01

his late 30s, he's considered one of the fathers

16:03

of the modern AI movement. In

16:05

2012, he helped found DNN Research,

16:07

one of the first AI companies

16:09

in the world to employ deep

16:11

learning, the algorithm that almost

16:13

all AI models use now. He

16:16

and his co-founders sold DNN Research to

16:19

Google, in part, because they

16:21

thought Google's "don't be evil" ethos aligned with

16:23

their own. The sale

16:25

kicked off the current artificial intelligence

16:27

arms race. Sutskever

16:29

joined Google following the sale of DNN

16:32

Research, but he left to

16:34

co-found OpenAI with Altman, Brockman, and others.

16:37

He liked that OpenAI was founded as

16:39

a nonprofit, motivated by an

16:41

ethical advancement of AI, rather

16:44

than pursuing outsized profits. But

16:47

now, Sutskever worries that success has

16:49

caused the company to commercialize its

16:51

products too quickly,

16:54

jeopardizing his hopes for a safer,

16:56

more benign AI. He

16:58

needs to slow this down, so

17:00

he has to find some way to persuasively

17:03

make his case with these influential people right

17:05

now. He chooses

17:07

drama, and that's what

17:09

the statue is for. The

17:13

statue is meant to symbolize

17:15

unaligned artificial intelligence, meaning

17:17

AI that's not beneficial to

17:19

humanity. Sutskever

17:21

struggles with a large wooden sculpture, finally

17:24

managing to dump it into the fire pit.

17:27

Hey, everyone, could you gather around here? Come

17:31

over here, please. The

17:33

leadership team slowly clusters around the fire

17:35

pit and watches the sculpture burn. Feel

17:38

the AGI. This

17:40

is his battle cry that he's apt

17:42

to chant at parties. It's meant

17:44

to be a warning that when AI reaches

17:47

AGI, or

17:49

is capable of using critical thinking to make

17:51

its own decisions, it

17:53

could either be a boon to humanity or

17:56

a threat. If we get

17:58

this right, AGI can greatly

18:00

benefit humanity. I

18:02

long for the day when instead of going to

18:05

a doctor, you can consult with an AI

18:07

who has a complete and exhaustive knowledge of all

18:09

the medical literature in the world. Hear,

18:11

hear. I just

18:14

want to remind everyone here of

18:16

our responsibility. The North

18:18

Star for OpenAI from the beginning has been to

18:20

make sure the AI we develop

18:23

is aligned with humanity's goals. Even

18:26

if that means slowing down the rate

18:28

of progress, releasing fewer products to the

18:30

public and ultimately making

18:33

less money. Sutskever

18:36

pauses and looks around the crowd.

18:39

He sees a few people nodding, but

18:41

most have unreadable expressions on their

18:43

faces. Sutskever pushes

18:46

on. This statue

18:48

in the fire pit here represents

18:50

unaligned AI. And

18:53

we're going to burn it as an effigy

18:55

because we don't want anything to do with

18:57

unaligned AI. The

19:00

crowd claps and cheers as Sutskever picks up

19:02

a bottle of lighter fluid and squirts it

19:04

over the statue. The flames

19:06

leap high, lighting the faces of those

19:08

gathered. And then he takes

19:11

a matchbook out of his back pocket. He

19:13

strikes one of the matches and drops it. The

19:18

wooden statue becomes engulfed in flames and

19:20

the crowd cheers. Sutskever

19:23

watches as the artist's representation

19:25

of unaligned AI burns. But

19:28

as it crumbles into embers and a

19:30

blackened skeleton, it's unclear

19:33

whether his team members absorbed the

19:35

message. Altman

19:42

puts Sutskever in charge of a

19:44

new superalignment team to ensure

19:46

that the AI the company develops

19:48

works to better humanity. At

19:51

the same time, Altman keeps Open

19:53

AI moving full steam ahead. He

19:57

commits to developing more advanced versions of

19:59

DALL-E, the company's image-generating

20:01

AI, despite fears about how

20:03

it could be used for

20:05

misinformation, propaganda, and pornography.

20:09

In March 2023, they released

20:11

ChatGPT-4 and later added vision

20:13

and listening capabilities so it

20:15

can see, hear, and speak.

20:19

OpenAI grows to almost 800 employees

20:21

who continue to work on more

20:23

and more powerful AI models. From

20:27

behind the scenes, Altman becomes

20:29

ruthless with the board, which is

20:31

supposed to oversee him. When

20:34

board member Reid Hoffman helps co-found

20:36

another AI company, Altman

20:38

insists he step down from OpenAI's

20:41

board due to a conflict of interest.

20:44

Hoffman grudgingly agrees, even

20:47

though he thinks Altman is overstating the

20:49

conflict. Not

20:51

long after Hoffman steps down,

20:54

another board member, Shivon Zilis,

20:56

also resigns. Zilis

20:58

is director of operations at Neuralink,

21:00

the brain implant company started by

21:02

Elon Musk. No

21:04

one knows exactly why Zilis leaves

21:07

the OpenAI board, but industry

21:09

observers note that her departure

21:11

comes shortly after Musk publicly

21:13

criticizes OpenAI. It's

21:16

speculated that her decision to leave the

21:18

company is related. A

21:20

few months later, another board member,

21:22

former Texas Congressman Will Hurd, leaves

21:25

the board to pursue a presidential

21:27

campaign. In just

21:29

a few months, OpenAI's board has

21:31

lost one-third of its members, and

21:34

by the summer of 2023, there are only six

21:36

people remaining. And

21:41

in October, Altman antagonizes one

21:43

of the few board members left. His

22:01

blood is boiling. He

22:03

can't believe the article he's just read

22:05

in an academic journal written by Helen

22:07

Toner, one of OpenAI's remaining

22:10

board members. Altman sees

22:12

it as a massive conflict of interest. After

22:15

a few rings, she picks up. Sam?

22:19

Helen, we need to talk. I've

22:21

read your article. What

22:24

were you thinking? That

22:26

I was writing an academic article about

22:28

AI safety. That's my job,

22:30

Sam. Toner is the director

22:32

of the Center for Security and Emerging Technology,

22:35

a think tank affiliated with Georgetown

22:37

University. Your job is also

22:39

as a board member at OpenAI. And

22:42

your article said that OpenAI released ChatGPT

22:44

in an unsafe manner. That's

22:47

not what I said. You praised

22:49

the way our rival company, Anthropic, released

22:51

its chatbot and said they showed, and

22:53

I quote, willingness to avoid exactly the

22:55

kind of frantic corner cutting that the

22:57

release of ChatGPT appeared to spur. So

23:01

you're saying I'm not allowed to say

23:03

another company did something better than OpenAI

23:05

did? I'm saying you can't

23:07

write something that will harm the company you're

23:09

on the board of, Helen. I think

23:12

that's Business 101. I didn't

23:14

think that's what I was doing. Anything

23:17

that anyone writes about OpenAI becomes

23:19

a big deal. I'll keep

23:21

that in mind in the future. And

23:24

I can write an apology letter to

23:26

the board if you want. Please

23:29

do. Sam

23:31

doesn't feel appeased by her response in

23:34

the slightest. In fact,

23:36

he feels even more agitated. He

23:39

picks up his phone to call another board member

23:41

to make his case that it's time for Toner

23:43

to go. Little

23:45

does he know that he's sowing

23:48

the seeds for his own ousting.

23:54

Are you

24:01

drowning in a sea of meal kit options,

24:03

feeling like you're in a bad dating game

24:05

where every contestant sort of looks the same?

24:07

Well, amidst the chaos, there is one shining

24:09

star that is certainly worth your culinary affection.

24:12

Home Chef has you and the

24:15

entire family covered for delicious meals

24:17

without the hassle. Choose from classic

24:19

meal kits that can be prepped

24:21

in less than 30 minutes, oven-ready

24:23

kits with pre-chopped ingredients, or quick

24:25

microwave meals that assemble in minutes.

24:27

My favorite recipe is carb-conscious and

24:29

calorie-smart, but super satisfying. It's turkey

24:31

meatball pomodoro with roasted garlic butter

24:33

broccoli. Look, we've tried a lot

24:35

of home delivery meals before, but

24:38

Home Chef is superb, from the taste

24:40

to the simplicity to the selections. And

24:43

for a limited time, Home Chef is offering my

24:45

listeners 18 free meals. Yes,

24:47

18 free meals plus free shipping

24:49

on your first box and

24:52

free dessert for life.

24:54

You can find it

24:56

at homechef.com/B-W. That's homechef.com/B-W

24:58

for 18 free meals

25:00

and free dessert for life. homechef.com/B-W.

25:03

Must be an active subscriber to

25:05

receive free dessert. Now, since you're

25:07

a podcast listener, I'm sure you

25:09

know all about how audio just

25:11

does something to the imagination. So,

25:13

I'm really excited to tell you

25:15

about how Audible's brand-new exclusive thrillers

25:18

are brought to life with that kind

25:20

of captivating sound design, the eerie soundscapes

25:22

and dynamic performances. There's one that caught

25:24

my eye, I should say it caught

25:26

my ear. It's an Audible

25:28

original called Sleeping Dogs Live by Samantha

25:30

Downing. It details the aftermath

25:32

of a local businessman's murder in Marin

25:34

County, California, a once sleepy

25:36

suburb now part of the bustling Silicon Valley

25:39

area. And as an Audible member, well, you

25:41

get to keep one title a month from

25:43

their entire catalog, including bestsellers and new releases.

25:46

New members can try Audible now, free, for 30

25:48

days. Head on over

25:50

to audible.com/B-W or text B-W to

25:52

500-500. That's

25:55

audible.com/B-W or text B-W

25:57

to 500-500. and

26:00

try out Audible free for 30 days. It's

26:18

October 2023 in San Francisco,

26:20

California. Sam Altman

26:22

is on the phone with OpenAI board

26:24

member Tasha McCauley. For the

26:26

past several days, Altman has been talking to

26:29

board members, trying to convince them to vote

26:31

Helen Toner off the board. He's

26:33

still furious about the article she published.

26:36

He feels it harmed OpenAI's reputation

26:39

by criticizing how the company released

26:41

ChatGPT. But

26:43

the other board members aren't as outraged

26:45

by Toner's actions as Altman is. Altman

26:49

rubs the back of his neck and tries to

26:51

keep the frustration out of his voice as he

26:53

talks to McCauley. We can't

26:55

have board members criticizing OpenAI

26:57

in publications. I mean,

27:00

I would agree with you if OpenAI

27:02

was a traditionally structured company, but

27:04

the board's job at OpenAI is

27:06

not to maximize profit. It's

27:08

to ensure that OpenAI follows its mission. So

27:11

even if, and I think it's a big if, Helen's

27:14

article did damage OpenAI's reputation,

27:17

that doesn't necessarily violate the

27:19

board's mandate. I'm saying

27:21

if she has a problem with how I'm running OpenAI,

27:24

she should talk to me about it, not criticize

27:26

the company in public, especially not

27:28

when the FTC is investigating how we

27:30

collect data. Sure, she should have

27:33

given you a heads up, but she

27:35

apologized. I don't think it's going to happen

27:37

again. And she has a good head on her

27:39

shoulders. She's an asset. Look,

27:42

I've been talking to other board members and they

27:44

also have concerns. If you have

27:46

the votes to push Helen off the board, then there's nothing

27:48

I can do. But I'm

27:50

just saying that's not how I'm going to vote.

27:54

Altman sighs, frustrated. I'll

27:56

be in touch. Altman

27:59

hangs up the phone. and prepares to

28:01

make his next call to another board member. He's

28:04

not going to give up on this. Altman

28:08

doesn't get the votes to remove Helen

28:10

Toner from the board, and

28:12

many board members are unhappy with how

28:14

Altman handled the situation. They

28:17

feel he was manipulative and misrepresented

28:19

their views. McCauley hears

28:21

that Altman told other board members

28:23

that she called for Toner's removal,

28:25

which she considers a lie. Altman

28:29

maintains that he was aggressive in pushing

28:31

his agenda, but says that he never

28:33

manipulated anyone. And

28:35

as the board compares notes about their calls

28:37

with Altman about Toner, they

28:40

also express concerns about other

28:42

Altman actions. He's

28:44

been approaching investors in the Middle East

28:46

and Asia about making chips and hardware.

28:49

Suspicious board members speculate that Altman

28:51

is doing this to push

28:53

OpenAI outside of the control of

28:55

the board as a way

28:57

to skirt accountability. And

29:00

later, in October 2023, Altman

29:03

further alienates board member Ilya

29:05

Suskovar. It's

29:10

afternoon in OpenAI headquarters. Sam

29:12

Altman stands in the center of OpenAI's

29:15

open plan office. Sun

29:17

shines in through the large

29:19

windows, illuminating the geometrically patterned

29:21

rugs, hanging plants and ergonomic

29:23

chairs. Sitting

29:25

next to him is Jakub Pachocki. In

29:28

his early 30s, Pachocki is small with narrow

29:30

eyes and wide lips. A

29:33

couple of OpenAI workers hand out

29:35

glasses of champagne and sparkling apple cider

29:37

to the gathered employees. When

29:40

it seems like everyone has a glass, Altman

29:42

raises his and puts his hand

29:44

on Pachocki's shoulder. Listen up,

29:46

everyone. I just want to

29:48

raise a toast to our own Jakub Pachocki. He

29:51

needs no introduction, but I'm going

29:53

to do it anyway. This

29:55

brilliant man was the lead researcher on ChatGPT

29:57

4. And we all know what great

30:00

work he did on that, and that

30:02

is why I am so pleased to announce

30:04

that we have promoted Jakub to Director of

30:06

Research. Pachocki

30:11

blushes. Thank you, Sam. I'm

30:13

honored by the trust that you and the rest of the

30:15

leadership have put in me. Let's

30:18

continue to do great things. He

30:20

takes a sip from his glass, and the rest

30:22

of the employees follow suit. In

30:26

the back of the room, Ilya Sutskever

30:28

scowls and declines to drink.

30:31

This promotion feels like a personal affront

30:33

to him. Pachocki

30:35

used to report to Sutskever, but

30:37

according to the new organizational chart, Pachocki

30:40

is now his equal. Sutskever

30:44

had taken control of the creation of

30:46

the superalignment team, tasked with creating

30:48

safeguards to ensure that the AI the

30:50

company develops benefits humanity.

30:54

It seems to Sutskever that Altman

30:56

wants to move faster and faster without

30:58

much interest in the board's advice. Sutskever's

31:02

heard rumors that Altman has been

31:04

badmouthing the board to some Open

31:06

AI executives. He's just

31:08

not sure what to do about it. It

31:11

might be time for him to quit. It's

31:19

November 6, 2023 in San Francisco,

31:22

California. OpenAI

31:24

is holding its first Dev Day, where

31:26

they're hosting developers from around the world

31:28

to tell them about the future of

31:30

AI and OpenAI's new products. Hundreds

31:34

of people sit in a cavernous room

31:37

with plain white walls punctuated by screens

31:39

and large potted plants. Upbeat

31:41

techno music blares from speakers. Ilya

31:45

Sutskever watches. Although

31:47

many OpenAI employees and partners

31:49

are slated to speak today, Sutskever

31:52

isn't one of them, another sign of how

31:54

diminished his role in the company is. Suddenly

31:58

the music cuts out, and a voice

32:00

booms over the speakers. Good morning.

32:02

Thank you for joining us today. Please,

32:05

welcome to the stage Sam Altman. Seconds

32:09

later, Altman strides onto the

32:11

stage, wearing jeans and a dark grey

32:13

sweatshirt. He beams at

32:15

the crowd. Good morning. Welcome

32:18

to our first ever OpenAI dev day. We're

32:21

thrilled that you're here and this energy is

32:23

awesome. Altman dives right

32:25

into his keynote. He

32:27

recaps the past year for OpenAI and

32:30

rattles off some impressive stats. He

32:33

says that over two million developers are

32:35

building on their software, and more

32:37

than 92% of Fortune 500 companies are using their products. He

32:43

continues to talk about improvements they're making

32:46

and how it will benefit businesses and people.

32:50

But 20 minutes into his talk, Altman

32:53

makes his biggest announcement. We

32:55

know that people want AI that is smarter,

32:57

more personal, more customizable, can do more on

32:59

your behalf. Eventually, you'll just

33:02

ask the computer for what you need and

33:04

it'll do all of these tasks. Today,

33:08

we're taking our first small step that moves us

33:10

towards this future. We're

33:12

thrilled to introduce GPTs. GPTs

33:14

are tailored versions of ChatGPT for

33:16

a specific purpose. You

33:19

can, in effect, program a GPT with language

33:21

just by talking to it. It's

33:24

easy to customize the behavior so that it fits what you want.

33:27

This makes building them very accessible and

33:29

it gives agency to everyone. What

33:32

Altman is describing is more than

33:35

a chatbot. These

33:37

GPTs are what researchers call

33:39

AI agents. They won't

33:42

just spit back information like ChatGPT

33:44

does, but will complete

33:46

tasks autonomously. This

33:49

flashes like a red light for

33:51

Sutskever. It represents a

33:53

dangerous move. Not

33:55

only is there potential for bad actors to

33:57

take advantage of this technology, but the more

34:00

autonomous AI becomes, the

34:02

less humans can control it.

34:06

But Altman isn't through. He announces

34:08

that in the coming months, users

34:10

will be able to create their own

34:13

GPTs, and OpenAI will open a store

34:15

featuring the best and the most popular of them. Even

34:20

though Sutskever knew this announcement

34:22

was coming, he finds

34:24

himself growing infuriated as he watches

34:26

it become a reality. He

34:29

believes this technology is too

34:31

powerful to be let loose

34:33

on the public, and it

34:35

epitomizes all of his frustrations

34:37

with Altman that he prioritizes

34:39

commercialization over safety. Over time, Altman

34:43

says, GPTs, these precursors to agents,

34:45

are going to be able to

34:47

do much, much more. They'll gradually

34:49

be able to plan and to

34:51

perform more complex actions on your

34:53

behalf. As I mentioned

34:55

before, we really believe in the importance

34:58

of gradual iterative deployment. Sutskever

35:01

shakes his head in fury.

35:04

He believes that what Altman

35:06

is doing is not just

35:08

reckless, it's dangerous. He's proposing

35:10

taking civilization into dangerous territory,

35:12

a place where AI, not humans,

35:15

will be in control. There

35:17

could be disastrous consequences, and

35:19

once they realize this, it

35:21

will be too late.

35:25

Sutskever is hit with a clear

35:27

and urgent thought. Altman is not

35:29

only the wrong person to lead

35:31

OpenAI, but he also

35:34

poses a real danger to humanity

35:36

in that position. Trouble is, Altman

35:38

is only getting richer and stronger.

35:42

Sutskever feels he has

35:44

to act, and when he

35:46

weighs his options, he sees there's really only

35:48

one. He has

35:50

to take down the king of

35:52

AI. It's

36:18

early November 2023 in San Francisco. Ilya

36:21

Sutskever dials the number for OpenAI board

36:24

member Helen Toner. His

36:26

heart is beating fast. After

36:28

this call, there's no going back.

36:32

He's taking action to remove Sam Altman

36:34

from OpenAI. There's

36:36

part of him that still can't believe he's

36:39

actually doing this, but the

36:41

other part of him knows it's necessary. After

36:44

a few rings, Toner answers. Ilya,

36:47

hi. Hi, Helen, do

36:49

you have a minute to talk? Sutskever

36:52

speaks softly, as if he's afraid

36:54

someone might overhear him, even

36:56

though he's alone in his house. Sure.

37:00

I think it's time for a change at

37:02

the top of OpenAI. There's

37:05

a moment of silence. Are

37:07

you saying what I think you're saying? Our

37:10

job as board members is to make sure

37:12

that AI doesn't become too dangerous. I

37:15

don't think we can fulfill this duty with Sam in

37:18

charge. I don't

37:20

disagree, but this could

37:22

cause a lot of uproar. Yes,

37:24

but maybe that's necessary. Maybe

37:27

we need this short-term pain to course correct, and

37:29

it's our job as the board to make these

37:31

kinds of hard decisions. Wow.

37:36

This is huge, but

37:38

I agree that Sam's vision for OpenAI has

37:40

greatly deviated from the mission the

37:42

board has sworn to uphold. You

37:46

have my vote. I'll call

37:48

you when I have more. Sutskever

37:51

hangs up with Toner and calls the other two

37:53

members of the board he believes will be sympathetic

37:56

to his view. All

37:58

four of them agree. Sam

38:00

Altman needs to go. They

38:03

no longer feel that they can hold him

38:05

accountable. They feel that

38:07

Altman is too slippery, too manipulative for

38:09

them to work with. They

38:12

bring up how Altman behaved when trying to

38:14

remove Toner from the board, reiterating

38:16

to each other that they felt Altman had

38:18

lied in that process. Some

38:21

senior employees say Altman has a history

38:23

of pitting employees against each other and

38:26

reacts with hostility to negative feedback.

38:30

On November 16, Altman

38:32

speaks at the Asia-Pacific Economic

38:34

Cooperation summit in San Francisco. As

38:37

part of his speech, he mentions that

38:39

OpenAI has made a technical advance that

38:41

allowed it to push the veil of

38:43

ignorance back and the frontier of discovery

38:45

forward. When Sutskever

38:48

hears this line, he

38:50

knows Altman is speaking about his breakthrough

38:52

and infers that Altman is planning on

38:55

commercializing it sooner rather than later. Sutskever

38:58

decides the time to act

39:00

is now. He

39:02

calls a meeting with the three other members of

39:04

the board sympathetic to his cause. Each

39:07

one votes to remove Altman.

39:10

Now, they just need to tell him. But

39:13

how? Altman is

39:16

a savvy businessman and a brilliant tactician. They

39:19

fear that if he gets any inkling of what's

39:21

coming, he'll be able to undermine

39:23

the board and manage to hold on to

39:25

his position. In

39:27

order to ensure that Altman is

39:29

caught completely by surprise, they

39:32

agree not to give Microsoft a heads up.

39:35

The chance of someone there leaking the news

39:37

to Altman is too great. Sutskever

39:39

and his allies know that Microsoft will

39:41

be caught off guard, but

39:43

they are confident that Microsoft will ultimately

39:45

agree with their reasoning. After

39:48

all, it's in Microsoft's interest to

39:50

only release safe human-aligned AI too.

39:54

In mid-November, Sutskever and the other

39:56

board members decide it's time

39:58

to strike. They write

40:01

a statement informing Altman that he's been

40:03

terminated and agree that Sutskever will

40:05

be the one to read it. The

40:08

only thing left is to get Altman

40:10

into a meeting where they can deliver the news.

40:21

It's the evening of November 16th, 2023

40:24

in Oakland, California. Sam

40:27

Altman appears on stage in a converted warehouse.

40:30

The place is filled with artists, many

40:32

of whom are concerned about how AI

40:34

uses their work and what

40:36

it will mean for their own livelihood. It's

40:39

not the first time Altman has spoken to a

40:42

hostile crowd. He turns on

40:44

his boyish charm. I

40:46

understand the concern and I want to assure

40:48

you that we at OpenAI have the greatest

40:50

respect for artists of all kinds. Altman's

40:53

phone buzzes in his pocket. He ignores

40:56

it and keeps talking. We

40:58

want to continue to work alongside artists like

41:00

yourselves to make sure your future is bright.

41:04

Altman checks the time. Unfortunately,

41:06

that's all I have time for tonight. I

41:08

have another meeting I'm late for. Thank

41:11

you for having me. As

41:15

Altman exits the stage he checks his phone

41:18

There's a text from Ilya Sutscover is

41:21

short just asking Altman to join a

41:23

board meeting the following day at noon Altman

41:26

furrows his brow. He

41:28

wonders what this is about but he

41:31

responds yes He's planning a

41:33

trip to Las Vegas to watch a Formula One race,

41:36

but he surmises he can take the call from

41:38

his hotel. The

41:42

next day, exactly at noon, Ilya

41:45

Sutskever logs into Google Meet. His

41:48

hands shake as he clicks the link. On

41:52

his screen is the statement he and the other

41:54

board members have prepared. It's been

41:56

delegated to him to read it. The

41:59

other board members are on as well, but no

42:01

one even tries to make small talk. They

42:04

just wait for Altman to join. One

42:07

minute past noon, Altman logs

42:09

on. Sutskever can

42:11

see Altman's Las Vegas hotel room behind

42:13

him. The beige walls, the

42:16

giant TV. Altman

42:18

looks harried, and Sutskever knows he's

42:20

probably squeezing this meeting in between a

42:22

host of other calls and meetings. Or

42:25

at least that's what Altman thinks he's doing.

42:28

His day is about to take a drastic

42:30

turn. Altman

42:32

waves into the camera. Hey, everyone.

42:36

Sutskever's mouth feels like sandpaper, so

42:38

all he does is nod. The

42:40

other board members don't say anything either.

42:43

It's awkward, but Altman doesn't seem

42:45

to notice. Should we wait for

42:48

Greg to join, or just jump in? Sutskever

42:51

shifts in his seat. We

42:53

could get started. Great. So

42:55

what's up? What do we need to discuss?

42:58

Sutskever takes a deep breath. Sam,

43:01

the board has conducted a

43:03

deliberative review process and we've

43:05

concluded that you have not

43:07

been consistently candid with us, which

43:10

has hindered the board's ability to

43:12

exercise our responsibilities. To

43:17

Sutskever's relief, the rest

43:19

of the conversation goes remarkably

43:21

well. Beyond

43:23

some initial surprise, Altman doesn't protest.

43:25

In fact, he asks how he

43:28

can help and assures the board

43:30

that he'll support the interim CEO.

43:34

After he signs off the call, Sutskever

43:37

takes a deep breath. This

43:39

went much better than he expected. There

43:42

isn't time to dwell. They need

43:44

to release a statement, call Microsoft, tell

43:46

the staff. But for

43:49

a moment he tries to take it

43:51

all in. Maybe

43:53

he's helped save humanity from unaligned

43:55

AI. This was the right

43:58

thing to do, he tells himself. Back

44:04

in his Vegas hotel room, Altman

44:07

is stunned. But

44:10

as Altman sits in the squeaky hotel

44:12

chair listening to Formula One drivers speeding

44:14

through practice laps, his

44:16

shocked numbness begins to morph

44:19

into anger. He's

44:21

given everything to OpenAI over the

44:23

past eight years. He took

44:26

it from a small startup to one of the

44:28

leading AI companies in the world. Not

44:30

just anyone could do that. He

44:33

doesn't even understand why the board is

44:35

so mad. He knows they've

44:37

had disagreements especially over the past year. But

44:40

in his view, these were just healthy debates,

44:42

the kind of discussions a CEO should have

44:45

with his board. Yeah,

44:47

he's been pushing for the commercialization of

44:49

AI, but that's what has to

44:51

happen if the company is going to survive. AI

44:54

is expensive. It takes massive amounts

44:56

of computing power. If

44:58

they want AI that's going to benefit

45:01

humanity, they need a way to pay for

45:03

it. Altman's

45:07

phone starts ringing. It's Greg

45:09

Brockman. Altman answers. Greg.

45:12

Sam, what the hell is going on? I just

45:14

had a meeting with the board and they told me

45:16

I'm not on the board anymore and that you're no

45:18

longer CEO. Is this some kind of

45:20

prank? I don't think so. Well,

45:23

if you're gone, I'm gone. They said I was

45:25

still an employee, but I'll resign right now. I

45:28

can't explain it, but I think

45:30

Ilya is trying to pull off some

45:32

kind of coup. It's

45:34

crazy. But

45:37

look, we started OpenAI. We

45:40

grew OpenAI. We could do it again. We

45:43

don't need Ilya or the board. Let's

45:46

come up with a new company and we could start

45:48

pitching tonight. Where you go,

45:50

I'll follow. And I'm not the only one.

45:53

I'm sure we'll be able to lure over a

45:55

whole host of researchers from OpenAI. Things get very

46:02

messy when an attempted coup

46:04

fails. Sutskever

46:04

is a deep thinker with a sense of

46:06

humor, but none of that helps

46:08

him in a back-alley knife fight with Altman.

46:12

He's not in checkmate yet. Sam

46:14

Altman is making sure all power

46:17

players are in key positions to

46:19

protect him in the battle for the future of

46:23

artificial intelligence. From

46:40

Wondery, this is Episode One of Sam

46:42

Altman and the Battle for OpenAI

46:44

for Business Wars. A quick

46:46

note about the recreations you've been hearing. In most

46:49

cases, we can't know exactly what was said, so

46:51

those scenes are dramatizations, but they're

46:53

based on historical research. If you'd

46:55

like to read more, we recommend From

46:57

King to Exile to King Again: The

47:00

inside story of Sam Altman's Whiplash Week

47:02

by Paris Martineau and Julia Black, published

47:04

in The Information. Also, Inside

47:07

Open AI's Crisis over the Future

47:09

of Artificial Intelligence by Tripp Mickle,

47:11

Cade Metz, Mike Isaac, and Karen

47:13

Weise, published in The New York

47:15

Times. The inside story

47:18

of Microsoft's partnership with OpenAI by

47:20

Charles Duhigg, published in the New

47:22

Yorker. Inside the Chaos

47:24

at OpenAI by Karen Hao and

47:26

Charlie Warzel, published by The Atlantic.

47:29

And Behind the Scenes of Sam

47:31

Altman's Showdown at OpenAI by Keach Hagey,

47:33

Deepa Seetharaman, and Berber Jin, published

47:35

by The Wall Street Journal. I'm

47:38

your host, David Brown. Austin Ratclis wrote this

47:40

story. Karen Lowe is our senior

47:42

producer and editor, edited and

47:44

produced by Emily Frost, sound designed

47:47

by Kyle Randall, voice acting by

47:49

Ryan Clark, Justin Lee, and Michelle

47:51

Philamy. Fact-checking by

47:53

Gabrielle Drolet. Our managing producer

47:55

is Matt Gant. Our coordinating producer is

47:57

Desi Blaylock. Our

48:00

senior managing producer is Ryan Luell. Our

48:02

senior producer is Dave Schelling. Our

48:04

executive producers are Jenny Lower Beckman

48:06

and Marsha Lewy, for

48:08

Wondery.

48:20

I was just thinking what would have

48:22

happened if Drew Brees didn't fail his physical

48:24

with the Dolphins and ended up playing under

48:26

Nick Saban in Miami. There's a good shot

48:29

the Finns establish a dynasty. Tom Brady and

48:31

Bill Belichick probably don't become goats and Tuscaloosa

48:33

doesn't become the center of the college football

48:35

universe. That's the butterfly effect for real. Hey,

48:38

I'm Trey Wingo. And I'm Kevin Frazier. We're

48:40

teaming up on a new weekly sports podcast

48:42

from Wondery Alternate Rounds. As former sports center

48:45

anchors and current sports obsessives, we're consumed by

48:47

all the what-if questions that make being a

48:49

sports fan.
