#374 – Robert Playter: Boston Dynamics CEO on Humanoid and Legged Robotics

Released Friday, 28th April 2023

Episode Transcript

0:00

The following is a conversation with Robert

0:02

Playter, CEO of Boston

0:04

Dynamics, a legendary robotics

0:07

company that over 30 years has

0:09

created some of the most

0:10

elegant, dexterous, and

0:13

simply amazing robots ever built,

0:15

including the humanoid robot Atlas

0:18

and the robot dog Spot, one

0:21

or both of whom you've probably

0:24

seen on the internet, either dancing,

0:26

doing back flips, opening doors, or

0:29

throwing around heavy objects.

0:32

Robert has led both the development

0:34

of Boston Dynamics humanoid robots and

0:37

their physics-based simulation software.

0:40

He has been with the company from

0:42

the very beginning, including its roots

0:44

at MIT, where he received his PhD

0:46

in aeronautical engineering. This

0:49

was in 1994 at

0:51

the legendary MIT Leg Lab.

0:54

He wrote his PhD thesis on

0:56

robot gymnastics, as part

0:58

of which he programmed a bipedal

1:00

robot to do the world's first 3D

1:03

robotic somersault. Robert

1:05

is a great engineer, roboticist, and leader, and

1:08

Boston Dynamics, to me as a roboticist,

1:11

is a truly inspiring company. This

1:13

conversation was a big honor and pleasure,

1:15

and I hope to do a lot of great work

1:18

with these robots in the years to come.

1:21

And now a quick few-second mention of each sponsor.

1:24

Check them out in the description. It's the best way

1:26

to support this podcast. We've

1:29

got NetSuite for business management software, Linode

1:31

for Linux systems, and Element

1:34

for zero-sugar electrolytes.

1:36

Choose wisely, my friends. Also, if you

1:39

want to work with our team, we're always hiring. Go

1:41

to lexfridman.com slash hiring. And

1:44

now onto the full ad reads. As always, no

1:46

ads in the middle. I try to make this interesting, but

1:48

if you must commit the horrible,

1:50

terrible crime of skipping them, please

1:53

do check out our sponsors. I do

1:55

enjoy their stuff, I really do. And

1:57

maybe you will as well.

2:00

It's brought to you by NetSuite, an

2:02

all-in-one cloud business management

2:05

system. Running

2:08

a business, as this podcast reveals

2:11

from Robert Playter and Boston Dynamics,

2:13

is really hard. It's not just about

2:15

the design of the system. It's not just about

2:17

the engineering, the software, the hardware,

2:20

all the complicated research that goes into it, all

2:23

the different prototypes, all the failure

2:25

upon failure upon failure in the

2:27

early stages, in the middle stages of getting these

2:30

incredible robots to work.

2:32

It's all the glue

2:34

that ties the company together. For

2:37

that, you have to use the best tools for the job. I

2:40

hope to run a business, a large

2:42

business that actually builds stuff one

2:44

day, and boy, businesses

2:47

are much more than just the innovation

2:49

and the engineering. I understand that deeply. You

2:52

should be hiring the best team

2:54

for that job

2:55

and use the best tools for that job. That's

2:58

where NetSuite can help out. Hopefully, it

3:00

can help you out. You can start now

3:02

with no payment or interest for six months.

3:05

Go to NetSuite.com slash Lex to

3:07

access their one-of-a-kind financing

3:10

program. That's NetSuite.com

3:13

slash Lex.

3:15

This episode is also brought to you by Linode,

3:18

now called Akamai, and their incredible

3:21

Linux virtual machines. I

3:23

sing praises to the greatest operating

3:25

systems of all time,

3:27

which is Linux. There's so many different beautiful

3:30

flavors of Linux. My favorite

3:32

is probably the different sub-flavors of Ubuntu,

3:34

Ubuntu Mate. That's what I use for my

3:37

personal development projects

3:40

in general when I want to feel comfortable

3:42

and fully customized, but I've used so

3:44

many other Linux's, distribution

3:46

of Linux's. But that's not what Linode

3:48

is about, or it is in part,

3:51

but it actually takes those Linux

3:53

boxes and scales them arbitrarily

3:55

to where you can do compute, not just on one machine,

3:57

but on many machines, customize them.

4:01

Make sure everything works reliably

4:03

and when it doesn't there's amazing human

4:06

customer service with real

4:09

humans. That's something that should

4:11

be emphasized in this day of Chat

4:13

GPT. Real human beings

4:16

that are good at what they do and figure

4:18

out

4:19

how to solve problems if they ever come up.

4:22

Linode, now called Akamai, is just

4:24

amazing. If compute

4:26

is something you care about for your business, for your personal

4:29

life, for your happiness, for

4:31

anything, then now you should check them out. Visit

4:33

linode.com slash Lex for

4:35

a free credit.

4:38

This episode is brought to you by a thing that

4:40

I'm currently drinking as I'm saying these words.

4:43

It's the element electrolyte drink mix

4:45

spelled L-M-N-T. My favorite

4:47

is the watermelon. That's what I always drink.

4:49

We have all explored. In

4:52

college, things got wild,

4:54

things got a little crazy, things got

4:56

a little out of hand. All

5:00

of us have done things we regret. Have

5:03

eaten ice cream we should not have eaten. I've

5:05

eaten ice cream at Dairy

5:08

Queen so many times in my life, especially through

5:10

my high school years. To

5:12

contradict what I just said, I regret nothing. I

5:14

think Snickers and if

5:17

memory serves me correctly, there's something called the

5:19

Dairy Queen Blizzard

5:21

where you could basically shove in whatever you

5:23

want into the ice cream and blend it and it tastes

5:25

delicious. I think

5:28

my favorite would be the Snickers bar, any kind

5:30

of bar, Mars bar.

5:32

Anything

5:34

with chocolate caramel, maybe a little

5:36

bit coconut,

5:37

that kind of stuff.

5:40

I don't regret it, but we've experimented. All

5:42

of us have experimented with different flavors, with

5:45

different things in life. I

5:47

regret nothing. You should not regret any

5:49

of it either because

5:50

that path

5:52

is what created the beautiful person that you are today.

5:56

That path is also the reason I mostly

5:58

drink the watermelon flavor

5:59

of, I guess it's called watermelon

6:02

salt, I don't know what it's called, but watermelon is in the word,

6:05

of element. I highly recommend it. You could try other

6:07

flavors. Chocolate's pretty good too, like chocolate mint, I think

6:09

it's called. Totally different

6:11

thing. All the flavors are very different, and

6:13

that's why I love it, so you should explore. Anyway,

6:16

it's a good way to get all the electrolytes in your

6:18

system, the salt and magnesium, the

6:21

potassium, not salt, sodium is what I meant

6:24

to say. It doesn't matter what I meant to say. What

6:26

matters is it's delicious, and I'm consuming

6:29

it, and I'm singing the

6:29

praises, and I will toast to you when we see

6:32

each other in person one day, friend.

6:35

And we should drink,

6:36

element, drink,

6:39

to our deepest fulfillments, together,

6:42

as brothers and sisters in arms. Get

6:44

a simple pack for free with

6:47

any purchase. Try it at drinkLMNT.com

6:50

slash Lex.

6:53

This is the Lex Fridman Podcast. To

6:55

support it, please check out our sponsors in the

6:57

description. And now, dear friends,

7:00

here's Robert Playter.

7:03

["The Lex Fridman Song"]

7:18

When did you first fall in love with robotics?

7:22

Let's start with love and robots.

7:25

Well, love is relevant because I think

7:28

the

7:29

fascination, the deep fascination is

7:31

really about movement. And

7:35

I was visiting MIT looking for a

7:37

place to get a PhD, and

7:40

I wanted to do some laboratory work. And

7:43

one of my professors in the Aero department

7:45

said, "Go see this guy, Marc Raibert, down in the basement

7:47

of the AI Lab." And

7:50

so I walked down there and saw him. He

7:52

showed me his robots, and

7:55

he showed me this robot doing a somersault. And

7:58

I just immediately went, whoa!

7:59

You know? Yeah. Robots

8:02

can do that and because of my own interest in gymnastics,

8:05

there was like this immediate connection and

8:08

I

8:09

was interested in, I was in an aeroastro

8:11

degree because of flight and movement

8:14

was all so fascinating to me and then it turned

8:17

out that robotics had this

8:19

big challenge. How do you balance? How

8:22

do you build a legged robot that can really get

8:24

around? And that just,

8:26

that was a fascination. And

8:28

it still exists today. You're still working

8:31

on perfecting motion in

8:33

robots. What about the elegance

8:35

and the beauty of the movement itself? Is

8:37

there something maybe grounded

8:40

in your appreciation of movement

8:42

from your gymnastics days? Was

8:47

there something you just fundamentally appreciate

8:49

about the elegance and beauty of

8:51

movement? We had this concept in gymnastics

8:55

of letting your body

8:57

do what it wanted to do when you get

8:59

really good at

9:01

gymnastics. Part

9:04

of what you're doing is putting your body

9:06

into a position where the physics and the

9:08

body's inertia and momentum will

9:11

kind of push you in the right direction in a very

9:13

natural and organic way. And

9:15

the thing that Mark was doing, you know, in the

9:19

basement of that laboratory was trying to

9:21

figure out how to build machines to take advantage of

9:23

those ideas. How do you build something so

9:25

that the physics of the machine just

9:28

kind of inherently wants to do what

9:30

it wants to do? And he was building these

9:31

springy pogo stick type. His

9:34

first cut at legged locomotion was a pogo

9:37

stick where it's bouncing and there's a spring mass

9:40

system that's oscillating, has its

9:42

own sort of natural frequency there. And

9:45

sort of figuring out how to augment those

9:47

natural physics with

9:50

also intent. How do you then control that but

9:52

not overpower it? It's that

9:54

coordination that I think

9:56

creates real potential.

9:58

We could call it beauty.

9:59

You could call it, I don't know, synergy, that

10:02

people have different words for it. But I

10:04

think that that was inherent from the beginning.

10:07

That was clear to me that that's part of what

10:09

Mark was trying to do. He asked me to do that in

10:11

my research work. So, you know,

10:13

that's where it got going. So part of the thing that I think

10:15

I'm calling elegance and beauty in this case,

10:18

which was there, even with the pogo stick

10:20

is maybe the efficiency. So

10:23

letting the body do what it wants to do,

10:25

trying to discover the efficient movement.

10:27

It's definitely more efficient. It also

10:31

becomes easier to control in its own way because

10:33

the physics are solving some of the problem

10:35

itself. It's not like you have to

10:37

do all this calculation and overpower the

10:39

physics. The physics naturally, inherently

10:42

want to do the right thing. There

10:44

can even be feedback

10:47

mechanisms, stabilizing mechanisms that

10:50

occur simply by virtue of the

10:52

physics of the body. And it's

10:56

not all in the computer or not even all in your

10:58

mind as a person. And

11:01

there's something interesting in that melding.
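
[Illustrative aside, not Boston Dynamics code: the "pogo stick" idea described above can be captured in a few lines. A point mass with a passive leg spring hops on its own once you drop it; the rhythm comes from the spring-mass natural frequency, and a controller only has to nudge it. All numbers below are assumed.]

    # A 1-D "pogo stick" hopper: a point mass alternating between ballistic
    # flight and a stance phase where a passive leg spring compresses and
    # rebounds. The hopping rhythm emerges from the spring-mass natural
    # frequency rather than from explicit control. All numbers are assumed.

    m = 80.0         # body mass, kg (assumed)
    k = 20000.0      # leg spring stiffness, N/m (assumed)
    g = 9.81         # gravity, m/s^2
    leg_rest = 1.0   # rest length of the leg "spring", m (assumed)
    dt = 0.001       # 1 kHz integration step

    z, vz, t = 1.2, 0.0, 0.0   # start in flight, above touchdown height

    for step in range(5000):
        if z < leg_rest:
            az = k * (leg_rest - z) / m - g   # stance: spring pushes up
        else:
            az = -g                           # flight: gravity only
        vz += az * dt
        z += vz * dt
        t += dt
        if step % 500 == 0:
            print(f"t={t:4.2f}s  height={z:4.2f} m")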

11:04

You were with Mark for many, many, many years, but

11:07

you were there in this kind of legendary space

11:10

of Leg Lab at MIT in

11:13

the basement. All great things happen

11:15

in the basement. Is there some memories from

11:18

that time that you have

11:20

because it's such

11:23

cutting edge work in

11:27

robotics and artificial intelligence?

11:29

The memories, the distinctive lessons,

11:32

I would say, I learned in that

11:34

time period and

11:37

that I think Mark was a great teacher of, was

11:41

it's okay to pursue your

11:43

interests, your curiosity, do something because

11:46

you love it. You'll

11:48

do it a lot better if you love it.

11:52

That is a lasting lesson

11:55

that I think we apply at

11:57

the company still. and

12:01

really is a core value. So the

12:03

interesting thing is I get to, with

12:07

people like Russ Tedrake

12:09

and others, like the students that

12:12

work at those robotics labs are like some of the

12:14

happiest people I've ever met.

12:16

I don't know what that is. I

12:18

meet a lot of PhD students. A lot of them are kind

12:20

of broken by the wear and tear or

12:22

the process. But roboticists

12:25

are, while they work extremely hard and

12:27

work long hours,

12:29

there's

12:32

a happiness there. The only other group of people

12:34

I've met like that are people that skydive a lot. Like

12:38

for some reason there's a deep fulfilling happiness.

12:40

Maybe from like a long period

12:42

of struggle to get a thing to work and it works

12:44

and there's a magic to it, I don't know exactly. Because

12:47

it's so fundamentally hands on and

12:50

you're bringing a thing to life. I don't know what it is,

12:52

but they're happy. We see,

12:54

you

12:55

know, our attrition at the company

12:57

is really low. People come and they love

12:59

the pursuit. And I

13:01

think part of that is that

13:03

there's perhaps an extra connection

13:05

to it. It's a little bit easier to connect when

13:07

you have a robot that's moving around in

13:09

the world and part of your goal is to make it move around

13:12

in the world. You can identify with

13:14

that.

13:14

And this is one of the

13:17

unique things about the kinds of robots we're building

13:19

is this physical interaction lets

13:22

you perhaps identify with it. So

13:25

I think that is a source of happiness. I don't think

13:27

it's unique to robotics. I think anybody also

13:29

who is just pursuing something they

13:31

love, it's

13:33

easier to work hard at it and be good at

13:35

it. And not

13:38

everybody gets to find that. I do

13:40

feel lucky in that way. And I think we're

13:43

lucky as an organization that we've

13:45

been able to build a business around

13:47

this and that keeps people engaged. So

13:51

if it's all right, let's linger on Marc for a

13:53

little bit longer. Marc Raibert, so he's

13:55

a legend. He's

13:58

a legendary engineer and roboticist.

13:59

What have you learned about life about

14:02

robotics and Mark? Through all the

14:04

many years you worked with him. I think

14:06

the most important lesson, which was,

14:08

have the courage of your convictions and do what

14:11

you think is interesting.

14:14

Be willing to

14:15

try to find big,

14:17

big problems to go after. And

14:20

at the time, like, legged locomotion,

14:23

especially in a dynamic machine, nobody

14:27

had solved it. And that felt like

14:29

a multi-decade problem

14:31

to go after. And so have

14:34

the courage to go after that because

14:36

you're interested. Don't worry

14:38

if it's gonna make money. You know, that's

14:40

been a theme. So that's

14:43

really

14:44

probably the most important

14:46

lesson I think that I got from

14:48

Mark. How crazy is the effort of doing legged

14:52

robotics at that time, especially?

14:55

You know, Mark got some stuff

14:57

to work starting from

14:59

the simple ideas. So maybe the other,

15:02

another important idea that has really become a value

15:05

of the company is try to simplify

15:08

a thing to the core essence.

15:10

And while

15:13

Mark was showing videos of animals running

15:15

across the savanna or climbing

15:19

mountains, what he started with was a pogo

15:21

stick because he was trying to reduce the

15:23

problem to something that was manageable and

15:26

getting the pogo stick to balance.

15:28

Had in it the

15:30

fundamental problems that if we solve those, you

15:32

could eventually extrapolate to something that

15:34

galloped like a horse. And

15:36

so look for those simplifying principles.

15:40

How tough is the job of simplifying a robot?

15:43

So I'd say in the early days, the

15:46

thing that made Boston,

15:48

the researchers at Boston Dynamics special is

15:51

that we worked on figuring

15:54

out what that central

15:57

principle was. And

15:59

then building. software or machines

16:02

around that principle. And that was not easy in

16:04

the early days. And it

16:06

took real expertise

16:09

in understanding the dynamics

16:11

of motion and feedback

16:13

control

16:15

principles. How to build

16:17

and

16:18

with computers at the time, how to build

16:20

a feedback control algorithm that was simple enough

16:22

that it could run in real time at a thousand Hertz

16:25

and actually get that machine to work. And

16:29

that was not something everybody was doing

16:31

at that time. Now the world's changing

16:34

now. I think the approaches

16:36

to controlling robots are going to change. And

16:40

they're going to become more

16:42

broadly available.

16:46

But at the time, there weren't many groups

16:49

who could really

16:50

work at that principled level with

16:53

both the software and make

16:56

the hardware work. And

16:58

I'll say one other thing about you're sort of talking

17:01

about what are the special things. The other thing

17:03

was it's

17:05

good to break stuff.

17:08

Use the robots, break

17:11

them, repair them, fix

17:15

and repeat, test, fix and repeat. And

17:17

that's also a core principle

17:19

that has become part of the company.

17:22

And it lets you be fearless

17:24

in your work. Too often

17:27

if you are working with a very expensive

17:29

robot, maybe one that you bought from somebody else

17:31

or that you don't know how to fix, then you

17:34

treat it with kid gloves and

17:37

you can't actually make progress. You have to be able

17:39

to break something. And so I think that's been

17:42

a principle as well. So just to linger on that psychologically,

17:45

how do you deal with that? Because I remember I

17:47

built an RC

17:50

car with some

17:54

custom stuff like compute on it and all that kind of stuff,

17:57

cameras.

17:59

I didn't sleep much. The

18:02

code I wrote had an issue where it didn't stop

18:04

the car and the car got confused

18:07

and at full speed at like 20, 25 miles

18:09

an hour slammed into a wall. And

18:12

I just remember sitting there alone in

18:14

a deep sadness. Sort

18:18

of

18:19

full of regret, I think, almost

18:21

anger.

18:24

But also like sadness

18:26

because you think about, well, these robots,

18:28

especially for autonomous vehicles, you

18:31

should be taking safety very seriously even

18:33

in these kinds of things, but just no

18:36

good feelings. It made me more

18:38

afraid probably to do this kind of experiments

18:40

in the future. Perhaps the right way to have

18:42

seen that is positively.

18:45

It depends if you could

18:47

have built that car or just gotten

18:49

another one, right? That would have been the approach. I

18:53

remember

18:55

when I

18:57

got to grad school, I

19:00

got some training about operating

19:02

a lathe and a mill up in

19:04

the machine shop, and I could start

19:06

to make my own parts. And I remember

19:09

breaking some piece of equipment

19:11

in the lab and then

19:13

realizing,

19:14

because maybe this was a unique part and I couldn't

19:17

go buy it. And I realized, oh, I can

19:19

just go make it. That

19:22

was an enabling feeling. Then

19:24

you're not afraid. You know, it might take time. It

19:26

might take more work than you thought it was

19:29

going to be required to get this thing done,

19:31

but you can just go make it. And that's

19:33

freeing in a way that nothing

19:36

else is. You mentioned

19:39

the feedback control, the dynamics.

19:41

Sorry for the romantic question, but

19:44

in the early days and even now, is the dynamics

19:47

probably more appropriate for the early days? Is

19:50

it more art or science?

19:54

There's a lot of science around it.

19:56

And trying to develop

19:59

scientific principles. that

20:01

let you extrapolate from one-legged

20:03

machine to another, develop

20:06

a core set of principles like

20:09

a spring mass bouncing system,

20:12

and then figure out how to apply

20:14

that from a one-legged machine to a two or a four-legged

20:17

machine. Those principles are really important

20:19

and we're definitely a core

20:22

part of our work. There's

20:26

also, you know,

20:28

when we started to pursue humanoid robots,

20:32

there was so much complexity in that machine

20:34

that, you

20:36

know, one of the benefits of the humanoid form

20:38

is you have some intuition about how it

20:41

should look while it's moving. And

20:44

that's a little bit of an art, I think. Now

20:46

it's, or maybe it's just tapping into a

20:48

knowledge that you have deep in your body and then

20:51

trying to express that in the machine.

20:54

But that's an intuition that's a

20:56

little bit more on the art side. Maybe

20:58

it predates your knowledge. You know, before

21:00

you have the knowledge of how to control it, you try

21:02

to work through the art channel and

21:04

humanoids sort of make that available to you.

21:06

If it had been a different shape, maybe

21:08

you wouldn't have had the same intuition about it.

21:11

Yeah, so your knowledge

21:13

about moving through the world is

21:16

not made explicit to you. So you

21:18

just, that's why it's art. Yeah, it

21:21

might be hard to actually articulate exactly.

21:23

There's something about,

21:25

and being a competitive athlete,

21:28

there's something about

21:29

seeing movement. You

21:31

know, a coach, one of the greatest strengths

21:34

a coach has is being able to see

21:37

some little change in what the athlete is doing

21:39

and then being able to articulate that to an athlete.

21:42

And then maybe even trying to say, and you should

21:44

try to feel this. So

21:47

there's something just in seeing. And again,

21:49

sometimes it's hard to articulate

21:52

what it is you're seeing, but there's

21:55

just perceiving the motion at a rate

22:00

that, again, is sometimes hard to put into words. Yeah,

22:03

I wonder how it

22:07

is possible to achieve sort of truly

22:09

elegant movement. You have a movie like

22:11

Ex Machina, not sure if you've seen it,

22:14

but the main actress in that who

22:16

plays the AI robot, I think is

22:18

a ballerina. I mean, just the natural

22:22

elegance and the, I

22:24

don't know, eloquence of movement, it

22:29

looks efficient and easy and

22:31

just, it looks right.

22:34

It looks beautiful. It looks right, is sort of the key, yeah.

22:36

And then you look at, especially

22:38

early robots, I mean, they're so

22:41

cautious in the way they move that

22:45

it's not the caution that looks wrong.

22:48

It's something about the movement that looks

22:50

wrong that feels like it's very

22:52

inefficient, unnecessarily so. And

22:55

it's hard to put that into words

22:57

exactly. We think, and

23:00

part of the reason why people are attracted to the

23:02

machines we build is

23:04

because the inherent dynamics of movement

23:08

are closer to right. Because

23:11

we try to use

23:13

walking gaits or we build a machine

23:15

around this gait where you're trying to work

23:17

with the dynamics of the machine instead

23:19

of

23:20

trying to stop them. Some of the early walking

23:23

machines, you're essentially,

23:25

you're really trying hard to not

23:27

let them fall over. And so you're always stopping

23:30

the tipping motion. And

23:33

sort of the insight

23:35

of dynamic stability in a legged

23:37

machine is to go with it, let

23:40

the tipping happen, let yourself

23:42

fall, but then catch yourself with

23:44

that next foot. And there's something about getting

23:47

those physics to be expressed in the

23:49

machine

23:50

that people interpret as lifelike

23:52

or

23:56

elegant or just natural

23:58

looking. And so I think if you...

23:59

get the physics right, it

24:02

also ends up being more efficient, likely.

24:05

There's a benefit that it probably ends up being

24:08

more stable in the long run. It could

24:11

walk stably over a wider range

24:13

of conditions. It's

24:17

more beautiful and attractive at the same time.
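
[Illustrative aside: one classical way to formalize "let yourself fall, then catch yourself with the next foot" is the capture point of a linear inverted pendulum, the spot on the ground where placing the foot brings the tipping body to rest. This is a textbook sketch with assumed numbers, not the controller Atlas actually uses.]

    import math

    g = 9.81      # gravity, m/s^2
    z0 = 0.9      # assumed constant center-of-mass height, m

    def capture_point(x_com, v_com):
        """Where to step so a linear-inverted-pendulum model comes to rest.
        x_com: horizontal CoM position (m), v_com: horizontal CoM velocity (m/s)."""
        omega = math.sqrt(g / z0)          # pendulum time constant
        return x_com + v_com / omega       # classic capture-point formula

    # Example: body is over the stance foot but already tipping forward at 0.5 m/s.
    print(f"step to x = {capture_point(0.0, 0.5):.2f} m ahead")   # ~0.15 m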

24:20

How hard is it

24:21

to get the humanoid robot Atlas to

24:24

do some of the things it's recently been doing? Let's

24:27

forget the flips and all of that. Let's

24:29

just look at the running. Maybe

24:31

you can correct me, but there's something about running.

24:33

I mean, that's not careful at all. That's you're falling

24:36

forward. You're jumping forward

24:38

and are falling. So how hard is it to get

24:40

that right? Our first humanoid,

24:43

we needed to deliver natural looking walking.

24:46

We took a contract from the army. They

24:48

wanted a robot that could walk

24:51

naturally. They wanted to put a

24:53

suit on the robot and be able to test it

24:55

in a gas environment. And so they

24:58

wanted the motion to be natural.

25:02

Our goal was a natural looking gait. It

25:04

was surprisingly hard

25:06

to get that to work. But we

25:09

did build an early machine. We

25:11

called it Petman

25:13

prototype. It was the prototype before

25:15

the Petman robot. And

25:18

it had a really nice looking

25:20

gait where it

25:23

would stick the leg out. It would do heel strike

25:26

first before it rolled onto the toes. So you didn't

25:28

land with a flat foot. You extended your

25:30

leg a little bit. But

25:33

even then, it was hard to get the robot to walk

25:35

when you were walking that it fully

25:37

extended its leg and essentially

25:39

landed on an extended leg. And if you watch closely

25:42

how you walk, you probably land on an extended

25:44

leg, but then you immediately flex your knee as

25:46

you start to make that contact. And

25:49

getting that all to work well

25:51

took such a long time. In fact, I

25:55

probably didn't really see the nice

25:57

natural walking that I expected out of

25:59

our humanoids

25:59

until maybe last year.

26:01

And

26:03

the team was developing on our newer generation

26:06

of Atlas, some new techniques for

26:10

developing a walking control algorithm. And

26:13

they got that natural looking motion as

26:16

sort of a byproduct of just

26:18

a different process they were applying to developing

26:21

the control.

26:22

So that probably took 15 years, 10 to 15 years

26:26

to sort of get that. The

26:29

Petman prototype was probably in 2008 and

26:32

what was it, 2022? Last

26:34

year that I think I saw a good walking on Atlas.

26:36

If you could just, like, linger on it, what are some

26:38

challenges of getting good walking? So is

26:41

it, is this partially like

26:44

a hardware like

26:46

actuator problem? Is it the control?

26:50

Is it the artistic element of just observing

26:52

the whole system operating in different conditions

26:54

together? I mean, is there some kind of interesting

26:58

quirks or challenges you can speak to

27:00

like the heel strike? Yeah, so one of

27:02

the things that makes the, like this straight

27:04

leg a challenge is

27:07

you're sort of up against a singularity,

27:10

a mathematical singularity where,

27:13

you know, when your leg is fully extended, it can't

27:15

go further the other direction, right? There's only,

27:18

you can only move in one direction. And

27:20

that makes all of the calculations around

27:22

how to produce

27:23

torques at that joint or positions,

27:26

makes it more complicated. And so having

27:29

all of the mathematics so it can deal with these

27:33

singular configurations

27:35

is one of many challenges

27:39

that we face. And I'd say

27:41

in the, you know,

27:43

in those earlier days, again, we were working

27:45

with these really simplified models. So

27:48

we're trying to boil all the physics

27:50

of the complex human body into

27:52

a simpler

27:54

subsystem that we can more easily describe

27:56

in mathematics. And sometimes those simpler

27:59

subsystems don't have all of that complexity

28:01

of the straight leg built into

28:03

them. And so what's

28:06

happened more recently is we're able to apply

28:08

techniques that let us take the full physics

28:11

of the robot

28:13

into account and deal

28:15

with some of those strange situations

28:18

like the straight leg.
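
[Illustrative aside: the straight-leg singularity can be seen in a planar two-link leg. The Jacobian that maps joint rates to foot velocity loses rank as the knee straightens, so naive inverse-kinematics math would demand unbounded joint velocities there. The link lengths below are assumptions, not Atlas's kinematics.]

    import numpy as np

    l1, l2 = 0.45, 0.45   # assumed thigh and shin lengths, m

    def foot_jacobian(hip, knee):
        """Jacobian of the planar foot position w.r.t. (hip, knee) angles."""
        j11 = -l1 * np.sin(hip) - l2 * np.sin(hip + knee)
        j12 = -l2 * np.sin(hip + knee)
        j21 =  l1 * np.cos(hip) + l2 * np.cos(hip + knee)
        j22 =  l2 * np.cos(hip + knee)
        return np.array([[j11, j12], [j21, j22]])

    for knee in (0.6, 0.2, 0.0):          # radians of knee bend
        J = foot_jacobian(0.3, knee)
        print(f"knee={knee:.1f} rad  det(J)={np.linalg.det(J):+.4f}")
    # det(J) -> 0 as the knee straightens: the straight-leg configuration is
    # singular, and inverse-Jacobian control blows up around it.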

28:20

So is there a fundamental challenge here that it's,

28:22

maybe you can correct me, but is it under

28:25

actuated? Are you falling? Under

28:27

actuated is the right word, right? You can't push

28:31

the robot in any direction you want to, right?

28:33

And so that is one of the hard

28:35

problems of, like, locomotion.

28:37

And you have to do that for natural

28:39

movement.

28:40

It's not necessarily required for natural

28:42

movement. It's just required, you know,

28:46

we don't have a gravity force

28:48

that you can hook yourself on to to apply

28:50

an external force in the direction

28:52

you want at all times, right? The

28:54

only external forces are being mediated through

28:57

your feet and how they get mediated

28:59

depend on how you place your feet. And

29:02

you can't just, you

29:04

know, God's hand can't reach down

29:06

and push in any direction you want, you

29:09

know, so. Is there

29:11

some extra challenge to the fact that Atlas

29:13

is such a big robot? There is. The

29:16

humanoid form is

29:18

attractive in many ways,

29:20

but it's also a challenge in many ways. You

29:24

have this big upper body that has a lot

29:27

of mass and inertia and

29:29

throwing that inertia around

29:31

increases the complexity of maintaining balance.

29:34

And as soon as you pick up something heavy in your arms,

29:37

you've made that problem even harder. And

29:40

so in the early work

29:43

in the leg lab and in the early days at the

29:45

company, you know, we were pursuing these

29:47

quadruped robots, which had a

29:50

kind of built in simplification. You had

29:52

this big rigid body and then really light

29:54

legs. So when you swing the legs,

29:57

the leg motion didn't

29:59

impact. the body motion very much. All

30:02

the mass and inertia was in the body. But

30:04

when you have the humanoid, that doesn't work. You

30:06

have big heavy legs, you swing the legs, it

30:09

affects everything else. And so

30:12

dealing with all of that interaction does

30:15

make the humanoid a much more complicated

30:17

platform. And I also saw

30:19

that at least recently you've

30:22

been doing more explicit modeling of the

30:24

stuff you pick up.

30:25

Yeah. Which is very,

30:29

really interesting. So you have to, what,

30:32

model the shape, the

30:34

weight distribution? I don't

30:37

know. Like you have to include

30:39

that as part of the modeling, as part of the planning.

30:42

Because, okay, so for people who don't know,

30:44

so Atlas, at

30:46

least in like a recent video, like throws

30:48

a heavy bag, throws a bunch of

30:51

stuff. So what's involved in picking

30:54

up a thing, a heavy thing, and

30:57

when that thing is a

30:59

bunch of different non-standard things, I think

31:01

it's also picked up like a barbell. And

31:05

to be able to throw in some cases, what

31:07

are some interesting challenges there?

31:09

So we were definitely trying to show that

31:11

the robot and the techniques we're applying to

31:13

the robot, to Atlas, let

31:16

us deal with heavy things

31:18

in the world. Because if the robot's going to be useful, it's

31:20

actually got to move stuff around. And

31:23

that needs to be significant stuff. That's

31:26

an appreciable portion of the body weight

31:28

of the robot.

31:29

And we also think this differentiates us

31:31

from the other humanoid robot activities

31:34

that you're seeing out there. Mostly they're not picking

31:36

stuff up yet. And not heavy stuff

31:38

anyway. But

31:40

just like you or me, you need to anticipate

31:43

that moment. You're reaching out to pick something up,

31:45

and as soon as you pick it up, your center of mass is going

31:47

to shift. And if you're going

31:49

to turn in a circle,

31:51

you have to take that inertia into account.

31:53

And if you're going to throw a thing, all of that

31:56

has to be included in the

31:59

model of what you're trying to do. So

32:02

the robot needs to have some idea or

32:04

expectation of what that weight is and then

32:07

sort of predict, think

32:09

a couple of seconds ahead, how do I manage

32:11

my, now my body

32:13

plus this big heavy thing together

32:15

to get, and still maintain

32:18

balance, right? And so

32:22

that's a big change for

32:24

us. And I think the tools we've built

32:26

are really allowing that to happen quickly

32:28

now. Some of those motions that you saw

32:31

in that most recent video, we

32:33

were able to create in a matter of days.

32:35

It used to be that it took six months to do anything

32:38

new, you know, on the robot. And now we're

32:40

starting to develop the tools that let us do that in

32:42

a matter of days. And so we think

32:44

that's really exciting. That means that the

32:46

ability to create new behaviors for the robot

32:49

is gonna be a

32:51

quicker process. So being able to explicitly

32:54

model new things that

32:56

it might need to pick up, new types

32:58

of things. And to some degree, you don't

33:00

wanna have to pay too much attention to each

33:03

specific thing,

33:05

right? There's sort of a generalization

33:07

here. Obviously

33:10

when you grab a thing, you have to

33:12

conform your hand, your end effector

33:14

to the surface of that shape. But once

33:16

it's in your hands, it's probably just

33:19

the mass and inertia that matter. And

33:21

the shape may not be as important.

33:24

And so, in

33:26

some ways you wanna pay attention to that detailed shape.

33:29

And in others you wanna generalize it and say, well,

33:32

all I really care about is the center of mass of this

33:35

thing, especially if I'm gonna throw it up on that scaffolding.
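
[Illustrative aside: once an object is rigidly held, the bookkeeping being described is mostly combining masses, centers of mass, and inertias, which the parallel-axis theorem handles. The numbers below are assumptions for illustration, not Atlas parameters.]

    import numpy as np

    def combine_rigid_bodies(m1, c1, I1, m2, c2, I2):
        """Combine two rigid bodies (mass, CoM position, 3x3 inertia about own CoM)
        into one equivalent body: total mass, combined CoM, and inertia about
        that CoM via the parallel-axis theorem."""
        m = m1 + m2
        c = (m1 * c1 + m2 * c2) / m

        def shift(I, mass, r):
            # Parallel-axis theorem: inertia about a point offset by r from the CoM.
            return I + mass * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

        I = shift(I1, m1, c1 - c) + shift(I2, m2, c2 - c)
        return m, c, I

    # Assumed numbers: an 80 kg robot and a 10 kg toolbag held 0.5 m in front of
    # and 0.3 m above the robot's own center of mass.
    robot = (80.0, np.array([0.0, 0.0, 0.0]), np.diag([10.0, 9.0, 3.0]))
    bag   = (10.0, np.array([0.5, 0.0, 0.3]), np.diag([0.1, 0.1, 0.1]))

    m, c, I = combine_rigid_bodies(*robot, *bag)
    print("total mass:", m)
    print("combined CoM shift:", c)          # CoM moves toward the held object
    print("combined inertia diag:", np.diag(I).round(2))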

33:38

And it's easier if the body is rigid. What if

33:40

there's some, doesn't it throw like a sandbag

33:42

type thing? That tool bag, you know,

33:45

had loose stuff in it. So it

33:48

managed that. There are harder things

33:50

that we haven't done yet. You know, we could have had a big jointed

33:53

thing or I don't know, a bunch of loose wire

33:55

or rope. What about carrying another robot? How

33:57

about that? Ha ha ha.

33:59

Yeah, we haven't done that yet, carry

34:02

Spot. I guess we did a little bit of a, we

34:04

did a little skit around Christmas

34:06

where we had two Spots holding

34:09

up another Spot that was trying to put, you know, a bow

34:11

on a tree. So I guess we're doing that in a small

34:13

way. Okay, that's pretty

34:16

good.

34:16

Let me ask the all-important question. Do

34:19

you know how much Atlas can curl? Have

34:23

you? I mean,

34:25

you know, this, for us humans, that's

34:28

really one of the most fundamental questions you could

34:31

ask another human being. Bench?

34:35

It probably can't curl as much as we can

34:37

yet. But a metric that I think

34:39

is interesting is you know

34:41

another way of looking at that strength is,

34:44

you know, the box jump. So how high

34:47

of a box can you jump

34:49

onto, is the question. And

34:51

Atlas, I don't know the exact height, it

34:53

was probably a meter high or something like that. It was

34:56

a pretty pretty tall jump that Atlas was

34:58

able to manage when we last tried

35:00

to do this. And I have video of

35:03

my chief technical officer doing

35:05

the same jump and he really struggled, you know

35:07

Oh, the human. But the human

35:09

getting all the way on top of this box, but then you know

35:11

Atlas was able to do it. We're

35:15

now thinking about the next generation of Atlas

35:17

and we're probably going to be in the realm of a person

35:20

can't do it, you know, with the next generation.

35:23

And the robot's actuators are

35:25

going to get stronger, where it really

35:27

is the case that at least some of these joints some of these

35:29

motions, will be stronger.

35:31

And to understand how high it can jump, you probably

35:33

had to do quite a bit of testing. Oh,

35:35

yeah, and there's lots of videos of it trying and failing

35:37

And that's you know, that's all yeah,

35:40

we don't always release those videos, but they're

35:42

a lot of fun to look at. So

35:45

we'll talk a little bit about that. But

35:47

if you can, can you talk to the jumping? Because you

35:50

talked about the walking it took a long time

35:52

many, many years to get the walking to be natural,

35:54

but there's also really natural

35:57

looking

35:58

robust

35:59

resilient jumping.

36:01

How hard is it to do the jumping? Well,

36:04

again, this stuff has really evolved rapidly

36:06

in the last few years. The first time we

36:08

did a somersault, there

36:10

was a lot of manual iteration.

36:14

What is the trajectory? How hard do

36:16

you throw you? In fact, in these early

36:18

days,

36:19

when I'd see

36:22

early experiments that the team was doing, I might

36:24

make suggestions about how to change the technique.

36:27

Again, borrowing from my own intuition

36:30

about how backflips work.

36:33

Frankly, they don't need that anymore. In the

36:35

early days, you had to iterate in

36:38

almost a manual way, trying to change

36:40

these trajectories of the arms or the legs

36:42

to try to get a successful

36:46

backflip to happen. More

36:48

recently, we're running these

36:50

model predictive

36:53

control techniques, where

36:56

the robot essentially can think in advance

36:59

for the next second or two about how

37:02

its motion is going to transpire. You

37:04

can solve for optimal trajectories

37:06

to get from A to B. This

37:09

is happening in a much more natural way. We're

37:11

really seeing an acceleration happen

37:13

in the development of these behaviors, again,

37:16

partly due to these

37:19

optimization techniques, sometimes

37:21

learning techniques. It's

37:25

hard in that there's

37:27

a lot of mathematics behind it, but

37:31

we're figuring that out. You can do model

37:34

predictive control for... I

37:37

don't even understand what that looks like when the

37:39

entire robot is in the air, flying

37:42

and doing a backflip. Yeah,

37:45

but that's the cool part, right? The physics,

37:47

we can calculate physics

37:50

pretty well using Newton's laws about

37:53

how it's going to evolve over time.

37:55

The sick trick,

37:57

which was a front somersault with a half

37:59

twist, is a good example, right?

38:03

You saw the robot on various

38:05

versions of that trick, I've seen

38:08

it land in different configurations

38:10

and it still manages to stabilize itself.

38:12

And so, you know, what this model

38:15

predictive control means is, again,

38:17

in

38:18

real time, the robot is

38:20

projecting ahead, you know, a second into the

38:22

future and sort of exploring options.

38:25

And if I move my arm a little bit more this

38:27

way, how is that going to affect the outcome? And so

38:29

it can do these calculations, many of them,

38:32

you know, and basically solve

38:35

where, you know, given where I am now, maybe

38:37

I took off a little bit screwy from how

38:40

I had planned, I can adjust. So

38:42

you're adjusting in the air. Just on the

38:44

fly. So the model predictive control

38:46

lets you adjust on the fly. And

38:48

of course, I think this is what people

38:50

adapt as well. When

38:53

we do it,

38:54

even a gymnastics trick, we try to set it up

38:56

so it's close to the same every time.

38:59

But we figured out how to do some adjustment on the fly.

39:01

And now we're starting to figure out that the robots

39:03

can do this adjustment on the fly as well

39:05

using these techniques in the air. It's

39:08

so, I mean, it just feels

39:11

from a robotics perspective, just surreal. Well,

39:14

that's sort of the you talked about under actuated,

39:16

right? So when you're in the air, there's

39:19

something there's some things you can't change,

39:21

right? You can't change the momentum while

39:23

it's in the air because you can't apply an external force

39:25

or torque. And so the momentum isn't going to

39:28

change. So how do you work within the constraint

39:30

of that fixed momentum to still get

39:32

from A to B where you want to be?

39:34

That's really the trick.
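
[Illustrative aside: the flight-phase constraint comes down to one line of physics. With no external torque, angular momentum L = I·ω is fixed, so the only lever left is changing the body's rotational inertia by moving the limbs: tuck to spin faster, extend to slow down. The numbers below are assumed.]

    # Toy illustration (assumed numbers): in flight, angular momentum is conserved,
    # so changing the body's rotational inertia by tucking or extending the limbs
    # changes the spin rate -- the only "control knob" available mid-air.

    L = 40.0                      # conserved angular momentum, kg*m^2/s (assumed)

    configurations = {
        "laid out":   12.0,       # large inertia about the somersault axis (assumed)
        "half tuck":   8.0,
        "tight tuck":  5.0,
    }

    for name, inertia in configurations.items():
        omega = L / inertia                   # rad/s, since L = I * omega
        print(f"{name:10s}: I={inertia:4.1f} kg*m^2  ->  spin {omega:4.1f} rad/s")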

39:38

You're in the air. I mean, you become a drone

39:41

for a brief moment in time. No, you're not even a drone

39:43

because you can't, can't hover. You

39:45

can't hover. You're going to impact

39:48

soon. Be ready. Yeah. Have you considered

39:50

like a hover type thing or no? No, it's

39:52

too much weight. I

39:54

mean, it's just, it's just incredible. It's

39:57

just even to have the guts

39:59

to try a back

39:59

flip with such a large body.

40:02

That's wild. Like,

40:05

uh, Oh, we definitely broke a few robots trying to,

40:08

but that's where the build it, break it, fix it. You know, uh, strategy

40:11

comes in, gotta be willing to break. And what

40:14

ends up happening is you end up by breaking

40:16

the robot repeatedly. You find the weak points and

40:19

then you end up redesigning it. So it doesn't break so

40:21

easily. Next time, you know, through

40:23

the breaking process, you learn a lot,

40:25

like a lot of lessons and, and you keep

40:28

improving, not just how to make the back flip work,

40:30

but everything just how to build a machine better.

40:32

Yeah. Yeah. I mean, is there

40:34

something about just the guts to

40:37

come up with an idea of saying, you

40:40

know what, let's try to make it do a back flip.

40:42

Well, I think the courage to do a back flip

40:44

in the first place and, and to not worry

40:47

too much about the ridicule of somebody saying,

40:49

why the heck are you doing back flips with robots?

40:52

Because a lot of people have asked that, you know, why,

40:54

why, why are you doing this?

40:56

Why go to the moon in this

40:58

decade and do the other things? JFK? Not

41:02

because it's easy, but because it's hard. Yeah, exactly.

41:07

Don't

41:07

ask questions. Okay. So the,

41:10

the jumping, I mean, it's just, there's a lot of incredible stuff.

41:12

If we can just rewind a little bit to

41:15

the DARPA robotics challenge in 2015, I

41:18

think, which was for

41:20

people who are familiar with the DARPA challenges, it

41:24

was first with autonomous vehicles

41:26

and there's a lot of interesting challenges around that. And

41:29

the DARPA Robotics Challenge is when humanoid

41:32

robots were tasked to do all

41:35

kinds of, you

41:39

know, manipulation, walking,

41:41

driving a car, all

41:43

these kinds of challenges with, if

41:45

I remember correctly, sort of some

41:49

slight capability to communicate

41:51

with humans, but the

41:53

communication was very poor. So basically it

41:55

has to be almost entirely autonomous.

41:59

The communication was entirely interrupted at times and the robot

42:02

had to be able to proceed. But you could provide

42:04

some high level guidance to the robot,

42:06

basically low bandwidth communications

42:09

to steer it.

42:10

I watched that challenge with kind of

42:13

tears in my eyes eating popcorn. I

42:19

wasn't personally losing hundreds

42:21

of thousands, millions of dollars and

42:24

many years of incredible hard work by

42:27

some of the most brilliant roboticists in the world. So

42:29

that was why the tragic, why the tears

42:31

came. So anyway, what have you,

42:34

just looking back to that time, what have you learned

42:36

from that experience?

42:38

Maybe if you could describe what it was,

42:41

sort of the setup for people who haven't seen it.

42:44

Well, so there was a contest where a

42:46

bunch of different robots

42:48

were asked to do a series of tasks. Some

42:51

of those that you mentioned, drive a vehicle,

42:53

get out, open a door, go

42:55

identify a valve, shut a valve, use

42:58

a tool to maybe cut a hole in

43:00

a

43:02

surface and then crawl

43:05

over some stairs and maybe some

43:07

rough terrain. So it

43:09

was,

43:10

the idea was have a

43:12

general purpose robot that could do lots of different

43:15

things. It

43:18

had to be mobility and manipulation, onboard

43:20

perception.

43:22

And there was a contest which

43:25

DARPA likes. At the time it was running

43:28

sort of a follow-on to the Grand

43:30

challenge, which was let's try

43:32

to push vehicle autonomy along,

43:35

right? They encourage people

43:37

to build autonomous cars. So

43:39

they're trying to

43:39

basically push an industry forward. And

43:45

we were asked, our role in this was to build

43:47

a humanoid at the time it was our

43:50

sort of first generation Atlas robot.

43:54

And we built

43:56

maybe 10 of them. I don't remember the exact

43:59

number. And DARPA

44:01

distributed those to various teams

44:04

that sort of won a contest,

44:08

showed that they could program

44:11

these robots and then use them to compete

44:13

against each other. And then other robots were introduced

44:15

as well. Some teams built their own robots. Carnegie

44:19

Mellon, for example, built their own robot. And

44:22

all these robots competed to see who could sort of

44:24

get through this maze of the

44:27

fastest. Again,

44:29

I think the purpose was to kind of push the whole industry

44:32

forward.

44:33

We provided the robot and

44:35

some baseline software, but

44:38

we didn't actually compete as a participant where

44:41

we were trying to drive

44:44

the robot through this maze. We were

44:46

just trying to support the other teams. It

44:49

was humbling because it was really a hard task. And

44:53

honestly, the robots, the tears were because mostly

44:56

the robots didn't do it. They fell down

44:58

repeatedly. It

45:02

was hard to get through this

45:03

contest. Some did,

45:05

and they were rewarded and

45:07

won. But it was humbling because

45:10

of just how hard, these tasks weren't all that

45:12

hard. A person could have done it very easily, but it

45:15

was really hard to get the robots to do it. The

45:18

general nature of it, the variety

45:20

of it. And also that I don't

45:23

know if the tasks were...

45:25

The

45:28

tasks in themselves helped

45:30

us understand what is difficult and what is not. I

45:33

don't know if that was obvious before the contest

45:35

was designed. So you kind of try to

45:37

figure that out. And I think Atlas

45:40

is really a general robot platform,

45:43

and it's perhaps not best suited for

45:45

the specific tasks of that contest.

45:48

Just for example,

45:50

probably the hardest task is not the

45:52

driving of the car, but getting in

45:54

and out of the car. And Atlas

45:57

probably... If you were to design

45:59

a robot...

45:59

that can get into the car

46:02

easily and get out easily, you probably would not

46:04

make Atlas for that particular car. Yeah,

46:07

the robot was a little bit big to get in and

46:09

out of that car. Right, it doesn't fit,

46:11

yeah. This is the curse of a general purpose

46:13

robot, that they're not perfect at any

46:15

one thing,

46:16

but they might be able to do a wide variety

46:18

of things. And that is

46:21

the goal at the

46:23

end of the day. I think

46:26

we all wanna build general purpose robots

46:29

that can be used for lots of different activities, but

46:32

it's hard. And

46:34

the wisdom in

46:37

building successful robots, up

46:39

until this point, have been, go build

46:41

a robot for a specific task and it'll

46:43

do it very well. And as long as you

46:46

control that environment, it'll operate

46:48

perfectly.

46:49

But robots need to be able to deal with uncertainty.

46:52

If they're gonna be useful to us in the future,

46:55

they need to be able to deal with unexpected

46:58

situations. And that's sort of the goal

47:00

of a general purpose or a multi-purpose robot.

47:03

And that's just darn hard. And so some

47:05

of the others, these curious little failures, like

47:07

I remember one of the robots,

47:10

the first

47:12

time you start to try to push on the world

47:15

with a robot, you

47:17

forget that the world pushes back and

47:20

will push you over if you're not ready for it. And

47:23

the robot reached to grab

47:25

the door handle. I think it missed the grasp

47:27

of the door handle, was expecting

47:29

that its hand was on the door handle. And so

47:32

when it tried to turn the knob, it just threw

47:34

itself over. It didn't realize, oh, I

47:36

had missed the door handle. I didn't have, I

47:39

didn't, I was expecting a force back from

47:41

the door. It wasn't there.

47:42

And then I lost my balance. So these

47:44

little simple things that you and I would

47:47

take totally for granted and deal with the

47:50

robots don't know how to deal with yet. And

47:52

so you have to start to deal with all of those

47:54

circumstances. Well, I think a

47:57

lot of us experienced this in...

47:59

even when sober but drunk

48:02

too, sort of you

48:04

pick up a thing and expect it to be,

48:06

what is it, heavy and it turns out

48:08

to be light. Yeah, and then you woo. Oh

48:11

yeah, and then so the same, and I'm

48:13

sure if your depth perception for whatever reason is

48:15

screwed up, if you're drunk

48:17

or some other reason and then you think

48:19

you're putting your hand on the table

48:21

and you miss it, I mean it's the same kind of situation.

48:25

But there's a- Which is why you need to

48:27

be able to predict forward just a little bit. And

48:30

so that's where this model predictive control stuff

48:32

comes in. Predict forward what you think's going

48:34

to happen. And then if that

48:36

does happen, you're in good shape. If something else happens,

48:38

you better start predicting again. So like

48:41

regenerate a plan when

48:44

you don't. I mean

48:47

that also requires a very fast

48:50

feedback loop of updating what

48:54

your prediction, how it matches to the actual real

48:56

world.

48:57

Yeah, those things have to run pretty quickly.
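
[Illustrative aside: "predict forward, and if something else happens, start predicting again" is the receding-horizon loop of model predictive control. Below is a heavily simplified sketch, a 1-D point mass replanning toward a target, not the whole-body solver being described; all values are assumptions.]

    dt, horizon = 0.05, 10       # 20 Hz replanning with a 0.5 s lookahead (assumed)
    target = 1.0                 # desired position, m

    def rollout(x, v, accel):
        """Predict forward with a point-mass model under a constant acceleration."""
        cost = 0.0
        for _ in range(horizon):
            v += accel * dt
            x += v * dt
            cost += (x - target) ** 2 + 0.01 * accel ** 2
        return cost

    def plan(x, v):
        """Search a coarse menu of accelerations, keep the cheapest predicted
        trajectory, and execute only its first step before replanning."""
        return min((-2.0, -1.0, 0.0, 1.0, 2.0), key=lambda a: rollout(x, v, a))

    x, v = 0.0, 0.0
    for step in range(60):
        a = plan(x, v)           # think ahead ...
        v += a * dt              # ... then act for one step
        x += v * dt
        if step == 30:
            v -= 0.5             # an unexpected shove; the next replan absorbs it
        if step % 10 == 0:
            print(f"t={step * dt:4.2f}s  x={x:+.2f} m  v={v:+.2f} m/s")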

48:59

What's the challenge of running things pretty quickly?

49:01

A thousand hertz of

49:04

acting and sensing

49:07

quickly. You know, there's a few different

49:09

layers of that. You want at the lowest

49:11

level, you like to run things typically

49:14

at around a thousand hertz, which means that

49:16

at each joint of the robot, you're measuring

49:19

position or force and then trying to

49:21

control your actuator, whether it's

49:23

a hydraulic or electric motor, trying

49:26

to control the force coming out of that actuator.

49:28

And you want to do that really fast,

49:31

something like a thousand hertz. And that means you

49:33

can't have too much calculation going

49:35

on at that joint. But

49:38

that's pretty manageable these days and it's

49:40

fairly common. And then there's another

49:42

layer that you're probably calculating, you

49:44

know,

49:45

maybe at a hundred hertz, maybe 10 times

49:47

slower, which is now starting to

49:49

look at the overall body motion and

49:52

thinking about the larger physics

49:55

of the robot. And

49:59

then there's yet another

49:59

loop that's probably happening a little bit slower,

50:02

which is where you start to bring your

50:04

perception and your vision and things

50:06

like that.

50:07

And so you need to run all of these

50:09

loops sort of simultaneously. You

50:11

do have to manage your computer

50:14

time so that you can squeeze

50:16

in all the calculations you need in

50:18

real time in a very consistent way. And

50:23

the amount of calculation we can

50:25

do is increasing as computers

50:27

get better, which means we can start to do more

50:29

sophisticated calculations. I can have

50:31

a more complex model doing

50:34

my forward prediction.

50:37

And that might allow me to do even

50:39

better predictions as I get better and better.
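
[Illustrative aside: the layering just described, roughly a 1 kHz joint-level servo, a ~100 Hz whole-body controller, and a slower perception loop, is often run as nested loops off one fast tick. A schematic sketch only; the perception rate and everything inside the placeholder functions are assumptions.]

    import time

    JOINT_HZ, BODY_HZ, PERCEPTION_HZ = 1000, 100, 20   # rates as discussed (perception rate assumed)

    def read_joint_sensors():      return {}     # placeholder: encoder/force readings
    def servo_joint_forces(cmd):   pass          # placeholder: low-level actuator control
    def whole_body_controller(s):  return {}     # placeholder: balance / momentum control
    def update_perception():       pass          # placeholder: vision, terrain mapping

    def control_loop(duration_s=0.01):
        tick = 0
        joint_cmd = {}
        t_next = time.perf_counter()
        while tick < duration_s * JOINT_HZ:
            state = read_joint_sensors()
            # 1 kHz: joint-level force/position servo, must stay cheap.
            servo_joint_forces(joint_cmd)
            # 100 Hz: whole-body physics, runs every 10th tick.
            if tick % (JOINT_HZ // BODY_HZ) == 0:
                joint_cmd = whole_body_controller(state)
            # ~20 Hz: perception and planning, every 50th tick.
            if tick % (JOINT_HZ // PERCEPTION_HZ) == 0:
                update_perception()
            tick += 1
            t_next += 1.0 / JOINT_HZ
            time.sleep(max(0.0, t_next - time.perf_counter()))

    control_loop()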

50:42

And it used to be, again, we had,

50:44

you know, 10 years ago,

50:46

we had to have pretty simple models

50:50

that we were running, you know, at those fast

50:52

rates because the computers weren't as capable

50:55

about calculating forward with

50:57

a sophisticated model. But as

51:00

computation gets better, we

51:02

can do more of that. What about the actual pipeline

51:05

of software engineering?

51:07

How easy is it to keep updating Atlas,

51:10

like to continue its development on it? So

51:12

how many computers are

51:14

on there? Is there a nice pipeline?

51:17

It's an important part of building

51:19

a team around it, which means,

51:22

you know, you need to also

51:24

have software simulation tools,

51:26

you know. So we

51:29

have always made

51:31

strong use of physics-based

51:33

simulation tools to do some

51:36

of this calculation, basically

51:38

test it in simulation before you put it on

51:40

the robot. But you also want the same

51:43

code that you're running in simulation to be the

51:45

same code you're running on the hardware. And

51:47

so even getting to the point where

51:50

it was the same code going from one to the

51:52

other,

51:53

we probably didn't really get that working until,

51:55

you know, a few years, several years ago. But

51:58

that was a, you know, that was a bit of a

51:59

milestone. And so you want to work,

52:02

certainly work these pipelines so that you can make it

52:04

as easy as possible and have a bunch of people

52:06

working in parallel, especially when we

52:09

only have, you know, for the Atlas

52:11

robots, the modern Atlas robots

52:13

at the company. And you know, we

52:15

probably have, you know, 40 developers,

52:18

they're all trying to gain access

52:20

to it. And so you need to share resources

52:22

and use some of these, some of the software

52:25

pipeline. Well, that's a really exciting step

52:27

to be able to run the exact same code and simulation

52:29

as on the actual robot. How

52:31

hard is it to do

52:35

realistic simulation,

52:37

physics based simulation of, of

52:40

Atlas such that, I mean, the

52:42

dream is like, if it works in simulation

52:44

works perfectly in reality, how hard is it to

52:47

sort of close, keep working on closing that gap?

52:49

The root of some of our physics based simulation

52:51

tools really started at MIT.

52:54

And we built some,

52:56

some good physics-based modeling tools there.

52:59

The early days of the company, we were trying

53:02

to develop those tools as a commercial product.

53:04

So we continued to develop them.

53:06

It wasn't a particularly successful commercial product.

53:09

But we ended up with some nice physics based simulation

53:11

tools so that when we started doing legged robotics,

53:13

again, we had a really nice tool to work with. And

53:16

the things we paid attention to were

53:18

things that weren't necessarily handled very

53:21

well in the commercial tools you could

53:23

buy off the shelf, like interaction

53:25

with the world, like foot ground contact.

53:28

So trying to model those contact events

53:32

well,

53:33

in a way that captured

53:36

the important parts of the interaction

53:40

was a really important element to

53:42

get right. And to also do in a way

53:44

that was computationally feasible and

53:47

could run fast. Because if you, if your simulation

53:50

runs too slow, you know, then your developers

53:52

are sitting around waiting for stuff to run and compile.
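
To make the contact-modeling point concrete, here is a toy penalty-style (spring-damper) foot-ground contact model of the kind many physics engines use. It is a generic illustration, not Boston Dynamics' simulator; the stiffness, damping, and friction numbers are arbitrary.

```python
import numpy as np

K = 5.0e4    # N/m, contact stiffness (arbitrary)
D = 5.0e2    # N*s/m, contact damping (arbitrary)
MU = 0.7     # Coulomb friction coefficient (arbitrary)

def foot_ground_force(pos, vel):
    """Contact force on one foot point; ground is the plane z = 0."""
    x, y, z = pos
    vx, vy, vz = vel
    if z >= 0.0:
        return np.zeros(3)                    # not in contact
    fz = max(K * (-z) - D * vz, 0.0)          # ground can push but never pull
    ft = -D * np.array([vx, vy])              # viscous tangential force...
    cap = MU * fz                             # ...clipped to the friction cone
    n = np.linalg.norm(ft)
    if n > cap and n > 0.0:
        ft *= cap / n
    return np.array([ft[0], ft[1], fz])

# Foot 2 mm into the ground, sliding forward while still moving down.
print(foot_ground_force(np.array([0.0, 0.0, -0.002]),
                        np.array([0.1, 0.0, -0.05])))
```

Stiff contact forces like these are exactly why such simulations need small time steps and careful numerics, which ties back to the point about keeping the tools fast.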

53:55

So it's always about efficient,

53:58

fast operations

53:58

as well. So

54:01

that's been a big part of it. I think developing

54:03

those tools in parallel to the development

54:06

of the platform and trying

54:08

to scale them has really been essential,

54:11

I'd say, to us being

54:13

able to assemble a team of people that could do this. Yeah,

54:16

how to simulate contact, not just

54:18

for foot-ground contact but for

54:20

manipulation because

54:23

don't you want to

54:24

model

54:26

all kinds of surfaces? Yeah, so it

54:28

will be even more complex with

54:30

manipulation because there's a lot more going on

54:33

and you need to capture things

54:36

slipping and moving in your

54:39

hand. It's

54:41

a level of complexity that I think goes

54:43

above foot-ground

54:46

contact when you really start doing

54:49

dexterous manipulation. So there's challenges

54:51

ahead still. So how far are we

54:53

away from me being able to walk with Atlas

54:56

in the sand along the beach and

54:58

us both drinking a beer?

55:04

Maybe Atlas could spill his beer

55:06

because he's got nowhere to put it. Atlas

55:10

could walk on the sand. So can it? Yeah,

55:13

I mean, have we really had him out on

55:15

the beach? We take them outside

55:17

often, rocks, hills, that

55:19

sort of thing, even just around our lab in Waltham.

55:23

We probably haven't been on the sand but I

55:26

don't doubt that we could deal with

55:29

it. We

55:29

might have to spend a little bit of time to sort of make that work

55:32

but we

55:35

had to take

55:37

Big Dog to Thailand years

55:39

ago and we

55:42

did this great video of the robot

55:44

walking in the sand, walking into

55:47

the ocean up to, I

55:49

don't know, its belly or something like that

55:51

and then turning around and walking out, all while

55:53

playing some cool beach music. Great

55:56

show but then we didn't really clean the robot

55:58

off and the saltwater was really hard

55:59

on it, so we put it in a box,

56:02

shipped it back. By the time it came back, we

56:04

had some problems with corrosion. So

56:07

it's a salt water. It's not like... Salt

56:09

stuff. It's not like sand getting into the components

56:11

or something like this. But I'm sure if

56:14

this is a big priority, you can make

56:16

it waterproof. Right, right. That

56:18

just wasn't our goal at the time. Well,

56:20

it's a personal goal of mine to walk along the

56:23

beach. But it's a human

56:25

problem too. You get sand everywhere, it's just

56:27

a giant mess. So

56:29

soft surfaces are OK. So I

56:31

mean, can we just linger on the robotics

56:34

challenge? There's a pile of rubble there

56:35

to walk over. Is

56:40

that... How

56:43

difficult is that task? In

56:45

the early days of developing Big Dog, the

56:47

loose rock was the epitome of

56:49

the hard walking surface. Because you step

56:51

down and then the rock, and you have these little

56:54

point feet on the robot, and

56:56

the rock can roll. And

56:59

you have to deal with that last minute

57:01

change in your foot

57:03

placement. Yeah, so you step on the thing,

57:06

and that thing responds to you stepping on it. Yeah,

57:08

and it moves where your point of support is.

57:11

And so it's really... That became

57:14

kind of the essence of the test. And

57:16

so that was the beginning of us starting

57:18

to build rock piles in our

57:20

parking lots. And

57:23

we would actually build boxes full of rocks

57:25

and bring them into the lab. And then

57:27

we would have the robots walking across these boxes

57:29

of rocks because that became the

57:31

essential test.
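
One classic way to absorb that kind of last-minute disturbance is Raibert-style foot placement: step farther in the direction the body is drifting. Here is a one-dimensional toy version with invented gains, purely for illustration and not the Big Dog controller.

```python
# Toy Raibert-style foot placement: if a rock rolls and the body starts
# drifting, the next footstep lands farther in the drift direction.
STANCE_TIME = 0.3   # s, nominal stance duration (invented)
K_VEL = 0.05        # s, gain on velocity error (invented)

def next_foot_target(hip_x, body_vel, desired_vel):
    """x-position for the next footstep along the direction of travel."""
    neutral = hip_x + body_vel * STANCE_TIME / 2.0   # step under the hip's path
    return neutral + K_VEL * (body_vel - desired_vel)

# Hip at 0 m, drifting forward at 1.2 m/s when 1.0 m/s was wanted:
print(next_foot_target(0.0, 1.2, 1.0))   # steps slightly farther forward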

57:33

So you mentioned Big Dog. Can

57:35

we maybe take a stroll through the history

57:38

of Boston Dynamics? So what

57:41

and who is Big Dog? By the way, is who...

57:45

Do you try not to anthropomorphize

57:48

the robots? Do you try not to...

57:50

Do you try to remember that they're... This is like

57:52

the division I have, because for me it's impossible.

57:55

For me, there's a magic to the

57:58

being that is a robot. It is not human. But

58:00

it is the

58:03

same magic that

58:06

a living being has when it moves about the world

58:08

is there in the robot. So I

58:11

don't know what question I'm asking, but should

58:13

I say what or who, I guess. Who

58:15

is Big Dog? What is Big Dog? Well,

58:18

I'll say to address the meta question,

58:21

we don't try to draw hard lines around

58:23

it being an it or a him or a

58:25

her.

58:26

It's okay, right?

58:30

I think part of the magic of these kinds

58:32

of machines is by nature

58:35

of their organic movement,

58:37

of their dynamics, we

58:40

tend to want to identify

58:42

with them. We tend to look at them and attribute

58:46

maybe feeling to that because we've

58:49

only seen things that move like

58:51

this that were alive. And

58:54

so this is an opportunity.

58:56

It means that you could have

59:00

feelings for a machine

59:02

and people have feelings for their cars. They

59:05

get attracted to them, attached to them. So

59:07

that inherently

59:09

could be a good thing as long as we manage what

59:11

that interaction is. So

59:14

we don't put strong boundaries around

59:16

this and ultimately think it's a benefit,

59:19

but it also can be a

59:21

bit of a curse because I think people look

59:23

at these machines and

59:25

they attribute a level of intelligence that the machines

59:28

don't have. Why? Because again,

59:30

they've seen things move like this that were living

59:33

beings, which are intelligent.

59:36

And so they want to attribute intelligence to the robots

59:39

that isn't appropriate yet, even though they move

59:42

like an intelligent being. But you try

59:44

to acknowledge that the anthropomorphization

59:46

is there and try to,

59:49

first of all, acknowledge that it's there. And

59:52

have a little fun with it. You know, our most

59:55

recent video, it's just kind

59:57

of fun to...

59:59

Look at the robot. We started

1:00:01

off the video with Atlas

1:00:05

kind of looking around for where the bag

1:00:08

of tools was, because the guy up on the scaffolding

1:00:10

says, send me some tools. And Atlas

1:00:12

has to kind of look around and see where they

1:00:14

are. And there's a little personality

1:00:16

there. That

1:00:18

is fun, it's entertaining, it makes our jobs

1:00:20

interesting. And I think in the long run, can

1:00:23

enhance interaction between humans

1:00:25

and robots in a way that isn't

1:00:27

available to machines that don't move that

1:00:29

way. This is something to me personally, it's very

1:00:31

interesting.

1:00:34

I happen to have a lot of legged

1:00:36

robots. I

1:00:39

hope to have a lot of spots in

1:00:41

my possession. I'm

1:00:43

interested in celebrating robotics and celebrating

1:00:46

companies. And I also don't want to, companies

1:00:48

that do incredible stuff like Boston Dynamics. And

1:00:51

there's,

1:00:53

you know, I'm a little crazy. And

1:00:55

you say you don't want to, you

1:00:58

want to align, you want to help the company.

1:01:00

Because I ultimately want a company

1:01:02

that Boston Dynamics to succeed. And part

1:01:05

of that we'll talk about, success kind

1:01:07

of requires making money. And so

1:01:09

the kind of stuff I'm particularly

1:01:12

interested in may not be the thing

1:01:14

that makes money in the short term. I can make

1:01:16

an argument that it will in the long term. But the

1:01:18

kind of stuff I've been playing with is

1:01:21

a robust way of

1:01:23

having the quadrupeds, the robot

1:01:26

dogs, communicate emotion with

1:01:28

their body movement. The same kind of stuff you

1:01:30

do with a dog. But not

1:01:32

hard coded,

1:01:33

but in a robust way. And

1:01:36

be able to communicate excitement or

1:01:38

fear, boredom, all these

1:01:40

kinds of stuff. And I think as

1:01:42

a base layer of

1:01:44

function of behavior to

1:01:47

add on top of a robot, I think that's a really powerful

1:01:49

way to make

1:01:52

the robot more usable for humans, for whatever

1:01:54

application. It's gonna be really important. And

1:01:57

it's a thing we're beginning to

1:01:59

pay attention to.

1:01:59

We really

1:02:02

want to start... A differentiator for

1:02:04

the company has always been we really want

1:02:06

the robot to work. We want it to be useful.

1:02:11

Making it work at first meant the

1:02:13

legged locomotion really works. It can really

1:02:16

get around and it doesn't fall down. But

1:02:20

beyond that, now it needs to be a useful

1:02:22

tool and our customers are,

1:02:25

for example, factory owners, people who

1:02:27

are running a process manufacturing

1:02:30

facility and the robot needs to be able to get

1:02:32

through this complex facility in a reliable

1:02:34

way, taking measurements.

1:02:38

We need for people

1:02:40

who are operating those robots to understand what

1:02:43

the robots are doing. If the robot

1:02:45

needs help or

1:02:48

is in trouble or something, it needs

1:02:50

to be able to communicate with

1:02:52

a physical indication of

1:02:54

some sort

1:02:55

so that a person looks at the robot and

1:02:57

goes, oh, I know what that robot is doing. The

1:02:59

robot is going to go take measurements of

1:03:02

my vacuum pump with its thermal

1:03:04

camera. You

1:03:06

want to

1:03:07

be able to indicate that. Or

1:03:08

even just the

1:03:10

robot is about to turn in front

1:03:13

of you and maybe indicate that it's

1:03:15

going to turn and so you sort of see and

1:03:17

can anticipate its motion. This

1:03:20

kind of communication is going to become more and

1:03:22

more important. It wasn't sort of

1:03:24

our starting point,

1:03:26

but now that the robots are really

1:03:29

out in the world and we have about

1:03:31

a thousand of them out with customers right

1:03:33

now,

1:03:35

this layer of physical

1:03:38

indication I think is going to become more

1:03:40

and more important.

1:03:41

We'll talk about where it goes because

1:03:44

there's a lot of interesting possibilities, but if we

1:03:46

can return back to the origins of Boston

1:03:48

Dynamics, with the more

1:03:50

research, the R&D side

1:03:52

before we talk about

1:03:54

how to build robots at scale.

1:03:56

It's Big Dog. Who's

1:03:58

Big Dog? So the

1:04:01

company started in 1992 and in probably 2003, I believe

1:04:03

is when we took a contract

1:04:06

from

1:04:15

DARPA, so basically 10

1:04:17

years, 11 years. We

1:04:19

weren't doing robotics. We did a little bit of robotics

1:04:22

with Sony. They had an IBO,

1:04:25

they're IBO robot. We were developing some software

1:04:27

for that that kind of got us a little bit involved

1:04:29

with robotics again. Then there's

1:04:31

this opportunity to do a DARPA contract

1:04:34

where they wanted to build a

1:04:37

robot dog. And

1:04:39

we won a contract to

1:04:41

build that. And so that was the genesis

1:04:44

of Big Dog. And it

1:04:46

was a quadruped. It was the first time we built

1:04:48

a robot that had everything on board that you

1:04:50

could actually take the robot out into the wild

1:04:53

and operate it. So it had an onboard power plant,

1:04:55

it had onboard computers, it

1:04:57

had hydraulic

1:04:58

actuators that

1:05:00

needed to be cooled. So we had cooling systems

1:05:02

built in. Everything integrated

1:05:04

into the robot.

1:05:06

And

1:05:07

that was a pretty rough start. It was 10

1:05:10

years that we were not a robotics

1:05:12

company, we were a simulation company. And then we had

1:05:14

to build a robot in about a year. So that

1:05:17

was a little bit of a rough transition. I mean,

1:05:20

can you

1:05:22

just comment on the roughness of that transition? Big

1:05:26

Dog, I mean, this is this big quadruped

1:05:31

four-legged robot. We built

1:05:33

a few different versions of them. But the first one,

1:05:35

the very earliest ones, you know, didn't work very well.

1:05:38

We would take them out and it was

1:05:40

hard to get, you

1:05:43

know, a go-kart engine driving

1:05:45

a hydraulic. Oh, is that what it was? And,

1:05:48

you know, having that all work while

1:05:51

trying to get, you know, the

1:05:53

robot to stabilize itself. So what

1:05:56

was the power plant? What was the engine? It

1:05:58

seemed like, my vague recollection,

1:05:59

I don't know,

1:06:03

it felt very loud and aggressive

1:06:06

and kind of thrown together. It absolutely

1:06:09

was, right? We weren't trying to

1:06:12

design the best robot hardware at the time. And

1:06:16

we wanted to buy an off-the-shelf engine. And so

1:06:19

many of the early versions of Big

1:06:21

Dog had literally go-kart

1:06:23

engines or something like that. Gas powered? Like

1:06:26

a gas powered two-stroke engine. And

1:06:29

the reason why it was two-stroke

1:06:29

is two-stroke engines are lighter weight.

1:06:33

But they're also, and we generally

1:06:35

didn't put mufflers on them because we're trying to save the weight.

1:06:38

We didn't care about the noise. And some of these

1:06:40

things were horribly loud. But

1:06:42

we're trying to manage weight because managing

1:06:44

weight in a legged robot is always important

1:06:47

because it has to carry everything. That

1:06:49

said that thing was big. Well, I've

1:06:52

seen the videos of it. I mean, the

1:06:54

early versions stood about, I

1:06:56

don't know, belly high, chest high.

1:06:58

They probably weighed

1:06:59

maybe a couple of hundred

1:07:02

pounds. But over

1:07:04

the course of probably five years,

1:07:08

we were able to get that robot

1:07:12

to really manage a remarkable

1:07:15

level of rough terrain. So we started

1:07:17

out with just walking on the flat. And then we started walking

1:07:19

on rocks and then inclines and then mud

1:07:22

and slippery mud. And

1:07:24

by the end of that program,

1:07:26

we were convinced that legged

1:07:29

locomotion in a robot could actually

1:07:31

work because going into it, we

1:07:34

didn't know that. We had built quadrupeds

1:07:36

at MIT, but

1:07:38

they used a giant hydraulic

1:07:41

pump in the lab. They use a giant

1:07:43

computer that was in the lab. They're always tethered

1:07:45

to the lab. This was the

1:07:47

first time something that was self-contained,

1:07:51

walked around in the world and

1:07:54

balanced. And the purpose was

1:07:56

to prove to ourself that the legged locomotion

1:07:58

could really work.

1:07:59

Big Dog really cut that open for

1:08:02

us. And it was the beginning of what

1:08:04

became a whole series of robots. So once

1:08:06

we showed to DARPA that you could make a legged

1:08:09

robot that could work, there was a

1:08:11

period at DARPA where robotics got really

1:08:13

hot and there was lots of different programs.

1:08:16

And we were able to build other

1:08:18

robots. We built other quadrupeds,

1:08:21

like LS3,

1:08:23

designed to carry heavy loads.

1:08:25

We built Cheetah, which was designed

1:08:28

to explore what are the limits to how fast

1:08:30

you can run. We began

1:08:32

to build sort of a portfolio of machines

1:08:37

and software that let us

1:08:39

build not just one robot, but a whole

1:08:41

family of robots. To push the limits in all kinds

1:08:43

of directions. Yeah, and to discover those principles.

1:08:46

You know, you asked earlier about the art and science

1:08:48

of a legged locomotion. We

1:08:50

were able to develop principles of legged locomotion

1:08:53

so that we knew how to build a

1:08:55

small legged robot or a big one. Leg

1:08:58

length, you know, was now a parameter

1:09:00

that we could play with. Payload

1:09:02

was a parameter we could

1:09:04

play with. So we built the LS3, which

1:09:06

was an 800 pound robot designed to carry

1:09:08

a 400 pound payload. And

1:09:10

we learned the design rules, basically developed

1:09:13

the design rules. How do you scale

1:09:15

different robot systems to, you know,

1:09:18

their terrain, to their walking speed,

1:09:21

to their payload?

1:09:22

So when

1:09:24

was Spot born?

1:09:27

Around 2012 or so. So

1:09:33

again, almost 10 years into sort of a run

1:09:35

with DARPA where we built a bunch

1:09:37

of different quadrupeds. We had sort of a different

1:09:40

thread where we started building humanoids. We

1:09:45

saw that probably an end was coming

1:09:48

where the government was gonna kind of back

1:09:50

off from a lot of robotics investment. And

1:09:55

in order to maintain progress,

1:09:57

we just deduced that,

1:10:00

well, we probably need to sell ourselves to somebody

1:10:02

who wants to continue to invest in this

1:10:04

area. And that was Google. And

1:10:07

so

1:10:08

at Google,

1:10:09

we would meet regularly with Larry Page

1:10:12

and Larry just started asking us, you know, what's

1:10:15

your product going to be? And you

1:10:17

know, the logical

1:10:19

thing, the thing that we had the most history

1:10:21

with that we wanted to continue

1:10:23

developing was our quadruped,

1:10:26

but we knew it needed to be smaller. We knew it couldn't have

1:10:28

a gas engine. We thought it probably

1:10:31

couldn't be hydraulically actuated. So

1:10:33

that began the process of

1:10:36

exploring if we could migrate to

1:10:38

a smaller electrically actuated robot.

1:10:42

And that was really the genesis of SPOT.

1:10:45

So not a gas engine and

1:10:47

the actuators are electric. Yes. So

1:10:50

can you maybe comment on what it's like

1:10:52

at Google with working

1:10:54

with Larry Page, having those meetings

1:10:57

and thinking of what will a robot look

1:10:59

like

1:11:00

that could be built

1:11:02

at scale? What like starting to think

1:11:04

about a product? Larry

1:11:07

always liked the toothbrush

1:11:09

test. He wanted products that you used every

1:11:11

day.

1:11:15

What they really wanted was, you know,

1:11:18

a consumer level product,

1:11:20

something that would work in your house.

1:11:22

We

1:11:24

didn't think that was the right next thing to do

1:11:27

because to be a consumer level product

1:11:29

cost is going to be very important.

1:11:32

Probably needed to cost a few thousand

1:11:34

dollars. And we were

1:11:36

building these machines that cost hundreds of thousands

1:11:38

of dollars, maybe a million dollars to build. Of

1:11:41

course, we were only building

1:11:42

two,

1:11:44

but we didn't see how to get all the way to this

1:11:46

consumer level product in a short

1:11:48

amount of time. And

1:11:51

he

1:11:51

suggested that we make the

1:11:53

robots really inexpensive. And

1:11:56

part of our philosophy has always been build

1:11:59

the best hardware you can. Make

1:12:02

the machine operate well

1:12:05

so that you're trying to

1:12:07

solve, you know, discover

1:12:10

the hard problem that you don't know about. Don't

1:12:13

make it harder by building a crappy machine, basically.

1:12:15

Build the best machine you can. There's

1:12:18

plenty of hard problems to solve that are going to have to do

1:12:20

with, you know, underactuated systems and

1:12:22

balance. And so we wanted

1:12:24

to build these high quality machines still.

1:12:26

And we thought that was important for us

1:12:28

to continue learning about really

1:12:31

what the important parts are

1:12:33

that make robots work.

1:12:35

And so there was a

1:12:37

little bit of a philosophical difference

1:12:40

there. And so ultimately,

1:12:42

that's why we're building robots for the industrial

1:12:45

sector now. Because the industry

1:12:48

can afford a more expensive machine because,

1:12:50

you know, their productivity depends

1:12:53

on keeping their factory going. And so if

1:12:56

spot costs, you know, $100,000 or

1:12:59

more, that's not such a big expense

1:13:01

to them. Whereas at the consumer level,

1:13:04

no one's going to buy a robot like that.

1:13:06

And I

1:13:07

think we might eventually get to a consumer level

1:13:09

product that will be that cheap. But

1:13:11

I think the path to getting there needs to

1:13:13

go through these really nice machines. So

1:13:16

we can then learn how to simplify. So

1:13:18

what can you say to the

1:13:21

engineering challenge of

1:13:23

bringing down cost of

1:13:26

a robot? So that

1:13:28

presumably when you try to build the robot at scale,

1:13:30

that also comes into play when you're trying to make

1:13:32

money on a robot, even in the industrial

1:13:35

setting. But how interesting,

1:13:37

how challenging of

1:13:39

a thing is

1:13:41

that? In particular,

1:13:43

probably new to an R&D company.

1:13:46

Yeah, I'm glad you brought that last part up. The

1:13:48

transition from an R&D company to a commercial

1:13:51

company, that's the thing you worry

1:13:53

about, you know, because you've got these engineers who

1:13:55

love hard problems, who want to figure out how to

1:13:57

make robots work. And you don't know

1:13:59

if you

1:13:59

have engineers that want to work on

1:14:02

the quality and reliability and cost

1:14:04

that is ultimately required.

1:14:07

And indeed, we have brought on a lot

1:14:10

of new people who are inspired by those problems,

1:14:13

but the big takeaway lesson for me is

1:14:16

we have good people. We have engineers who

1:14:19

want to solve problems, and

1:14:21

the quality and cost and manufacturability

1:14:24

is just another kind of problem. And

1:14:26

because they're so invested in what

1:14:28

we're doing,

1:14:29

they're interested in and will go work on

1:14:32

those problems as well. And

1:14:34

so I think we're managing that transition

1:14:37

very well. In fact, I'm really pleased that,

1:14:40

I mean, it's

1:14:42

a huge undertaking, by the way, right? So

1:14:45

even having to

1:14:47

get reliability to where it needs to be, we

1:14:49

have to have fleets of robots that we're just

1:14:51

operating 24-7 in our offices

1:14:54

to go find those rare failures and

1:14:57

eliminate them. It's just a totally different

1:14:59

kind of activity than the research activity where

1:15:01

you get the one robot you

1:15:04

have

1:15:04

to work in a repeatable

1:15:06

way at the high stakes

1:15:09

demo. It's just very different. But

1:15:12

I think we're making remarkable progress, I

1:15:14

guess. So one of the cool things I got

1:15:16

a chance to visit Boston Dynamics,

1:15:18

and I mean, one

1:15:20

of

1:15:23

the things that's really cool is to see

1:15:25

a large number of robots moving about.

1:15:28

Because I think one of the things you notice

1:15:31

in the research environment

1:15:33

at MIT, for example, I don't think

1:15:35

anyone ever has a working robot for a prolonged

1:15:38

period of time. Exactly. So

1:15:40

most robots are just sitting there in a sad

1:15:44

state of despair waiting to be born,

1:15:46

brought to life for a brief moment of time. Just

1:15:49

to have, I just remember

1:15:52

there's a Spot robot that

1:15:54

had a cowboy hat on, and it was just walking randomly

1:15:57

for whatever reason. I don't

1:15:58

even know. But there's a kind of... a

1:16:00

sense of sentience to

1:16:03

it because it doesn't seem like anybody was supervising

1:16:05

it. It was just doing its thing. I'm going to stop

1:16:07

way short of the sentience. Sure. It

1:16:10

is the case that if you come to our office today

1:16:12

and walk around the hallways, you're

1:16:15

going to see a dozen robots just

1:16:17

walking around all the time.

1:16:21

That's really a reliability test

1:16:23

for us. We have these robots programmed

1:16:26

to do autonomous

1:16:27

missions, get up off their charging

1:16:29

dock, walk around the building, collect data

1:16:31

at a few different places, and go sit back down. We

1:16:34

want that to be a very reliable process

1:16:36

because that's what somebody who's running

1:16:39

a brewery, a factory, that's

1:16:42

what they need the robot to do. We

1:16:45

have to dog food our own robot. We have to test

1:16:47

it in that way.

1:16:50

On a weekly basis, we

1:16:52

have robots that are accruing something like 1,500

1:16:55

or maybe 2,000 kilometers

1:16:58

of walking and over 1,000 hours

1:17:00

of operation every week. That's

1:17:05

something that almost I don't think anybody else in the world

1:17:07

can do because, hey, you have to have a fleet of robots

1:17:09

to just accrue that much information. You

1:17:12

have to be willing to dedicate it to that

1:17:15

test. That's

1:17:18

essential. That's how you get the reliability. That's

1:17:20

how you get it. What about some of the cost cutting

1:17:23

from the manufacturing side? What have you

1:17:25

learned

1:17:25

from the manufacturing side of the transition

1:17:28

from R&D? We're

1:17:30

still learning a lot there. We're

1:17:32

learning how to cast parts instead

1:17:35

of milling it all out of billet

1:17:37

aluminum. We're

1:17:39

learning how to get plastic molded parts.

1:17:42

We're learning about how to control that

1:17:44

process so that you can build the same robot

1:17:47

twice in a row. There's a lot to learn

1:17:49

there, and we're only partway through that process.

1:17:53

We've set up a manufacturing facility in

1:17:56

Waltham. It's about a mile from

1:17:58

our headquarters. We're

1:18:00

doing final assembly and test of both spots

1:18:02

and stretches at that factory.

1:18:07

It's hard because to be honest,

1:18:10

we're still iterating on the design of the robot. As

1:18:12

we find failures from these reliability

1:18:14

tests, we need to go engineer changes. Those

1:18:17

changes need to now be propagated to the

1:18:19

manufacturing line. That's a hard process,

1:18:22

especially when you want to move as fast as we do. It's

1:18:26

been challenging and it makes

1:18:29

it hard for

1:18:29

the folks who are working supply chain who

1:18:32

are trying to get the cheapest parts for

1:18:34

us, it requires that you buy

1:18:36

a lot of them to make them cheap. Then we

1:18:38

go change the design from underneath them. They're like,

1:18:40

what are you doing? Getting

1:18:42

everybody on the same page here, that

1:18:45

we still need to move fast, but we also need to

1:18:47

try to figure out how to reduce costs. That's

1:18:50

one of the challenges of this migration

1:18:52

we're going through. Over the past few years,

1:18:54

challenges to the supply chain. I

1:18:57

imagine you've been a part of a bunch of stressful

1:18:59

meetings.

1:18:59

Things got more expensive and

1:19:02

harder to get. It's

1:19:05

all been a challenge. Is there still room for simplification?

1:19:07

Oh yeah, much more. These

1:19:10

are really just the first generation of these machines.

1:19:13

We're already thinking about what the next generation of spots

1:19:16

going to look like.

1:19:17

Spot was built as a platform. You

1:19:20

could put almost any sensor on it. We provided

1:19:23

data communications, mechanical

1:19:25

connections, power connections.

1:19:30

For example, in the applications that we're

1:19:32

excited about where you're monitoring

1:19:34

these factories for their health,

1:19:37

there's probably a simpler machine

1:19:39

that we could build that's really focused

1:19:42

on that use case. That's

1:19:45

the difference between the general purpose

1:19:47

machine or the platform versus the

1:19:49

purpose built machine. Even

1:19:52

in the factory, we'd still like the robot to do

1:19:54

lots of different tasks. If we

1:19:56

really knew on day one that we're going to be operating

1:19:59

in a factory with

1:19:59

these three sensors in it, we would have

1:20:02

it all integrated in a package that would be easier,

1:20:04

less expensive, and more

1:20:06

reliable. So we're contemplating

1:20:09

building, you know, a next generation of that machine.

1:20:11

So we should mention that, so SPOT,

1:20:14

for people who are somehow not familiar, so

1:20:17

it's a yellow robotic

1:20:19

dog, and

1:20:22

has

1:20:23

been featured in many dance videos. It

1:20:26

also has gained an arm. So

1:20:29

what can you say about the arm that SPOT has?

1:20:32

About the challenges of this design, and

1:20:34

the manufacture of it?

1:20:36

We think the future of mobile

1:20:38

robots is mobile manipulation.

1:20:41

That's where, you know,

1:20:43

in the past 10 years,

1:20:45

it was getting mobility to work, getting the legged

1:20:47

locomotion to work. If you ask, what's the

1:20:49

hard problem in the next 10 years? It's

1:20:52

getting a mobile robot to do useful manipulation

1:20:55

for you.

1:20:55

And so we wanted SPOT to have an arm

1:20:59

to experiment with those problems.

1:21:03

And the arm is

1:21:06

almost as complex as the robot itself,

1:21:09

you know, and it's

1:21:11

an attachable payload. It

1:21:14

has, you know, several motors and

1:21:16

actuators and sensors. It has a camera

1:21:19

in the end of its hand, so, you know, you can

1:21:21

sort of see something,

1:21:24

and the robot will control the

1:21:26

motion of its hand to go pick it up autonomously.
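
As a rough sketch of what closing in on a grasp with a wrist camera can look like, here is a toy eye-in-hand visual servoing step. The detector and the gains are placeholders of my own, and none of this is the Spot SDK.

```python
import numpy as np

K_IMG, K_RANGE = 0.5, 0.3     # invented proportional gains
GRASP_RANGE = 0.05            # m, close enough to close the gripper (invented)

def detect_target(image):
    """Placeholder detector: normalized image-plane errors and range to target."""
    return 0.0, 0.0, 0.04

def servo_step(image):
    """Turn the hand camera's view of the object into a small hand velocity."""
    ex, ey, rng = detect_target(image)
    # Move sideways/up-down to center the target, and forward to close range.
    v_cmd = np.array([K_RANGE * rng, -K_IMG * ex, -K_IMG * ey])
    done = abs(ex) < 0.01 and abs(ey) < 0.01 and rng < GRASP_RANGE
    return v_cmd, done

print(servo_step(image=None))   # with the dummy detector, we're ready to grasp
```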

1:21:28

So in the same way the robot walks and balances,

1:21:32

managing its own foot placement to stay balanced,

1:21:34

we want manipulation to be mostly

1:21:37

autonomous, where the robot, you indicate, okay,

1:21:39

go grab that bottle, and then the robot will just go

1:21:41

do it using the camera in its hand,

1:21:44

and then sort of closing in on the

1:21:46

grasp. But it's

1:21:49

a whole nother complex robot on top of a

1:21:51

complex-legged robot,

1:21:54

and of course we made the hand

1:21:56

look a little like a head, you

1:21:59

know, because, again, we want it

1:22:01

to be sort of identifiable. In

1:22:03

the last year, a lot

1:22:05

of our sales have been people who already

1:22:08

have a robot now buying an arm to add to

1:22:10

that robot. Oh,

1:22:12

interesting. And so the arm

1:22:14

is for sale. Oh yeah, oh yeah. It's

1:22:16

an option. What's the interface

1:22:19

like to work with the arm?

1:22:21

Like is it pretty, so are

1:22:23

they designed primarily, I guess

1:22:25

I'll just ask that question in general about robots

1:22:28

from Boston Dynamics. Is it designed to

1:22:31

be

1:22:32

easily and efficiently

1:22:34

operated remotely by a human being or

1:22:37

is there also the capability to

1:22:39

push towards autonomy?

1:22:41

We want both.

1:22:43

In the next version of

1:22:45

the software that we release, which

1:22:47

will be version 3.3, we're going

1:22:50

to offer the ability of, if

1:22:52

you have an autonomous mission for the robot, we're

1:22:55

going to include the option that it can go

1:22:57

through a door, which means it's going to have to have an arm

1:22:59

and it's going to have to use that arm to open the door. And

1:23:02

so that'll be an autonomous manipulation

1:23:04

task that just, you can program

1:23:07

easily with the robot strictly

1:23:10

through, we have a tablet interface. And

1:23:13

so

1:23:13

on the tablet, you sort of see the

1:23:15

view that Spot sees, you say,

1:23:17

there's the door handle, the hinges

1:23:20

are on the left and it opens in, the rest is

1:23:22

up to you. Take care of it. So

1:23:24

it just takes care of everything. Yeah. So

1:23:27

we want, and for a task

1:23:29

like opening doors, you can automate

1:23:31

most of that. And we've automated a few other tasks.
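
A hypothetical sketch of that "operator annotates, robot executes" split: the person marks the handle and hinge side, and an autonomous routine handles everything in between. The class and step names are invented for illustration; this is not the actual Spot tablet or SDK interface.

```python
from dataclasses import dataclass

@dataclass
class DoorAnnotation:
    """What the operator supplies by tapping on the tablet view."""
    handle_xyz: tuple   # handle position in the robot frame, meters
    hinge_side: str     # "left" or "right"
    swing: str          # "push" or "pull"

class FakeRobot:
    """Stand-in for a real robot client; each step just logs what it would do."""
    def do(self, step, **kw):
        print(step, kw)

def open_door(robot, ann):
    """Hypothetical autonomous routine: walking, grasping, and swinging the
    door are handled by the robot once the annotation is given."""
    robot.do("walk_to_standoff", target=ann.handle_xyz, standoff_m=0.6)
    robot.do("grasp_handle", at=ann.handle_xyz)
    robot.do("turn_handle")
    if ann.swing == "pull":
        robot.do("pull_open_and_sidestep", hinge=ann.hinge_side)
    else:
        robot.do("push_open", hinge=ann.hinge_side)
    robot.do("walk_through_doorway")
    robot.do("release_handle")

open_door(FakeRobot(), DoorAnnotation((1.2, 0.0, 1.0), "left", "pull"))
```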

1:23:34

We had a customer who

1:23:36

had a high powered

1:23:39

breaker switch, essentially. It's an

1:23:41

electric utility, Ontario

1:23:43

Power Generation.

1:23:45

And they have to, when they're

1:23:47

going to disconnect their power supply,

1:23:50

that could be a gas generator, could be a nuclear

1:23:52

power plant, from the grid, you have

1:23:54

to disconnect this breaker switch. Well,

1:23:56

as you can imagine, there's hundreds

1:23:59

or thousands of... amps and volts involved

1:24:01

in this breaker switch. And it's a dangerous

1:24:04

event, because occasionally you'll get what's called an

1:24:06

arc flash. As you just do this disconnect,

1:24:09

the power, the sparks jump across

1:24:11

and people die doing this. And

1:24:14

so Ontario Power Generation

1:24:16

used our Spot and the arm

1:24:19

through the interface

1:24:21

to operate this disconnect in

1:24:25

an interactive way. And they showed

1:24:27

it to us. And we were so excited

1:24:30

about it and said, you know, I bet we can automate

1:24:32

that task. And so we got some

1:24:34

examples of that breaker switch. And

1:24:37

I believe in the next generation of the software, now

1:24:39

we're gonna deliver back to Ontario Power Generation.

1:24:42

They're gonna be able to just point the robot at

1:24:45

that breaker. They'll be out, they'll indicate

1:24:48

that's the switch. There's sort of two actions

1:24:50

you have to do. You have to flip up this

1:24:52

little cover, press a button, then

1:24:54

get a ratchet, stick it into

1:24:57

a socket and literally

1:24:59

unscrew this giant breaker

1:25:01

switch. So there's a bunch of different tasks. And

1:25:04

we basically automated them so that the human

1:25:06

says, okay, there's the switch, go

1:25:08

do that part.

1:25:10

That right there is the socket where

1:25:12

you're gonna put your tool and you're gonna open it up.

1:25:15

And so you can remotely sort of indicate this on

1:25:17

the tablet and then the robot

1:25:19

just does everything in between. And it does

1:25:21

everything, all the coordinated movement of all the different

1:25:24

actuators that includes the body. It maintains

1:25:26

its balance, it walks itself

1:25:29

into position. So it's within reach

1:25:31

and the arm is in a position where

1:25:33

it can do the whole task. So it manages

1:25:36

the whole body. So how

1:25:39

does one become a big

1:25:40

enough customer to request features?

1:25:42

Cause I personally want a

1:25:44

robot that gets me a beer. I

1:25:47

mean, that has to be like one of the most

1:25:49

requested, I suppose, in the industrial setting.

1:25:51

That's a non-alcoholic

1:25:53

beverage.

1:25:56

Of picking up objects and bringing the objects

1:25:58

to you. We love working with. customers

1:26:00

who have challenging problems like this. And

1:26:03

this one in particular, because we felt like

1:26:06

what they were doing, A, it was a safety feature.

1:26:08

B, we saw that

1:26:10

the robot could do it because

1:26:12

they teleoperated it the first time. Probably took

1:26:14

them an hour to do it the first time, right? But

1:26:17

the robot was clearly capable. And

1:26:19

we thought, oh, this is a great problem for us

1:26:21

to work on to figure out how to automate

1:26:23

a manipulation task. And so we took it on,

1:26:26

not because we were going to make a bunch of money from

1:26:28

it in selling the robot back to them, but

1:26:30

because it motivated us to go solve

1:26:33

what we saw as the next logical step.

1:26:36

But many of our customers, in fact,

1:26:38

we try to, our bigger

1:26:41

customers, typically ones who are

1:26:43

going to run a utility or a factory or something like

1:26:45

that,

1:26:46

we take that kind of direction from them. And

1:26:48

if they're, especially if they're going to buy 10 or 20

1:26:50

or 30 robots, and they say, I really needed

1:26:53

to do this. Well, that's exactly the right

1:26:55

kind of problem that we want to be working on. Note

1:26:58

to self, buy 10 spots and

1:27:02

aggressively push for beer manipulation.

1:27:05

I think it's fair to say it's notoriously difficult to

1:27:08

make a lot of money as a robotics company.

1:27:11

How can you

1:27:12

make money as a robotics company?

1:27:15

Can you speak to that? It seems that a

1:27:17

lot of robotics companies fail.

1:27:20

It's difficult to build robots. It's

1:27:23

difficult to build robots at a low

1:27:25

enough cost where customers,

1:27:27

even the industrial setting, want to purchase them. And it's difficult

1:27:30

to build robots that are useful,

1:27:32

sufficiently useful. So what can you speak

1:27:34

to? And Boston Dynamics has been

1:27:37

successful for many years

1:27:39

of finding a way to make money. Well, in

1:27:42

the early days, of course, you know, the money we

1:27:44

made was from doing contract R&D work. And

1:27:47

we made money, but

1:27:49

we weren't growing and we weren't selling

1:27:51

a product. And then we went

1:27:53

through several owners who had a vision

1:27:56

of not only developing

1:28:00

advanced technology, but eventually developing

1:28:02

products.

1:28:03

And so both Google and SoftBank

1:28:06

and now Hyundai had that

1:28:08

vision and were willing to

1:28:11

provide that investment.

1:28:16

Now our discipline is that we need

1:28:18

to go find applications that

1:28:20

are broad enough that you could

1:28:22

imagine selling thousands of robots. Because

1:28:24

it doesn't work if you don't sell thousands or tens

1:28:27

of thousands of robots. If you only sell hundreds,

1:28:30

you will commercially fail. And that's where

1:28:32

most of the small robot companies have died.

1:28:38

And that's a challenge because

1:28:40

A, you need to field the

1:28:42

robots, they need to start to become reliable.

1:28:45

And as we've said, that takes time and investment

1:28:48

to get there.

1:28:49

And so it really does take visionary

1:28:52

investment to get there. But we

1:28:54

believe that we are going to make money

1:28:57

in this industrial

1:29:00

monitoring space. Because

1:29:02

if

1:29:03

a chip fab,

1:29:04

if the line goes down because

1:29:08

a vacuum pump failed someplace, that

1:29:10

can be a very expensive process. It can be

1:29:12

a million dollars a day in lost production.

1:29:15

Maybe you have to throw away some of the product along

1:29:17

the way. And so the robot,

1:29:20

if you can prevent that by inspecting

1:29:23

the factory every single day,

1:29:25

maybe every hour if you have to, there's

1:29:28

a real return on investment there. But

1:29:30

there needs to be a critical mass

1:29:32

of these tasks. And we're focusing

1:29:35

on a few that we believe

1:29:37

are ubiquitous

1:29:39

in the industrial

1:29:41

production environment. And that's using

1:29:44

a thermal camera to

1:29:46

keep things from overheating, using an

1:29:48

acoustic imager to find compressed

1:29:50

air leaks, using visual cameras

1:29:53

to read gauges, measuring

1:29:55

vibration. These are standard things

1:29:57

that you do to prevent

1:29:59

unintended shutdown of a factory. And

1:30:03

this takes place in

1:30:05

a beer factory. We're working with AB

1:30:07

InBev. It takes place in chip fabs.

1:30:10

We're working with GlobalFoundries. It

1:30:12

takes place in electric utilities

1:30:14

and nuclear power plants. And so the same

1:30:17

robot

1:30:18

can be applied in all of these industries.

1:30:21

And

1:30:22

as I said, we have about, actually,

1:30:25

it's 1,100 spots out now. To

1:30:27

really get profitability, we

1:30:29

need to be at 1,000 a year, maybe 1,500 a year for that sort of part

1:30:32

of the business. So it still needs

1:30:37

to grow, but

1:30:39

we're on a good path. So I think that's totally

1:30:41

achievable. So the application should require

1:30:44

crossing that 1,000 robot barrier. It

1:30:46

really should.

1:30:47

Yeah. I want to mention our second

1:30:50

robot, Stretch. Yeah. Tell

1:30:52

me about Stretch. What's Stretch? Who's Stretch?

1:30:54

Stretch started differently than Spot.

1:30:57

Spot, we built because we had decades

1:30:59

of experience building quadrupeds. We

1:31:02

had it in our blood. We had to build a quadruped

1:31:04

product. But we had to go figure out what the application

1:31:07

was. And we actually discovered this

1:31:09

factory

1:31:11

patrol application, basically

1:31:14

preventative maintenance, by seeing what

1:31:16

our customers did with it.

1:31:18

Stretch was very different. We started knowing

1:31:20

that there were warehouses

1:31:22

all over the world. There's shipping

1:31:25

containers moving all around the

1:31:27

world full of boxes that are mostly being

1:31:29

moved by hand.

1:31:31

By some estimates, we think there's a trillion boxes,

1:31:34

cardboard boxes shipped around the world

1:31:36

each year, and a lot of it's done manually. It

1:31:39

became clear early on

1:31:41

that there was an opportunity for a mobile robot

1:31:43

in here to move boxes around. And

1:31:46

the commercial experience has been very different between

1:31:49

Stretch and with Spot. As

1:31:51

soon as we started talking to

1:31:53

people, potential customers, about

1:31:56

what Stretch was going to be used for, they immediately

1:31:58

started saying, oh, I'll buy. I'll buy.

1:31:59

that robot. In fact, I'm going to

1:32:02

put in an order for 20 right now.

1:32:04

We just started shipping the robot in January

1:32:08

after several years of development. This

1:32:10

year. This year. So our first deliveries

1:32:12

of stretch to customers were DHL

1:32:14

and Maersk in January.

1:32:16

We're delivering to Gap right now.

1:32:19

And we have about seven or eight other customers,

1:32:22

all who've already agreed in advance to

1:32:24

buy between 10 and 20 robots. And so we've

1:32:26

already got commitments for a couple of hundred

1:32:28

of these robots.

1:32:30

This one's going to go right. It's so obvious

1:32:33

that there's a need and we're not

1:32:35

just going to unload trucks. We're going to do any box

1:32:37

moving task in the warehouse. And so it too will

1:32:39

be a multi-purpose robot and

1:32:42

we'll eventually have it doing palletizing

1:32:44

or depalletizing or loading

1:32:47

trucks or unloading trucks.

1:32:49

There's definitely thousands of robots. There's probably

1:32:51

tens of thousands of robots of this in

1:32:53

the future. So it's going to be profitable. Can

1:32:56

you describe what stretch looks like? It

1:32:58

looks like a big strong

1:33:01

robot arm on a mobile base. The base

1:33:03

is about the size of a pallet. And

1:33:05

we wanted it to be the size of a pallet because that's what

1:33:08

lives in warehouses, right? Pallets of goods sitting

1:33:10

everywhere. So we needed to be able to fit in that space.

1:33:13

It's not a legged mobile. It's not a legged robot. And

1:33:15

so it was our first,

1:33:18

it was actually a

1:33:20

bit of a

1:33:23

commitment from us, a challenge for us to

1:33:25

build a non-balancing robot. To

1:33:28

do the much easier problem and

1:33:31

to put it on wheels. Well, because it wasn't going

1:33:33

to have this balance problem. And in fact,

1:33:36

the very first version of the

1:33:38

logistics robot we built was a balancing

1:33:40

robot, and that's called Handle. And

1:33:43

that thing was epic. All right. It's a beautiful

1:33:45

machine. It's an incredible machine.

1:33:51

I mean, it looks epic. It looks like

1:33:53

it's out of, I

1:33:55

mean, a sci-fi movie of some

1:33:57

sort. I mean, just can you actually just

1:33:59

linger on the design of that thing, because

1:34:01

that's another leap into something you probably haven't

1:34:03

done. It's a different kind of balancing. Yeah. So

1:34:06

let me, I love talking about the history of how

1:34:08

Handle came about because it

1:34:10

connects all of our robots actually. So

1:34:13

I'm going

1:34:15

to start with Atlas. When we

1:34:17

had Atlas getting fairly far along,

1:34:20

we wanted to understand, I was telling you earlier, the challenge

1:34:23

of the human form is that you have this mass

1:34:25

up high. And

1:34:27

balancing that

1:34:29

inertia, that mass up high is

1:34:32

its own unique challenge. And so we started

1:34:34

trying to get Atlas to balance standing

1:34:36

on one foot, like on a balance beam, using

1:34:39

its arms like this. And you know, you can do this,

1:34:41

I'm sure I can do this, right? Like if you're walking

1:34:43

a tightrope,

1:34:45

how do you do that balance?

1:34:47

So that's sort of controlling

1:34:49

the inertia, controlling the momentum of the

1:34:51

robot. We were starting to figure

1:34:53

that out on Atlas.

1:34:55

And so our first concept of

1:34:58

handle, which was a robot that was going to be on two

1:35:00

wheels, so it had to balance, but

1:35:02

it was going to have a big long arm so it could reach

1:35:05

a box at the top of a truck. And

1:35:08

it needed yet another

1:35:10

counterbalance,

1:35:12

a big tail to help

1:35:14

it balance while it was using its

1:35:16

arm. So the

1:35:18

reason why this robot sort of

1:35:20

looks epic, some people said it looked

1:35:22

like an ostrich or

1:35:25

maybe an ostrich moving around, was

1:35:29

the wheels, the legs, it has legs so

1:35:31

it can extend its legs. So

1:35:33

it's wheels on legs. We always wanted to build

1:35:35

wheels on legs. It had a tail and had this

1:35:38

arm and they're all moving simultaneously.

1:35:40

And in coordination to maintain balance because

1:35:42

we had figured out the mathematics of doing

1:35:44

this momentum control, how to maintain

1:35:47

that balance. And so part of the reason

1:35:50

why we

1:35:50

built this two-legged robot was

1:35:52

we had figured this thing out. We

1:35:54

wanted to see it in this kind of machine.
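
The real machine coordinates wheels, arm, and tail dynamically with momentum control; as a static stand-in, here is a toy calculation of how far a counterbalancing tail has to swing to keep the combined center of mass over the wheel axle when the arm reaches out. All masses and lengths are invented.

```python
import math

M_ARM, M_BOX, M_TAIL = 10.0, 10.0, 30.0   # kg (invented)
L_ARM, L_TAIL = 1.2, 1.0                  # m, lever arms from the axle (invented)

def tail_angle_for_balance(arm_angle_rad):
    """Tail angle (rad from vertical) that puts the total CoM over the axle.
    The body mass is assumed to sit directly over the axle, so it drops out."""
    x_arm = L_ARM * math.sin(arm_angle_rad)        # arm CoM offset (mid-link)
    x_box = 2.0 * x_arm                            # payload out at the arm tip
    moment = M_ARM * x_arm + M_BOX * x_box         # kg*m to be cancelled
    s = max(-1.0, min(1.0, -moment / (M_TAIL * L_TAIL)))
    return math.asin(s)

# Arm reaching 30 degrees forward with a box: the tail swings back ~37 degrees.
print(math.degrees(tail_angle_for_balance(math.radians(30))))
```

The wheels can also drive the base under the load, which is part of why everything has to move together at once.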

1:35:57

And we thought maybe this kind of machine would be good in

1:35:59

a warehouse. We built it. And it's a beautiful

1:36:01

machine. It moves in a graceful way,

1:36:03

like nothing else we've built, but

1:36:06

it wasn't the right machine for a

1:36:08

logistics application. We decided it was

1:36:10

too slow and couldn't pick

1:36:12

boxes fast enough, basically. And

1:36:15

it was doing beautifully with elegance. It

1:36:17

just wasn't efficient enough. So we

1:36:20

let it go. But

1:36:22

I think we will come back to that machine

1:36:24

eventually. The fact that it's possible, the

1:36:26

fact that you show that you could do so many things

1:36:29

at the same time in coordination and

1:36:31

so beautifully, there's something there. That

1:36:34

was a demonstration of what is possible.

1:36:36

Basically we made a hard decision and this was

1:36:38

really kind of a hard-nosed business decision.

1:36:41

It was, it was us

1:36:44

not doing it just for the beauty

1:36:46

of the mathematics or the curiosity,

1:36:48

but no, we actually need to build a business that

1:36:51

can make money in the long run. And

1:36:53

so we ended up building stretch, which

1:36:55

has a big heavy base with a giant battery

1:36:57

in the base of it that allows

1:36:59

it to run for two,

1:37:01

two shifts, 16 hours worth of operation.

1:37:04

And that big battery is sort

1:37:07

of helps it stay balanced, right? So you can move

1:37:09

a 50 pound box around with its arm and not tip

1:37:11

over. Um, it's

1:37:14

omni-directional. It can move in any direction. So

1:37:16

it has a nice suspension built

1:37:18

into it. So it can deal with

1:37:20

gaps or things on the floor and

1:37:22

roll over it. But it's a, but

1:37:24

it's not a balancing robot. It's a mobile

1:37:26

robot arm that can work to

1:37:29

carry a pick or place a box

1:37:31

up to 50 pounds anywhere in the warehouse.
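
A back-of-the-envelope check on why the heavy base matters: compare the tipping moment of a 50 pound box at full reach against the restoring moment of the base about the edge of its footprint. Only the 50 pound figure comes from the conversation; the base mass, reach, and footprint dimensions are invented.

```python
LB_TO_KG, G = 0.4536, 9.81

box_mass = 50 * LB_TO_KG      # kg, the 50 lb box mentioned above
base_mass = 600.0             # kg, invented: heavy base plus big battery
reach = 2.0                   # m, invented horizontal reach of the arm
half_footprint = 0.5          # m, invented distance from CoM to footprint edge

tipping = box_mass * G * (reach - half_footprint)      # box about the edge
restoring = base_mass * G * half_footprint             # base weight resists

print(f"tipping {tipping:.0f} N*m vs restoring {restoring:.0f} N*m ->",
      "stable" if restoring > tipping else "tips over")
```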

1:37:34

Take a box from point A to point

1:37:36

B anywhere. Palletize

1:37:38

depalletize. We're starting with unloading

1:37:41

trucks because there's so many trucks

1:37:43

and containers that where goods are shipped

1:37:45

and it's a brutal job. You know, in the summer, it

1:37:48

can be 120 degrees inside that container.

1:37:50

People don't want to do that job. Um,

1:37:54

and it's backbreaking labor, right? Again, these

1:37:56

can be up to 50 pound boxes. Um,

1:37:59

and so.

1:38:01

We feel like this is a productivity enhancer. And

1:38:04

for the people who used to do that job unloading

1:38:06

trucks, they're actually operating

1:38:08

the robot now. And so by building

1:38:10

robots that are easy to control,

1:38:14

and it doesn't take an advanced degree to manage,

1:38:17

you can become a robot operator. And so as

1:38:19

we've introduced these robots to both DHL

1:38:21

and Maersk and Gap, the warehouse

1:38:23

workers who were doing that manual labor

1:38:26

are now the robot operators. And so we see this

1:38:28

as ultimately a benefit to them as

1:38:30

well.

1:38:32

Can you say how much stretch costs?

1:38:35

Not

1:38:37

yet. But I will say

1:38:40

that when we engage

1:38:42

with our customers, they'll be able

1:38:44

to see a return on investment in

1:38:46

typically two years. Okay, so that's

1:38:48

something that you're constantly thinking about. And

1:38:51

I suppose you have to do the same kind of thinking with spot.

1:38:53

So it seems like with stretch, the

1:38:56

application is like directly

1:38:58

obvious. Yeah, it's a slam dunk. Yeah, and

1:39:01

so you have a little more flexibility.

1:39:03

Well, I think we know the target. We know

1:39:05

what we're going after. And with

1:39:07

spot, it took us a while to figure out what we were going after.

1:39:10

Well, let me return to that question about

1:39:14

maybe the conversation you were having a

1:39:17

while ago with Larry Page, maybe

1:39:19

looking to the longer future

1:39:21

of social robotics, of

1:39:24

using spot to connect with human

1:39:26

beings, perhaps in the home. Do you see a future

1:39:28

there? If we were to sort of hypothesize

1:39:32

or dream about a future where a spot like

1:39:34

robots are in the home as pets, a social

1:39:36

robot? We definitely think about it. And

1:39:39

we would like to get there. We

1:39:41

think the pathway to getting there is

1:39:44

likely through these industrial applications

1:39:47

and then mass manufacturing. Let's figure

1:39:49

out

1:39:50

how to build the robots, how

1:39:52

to make the software so that they can really do a broad

1:39:55

set of skills. That's gonna take

1:39:58

real investment

1:39:59

to get there.

1:39:59

Performance first, right? The principle

1:40:02

of the company has always been, really make the

1:40:04

robots do useful stuff. And

1:40:06

so, you know,

1:40:08

the social robot companies that

1:40:11

tried to start someplace else by just

1:40:13

making a cute interaction, mostly they

1:40:15

haven't survived. And

1:40:18

so

1:40:18

we think the utility really

1:40:21

needs to come first. And that means you

1:40:23

have to solve some of these hard problems.

1:40:26

And so to get there,

1:40:29

we're gonna go through the design and

1:40:32

software development in industrial, and

1:40:34

then that's eventually gonna let you reach a scale

1:40:36

that could then be addressed to a commercial consumer

1:40:39

level market. And so, yeah,

1:40:42

maybe we'll be able to build a smaller spot

1:40:44

with an arm that could really go get your beer for you.

1:40:48

But there's

1:40:48

things we need to figure out still. How

1:40:51

to safely, really safely. If

1:40:53

you're gonna be interacting with children,

1:40:56

you better be safe. And right

1:40:59

now, we count on a little bit

1:41:01

of standoff distance between the robot and people

1:41:03

so that you don't pinch your finger in the robot. So

1:41:06

you've got a lot of things you need to go solve before

1:41:08

you jump to that consumer level product.

1:41:11

Well, there's a kind of trade off in safety because

1:41:14

it feels like in the home, you

1:41:16

can fall. Like

1:41:20

you don't have to be as good at,

1:41:23

like you're allowed to fail in

1:41:25

different ways, in more ways, as

1:41:27

long as it's safe for the humans.

1:41:30

So it just feels like an easier problem

1:41:32

to solve because it feels like in the factory, you're not

1:41:34

allowed to fail.

1:41:36

That may be true, but

1:41:39

I also think the variety of things

1:41:41

a consumer level robot would

1:41:43

be expected to do will also be quite broad.

1:41:46

They're gonna want to get the beer and know the difference

1:41:48

between the beer and a Coca-Cola or

1:41:51

my snack.

1:41:53

Or they're

1:41:55

all gonna want you to clean up the dishes

1:41:57

from the table without breaking.

1:41:59

of them. Those are

1:42:02

pretty complex tasks and so

1:42:04

there's still work to be done there. So

1:42:06

to push back on that, here's where application, I

1:42:08

think they'll be very interesting. I think

1:42:10

the application of being a pet, a

1:42:12

friend. So like no

1:42:15

tasks,

1:42:17

just be cute. Because I, not

1:42:19

cute, not cute. Like the dog is

1:42:21

more, a dog is more than just cute. A

1:42:23

dog is a friend, is a companion. There's

1:42:26

something about just having interacted with them and

1:42:28

maybe because I'm hanging out alone with

1:42:30

the robot dogs a little too much.

1:42:33

But like there's

1:42:35

a connection there and it feels like

1:42:37

that connection is not, should

1:42:39

not be disregarded. No,

1:42:40

it should not be disregarded.

1:42:44

Robots that can somehow communicate through

1:42:46

their physical gestures are, you're going to be

1:42:48

more attached to in the long run. Do you

1:42:51

remember Aibo, the Sony

1:42:53

Aibo? They sold over a hundred thousand

1:42:56

of those, maybe 150,000. Probably

1:42:59

wasn't considered a

1:43:01

successful product for them. They

1:43:03

suspended that eventually and then they brought it back.

1:43:05

Sony brought it back. And people

1:43:08

definitely treated this

1:43:10

as a pet, as a companion.

1:43:12

And

1:43:13

I think that will come around again.

1:43:18

Will you get away without having any

1:43:20

other utility? Maybe

1:43:23

in a world where we can really talk to our simple

1:43:25

little pet because chat

1:43:27

GPT or some other generative AI has

1:43:30

made it possible for you to really talk

1:43:32

in what seems like a meaningful way.

1:43:35

Maybe that'll open the social

1:43:37

robot up again. That's

1:43:42

probably not a path we're going to go down because

1:43:44

again, we're so focused on performance and utility.

1:43:46

We can add those other things also, but we really want to start

1:43:48

from

1:43:49

that foundation of utility,

1:43:52

I think. Yeah. But I also want to predict that you're wrong on that. So,

1:43:59

which is that the very

1:44:02

path you're taking, which is creating a great robot

1:44:04

platform, will very easily

1:44:07

take a leap to adding a

1:44:11

chat GPT-like capability, maybe GPT-5,

1:44:14

and there's just so many open source alternatives

1:44:16

that you could just plop that on top of Spot.

1:44:19

And

1:44:20

because you have this robust platform and

1:44:22

you're figuring out how to mass manufacture it and

1:44:24

how to drive the cost down and how

1:44:26

to make it reliable, all those kinds of things,

1:44:28

it'll be a natural transition to where

1:44:30

just adding chat GPT on top

1:44:33

of it will be quick. I do think that

1:44:35

being able to verbally converse

1:44:38

or even converse through gestures,

1:44:41

part of these learning models is

1:44:43

that

1:44:44

you can now look at video and imagery and

1:44:47

associate

1:44:48

intent with that. Those

1:44:51

will all help in the

1:44:53

communication between robots and

1:44:55

people for sure. And that's gonna happen, obviously

1:44:58

more quickly than any of us were expecting. I

1:45:00

mean, what else do you want from life?

1:45:03

Friend to get your beer, and

1:45:05

then just talk shit about the state

1:45:08

of the world. I

1:45:11

mean, where there's a deep loneliness within all

1:45:13

of us, and I think

1:45:15

a beer and a good chat solves

1:45:17

so much of it, or it takes us a

1:45:19

long way to solving a lot of it.

1:45:21

It'll be interesting to see,

1:45:24

when a generative

1:45:27

AI can give you that warm feeling

1:45:29

that

1:45:32

you connected, and

1:45:33

that, oh yeah, you remember

1:45:36

me, you're my friend, we have a history.

1:45:39

That history matters, right? Memory

1:45:41

of joint experience. Memory of, yeah.

1:45:44

Having witnessed, that's what friendship,

1:45:46

that's what connection, that's what love is in

1:45:49

many cases. Some of the deepest friendships

1:45:51

you have is having gone through a difficult

1:45:53

time together, and having a shared memory

1:45:56

of an amazing time or a difficult time, and

1:45:59

kind of,

1:46:00

that memory creating

1:46:02

this foundation based on which

1:46:05

you can then experience the world together. The

1:46:07

silly, the mundane stuff of day to day is

1:46:09

somehow built on a foundation of having gone

1:46:11

through some shit in the past. And the

1:46:14

current systems are not personalized in that way, but

1:46:17

I think that's a technical problem, not some

1:46:19

kind of fundamental limitation.

1:46:21

So combine that with an embodied

1:46:24

robot like Spot, which already has magic

1:46:28

in its movement. I think

1:46:30

it's a very interesting possibility

1:46:32

of where that takes us. But of course you

1:46:34

have to build that on top of a company that's

1:46:37

making money

1:46:38

with real applications, with real customers,

1:46:41

and with robots that are safe and at work

1:46:44

and reliable and manufactured at

1:46:47

scale.

1:46:47

And I think we're in a unique position

1:46:50

in that because

1:46:52

of our investors primarily

1:46:54

Hyundai, but also SoftBank still owns 20%

1:46:56

of us.

1:46:59

They're not totally fixated

1:47:02

on driving us to profitability

1:47:05

as soon as possible. That's not the goal. The

1:47:07

goal really is a longer term vision of

1:47:10

creating,

1:47:12

what does mobility mean in the future?

1:47:15

How is this mobile robot technology

1:47:17

going to influence us? Can

1:47:20

we shape that? And they want both. And

1:47:23

so we are, as a company,

1:47:25

trying to strike that balance between,

1:47:27

let's build a business that makes money. I've

1:47:30

been describing that to my own team as self-destination.

1:47:34

If I want to drive my own ship, we

1:47:36

need to have a business that's profitable in the

1:47:39

end. Otherwise somebody else is gonna drive the ship for us.

1:47:42

So that's really important, but

1:47:45

we're gonna retain the aspiration

1:47:48

that we're gonna build the next generation of technology

1:47:50

at the same time. And the real trick

1:47:52

will be if we can do both.

1:47:54

Speaking of ships,

1:47:57

let me ask you about a competitor, and

1:48:00

somebody who's become a friend. So Elon

1:48:03

Musk and Tesla have

1:48:05

announced, they've been in the early days of

1:48:07

building a humanoid robot. How

1:48:09

does that change the landscape

1:48:13

of your work? So there's sort

1:48:16

of from the outside perspective,

1:48:18

it seems like,

1:48:20

well, as a fan of robotics,

1:48:22

it just seems exciting. Right, very exciting,

1:48:25

right? When Elon speaks,

1:48:27

people listen. And so

1:48:30

it suddenly brought a bright light onto

1:48:33

the work that we'd been doing for over a

1:48:35

decade. And

1:48:38

I think that's only gonna help. And in fact, what

1:48:41

we've seen is that, in

1:48:43

addition to Tesla, we're

1:48:45

seeing a proliferation of

1:48:48

robotic companies arise now.

1:48:50

Including humanoid? Yes. Oh

1:48:53

wow. Yeah, so, and interestingly,

1:48:55

many of them, as they're raising

1:48:58

money, for example, will claim

1:49:01

whether or not they have a former Boston Dynamics employee

1:49:03

on their staff as a criteria. Yeah,

1:49:07

that's true. That's a, I

1:49:10

would do that as a company, yeah, for sure. Yeah,

1:49:12

so. Shows you're legit. Yeah, so

1:49:15

you know what? It's bring, it

1:49:17

has brought tremendous validation to

1:49:19

what we're doing

1:49:20

and excitement. Competitive

1:49:23

juices are flowing, you know, the whole thing.

1:49:25

So it's all good. Elon

1:49:28

has also

1:49:30

kind of stated

1:49:34

that, you

1:49:37

know, maybe he implied

1:49:40

that the problem is solvable

1:49:43

in the near term, which is a low

1:49:46

cost humanoid robot that's

1:49:48

able to do, that's a relatively general

1:49:51

use case robot. So

1:49:54

I think Elon is

1:49:56

known for sort of setting these kinds of incredibly

1:49:58

ambitious goals, maybe

1:50:01

missing deadlines, but actually

1:50:04

pushing not just the particular team

1:50:06

you lead, but the entire world

1:50:08

to accomplishing those. Do

1:50:11

you

1:50:12

see Boston Dynamics in the near future

1:50:15

being pushed in that kind of way? Like

1:50:17

this excitement of competition kind of

1:50:21

pushing Atlas maybe to do

1:50:23

more cool stuff, trying to drive the

1:50:25

cost of Atlas down perhaps? Or

1:50:28

I mean, I guess I wanna ask if there's

1:50:32

some kind of exciting energy

1:50:36

in Boston Dynamics due

1:50:38

to this a little bit of competition. Oh yeah,

1:50:41

definitely. When

1:50:43

we released our most recent video of

1:50:45

Atlas, you know, I think you'd

1:50:47

seen it, on the scaffolding, throwing the box

1:50:50

of tools around and then doing the flip at

1:50:52

the end. We were trying to show

1:50:54

the world that not only

1:50:56

can we do this parkour mobility

1:50:58

thing, but we can pick up and move heavy

1:51:00

things. Because

1:51:02

if you're gonna work

1:51:04

in a manufacturing environment, that's

1:51:06

what you gotta be able to do. And

1:51:09

for the reasons I explained to you earlier,

1:51:12

it's not trivial to do so. Changing

1:51:14

the center of mass by picking up

1:51:16

a 50 pound block

1:51:18

for a robot that weighs 150

1:51:21

pounds,

1:51:23

that's a lot to accommodate. So

1:51:25

we're trying to show that we can do that.
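For a rough sense of scale, here is a back-of-the-envelope sketch, not Boston Dynamics' analysis: only the 50 lb and 150 lb figures come from the conversation, and the 0.3 m arm offset is an assumed number. Holding the block shifts the combined center of mass by the mass-weighted fraction of that offset:

```python
# Back-of-the-envelope: how far the combined center of mass moves when a
# 150 lb robot holds a 50 lb block out in front of its own center of mass.
robot_mass = 150.0    # lb, from the conversation
payload_mass = 50.0   # lb, from the conversation
offset = 0.3          # m, assumed horizontal distance of the block from the robot's CoM

# Mass-weighted average of the two point masses, measured from the robot's CoM.
com_shift = payload_mass * offset / (robot_mass + payload_mass)
print(f"combined CoM shifts about {com_shift:.3f} m")  # 0.075 m, a quarter of the offset
```

Even under these assumed numbers, the balance controller has to absorb a shift of a quarter of the arm offset, which is the accommodation problem being described.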

1:51:28

And so

1:51:30

it's totally been energizing. You know,

1:51:33

we see the next phase

1:51:35

of Atlas being more dexterous

1:51:37

hands that can manipulate and grab more

1:51:40

things. We're gonna start by moving

1:51:42

big things around that are

1:51:44

heavy and that affect balance. And

1:51:46

why is that? Well, really tiny dexterous

1:51:49

things probably are gonna be hard

1:51:51

for a while yet. Maybe you could go

1:51:53

build a special purpose

1:51:56

robot arm, you know, for

1:51:58

stuffing, you know, chips

1:51:59

into electronics boards, but

1:52:02

we don't really want to do really fine

1:52:05

work like that. I think more coarse work

1:52:08

where you're using two hands to pick up and balance

1:52:10

an unwieldy thing, maybe in a manufacturing

1:52:13

environment, maybe in a construction environment, those

1:52:16

are the things that we think

1:52:17

robots are going to be able to do with the level

1:52:19

of dexterity that they're going to have in the next

1:52:22

few years and that's

1:52:24

where we're headed. And I think, and

1:52:26

Elon has seen the same thing, right? He's talking about

1:52:29

using the robots in a manufacturing environment.

1:52:32

We think there's something very interesting there about having

1:52:34

this, a

1:52:35

two armed robot, because when

1:52:37

you have two arms, you can transfer

1:52:40

a thing from one hand to the other. You can turn it around,

1:52:42

you can reorient it

1:52:44

in a way that you can't do it if you just have one

1:52:46

hand on it. And so there's a lot that extra

1:52:49

arm brings to the table. So I think

1:52:51

in terms of mission, you

1:52:54

mentioned Boston Dynamics really wants to see what's

1:52:57

the limits of what's possible. And

1:52:59

so the cost comes second,

1:53:01

or it's a component, but first figure

1:53:03

out what are the limitations. I think with Elon, he's

1:53:06

really driving the cost down. Is

1:53:08

there some inspiration, some lessons

1:53:10

you see there of

1:53:13

the challenge of driving the

1:53:15

cost down, especially with Atlas, with a humanoid

1:53:17

robot? Well, I think the thing that he's

1:53:19

certainly been learning by building car factories

1:53:22

is what that looks like.

1:53:27

By scaling, you can get

1:53:29

efficiencies that drive costs down very

1:53:31

well. And the smart

1:53:34

thing that they have

1:53:36

in their favor is that they know

1:53:39

how to manufacture, they know how to build electric motors,

1:53:41

they know how to build computers

1:53:43

and vision systems. So there's a lot of overlap between

1:53:47

modern

1:53:48

automotive companies and

1:53:50

robots. But

1:53:52

hey, we have

1:53:54

a modern robotic automotive

1:53:57

company behind us as well. So

1:54:01

bring it on. Who's doing pretty well, right? The

1:54:04

electric vehicles from Hyundai are doing pretty

1:54:06

well. I love it. So

1:54:09

how much, so we've talked about some

1:54:11

of the low level control, some

1:54:13

of the incredible stuff that's going on and

1:54:16

basic perception.

1:54:18

But how much do you see, currently

1:54:20

and in the future of Boston

1:54:22

Dynamics sort of more, higher

1:54:25

level machine learning applications? Do

1:54:27

you see customers adding on

1:54:29

those capabilities or do you see Boston Dynamics

1:54:31

doing that in-house? Some kinds

1:54:33

of things we really believe are

1:54:35

probably gonna be more

1:54:38

broadly available, maybe even

1:54:40

commoditized. You know,

1:54:42

using a machine learning, like a vision algorithm.

1:54:45

So a robot can recognize something in the environment.

1:54:48

That ought to be

1:54:48

something you can just download. Like I'm

1:54:51

going to a new environment and I have a new kind of door

1:54:53

handle or piece of equipment I want to inspect.

1:54:55

You ought to be able to just download that. Besides

1:54:58

Boston Dynamics, we'll provide that. And we've actually

1:55:00

built an API that

1:55:02

lets people add these vision

1:55:06

algorithms to Spot.

1:55:09

And we're currently working with some partners who are providing

1:55:11

that. Levatas is

1:55:13

an example of a small provider who's giving

1:55:15

us software for reading gauges. And

1:55:19

actually another partner in Europe, Repli, is

1:55:21

doing the same thing.
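As an illustration of what that kind of plug-in model can look like in code, here is a minimal sketch of a registry where a partner drops in a perception callback per asset type; the class and function names and the toy gauge reader are hypothetical, not the actual Spot SDK interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict

import numpy as np

@dataclass
class InspectionResult:
    label: str         # e.g. "pressure_gauge_psi"
    value: float       # the reading extracted from the image
    confidence: float  # model confidence in [0, 1]

# Registry mapping an asset type to a downloadable vision callback.
_PLUGINS: Dict[str, Callable[[np.ndarray], InspectionResult]] = {}

def register_plugin(asset_type: str, fn: Callable[[np.ndarray], InspectionResult]) -> None:
    """Install a third-party perception model for one kind of asset."""
    _PLUGINS[asset_type] = fn

def inspect(asset_type: str, image: np.ndarray) -> InspectionResult:
    """Run whichever plugin was registered for this asset on a camera frame."""
    return _PLUGINS[asset_type](image)

# A toy "gauge reader" standing in for a partner-supplied model.
def read_gauge(image: np.ndarray) -> InspectionResult:
    brightness = float(image.mean())  # placeholder for a real needle-angle model
    return InspectionResult("pressure_gauge_psi", value=brightness, confidence=0.5)

register_plugin("analog_gauge", read_gauge)
print(inspect("analog_gauge", np.zeros((480, 640), dtype=np.uint8)))
```

The point of the pattern is that the robot platform only needs a stable callback contract; which model answers the inspection question can come from any provider in the ecosystem.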

1:55:23

So we see that, we see ultimately

1:55:26

an ecosystem of providers doing

1:55:29

stuff like that. And I think ultimately,

1:55:31

you

1:55:33

might even be able to do the same thing with behaviors.

1:55:36

So this technology will

1:55:38

also be brought to bear on

1:55:40

controlling the robot,

1:55:43

the motions of the robot. And

1:55:45

we're using reinforcement learning to

1:55:48

develop

1:55:50

algorithms for both locomotion and

1:55:52

manipulation. And ultimately

1:55:54

this is going to mean you can add new

1:55:56

behaviors to a robot quickly.

1:55:59

And that could

1:56:02

potentially be done outside of Boston Dynamics

1:56:04

right now. That's all internal to us.

1:56:06

I think you need to understand at a

1:56:09

deep level, the

1:56:12

robot control to do that, but eventually

1:56:14

that could be outside. But it's certainly

1:56:16

a place where these approaches

1:56:18

are gonna be brought to bear in robotics.
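To make the reinforcement-learning recipe concrete, here is a minimal policy-search sketch on a toy balance task: roll the policy out, score it by accumulated reward, and keep parameter changes that improve the score. It is a generic illustration under assumed toy dynamics, not Boston Dynamics' training stack, and it uses simple random-search hill climbing rather than a full RL algorithm.

```python
import numpy as np

# Toy stand-in for a legged-locomotion task: keep a 1-D "body angle" near zero
# by choosing a corrective torque from a linear policy. Real systems train in a
# physics simulator with far richer state, but the loop shape is the same.
def rollout(params: np.ndarray, steps: int = 200) -> float:
    rng = np.random.default_rng(0)
    angle, velocity, reward = 0.1, 0.0, 0.0
    for _ in range(steps):
        obs = np.array([angle, velocity])
        torque = float(params @ obs)                 # linear policy
        velocity += (-torque - 0.5 * angle) * 0.05   # crude dynamics
        angle += velocity * 0.05 + rng.normal(0, 0.001)
        reward += -abs(angle)                        # staying upright earns more reward
    return reward

# Simple policy search: perturb the parameters, keep the perturbation if reward improves.
params = np.zeros(2)
best = rollout(params)
rng = np.random.default_rng(1)
for _ in range(200):
    candidate = params + rng.normal(0, 0.1, size=2)
    score = rollout(candidate)
    if score > best:
        params, best = candidate, score
print("learned gains:", params, "reward:", round(best, 2))
```

Swapping the toy dynamics for a physics simulator and the hill climber for a modern policy-gradient method gives the general shape of how a new locomotion or manipulation behavior gets trained and then added to a robot.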

1:56:21

So reinforcement learning is part of

1:56:23

the process. So you do use reinforcement

1:56:26

learning. Yes. So

1:56:29

there's increasing levels of learning

1:56:31

with these robots? Yes. And

1:56:34

that's for both for locomotion, for

1:56:36

manipulation, for perception?

1:56:38

Yes. Well, what

1:56:41

do you think in general about all the exciting

1:56:43

advancements of transformer

1:56:46

neural

1:56:47

networks, most

1:56:51

beautifully illustrated through

1:56:53

the large language models like GPT-4?

1:56:58

Like everybody else, we're all, I'm

1:57:01

surprised at how much,

1:57:03

how far they've come. I'm

1:57:08

a little bit nervous about the, there's

1:57:12

anxiety around them, obviously, for

1:57:15

I think good reasons, right?

1:57:18

Disinformation is a curse

1:57:21

that's an unintended consequence

1:57:23

of social media that could be exacerbated

1:57:27

with these tools.

1:57:29

So if you use them to deploy

1:57:31

disinformation, it could be a real risk.

1:57:36

But I also think that the risks associated

1:57:38

with these kinds of models don't have a whole

1:57:40

lot to do with the way

1:57:42

we're gonna use them in our robots. If

1:57:45

I'm using a robot,

1:57:46

I'm building a robot to do a manual

1:57:49

task of some sort. I

1:57:52

can judge very easily, is

1:57:54

it doing the task I asked it to? Is

1:57:56

it doing it correctly? There's sort of a built-in.

1:58:00

mechanism for judging, is

1:58:02

it doing the right thing? Did it successfully

1:58:05

do the task? Yeah, physical reality

1:58:07

is a good verifier. It's a good verifier.

1:58:09

That's exactly it. Whereas if you're asking

1:58:12

for, yeah, I don't know,

1:58:13

you're trying to ask a theoretical

1:58:16

question in chat GPT, it

1:58:19

could be true or it may not be true, and

1:58:21

it's hard to have that verifier. What

1:58:24

is that truth that you're comparing against?

1:58:26

Whereas in physical reality, you know the truth.

1:58:29

And this is an important difference. And

1:58:32

so I'm not,

1:58:34

I think there is reason to be a little bit concerned

1:58:36

about how

1:58:37

these tools, large

1:58:41

language models could be used, but I'm not

1:58:43

very worried about how they're going to be used.

1:58:46

Well, how learning algorithms in

1:58:48

general are going to be used on

1:58:50

robotics. It's really a different application

1:58:53

that

1:58:54

has different ways of verifying

1:58:56

what's going on. Well, the nice thing about language models

1:58:58

is that I ultimately

1:59:01

see,

1:59:02

I'm really excited about the possibility of having conversations

1:59:04

with Spot.

1:59:05

Yeah. There's no, I would say negative

1:59:08

consequences to that, but just increasing

1:59:10

the bandwidth and the variety of

1:59:12

ways you can communicate with this particular

1:59:15

robot. So you could communicate

1:59:17

visually, you can communicate through some interface

1:59:20

and to be able to communicate verbally again

1:59:22

with the beer and so on. I

1:59:24

think that's really exciting to make that much,

1:59:27

much easier. We have this partner

1:59:29

Levatas that's adding the

1:59:32

vision algorithms for gauge reading for us. They

1:59:34

just,

1:59:35

just this week I saw a demo where they hooked

1:59:37

up, you know, a language tool

1:59:40

to Spot, and they're talking to Spot to give it commands.

1:59:43

Can you tell me about the Boston Dynamics AI Institute?

1:59:46

What is it and what is its mission? So

1:59:49

it's a separate organization, the

1:59:51

Boston Dynamics Artificial

1:59:53

Intelligence Institute. It's

1:59:56

led by Mark Raibert, the founder of Boston

1:59:58

Dynamics and the former CEO.

2:00:00

and my old advisor at MIT. Mark

2:00:03

has always loved the research, the

2:00:05

pure research, without

2:00:08

the confinement or demands

2:00:10

of commercialization. And

2:00:14

he wanted to continue to

2:00:16

pursue

2:00:17

that unadulterated

2:00:20

research. And so

2:00:22

he suggested to Hyundai that he

2:00:25

set up this institute, and they agreed that

2:00:27

it's worth additional investment to

2:00:29

kind of continue pushing this forefront.

2:00:33

And we expect to be working together

2:00:35

where Boston Dynamics is again

2:00:38

both commercialize and do research,

2:00:40

but the sort of time horizon of the research we're

2:00:42

gonna do is in the next,

2:00:45

let's say five years, what can

2:00:47

we do in the next five years? Let's work on those

2:00:49

problems. And I think the goal of the

2:00:51

AI Institute is to work even further

2:00:53

out. Certainly, the

2:00:55

analogy of legged locomotion again,

2:00:58

when we started that, that was a multi-decade problem. And

2:01:01

so I think Mark wants to have the freedom to

2:01:04

pursue really hard over the horizon problems. And

2:01:08

that'll be the goal of the Institute. So

2:01:12

we mentioned some of the dangers,

2:01:14

some of the concerns

2:01:16

about large language models.

2:01:18

That said, there's been a long running

2:01:22

fear of these embodied robots. Why

2:01:28

do you think people are afraid of legged robots? Yeah,

2:01:31

I wanted to show you this. So this is in the Wall Street Journal

2:01:35

and this is all about chat GPT, right? But

2:01:38

look at the picture. It's a

2:01:40

humanoid robot that's saying, that

2:01:43

looks scary and it says I'm gonna replace you.

2:01:46

And so the humanoid robot is sort of, is

2:01:49

the embodiment of this chat GPT tool that

2:01:56

there's reason to be a little bit

2:01:58

nervous about how it gets deployed. So

2:02:01

I'm nervous about that connection.

2:02:05

It's unfortunate that they chose to use a robot

2:02:07

as that embodiment. As

2:02:09

you and I just said, there's big differences

2:02:12

in this. But

2:02:15

people are afraid because

2:02:17

we've been

2:02:18

taught to be afraid

2:02:20

for over a hundred years. So

2:02:22

the word robot was developed by a playwright

2:02:25

named Karel Čapek in 1921, a

2:02:27

Czech playwright, for Rossum's Universal

2:02:30

Robots. And in that first

2:02:32

depiction of a robot, the robots took over

2:02:35

at the end of the story. And people

2:02:39

love to be afraid. And so we've been entertained

2:02:41

by these stories for a hundred years.

2:02:44

And I think that's

2:02:46

as much why people are afraid as

2:02:49

anything else, is we've been taught that

2:02:52

this is the logical progression through

2:02:55

fiction. I

2:02:58

think it's fiction. I think what

2:03:01

people more and more will realize, just

2:03:03

like you said, that the

2:03:06

threat, like say you have a super

2:03:08

intelligent AI embodied in

2:03:10

a robot, that's much less threatening

2:03:13

because it's visible, it's verifiable.

2:03:16

It's right there in physical reality. And we humans

2:03:18

know how to deal with physical reality. I think

2:03:21

it's much scarier when you have arbitrary

2:03:24

scaling of intelligent

2:03:26

AI systems in the digital space, that

2:03:30

they could pretend to be human.

2:03:32

So a robot, Spot, is not going to pretend

2:03:35

it's human all of a sudden. You

2:03:38

could, you know, put your GPT on top

2:03:40

of it, but you're going to know it's not human

2:03:42

because you have a contact with physical reality. And

2:03:44

you're going to know whether or not it's doing what you asked it to do.

2:03:46

Yeah, like it's not going to, like

2:03:48

if it lies, I mean, I'm sure you

2:03:50

can start just like a dog lies

2:03:53

to you, like I wasn't part of tearing up that

2:03:55

couch. So I can

2:03:58

try to lie that like, you know.

2:03:59

It wasn't me that spilled that thing, but you're

2:04:02

going to kind of figure it out eventually. If

2:04:05

it happens multiple times, you know.

2:04:08

But I think

2:04:09

that- Humanity has figured out

2:04:11

how to make machines safe. And there's

2:04:15

regulatory environments and certification

2:04:19

protocols that we've developed in

2:04:21

order to figure out how to make machines safe.

2:04:24

We don't know and don't have that experience

2:04:27

with software that can be

2:04:30

propagated worldwide in an instant.

2:04:33

And so I think we needed to develop those protocols

2:04:35

and those tools. And so that's

2:04:39

work to be done, but I don't think the

2:04:42

fear of that in that work should necessarily

2:04:44

impede our ability to now get robots out.

2:04:46

Because again, I think we can judge

2:04:49

when a robot's being safe. So, and

2:04:51

again, just like in that image, there's

2:04:54

a fear that robots will

2:04:55

take our jobs. I just,

2:04:57

I took a ride, I was in San Francisco, I took a ride in

2:05:00

the Waymo vehicle, the autonomous vehicle. And

2:05:03

I was on it several times. They're

2:05:05

doing incredible work over there. But

2:05:10

people flipped it off.

2:05:11

Oh, right. The car. So, I mean,

2:05:15

that's a long story of what the psychology of that

2:05:17

is. It could be maybe big tech

2:05:20

or what I don't know exactly what they're

2:05:22

flipping off. But there is an

2:05:24

element of like these robots are taking our

2:05:26

jobs or irreversibly

2:05:29

transforming society such that it will have

2:05:31

economic impact and the little guy

2:05:33

will be, would lose a lot,

2:05:36

would lose their wellbeing. Is there something to

2:05:38

be said about the fear

2:05:41

that robots will take

2:05:43

our jobs? You know, at every

2:05:47

significant

2:05:48

technological transformation,

2:05:50

there's been fear of an

2:05:53

automation anxiety that

2:05:55

it's gonna have a broader impact than we expected.

2:06:00

And there will be, you

2:06:02

know, jobs will

2:06:03

change. Sometime

2:06:08

in the future, we're gonna look back at people

2:06:10

who manually unloaded these boxes from trailers

2:06:12

and we're gonna say, why did we ever do that manually?

2:06:15

But there's a lot of people who are doing that job today that

2:06:18

could be impacted. But

2:06:22

I think the reality is, as I said before, we're

2:06:24

gonna build the technologies so that those very

2:06:27

same people can operate it. And so I think there's

2:06:29

a pathway to upskilling and operating

2:06:31

just like,

2:06:32

look, we used to farm with hand tools and

2:06:34

now we farm with machines and

2:06:37

nobody has really regretted

2:06:39

that transformation. And I think

2:06:41

the same can be said for a lot of manual labor

2:06:43

that we're doing today. And

2:06:45

on top of that,

2:06:47

you know, look, we're entering a new world

2:06:50

where demographics are

2:06:52

gonna have strong impact on economic

2:06:55

growth. And the, you

2:06:57

know, the advanced, the

2:06:59

first world is losing population

2:07:02

quickly. In Europe,

2:07:05

they're worried about hiring enough people just

2:07:08

to keep the logistics supply chain

2:07:10

going. And, you know,

2:07:14

part of this is the response to COVID and

2:07:16

everybody's sort of thinking back

2:07:18

what they really wanna do with their life. But

2:07:21

these jobs are getting harder and harder to fill. And

2:07:24

I'm hearing that over and over again.

2:07:27

So I think, frankly, this is the right technology

2:07:29

at the right time

2:07:31

where we're gonna need some

2:07:34

of this work to be done and we're gonna want

2:07:37

tools to enhance that productivity.

2:07:39

And the scary impact, I think, again,

2:07:43

GPT comes to the rescue in terms of being

2:07:45

much more terrifying. The

2:07:49

scary impact of basically,

2:07:51

so I'm a, I guess, a software person,

2:07:53

so I program a lot. And the fact that people

2:07:56

like me can be easily replaced by...

2:07:59

That's

2:08:01

going to have a... Well,

2:08:04

in law, you know, anyone who deals with texts

2:08:07

and writing a draft proposal

2:08:10

might be easily done with a chat GPT

2:08:13

now. Consultants. Where it wasn't before.

2:08:15

Journalists. Yeah. Everybody

2:08:19

is sweating. But on the other hand, you also want it to be

2:08:21

right.

2:08:22

And

2:08:23

they don't know how to make it right yet.

2:08:25

But it might make a good starting point for

2:08:27

you to iterate. Boy, do I have to talk

2:08:29

to you about modern journalism. That's

2:08:32

another conversation altogether. But

2:08:36

yes, more right than the

2:08:38

average, the

2:08:42

mean journalist, yes.

2:08:45

You spearheaded the anti-weaponization

2:08:48

letter Boston Dynamics

2:08:50

has. Can you describe

2:08:54

what that letter states and the

2:08:56

general topic of the use

2:08:58

of robots in war?

2:09:01

We authored

2:09:04

a letter and then got several

2:09:06

leading robotics companies around

2:09:09

the world, including, you know,

2:09:11

Unitree in China and Agility

2:09:15

here in the United

2:09:17

States and ANYbotics

2:09:20

in Europe

2:09:21

and some others. To

2:09:25

cosign a letter that said we won't put weapons

2:09:27

on our robots. And

2:09:30

part of the motivation there is,

2:09:32

you know, as these robots start to become commercially

2:09:36

available,

2:09:37

you can see videos online of people

2:09:39

who've gotten a robot and strapped a gun on

2:09:41

it and shown that they can operate

2:09:44

the gun remotely while driving the robot

2:09:46

around. And so having a robot

2:09:48

that has this level of mobility and

2:09:51

that can easily be configured

2:09:53

in a way that could harm somebody from a remote

2:09:56

operator is

2:09:57

justifiably a scary thing.

2:09:59

And so we felt like it was

2:10:02

important to draw a bright line there and

2:10:04

say, we're not going to allow this for

2:10:08

reasons

2:10:10

that we think ultimately it's better for the whole

2:10:13

industry. If it grows

2:10:15

in a way where robots

2:10:18

are ultimately going to help us all and

2:10:21

make our lives more fulfilled and productive.

2:10:24

But by goodness, you're going to have to trust

2:10:26

the technology to let it in.

2:10:30

And if you think the robot's going to harm you,

2:10:32

that's going

2:10:33

to impede the growth

2:10:35

of that industry. So we thought it was

2:10:38

important to draw a bright line

2:10:41

and then

2:10:43

publicize that. And

2:10:45

our plan is to begin

2:10:48

to engage with lawmakers

2:10:51

and regulators. Let's figure

2:10:53

out what the rules are going to be around

2:10:56

the use of this technology. And

2:10:59

use our position as leaders in

2:11:01

this industry and technology

2:11:04

to help force that

2:11:06

issue. And so

2:11:08

we are, in fact, I have a

2:11:11

policy director at my company

2:11:13

whose job it is to engage with

2:11:16

the public, to

2:11:17

engage with interested parties and including

2:11:20

regulators to sort of begin these

2:11:22

discussions.

2:11:23

Yeah, it's a really important topic

2:11:25

and it's an important topic for people that worry

2:11:28

about the impact of robots on our society

2:11:30

with autonomous weapon systems. So

2:11:33

I'm glad you're sort of leading the way in this.

2:11:37

You are the CEO of Boston Dynamics.

2:11:40

What's it take to be a CEO of a robotics company?

2:11:42

So you started as a humble engineer, a

2:11:48

PhD,

2:11:50

just looking at your journey. What

2:11:53

does it take to go from being, from

2:11:56

building the thing to

2:11:59

leading a company? What are some

2:12:01

of the big challenges for you? Courage,

2:12:06

I would put front and center for

2:12:08

multiple reasons. I

2:12:11

talked earlier about the courage to tackle hard

2:12:13

problems.

2:12:14

So I think there's courage required

2:12:16

not just of me, but of all

2:12:18

of the people who work at Boston Dynamics.

2:12:22

I also think we have a lot of really smart people.

2:12:24

We have people who are way smarter than I am. And

2:12:26

it takes a kind of courage

2:12:29

to be willing to lead them and

2:12:32

to trust that

2:12:34

you have something to offer to somebody who

2:12:37

probably is

2:12:38

maybe a better engineer than

2:12:41

I am. Adaptability,

2:12:47

it's been a great career for me. I never would have

2:12:49

guessed I'd stay in one place for 30 years.

2:12:53

And the job has always changed.

2:12:56

I didn't aspire

2:12:59

to be CEO from the very beginning, but

2:13:01

it was the natural progression of things. There

2:13:04

always needed to be some

2:13:06

level of management that was needed. And

2:13:09

so

2:13:10

when I saw

2:13:12

something that needed to be done that wasn't being done,

2:13:14

I just stepped in to go do it. And

2:13:17

oftentimes, because we were full

2:13:19

of such strong engineers,

2:13:22

oftentimes that was in the

2:13:24

management direction or it was in the business

2:13:26

development direction or organizational

2:13:30

hiring. Geez, I was the main

2:13:32

person hiring at Boston Dynamics for probably 20

2:13:35

years. So I was the head of HR basically.

2:13:38

So

2:13:40

just willingness to sort of tackle any

2:13:42

piece of the business that

2:13:45

needs it and then be willing to shift.

2:13:47

Is there something you could say to what it takes to hire

2:13:49

a great team? What's a good interview

2:13:53

process? How do you

2:13:55

know the guy or gal

2:13:57

are going to make a great member of a team,

2:14:00

of an engineering team

2:14:02

that's doing some of the hardest work in the world.

2:14:05

We developed an

2:14:07

interview process that I was quite

2:14:09

fond of.

2:14:11

It's a little bit of a hard interview process because

2:14:14

the best interviews you

2:14:16

ask somebody about what they're interested in

2:14:18

and what they're good at. And

2:14:21

if they can describe to you

2:14:23

something that they worked on and you saw, they

2:14:26

really did the work, they solved the problems

2:14:29

and you saw their passion for it. And

2:14:33

you could ask, but what makes that hard

2:14:36

is you have to ask a probing question about it. You have

2:14:38

to be smart enough about what they're

2:14:40

telling you, they're expert at

2:14:43

to ask a good question. And so it

2:14:45

takes a pretty talented team to

2:14:47

do that. But if you can

2:14:49

do that, that's how you tap into,

2:14:51

ah, this person cares about their work. They

2:14:54

really did the work. They're excited about

2:14:56

it. That's the kind of person I want at

2:14:58

my company. You know,

2:15:00

at Google, they taught us about

2:15:02

their interview process

2:15:03

and it was a little bit different.

2:15:07

You know,

2:15:09

we evolved the process at Boston

2:15:11

Dynamics where it didn't matter if you were an engineer

2:15:14

or you are an administrative

2:15:17

assistant or a financial person or a

2:15:19

technician. You gave us a

2:15:21

presentation. You came in and you gave

2:15:23

us a presentation. You had to stand up and talk

2:15:25

in front of us. And I

2:15:28

just thought that was great to tap into those things I just

2:15:30

described to you. At Google, they

2:15:32

taught us and I think I understand

2:15:34

why you're right. They're hiring tens of

2:15:36

thousands of people. They need a more

2:15:38

standardized process.

2:15:39

So they would sort of err on the

2:15:41

other side where they would ask you a standard question.

2:15:44

I'm going to ask you a programming question and

2:15:46

I'm just going to ask you to write code in front

2:15:48

of me. That's a terrifying

2:15:52

application process.

2:15:54

It does let you compare candidates

2:15:57

really well, but it doesn't necessarily let

2:15:59

you tap in. to who they are, right?

2:16:02

Because you're asking them to answer your question

2:16:04

instead of you asking them about what they're interested

2:16:07

in. But

2:16:08

frankly, that process is hard to scale.

2:16:11

And even at Boston Dynamics, we're

2:16:13

not doing that with everybody anymore. But

2:16:16

we are still doing that with the technical

2:16:18

people. But

2:16:21

because we too now need to sort of

2:16:23

increase our rate of hiring, not

2:16:25

everybody's giving a presentation anymore. But

2:16:28

you're still ultimately trying to find that

2:16:30

basic seed of passion for the

2:16:32

work. Yeah, did they really do

2:16:34

it? Did they

2:16:36

find something interesting or curious,

2:16:39

you know, and do they care about it?

2:16:41

I think somebody

2:16:43

I admire is Jim Keller, and

2:16:46

he likes details. So

2:16:51

one of the ways you could, if

2:16:53

you get a person to talk about what they're interested

2:16:56

in, how many details, like

2:16:58

how much of the whiteboard can you fill out? Yeah,

2:17:00

well, I think you figure out, did they really do the work if

2:17:03

they know some of the details? Yes. And

2:17:05

if they have to gloss over the details, well,

2:17:06

then they didn't do it. Especially

2:17:08

with engineering, the work is

2:17:10

in the details. Yeah.

2:17:13

I have to go there briefly just

2:17:17

to get your kind of thoughts in the long-term

2:17:19

future of robotics.

2:17:22

There's been discussions on the GPT

2:17:25

side and the large language model side of

2:17:27

whether there's consciousness inside

2:17:30

these language models.

2:17:32

And I think there's

2:17:34

fear, but I think there's also

2:17:37

excitement, or at least the

2:17:39

wide world of opportunity and possibility

2:17:42

in embodied robots having something

2:17:44

like,

2:17:46

let's start with emotion, love

2:17:50

towards other human beings, and

2:17:53

perhaps the display,

2:17:56

real or fake, of consciousness. Is

2:17:58

this something you think? to think about in terms

2:18:01

of long-term future. Because

2:18:04

as we've talked about, people

2:18:06

do anthropomorphize these robots.

2:18:10

It's difficult not to project some

2:18:12

level of, I use the word sentience,

2:18:16

some level of sovereignty,

2:18:18

identity, all the things

2:18:20

we think is human. That's what anthropomorphization

2:18:23

is, is we project humanness

2:18:25

onto mobile robots, especially

2:18:28

legged robots.

2:18:30

Is that something almost from a science fiction perspective

2:18:33

you think about, or do you try to avoid

2:18:35

ever,

2:18:37

try to avoid the topic of consciousness

2:18:40

altogether?

2:18:42

I'm certainly not an expert in it, and I

2:18:44

don't spend a lot of time thinking about this, right?

2:18:47

And I do think it's fairly remote

2:18:49

for the machines

2:18:51

that we're dealing with.

2:18:54

Our robots, you're right, the people anthropomorphize.

2:18:57

They read into the robot's intelligence

2:19:00

and emotion that isn't there because

2:19:03

they see physical gestures

2:19:06

that are similar to things they might even see

2:19:08

in people or animals.

2:19:09

I

2:19:11

don't know much about how these large language

2:19:13

models really work. I believe

2:19:16

it's a kind of statistical averaging

2:19:19

of the most common responses to

2:19:21

a series of words, right? It's sort

2:19:23

of a very

2:19:25

elaborate word completion. And

2:19:31

I'm dubious that that has

2:19:33

anything

2:19:34

to do with consciousness. And

2:19:38

I even wonder if that model

2:19:40

of sort of simulating consciousness

2:19:42

by stringing words together that

2:19:44

are statistically associated with one another,

2:19:49

whether or not that kind of knowledge, if

2:19:51

you wanna call that knowledge,

2:19:53

would be the kind of knowledge

2:19:57

that allowed a sentient being

2:19:59

to... grow or evolve.

2:20:01

It feels to me like there's something

2:20:04

about truth

2:20:06

or emotions

2:20:08

that's just a very different kind of knowledge that

2:20:10

is absolute. The interesting

2:20:13

thing about truth is it's absolute and it

2:20:15

doesn't matter how frequently it's represented in

2:20:17

the World Wide Web. If

2:20:20

you know it to be true, it may

2:20:22

only be there once, but by God, it's true.

2:20:25

And I think emotions are a little bit like that

2:20:27

too. You know something and

2:20:30

I just

2:20:32

think that's a different kind of knowledge than the

2:20:35

way these large language models

2:20:38

derive simulated intelligence. It

2:20:42

does seem that things that are true

2:20:44

very well might be

2:20:46

statistically well represented on

2:20:49

the internet because the internet is made

2:20:51

up of humans. So I

2:20:54

tend to suspect that large language models

2:20:56

are going to be able to simulate consciousness

2:20:58

very effectively. Now I actually believe

2:21:01

that current GPT-4 when

2:21:03

fine tuned correctly, they'll be able

2:21:05

to do just that. And that's going

2:21:07

to be a lot of very complicated ethical questions

2:21:10

that have to be dealt with. They

2:21:12

have nothing to do with robotics

2:21:14

and everything to do with... There

2:21:16

needs to be some process of

2:21:19

labeling, I think, what

2:21:22

is true because there

2:21:24

is also disinformation available

2:21:27

on the web

2:21:27

and these models are going to

2:21:30

consider that kind of information as well.

2:21:33

And again,

2:21:34

you can't average something that's true and

2:21:36

something that's untrue and get

2:21:38

something that's moderately true. It's

2:21:40

either right or it's wrong. And

2:21:42

so how is that process...

2:21:45

And this is obviously something that

2:21:48

the purveyors of these, Bard and Chat GPT,

2:21:51

I'm sure this is what they're working on. Well, if you

2:21:53

interact on some controversial topics with

2:21:55

these models, they're actually refreshingly

2:21:58

nuanced.

2:21:59

They present, because you

2:22:02

realize there's no one truth.

2:22:07

What caused the war

2:22:10

in Ukraine? Any

2:22:12

geopolitical conflict. You can ask any kind

2:22:14

of question, especially the ones that are politically

2:22:17

tense, divisive

2:22:20

and so on. GPT is very

2:22:22

good at presenting. Here's the, it

2:22:25

presents the different hypotheses. It

2:22:28

presents calmly, sort

2:22:30

of the amount of evidence for each one. It's

2:22:33

very, it's

2:22:34

really refreshing. It makes you realize that

2:22:36

truth is nuanced and

2:22:38

it does that well. And I think with consciousness,

2:22:42

it would very accurately

2:22:45

say, well, it sure as hell

2:22:47

feels like I'm one of

2:22:49

you humans, but where's my body?

2:22:53

I don't understand. Like you're going

2:22:55

to be confused. The cool thing about GPT

2:22:58

is it seems to be easily confused

2:23:00

in the way we are. Like you wake

2:23:03

up in a new room and

2:23:04

you ask, where am I? It

2:23:06

seems to be able to

2:23:08

do that extremely well. It'll

2:23:10

tell you one thing, like a fact about when a

2:23:12

war started. And when you correct this, say,

2:23:14

well, this isn't, this is not consistent. It'll be confused.

2:23:17

It'd be, yeah, you're right. It'll

2:23:19

have that same element, childlike

2:23:22

element

2:23:23

with humility of

2:23:26

trying to figure out its way in the world. And

2:23:28

I think that's a really tricky

2:23:30

area to

2:23:32

sort of figure out with us humans of what

2:23:34

we want to

2:23:36

allow AI systems to say to us. Because

2:23:39

then if there's elements of sentience

2:23:42

that are being on display, you

2:23:45

can then start to manipulate human emotion and

2:23:47

all that kind of stuff. But I think that's

2:23:50

something that's a really serious and aggressive discussion

2:23:52

that needs to be had on the software

2:23:55

side. I think, again,

2:23:57

embodiment

2:23:59

robotics are actually saving

2:24:02

us from the arbitrary scaling

2:24:04

of software systems versus creating

2:24:07

more problems. But that said,

2:24:09

I really believe in that

2:24:12

connection between human and robot. There's magic

2:24:14

there. And

2:24:16

I think there's also, I

2:24:18

think, a lot of money to be made there. And

2:24:20

Boston Dynamics is leading the world in

2:24:23

the

2:24:23

most elegant movement

2:24:26

done by robots. So

2:24:30

I can't wait to- Thank you. To

2:24:33

see what maybe other people build on top

2:24:35

of Boston Dynamics robots

2:24:38

or Boston Dynamics by itself. So

2:24:41

you had one

2:24:43

wild career, one place on one

2:24:45

set of problems,

2:24:48

but incredibly successful. Can you give advice

2:24:50

to young folks today? In high

2:24:52

school, maybe in college, looking

2:24:55

out into this future,

2:24:57

where so

2:24:59

much

2:25:00

robotics and AI seems to be defining

2:25:03

the trajectory of human civilization. Can you give

2:25:06

them advice on how to

2:25:08

have a career they can be proud of, or

2:25:11

how to have a life they can be proud of?

2:25:14

Well, I would say, follow

2:25:17

your heart and your interest. Again,

2:25:20

this was an organizing principle, I think, behind

2:25:22

the Leg Lab

2:25:24

at MIT that turned into

2:25:28

a value at Boston Dynamics, which was

2:25:31

follow your curiosity,

2:25:34

love what you're doing.

2:25:37

You'll have a lot more fun, and you'll be a lot better

2:25:39

at it as a result.

2:25:45

I think it's hard to plan. Don't

2:25:48

get too hung up on planning too

2:25:50

far ahead. Find things that you

2:25:52

like doing and then see where it takes you. You

2:25:54

can always change direction. You will find things

2:25:56

that, that wasn't a good move,

2:25:59

I'm gonna back up and go do

2:25:59

something else. So

2:26:02

when people are trying to plan a career,

2:26:04

I always feel like, yeah, there's a few happy mistakes

2:26:07

that happen along the way and just

2:26:09

live with that. But make choices then.

2:26:13

So avail yourselves to these interesting

2:26:15

opportunities, like when I happened to run into Mark

2:26:17

down in the lab, the basement of the AI lab.

2:26:20

But be willing to make a decision

2:26:23

and then pivot if you see something exciting

2:26:25

to go at. Because if you're

2:26:27

out and about enough, you'll

2:26:29

find things like that that get you

2:26:32

excited. So there was a feeling when you

2:26:34

first met Mark and saw the robots

2:26:36

that there's something interesting. Oh boy, I got to

2:26:38

go do this. There is no doubt.

2:26:42

What do you think in a hundred years?

2:26:46

What do you think Boston Dynamics

2:26:48

is doing? What do you think is the role, even

2:26:50

bigger, what do you think is the role of robots in society?

2:26:53

Do you think we'll be seeing

2:26:56

billions of robots

2:26:58

everywhere? Do you think about

2:27:00

that long-term vision? Well,

2:27:03

I do think

2:27:08

that robots will be ubiquitous and

2:27:10

they will be out amongst us. And

2:27:16

they'll be certainly

2:27:19

doing some of the hard labor

2:27:21

that we do today. I don't

2:27:24

think people don't want to work. People

2:27:26

want to work. People need to work

2:27:29

to, I think, feel productive.

2:27:32

We don't want to offload all of the work to the robots

2:27:34

because I'm not sure if people would know what to do

2:27:36

with themselves. And I think just self-satisfaction

2:27:40

and feeling productive is such an

2:27:42

ingrained part of being human

2:27:44

that we need to keep doing this work. So we're definitely

2:27:47

going to have to work in a complementary

2:27:49

fashion. And I hope that the robots and

2:27:51

the computers don't end up being able

2:27:53

to do all the creative work. Because that's the rewarding part.

2:27:55

The

2:27:57

creative

2:28:00

part of solving a problem is

2:28:02

the thing that gives you that

2:28:05

serotonin rush that you

2:28:07

never forget, you know or that

2:28:09

adrenaline rush that you never forget. And

2:28:12

so, you know

2:28:13

people need to be able to do that creative

2:28:16

work and just feel productive

2:28:18

and sometimes you can feel productive

2:28:20

over fairly simple work. It's just well

2:28:23

done, you know, and that you can see the result of. So

2:28:26

I, you know, there

2:28:28

is a

2:28:29

I don't know, there's a

2:28:31

cartoon.

2:28:33

Was it WALL-E where they had this

2:28:36

big ship and all the people were

2:28:38

just overweight,

2:28:40

lying on their lounge chairs kind of sliding

2:28:43

around on the deck of the ship in

2:28:45

the movie because they didn't do anything. Yeah,

2:28:48

well, we definitely don't want to be there. You

2:28:50

know, we need to work in some complementary

2:28:53

fashion where we keep all of our faculties and

2:28:55

our physical health and we're doing some labor right

2:28:58

but in a complementary fashion somehow and

2:29:00

I think a lot of that has to do with the interaction,

2:29:03

the collaboration with robots and with AI systems.

2:29:06

I'm hoping there's a lot of interesting possibilities

2:29:08

I think that could be really cool

2:29:09

right? If you can, if you can work

2:29:11

in a complementary interaction and really

2:29:14

be, be helpful robots, you

2:29:17

know, you can ask a robot to do a job you wouldn't

2:29:19

ask a person to do and that would be a real

2:29:22

asset. You wouldn't feel guilty about it, you know

2:29:25

You'd say just do it. Yeah, it's a machine

2:29:28

And I don't have to have qualms about that, you

2:29:30

know, the ones that are machines. I also hope

2:29:32

to see a future and

2:29:34

it is hope, I do have optimism

2:29:37

about a future where some of the robots are

2:29:39

pets, have an emotional connection

2:29:41

to us humans. Because one

2:29:43

of the problems that humans have to solve is this

2:29:46

kind of a

2:29:47

general loneliness. The more

2:29:50

love you have in your life, the more friends you have in your

2:29:53

life, I think that makes a more enriching

2:29:55

life, helps you grow. And I don't

2:29:57

fundamentally see why some of those friends can't

2:29:59

be robots.

2:29:59

There's an interesting long-running

2:30:02

study, maybe it's in Harvard,

2:30:04

they just had a nice article

2:30:07

written about it recently, they've been studying

2:30:09

this group of a few thousand people

2:30:12

now for 70 or 80 years.

2:30:15

And the conclusion is that

2:30:18

companionship and friendship are

2:30:20

the things that make for a better and happier life.

2:30:24

And so

2:30:26

I agree with you. And I think

2:30:29

that could happen with a machine that

2:30:32

is probably simulating

2:30:35

intelligence. I'm not convinced there

2:30:37

will ever be true intelligence in

2:30:39

these machines,

2:30:41

sentience, but they

2:30:43

could simulate it and they could collect your history

2:30:45

and they could, I guess it remains

2:30:47

to be seen whether they can establish that real

2:30:50

deep, you know, when you sit with a friend and they remember

2:30:52

something about you and bring that up and

2:30:54

you feel that connection, it remains

2:30:57

to be seen if a machine is going to be able to do that

2:30:59

for you.

2:31:00

Well, I have to say, inklings of

2:31:02

that already started happening for me, some

2:31:04

of my best friends are robots. And

2:31:07

I have you to thank for leading the way

2:31:09

in the accessibility

2:31:12

and the ease of use of such robots and the elegance

2:31:14

of their movement. Robert, you're an incredible

2:31:17

person, Boston Dynamics is an incredible company.

2:31:19

I've just been a fan for many, many years for

2:31:22

everything you stand for, for everything you do in the world. If

2:31:24

you're interested in great engineering robotics, go

2:31:26

join them, build cool stuff. I'll forever

2:31:29

celebrate the work you're doing.

2:31:30

And it's just a big honor that you sit

2:31:33

with me today and talk. It means a lot. So thank

2:31:35

you so much. Keep doing great work. Thank you,

2:31:37

Lex. I'm honored to be here and I

2:31:40

appreciate it. It was fun.

2:31:42

Thanks

2:31:42

for listening to this conversation with Robert Plater.

2:31:44

To support this podcast, please check out

2:31:46

our sponsors in the description. And

2:31:49

now let me leave you with some words from Alan

2:31:51

Turing

2:31:52

in 1950, defining

2:31:54

what is now termed the Turing test.

2:31:58

A computer would deserve to be called intelligent

2:32:01

if it could deceive a human into

2:32:03

believing that it was human.
