The State of Autonomous Vehicles

Released Wednesday, 29th January 2020

Episode Transcript

0:04

Welcome to TechStuff, a production of

0:06

iHeartRadio's How Stuff Works. Hey

0:12

there, and welcome to TechStuff. I'm your host,

0:14

Jonathan Strickland. I'm an executive producer with

0:16

iHeartRadio, and I love all things tech. And

0:18

if you haven't noticed already, yeah, my voice

0:20

is all sorts of jacked up because I'm getting

0:23

over a cold. I apologize for that, but

0:26

the tech must go

0:28

on now. Just a couple

0:30

of years ago, the tech world

0:32

in general was pretty optimistic

0:35

about autonomous cars, and I include

0:37

myself in that group. I remember seeing

0:40

the remarkable progress that had

0:42

come out from the first DARPA

0:44

Grand Challenge up to about, I

0:46

don't know, a few years back

0:49

or so, and it seemed like we were just on the

0:51

verge of having fleets of

0:53

robo taxis at our beck and call. But

0:56

now we've gone on for several more

0:58

years, and we're still at a point where only a

1:01

handful of companies are conducting limited

1:03

tests. Plus there

1:05

have been some high profile cases

1:07

of accidents involving vehicles operating

1:09

under autonomous or semi autonomous

1:12

modes that ended in tragedy.

1:14

So in this episode, we're going to take a look

1:17

at autonomous cars and where we stand

1:19

today. Now, let's start with this: I

1:21

think it helps if we run through the levels

1:24

of autonomy, and not everyone uses

1:27

these levels to talk about autonomy, and

1:29

to be honest, the barriers between levels

1:31

are a bit fuzzy, and sometimes we're not

1:33

really able to say where

1:36

we are at as far as

1:38

levels of autonomy. We can look back

1:40

at previous developments

1:42

and say, all right, well, judging on where we are

1:45

now, we'd say that this falls into level

1:47

two or level three. But it can be a

1:49

little difficult to see what level

1:51

we are currently in without

1:54

you know, truly remarkable

1:56

evidence. But in general, this

1:58

is a useful way to talk about how far along

2:01

we are uh as

2:03

far as getting fully autonomous cars.

2:06

So technically the levels

2:08

range from zero to five, so

2:10

that means there are really six levels.

2:13

However, level zero really means

2:15

there is no autonomy at all. So

2:17

with that type of vehicle, the human driver

2:19

is responsible for all operations

2:22

of the vehicle. Every driving

2:24

task is handled by the driver

2:26

alone. So some folks will say

2:29

that there are really just five levels of autonomy.

2:31

Zero would refer to vehicles that really,

2:34

honestly, they don't exist that

2:36

much anymore. Now you might be thinking

2:39

but hey, Mr. Smarty Pants

2:41

podcast person. I drive a car and

2:43

it doesn't have any autonomous vehicle features,

2:46

but depending upon whom you ask, features

2:48

like power steering or anti-lock

2:50

brakes or cruise control and

2:53

other pretty common features

2:55

fall into the low level autonomous

2:58

range. It doesn't mean your car is autonomous, but

3:00

it has some of the components

3:03

that are identified with this

3:06

concept of autonomy. So most

3:08

cars today are actually above

3:11

level zero if we go by that definition

3:13

they're level one or higher. So

3:16

level one autonomy would apply to cars

3:19

where the driver still controls the vehicle.

3:21

The vehicle is still under driver control, but the

3:23

car has some driver assistance

3:25

features like power steering or anti-lock

3:28

brakes. The car might have what is called

3:30

an Advanced Driver Assistance

3:32

System, or ADAS, and

3:35

the word advanced makes it sound a bit

3:37

fancier than it is at this particular

3:40

level of autonomy. The car

3:42

might have systems that help people steer,

3:44

or it might have systems that help accelerate

3:47

and/or brake, but the steering

3:50

and accelerating, or steering and braking, can't

3:52

happen simultaneously. Either

3:54

one or the other can be taken

3:56

over by these systems, but not both

3:58

at the same time. Not with level one.

4:01

If we get up to level two autonomy, then

4:03

we're talking about partial automation.

4:06

The ADAS on these cars can do

4:08

stuff like control steering and braking,

4:10

or steering and accelerating at the same

4:12

time, at least under certain circumstances.

4:15

But even in those cases, the car's driver

4:17

still remains primarily in control

4:20

of the vehicle. With this level of autonomy,

4:22

a driver would still not remove their hands from

4:24

the wheel, as the car would need the

4:27

human's participation to, you know, work

4:29

safely. So with level two autonomy, you

4:31

still have to have your attention on the road,

4:33

you still have to have your hands on the wheel. It's just that

4:35

the car can occasionally kick in

4:38

and assist in various scenarios,

4:40

typically in very restricted cases.

4:43

Now at level three autonomy,

4:45

we're getting up to conditional automation.

4:48

These cars would still require a human driver,

4:51

but there can be times when the car systems

4:53

can operate the vehicle on its own and

4:55

the driver is essentially a passenger

4:58

during those moments. The driver is still

5:00

supposed to monitor the environment. They're still supposed

5:02

to be prepared to take over the car should the vehicle

5:05

indicate it needs to hand over control

5:07

to the person behind the wheel. So ideally

5:09

there would be a system where the car would

5:12

identify a situation in which the driver

5:14

needs to take over, and then well

5:16

in advance of that situation becoming

5:18

imminent, it would alert the driver

5:21

to take over control of the car. Uh.

5:23

This is trickier, right. This is

5:25

harder to do than to

5:28

say, because the car would have to know far

5:30

enough in advance to be able to send

5:32

that alert to the driver, and the driver would

5:34

have to be able to respond to that. And while

5:37

we feel like our response time is

5:39

really fast, in computational

5:42

terms, we are snails.

5:45

We move super slow. So

5:47

this is actually pretty tricky, especially if you're talking

5:49

about a dynamic situation

5:52

where things are changing very rapidly. At

5:54

level three, autonomous cars are supposed to uh

5:57

do this seamlessly, and as I said, that

6:00

is a pretty tricky thing to do technically.

6:03

Most vehicle systems we're looking

6:05

at now, especially the ones like Tesla's autopilot,

6:08

fall somewhere in level three. Level

6:11

four autonomy is at a point where

6:13

a vehicle can automatically operate itself

6:15

at least under certain conditions, but not

6:17

necessarily all driving conditions.

6:20

The vehicle would likely include

6:22

the option for a human driver to

6:24

take over operations, but under

6:26

normal, you know, conditions

6:28

the car would pretty much drive itself. So

6:31

with level four autonomy, you would have self

6:33

driving cars that could

6:35

act as a self driving car for most

6:38

of the time, but also allow

6:40

a human driver to take over if the human

6:42

driver wanted to.

6:44

Level five autonomy is a fully

6:46

autonomous car. The car can operate itself

6:49

under all driving conditions,

6:51

So any condition where a human would be driving a

6:53

car, a level five autonomous

6:55

car should be able to operate in that same

6:57

situation. There may not be any

7:00

steering wheel or any controls

7:02

at all in a vehicle, meaning there's

7:04

no option for a human driver to take

7:06

over. Now, that's not a prerequisite.

7:09

You can have a level five

7:11

fully autonomous car that would still

7:13

have controls and still allow humans

7:15

to take over manually if

7:18

they chose to do so.

7:20

It's just that it's an option;

7:22

it's no longer a mandatory thing to

7:24

have those human-based controls

7:27

with a level five autonomous car.
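To recap those levels in one place, here's a rough sketch of the zero-to-five scale as a Python enum. This is just my summary of the descriptions above — not an official SAE definition or anybody's API:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Rough summary of the driving-automation levels described above."""
    NO_AUTOMATION = 0           # human handles every driving task
    DRIVER_ASSISTANCE = 1       # helps steer OR accelerate/brake, never both at once
    PARTIAL_AUTOMATION = 2      # steering AND braking/accelerating together; driver stays engaged
    CONDITIONAL_AUTOMATION = 3  # car drives itself at times; driver must be ready to take over
    HIGH_AUTOMATION = 4         # drives itself under certain conditions; manual controls optional
    FULL_AUTOMATION = 5         # handles anything a human driver could; controls optional

def needs_attentive_human(level: AutonomyLevel) -> bool:
    # Through level three, a human still has to be ready to drive.
    return level <= AutonomyLevel.CONDITIONAL_AUTOMATION
```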

7:29

So we don't have any of these

7:31

yet, so really, talking about this

7:34

is purely in the hypothetical. Arguably,

7:36

we have some that are in the level

7:39

four range, but we'll

7:41

get to that — they're under very strict

7:43

parameters. All right, so most

7:45

experts agree that the versions

7:47

of autonomous cars we've seen so far are

7:50

mainly in the level three and level four categories,

7:53

uh, creeping more toward a

7:55

firm level four. We're kind of in the early

7:57

stages of that, and there are several test

8:00

programs that are operating almost

8:02

as if we're at level five. But

8:04

there's disagreement about whether or not the technology

8:07

is really sophisticated enough to warrant us

8:09

calling any existing vehicle a

8:11

level four or level five autonomous car.

8:14

And so, while we have

8:16

some examples of cars and I'll talk about a

8:19

couple of them that lack control

8:21

systems for human drivers, they

8:23

are almost all prototypes and concept

8:25

vehicles or in very limited testing

8:27

situations. Uh. And so

8:30

therefore they don't really rank as level five

8:32

autonomous cars because while they lack the

8:34

controls, they cannot operate in

8:36

every situation and environment

8:38

that humans drive in. So

8:41

it's too early for us to talk

8:43

about deploying cars that have no way to hand

8:45

over control to a human driver in

8:47

all regions and in all

8:49

you know, driving situations. Okay, so

8:52

let's do a very quick rundown on

8:54

the history of autonomous cars up to, say,

8:57

a few years ago or so, and to see why some folks, like

9:00

yours truly, were so bullish on

9:02

the future of autonomous cars.

9:04

So the history stretches back a good

9:06

long ways, particularly if we're looking at

9:08

stuff like power steering. But that's

9:11

getting way too granular. I'm not going to do

9:13

that. And the history is also really

9:15

complex, in that it involves lots of different

9:18

disciplines converging into

9:20

the autonomous car form factor.

9:23

You have stuff like robotics, you know, sensor

9:25

development, artificial intelligence,

9:27

computational processing power, range-

9:30

finding technology, lots

9:32

of things that all have to come together. And

9:35

to really dive into the complex

9:37

history of all the technologies that are

9:39

coming together to make autonomous cars possible

9:42

would require a whole mini series of

9:44

episodes. So we're not going to jump

9:46

into all of that in this one. Instead,

9:48

I want to focus on things like the

9:50

DARPA challenges that were created

9:52

in the mid-two-thousands. The first one

9:54

was in two thousand four and DARPA,

9:57

as you'll recall, is the research and development

9:59

arm of the Department of Defense in the United States,

10:01

so it's technically an organization

10:04

that funds various other

10:07

groups to do R and D in

10:09

technologies that ultimately stand

10:12

to benefit the defense of the United

10:14

States. So while there are

10:16

other uses for those technologies

10:19

that don't directly relate to defense

10:22

or military systems, that's

10:24

the primary purpose for DARPA. So in

10:27

two thousand four they created this

10:29

challenge. They called for teams

10:31

to build or convert vehicles

10:33

into autonomous cars

10:36

that were capable of navigating a long-

10:38

distance desert course

10:40

more than a hundred miles long, and there

10:42

needed to be no human operators, so it

10:44

could not be remotely controlled, nor would

10:46

there be a driver in the vehicle. And the

10:49

idea was to design a car that would

10:51

be capable of traveling a

10:53

predesignated route from

10:55

beginning to end. For that two thousand

10:57

four challenge, no team was able to complete

11:00

the challenge. Cars

11:02

failed. Some of them went off road

11:04

and got stuck, some of them just got

11:06

confused and stopped. So

11:09

no one completed it within the

11:11

time frame that DARPA had set.

11:13

But it set the stage for subsequent

11:15

competitions. In two thousand five, DARPA

11:18

held another Grand Challenge again with

11:20

the desert course. This one was a hundred thirty

11:22

two miles long and this time

11:24

five teams were able to complete the route

11:27

and the winning team was from Stanford

11:29

University — the Stanford

11:31

Racing team. They clocked the shortest

11:33

time on the course. By shortest

11:35

time, I'm still talking about a long

11:38

time. It took them six

11:40

hours fifty three minutes to

11:42

make the one thirty two mile

11:45

journey. That would mean that the average speed

11:47

for the vehicle when taken across the whole

11:49

course was somewhere around nineteen miles

11:51

per hour, or approximately thirty-one kilometers

11:54

per hour.
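Here's the back-of-the-envelope math on that, as a quick sketch using the course length and finishing time above:

```python
# Stanford Racing's 2005 Grand Challenge run, from the figures above.
course_miles = 132
hours = 6 + 53 / 60              # six hours, fifty-three minutes

avg_mph = course_miles / hours
avg_kmh = avg_mph * 1.609344     # miles per hour to kilometers per hour

print(f"{avg_mph:.1f} mph, about {avg_kmh:.1f} km/h")  # ~19.2 mph, ~30.9 km/h
```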

11:56

That's not exactly tearing up the track, but it was still a very impressive

11:58

achievement. I don't want to take away from what

12:00

they achieved. It was incredible,

12:03

especially for the time, but it's

12:06

not the sort of speed you would look at and

12:08

think, oh, well, this is the replacement for the modern

12:10

car. The next challenge

12:13

would happen in two thousand seven, and it switched

12:15

things up by requiring teams to design a

12:17

car capable of navigating through a simulated

12:20

urban environment complete with

12:22

traffic and traffic laws like

12:24

you know, traffic lights and stop signs and

12:27

simulated pedestrians. It

12:29

wouldn't be enough to design a car that

12:31

could detect a road and follow it,

12:33

or even a car capable of managing stuff

12:35

like how

12:38

to send torque to different

12:40

wheels in order to get out of a tricky situation.

12:42

The cars would need advanced collision

12:45

detection and decision-making capabilities.

12:47

They'd have to obey traffic laws, they'd

12:50

have to be able to adapt to potentially changing

12:52

situations, the kind of stuff you might find if

12:55

you're driving around a city.

12:57

So in that case, six

12:59

teams were able to finish

13:01

the course. Stanford Racing would actually

13:04

take second place that time. They

13:06

clocked in at just under four and a

13:08

half hours. First place went

13:10

to a group called Tartan Racing from

13:12

Carnegie Mellon University, and

13:14

they finished in four hours ten

13:16

minutes. Now, the purpose of these competitions

13:19

wasn't just to find out which groups

13:21

of smarty pants engineers were

13:23

able to build the best car. It

13:25

was an attempt to kick start serious development

13:28

in the various fields related to

13:31

making autonomous cars a possibility.

13:34

Engineers worked on all sorts of different

13:36

designs. Some incorporated lots

13:38

of optical cameras. Some used

13:41

lidar, which is a type of laser-based

13:43

range finding technology similar to radar.

13:46

So it works by zapping out a laser

13:48

and then detecting any reflections coming

13:50

back from that laser light. It uses

13:53

an array of sensors looking

13:55

for any evidence of that laser light

13:57

coming back to the sensor, and then

14:00

measures the time difference between when the laser

14:02

went out and when it picked

14:05

up the reflection, and

14:07

then, working with some math, it can

14:09

figure out how far away an

14:11

obstacle is from the vehicle. Not only

14:13

that, it can also figure out whether or not that obstacle

14:16

is moving, or if it's stationary, or if it's

14:18

moving away from or toward the vehicle.

14:20

It can figure out all that.
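The core range calculation is simple enough to sketch out. Here's a minimal illustration of that time-of-flight math — my own toy version, not any particular lidar vendor's code:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """The pulse covers the gap twice (out and back), so halve the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def radial_speed(range_now_m: float, range_prev_m: float, dt_s: float) -> float:
    """Two successive ranges tell you whether the obstacle is closing in
    (negative) or moving away (positive), and how fast."""
    return (range_now_m - range_prev_m) / dt_s

# A reflection that returns 400 nanoseconds later means something ~60 meters out.
print(range_from_time_of_flight(400e-9))  # ~59.96 meters
print(radial_speed(59.0, 60.0, 0.1))      # -10.0 m/s: it's approaching
```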

14:23

And I've talked about that in past episodes, so I

14:25

won't get into the full technical details here,

14:27

but it was one of those key components

14:30

that's used in some, but not all

14:33

vehicles that are falling under this

14:35

autonomous car development. It's interesting

14:37

because there are lots of different companies that are

14:40

working on autonomous cars. They are

14:42

not all relying on exactly the

14:44

same technologies to achieve

14:46

that goal. Some of them are much more heavily

14:48

focused on optical cameras. Some

14:51

of them are more focused on things like lidar

14:53

and other sensors. Some of them

14:55

involve a whole, you know, slew

14:58

of different technologies that are meant

15:00

to be both uh, you know, primary

15:02

systems and redundant systems. So it's really

15:04

interesting, and it was

15:06

really impressive to see these teams complete

15:08

the Urban Challenge, but again, it didn't immediately

15:11

make everyone think driverless cars would be available

15:13

right away. The challenges,

15:16

while impressive, didn't

15:18

compare to what the average human driver

15:21

deals with on a regular day. The

15:23

competition times were pretty long. The

15:25

average speeds were all below fifteen

15:28

miles per hour, so they're all below twenty

15:30

four kilometers per hour. At that speed,

15:32

it was clear that these vehicles were just the earliest

15:35

incarnations of technologies

15:37

that would power autonomous cars in

15:39

the future. So they were erring on the

15:41

side of caution, which frankly, you

15:44

want in the first place. You don't want to see

15:46

a lot of people say let's take some

15:48

chances when you're talking about

15:50

vehicles. I mean, there are human lives at stake.

15:53

Meanwhile, another narrative

15:55

drives home, pun intended, why

15:58

a lot of folks got really hyped up about

16:00

autonomous cars. It's also a

16:02

sobering line of thought.

16:05

And I'm talking, of course, about the

16:07

frequency of fatal car accidents

16:10

and how many of them can be traced

16:12

back to human error. Now,

16:14

getting global statistics is pretty

16:17

tough on this, so I'm going to focus on the United

16:19

States because we have a lot of organizations

16:22

in the US that track these kinds

16:24

of numbers, and you can kind of get

16:26

an idea of how big the problem

16:28

is. So in 2018, the

16:30

National Safety Council released a report

16:33

that stated an estimated forty

16:35

thousand people had died in

16:37

car accidents in the United States. Uh,

16:40

that actually amounted to a decline of

16:42

one percent from two thousand seventeen.

16:45

That was when just over forty thousand

16:48

people died. Another four and a

16:50

half million people on top of that had

16:52

become seriously injured in car

16:54

crashes in 2018. Meanwhile,

16:58

the National Highway Traffic Safety Administration

17:01

in the United States said that ninety-four percent of

17:04

serious car crashes result

17:07

from human error or dangerous

17:09

choices. So, in other words, mechanical

17:12

failures only contribute a

17:14

very small percentage to the overall numbers.

17:16

When it comes to serious car accidents,

17:19

most serious car accidents aren't caused

17:21

by a tire blowout or

17:24

you know, a car failing in

17:27

some way. They're caused by humans

17:29

doing something wrong, whether

17:32

it's totally by accident or

17:34

someone just makes a really bad decision, like

17:36

they think, oh, sure, there's

17:38

no dashed line here, but I'm gonna go ahead

17:40

and try and pass this person on this windy

17:43

rural road because I bet nobody's

17:46

coming the other way. That's what we would call

17:48

a bad decision. So, says

17:51

the tech optimist. If you

17:53

could create autonomous cars

17:55

that operate safely, you could eliminate

17:58

the vast majority of car crashes

18:01

and thus fatalities on the

18:03

road. You just remove the human error

18:05

element, and suddenly you're talking

18:07

about a staggering result,

18:10

and that is an incredibly powerful

18:13

motivator. Tens

18:15

of thousands of people wouldn't

18:18

die each year from these car

18:20

accidents. Millions more would

18:22

never be injured or affected by

18:24

the tragic loss of a loved one from

18:27

an accident. Then you start

18:29

moving outward, you go out another

18:31

circle. You think of this as a ripple effect and

18:33

you think, imagine all the contributions

18:36

those people might make in the future

18:38

that they'll get a chance to make because

18:41

they wouldn't have had this terrible car

18:43

crash. These are things we

18:45

never would see come to fruition

18:47

if they were to get in a fatal car crash,

18:50

and it becomes this butterfly effect issue.

18:52

And of course, we want to make

18:55

the roads safer for everyone. Now,

18:58

I'm sure all of you have already hit upon

19:00

the major issue here. The whole

19:03

concept of people being safer in

19:05

autonomous cars is contingent

19:07

upon those autonomous cars performing

19:10

better than humans already do,

19:13

in every type of situation in which

19:15

humans find themselves driving. If

19:17

we can't get that right, then

19:20

we haven't made things safer at

19:22

all. All we would have done is

19:24

shifted the cause of the accidents

19:27

from human error to machine

19:29

error or computer error. So

19:31

we must be absolutely certain that the

19:34

vehicles we make meet a very

19:36

high standard if our goal

19:39

is to reduce car accidents.

19:41

So we have to prove that

19:44

these machines operate better than people

19:46

do in all the different situations

19:48

people find themselves driving in before

19:50

we can make any sort of declarative statement

19:53

of this is the best way forward.

19:55

Now, when we come back, I'll talk about why

19:57

this gets super tricky, and

20:00

talk about thought experiments

20:02

and things, and also some

20:04

real world scenarios that kind of illustrate

20:07

why this is harder than what it

20:09

sounds. But first, let's take a

20:11

quick break. Before

20:20

the break, I posited that a future

20:22

with autonomous cars that all

20:24

but eliminate fatal car crashes hinges

20:27

upon building driverless

20:29

vehicles that are much better at driving cars

20:31

than humans are in all situations.

20:33

Now, we could get a bit more loosey-

20:36

goosey here, but doing so brings

20:38

up some tough ethical issues. So, for

20:41

example, what if we knew that

20:44

machines were better? Right: autonomous cars

20:46

are better than human drivers, but they

20:49

are by no means perfect. So

20:51

what if we could be certain that autonomous

20:54

cars, if widely adopted, would

20:56

reduce those fatalities by half,

20:58

for example, but they would still be

21:00

at fault in the case of the other

21:03

half of fatal car accidents.

21:05

So let's say it's, you know, I don't know, some years from now, and

21:09

we have level four

21:11

autonomous cars that are pretty

21:14

reliably level four and

21:17

they are better as a whole than human

21:19

drivers are. So we've seen a vast

21:21

reduction in vehicles operated

21:23

by humans. And let's even assume that most

21:25

cars are now controlled by computers,

21:28

but let's also assume they're not perfect

21:30

now. Using the numbers from 2018,

21:33

if humans were still in control, we would

21:35

expect to see another forty thousand fatalities

21:38

due to human error. And I'm just using

21:40

that number as an example. I realize that in reality

21:43

we'd be talking about ninety-four percent of

21:45

forty thousand. But now that cars

21:48

are in control, it means half

21:50

of those accidents are totally prevented.

21:52

But we still see fatal accidents that

21:55

claim twenty thousand lives.

21:57

On the one hand, we could look at

21:59

that scenario and say, based upon

22:01

what we know from past experience,

22:04

we would have seen many more people die in

22:06

accidents if humans were actually still operating

22:09

cars. But on the other hand, that's

22:11

all hypothetical, right, I mean, we can

22:13

only know anything based on

22:15

what actually happened, not on

22:17

what might have happened if things

22:19

had gone a different way. We can't be

22:22

sure. But more than that, though,

22:24

we're still talking about twenty thousand

22:26

people losing their lives and all

22:28

the ripple effects that that makes throughout

22:31

society, and moreover, we

22:33

have machines that are at

22:35

fault for those twenty thousand lives

22:37

being lost, and the idea that people have

22:40

built machines that, through a failure

22:42

of some sort or another, resulted

22:44

in deaths is a very difficult

22:47

proposition to accept. Also,

22:49

it's just icky to think of it in

22:52

terms like that. I mean, clearly, one

22:54

death is too many. We don't

22:56

want to see anyone die in a car

22:59

accident. Having a discussion

23:01

in which you compare a smaller number

23:03

of deaths and refer to it as quote

23:05

unquote better is something that's

23:07

pretty hard for us to process. It's

23:10

easier to do it the other way, right, I mean, it's

23:12

obvious that forty thousand

23:14

people dying is worse than twenty

23:17

thousand people dying, but it's hard

23:19

to view it the other way because

23:21

anyone dying at all is awful.

23:24

Now. Part of this also really boils down

23:27

to a fear of handing over

23:29

control to a machine. I know a

23:31

lot of people balk at that idea.

23:34

They don't like the idea of not being the

23:36

actual entity making decisions

23:38

behind the wheel. Confronting them

23:41

with statistics showing how human error

23:43

leads to catastrophe doesn't

23:45

tend to sway them. I mean a

23:47

lot of people think, well, yeah, that's

23:49

other people. I am

23:52

not that person. Also, to be fair,

23:55

we don't have the evidence to show that computers

23:57

would necessarily be better, so

23:59

there's something to that right now.

24:02

Okay, let's get back to where we were in our

24:04

history. The Grand Challenges helped set

24:06

the stage for the next phase of development,

24:09

which was mostly the realm of startups

24:11

and some big companies. Namely, Google

24:14

would hire participants from the Grand

24:17

and Urban Challenges to come and work

24:19

in new divisions dedicated to creating driverless

24:22

cars. The early pioneering work

24:24

was now shifting, pun intended,

24:26

into a phase of rapid iteration,

24:29

as engineers and computer scientists

24:31

and mechanics began to refine technologies

24:34

to help make them better. So going

24:37

from the first sort of proof of concept

24:39

approach to how do we make

24:42

this a better design so it does

24:44

the thing it does but more effectively.

24:47

Google's program began in earnest around

24:49

two thousand nine, not long after

24:51

the Urban Challenge. In twenty ten,

24:54

publications began to report on

24:56

the project. So it had been secret for about

24:58

a year, maybe almost two years. Google

25:01

had been testing vehicles in and around

25:03

the Mountain View, California, headquarters

25:05

for the company. And while the vehicles still had

25:07

manual controls and they still had a driver

25:10

behind the wheel, there were at least some

25:12

segments of some of these test drives

25:14

that fell totally under the control of the vehicle

25:17

itself. It was racking up miles

25:19

of autonomous driving experience. It was gathering

25:21

data, and the people who were working in the

25:23

division used that data to further

25:25

refine their approach. By 2010, the

25:28

company had logged more than one hundred forty

25:31

thousand miles driven by autonomous

25:33

vehicles, which works out to around two

25:36

hundred twenty-five thousand kilometers. And that's

25:39

a pretty, you know, respectable

25:41

distance. But let's compare that against

25:43

the miles that were driven by human drivers

25:46

in the United States. So in 2010, US

25:49

drivers accumulated nearly

25:51

three trillion miles

25:55

Trillion! So that means

25:57

if you were to do a percentage and you were to say

26:00

what percentage of miles

26:02

Google cars drove compared to human

26:05

drivers in the US in 2010, the

26:08

Google cars would account for about

26:10

0.0000047

26:13

percent of all

26:15

vehicle miles

26:17

traveled.
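Here's that ratio worked out, just to make the scale concrete — a quick sketch using the round figures above:

```python
google_autonomous_miles = 140_000       # Google's logged total, circa 2010
us_vehicle_miles = 3_000_000_000_000    # roughly three trillion miles per year

share = google_autonomous_miles / us_vehicle_miles * 100
print(f"{share:.7f} percent")  # -> 0.0000047 percent
```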

26:20

Now, you could call it a fraction of a percent, but

26:23

even that is being generous. It's a fraction of a

26:25

fraction of a fraction of a percent. Now,

26:28

if you're familiar with the

26:30

idea of things like conducting surveys, you

26:32

know that sample size is really important.

26:35

Right, If you ask five people

26:37

a question, extrapolating those

26:39

five answers to try and apply it to

26:41

the population at large is a bad

26:43

idea. It's not a good sample

26:45

size. You don't have enough data to draw any

26:47

conclusions. It's definitely

26:49

bad science. So it makes little

26:52

sense to compare the results of autonomous

26:54

vehicles that haven't even come close

26:57

to accumulating a percentage

26:59

of the miles driven by the population at

27:01

large. You cannot compare the two because

27:05

the experience is so monumentally

27:07

different. Now, for several years,

27:10

Google's cars operated without

27:12

any accidents, at least not any

27:14

that were the fault of the driverless car

27:16

itself. There were a few

27:18

incidents, but they either happened

27:21

when the safety driver was operating

27:23

the car, so a human driver was driving

27:25

the Google car not autonomous vehicle

27:28

mode, or there

27:30

there was the fault of some other driver. Right,

27:32

someone in a totally different car got

27:35

into an accident with a Google car, and it wasn't the

27:37

fault of the autonomous system, but rather the

27:39

other driver. Those were really the only

27:41

two kind of categories of incidents that

27:43

happened in the early days of Google's

27:45

testing. So at first glance, it

27:48

looked like the driverless cars were truly

27:50

safer than a human operated vehicle. Right,

27:52

They had a much better record than human drivers

27:54

did, and it may very

27:57

well be the case that they were in

27:59

fact much much safer than human

28:01

drivers. But we have to go back to

28:03

the sense of scale here. So

28:06

in the United States, drivers travel

28:08

more than three trillion miles by vehicle

28:11

per year. I think the most recent one

28:13

was almost three point three trillion. We're

28:15

getting ridiculously high

28:17

in numbers, and there are around

28:20

forty thousand fatalities per year.

28:22

And for the sake of this example, we'll assume

28:25

all of those fatalities were caused by human error

28:27

or bad decisions, just to simplify

28:29

things. So if we do some rough math, we'll

28:32

see that that amounts to one death per

28:34

seventy five million miles driven. Now

28:37

that's my estimate just based on back

28:39

of the napkin. The actual estimate is even

28:41

more generous than that. The National Safety Council

28:44

estimates that there's one point two five

28:47

deaths per one hundred million

28:49

vehicle miles driven.
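The rough math there looks like this — a sketch using the round forty-thousand-deaths and three-trillion-miles figures from above:

```python
deaths_per_year = 40_000
miles_per_year = 3_000_000_000_000

print(f"one death per {miles_per_year / deaths_per_year:,.0f} miles")
# -> one death per 75,000,000 miles

# The National Safety Council's figure, stated the other way around:
nsc_rate = 1.25  # deaths per 100 million vehicle miles
print(f"NSC: one death per {100_000_000 / nsc_rate:,.0f} miles")
# -> one death per 80,000,000 miles
```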

28:51

So what does that mean for autonomous vehicles?

28:53

Well, they haven't driven close to a

28:55

hundred million vehicle miles. It

28:58

means those early days when we first learned

29:00

that Google had launched its project, there were so

29:02

few miles accumulated that you

29:04

can't draw any meaningful conclusions.

29:06

Now, to be fair, I don't think many people

29:09

were trying to argue that autonomous

29:11

car technology as it was in twenty

29:13

ten, was already clearly superior to human

29:16

driving. This was still an early

29:18

testing phase. This was a point where it wasn't

29:20

about showing that the technology was already

29:22

better than humans. It was rather showing,

29:25

hey, we've created technology that will allow

29:27

this car to navigate and maneuver

29:30

through human environments without

29:33

making it a problem. So it wasn't even

29:35

that the standard was higher

29:37

than human capability. It's more

29:39

like, can this machine operate

29:41

at the same level as humans within

29:44

certain parameters — pretty

29:46

restrictive parameters. Skip ahead

29:48

a few years, several companies invested

29:51

in driverless car technologies that

29:53

included big car companies — you know,

29:55

Toyota and Chrysler and GM and others —

29:57

they've all invested huge amounts of money

30:00

in autonomous car research and development.

30:02

It also included independent

30:05

startups that either were working on components

30:07

for autonomous cars, like lidar

30:10

systems, or they were attempting to

30:12

convert or build fully autonomous

30:14

vehicles themselves. And then there

30:16

were ride hailing companies, most notably

30:19

Uber, that were also investing

30:22

billions of dollars in this technology

30:24

with an eye on replacing

30:26

the fleets of human operated vehicles

30:29

that were the bread and butter of their company

30:32

to, uh, turn them all

30:34

over to robotaxis. So instead

30:36

of having human drivers over at Uber, you

30:38

know, Uber at the highest level wants

30:41

to replace them with autonomous

30:43

vehicles for reasons that

30:45

are complex but mostly come down to money.

30:48

So meanwhile, consumer vehicles

30:50

were getting more and more sophisticated,

30:53

and higher end vehicles started sporting

30:55

some really nifty features that

30:58

relate to autonomous cars or are

31:00

semi-autonomous in themselves. Some

31:02

of them are more modest, like lane assist

31:05

or braking assist safety features.

31:08

Some are a little more spectacular,

31:11

like the self-parking capabilities that some

31:13

cars have where they can park

31:15

themselves and pull out of parking

31:18

spaces all by themselves, like that's

31:20

pretty cool. They weren't intended

31:22

to make consumer cars autonomous,

31:25

but were rather positioned as sort of value-

31:27

added options for cars, like this

31:29

is something nifty this car has, other

31:31

cars don't have it. Don't you want to buy this car?

31:34

And they give a hint of what might be

31:37

in days to come. In 2013, Elon

31:41

Musk started talking about an autopilot-

31:43

like feature for cars, and sure

31:45

enough, the following year, Tesla

31:47

unveiled a driver-assist suite

31:50

of features called Autopilot.

31:52

Now, personally, and I've talked about this before,

31:55

I think naming it Autopilot was the

31:57

wrong move. I feel like the word

31:59

autopilot has a loaded

32:01

meaning to it. It conveys a sense that

32:03

the car will take care of everything

32:06

for you, and that's not necessarily

32:08

the case. In fact, that's not the case at all.

32:10

The company tried to walk that back

32:12

a bit — uh, not by renaming

32:15

it, which I think they needed to do, but

32:17

they included messages, and this

32:20

they also needed to do. The messages said

32:22

drivers were not meant to remove

32:24

their hands from the wheel or to take

32:26

their attention away from the road, that

32:29

these systems can assist,

32:31

but they don't replace the need for a driver,

32:34

and you have to agree to that

32:36

before you can enable the

32:38

Autopilot feature. So the

32:41

goal was saying, well, you have to acknowledge

32:43

the fact that, no, this is not meant

32:45

to be an autonomous car, and not

32:47

to go off on too much of a tangent, but I

32:49

feel as though Elon Musk might be a little

32:52

too aggressive with his

32:54

projections about autonomous cars.

32:57

And I don't mean to suggest that Elon Musk and

32:59

Tesla are interchangeable. I

33:01

do see that happening a lot in tech circles,

33:03

where people will use one or the other

33:06

interchangeably, and they are two different

33:08

entities. But maybe Tesla

33:11

the company's bravado stems

33:13

from Elon Musk's own personality.

33:16

But whatever the case, Autopilot has

33:18

proven to have its own limitations,

33:21

and we saw that manifest in some

33:23

rather high profile and tragic

33:25

accidents. Beginning in 2016,

33:29

there have been several fatal accidents

33:31

involving Tesla vehicles operating

33:33

in Autopilot mode. The first

33:35

one took place on January twentieth, two

33:38

thousand sixteen, in China, and

33:40

the most recent examples I know about

33:42

took place on December twenty-ninth, two

33:45

thousand nineteen, and there were actually two

33:47

crashes with fatalities that day involving

33:50

Tesla vehicles reportedly engaged

33:52

in Autopilot. I say reportedly because

33:55

I don't have access to all the data, I

33:57

don't know if conclusively

34:00

they've discovered that both of these vehicles were

34:02

actually operating in Autopilot mode. One

34:04

of these happened in California and the other

34:06

happened in Indiana, both in the United

34:08

States, on December twenty-ninth.

34:11

Now, Tesla states that Autopilot is

34:13

meant as a driver-assist feature

34:16

and it's only semi-autonomous. But at

34:18

the same time, Elon Musk has said repeatedly

34:20

that his goal was to get a fully autonomous

34:22

vehicle on the road by the end of

34:24

twenty nineteen, which now has been pushed

34:26

back to sometime in the first quarter of 2020. So

34:30

there are some conflicting messages coming

34:32

out, since a fully autonomous car

34:34

and I'm talking about something that we would at least classify

34:37

as level four, if not level five, is

34:39

well beyond just a driver-assist

34:42

mode. And I should also add that Tesla

34:45

drivers have a responsibility to use

34:47

these features safely and as

34:50

intended. If someone is

34:52

taking their attention off the road, or

34:54

they're sitting back from their steering wheel,

34:57

or they're taking a nap, or they're watching Netflix

34:59

or whatever, that's dangerously

35:02

irresponsible behavior, and they

35:04

are accountable for it. I don't

35:06

want to give the implication to you guys

35:08

that I think Tesla the company is fully

35:11

to blame in this case. I actually think it's

35:13

a shared responsibility, and that

35:15

you've got some drivers who are eager to

35:17

test out admittedly really cool

35:19

and technologically advanced features,

35:22

and you have a company that might message

35:24

out these features in a way that isn't

35:27

perhaps the most realistic or responsible

35:29

method. It's a really bad

35:32

combination, right. You've got people

35:34

who are tech heads who are eager

35:36

to play with the newest stuff. You've got a

35:38

company that's built its reputation on creating

35:41

super cool new stuff. It's

35:44

only natural that,

35:46

when you combine those two, you can

35:48

get some bad situations if they haven't been

35:50

messaged properly. And I really feel

35:52

that Tesla bungled this — that

35:55

the rollout needed to be done in such a way

35:58

where there was never the implication

36:01

that this was an autonomous

36:03

mode. Uh, saying hey, it's

36:05

not autonomous after you've already called it Autopilot

36:07

and put the idea into people's heads is

36:10

a little late in the game. So I think

36:12

that all parties here

36:14

share accountability. It's not just

36:17

Tesla the company's fault, and it's not

36:19

entirely the drivers' fault, although I

36:21

think it's more their fault than the company's. Honestly,

36:24

I mean, we're all adults, right? You should

36:26

be, if you're driving a car. And if you're an adult,

36:28

you should be able to make the determination of hey,

36:30

this is a bad idea. I should

36:32

also add that Tesla is

36:34

not the only company that has had autonomous

36:37

or semi autonomous vehicles involved in

36:39

fatal accidents. There was a case in

36:41

Tempe, Arizona, involving a Volvo

36:44

that had been converted into a semi-autonomous

36:47

vehicle that was being

36:49

operated under Uber and

36:51

that car hit a pedestrian while in autonomous

36:53

mode, and the pedestrian died

36:55

as a result of that accident. So

36:58

Tesla is not the only company

37:00

that has had tragedy befall

37:03

it due to you know, failures

37:05

in autonomous systems. Getting

37:07

back to the scale argument

37:10

for a second, when we're talking about autonomous

37:12

systems allegedly at fault for accidents

37:15

that lead to fewer than

37:17

a dozen deaths, you could say, like,

37:19

well, it's all tragic. You

37:21

never want to see anyone die.

37:23

One death is really too many, but still

37:26

twelve — less than twelve — that's so much

37:28

fewer than you know, forty thousand. And

37:30

you might be tempted to say these are tragic accidents,

37:33

but if you look at how many are caused by humans, there's really

37:35

no comparison. But once again, you

37:37

have to remember that humans account for way more

37:39

vehicle miles traveled by several

37:42

orders of magnitude. So

37:44

really the only way you could compare the two is

37:47

if you had autonomous systems driving

37:49

as many miles as humans are

37:51

driving, and then you'd have to see if

37:53

they still stacked up favorably, if those numbers

37:56

were still matching up or still mismatched,

37:59

like if autonomous cars still accounted for, you

38:02

know, uh, significantly fewer

38:04

accidents. But we can't say that because

38:07

the autonomous cars are driving far fewer

38:09

miles than humans are. So

38:12

it is true that most accidents involving

38:14

autonomous vehicles seemed to be the fault

38:16

of human drivers. You know, it's not like

38:18

most of the accidents we hear about were caused

38:21

by the autonomous vehicles themselves. It

38:23

tends to be that someone else,

38:25

some other human, caused the accident.

38:29

But in the case of these

38:31

fatalities, it does look like it was the

38:33

autonomous system at fault, and that's truly

38:36

truly concerning. And

38:39

also, you know, when

38:41

it's a person who's at fault.

38:44

We understand that people make mistakes, and

38:47

we can feel, at least in some

38:49

cases, we can feel some sympathy for a person

38:52

where perhaps the situation was truly

38:55

out of their control, that

38:57

the situation was particularly

39:00

extreme or unusual,

39:03

and so we can feel some sympathy for the

39:05

person. But when it's a machine, then

39:07

we've already surrendered control up to it,

39:10

and that's where it

39:12

gets particularly scary. You

39:14

know, we have to trust in the machine, and

39:16

when the machine betrays that trust

39:19

by failing, that's a big problem. So

39:21

what happens when there are no controls

39:23

at all that humans can access? More

39:25

on that in just a moment. But first, let's take another

39:28

quick break. One

39:36

of the challenges autonomous car companies

39:38

and engineers have faced is how do you

39:40

balance between computer and manual

39:43

control of a car? You know, how

39:45

should control switch from one

39:48

to the other. When should an automated

39:50

system take over to avoid an

39:52

accident like a collision

39:54

prevention system, or when should

39:56

a driver be able to override autonomous

39:58

commands and bring the vehicle under manual

40:01

control. Doing this is not as

40:03

straightforward as you might think, and

40:05

doing it in a way that's safe

40:08

and has a seamless transition of control

40:10

is really hard. But what

40:13

if there's no question about

40:16

it at all? Because there are no controls

40:18

to take? See, back in 2014, Google

40:21

showed off a driverless car prototype

40:24

that had no steering wheel, had no

40:26

accelerator, no brake pedal, so

40:28

there were no controls for a human to take

40:30

over. The car would only operate

40:32

autonomously because there were no

40:34

other options. The prototype

40:36

worked with a smartphone app and acted as

40:39

sort of a ride hailing or robo

40:41

taxi service. Users could

40:43

summon a car using the app and

40:45

they would indicate where they wanted to go within

40:48

a very restricted range

40:50

of operation. Like, it was geo-

40:52

fenced, so you couldn't go beyond

40:55

a certain border. That

40:57

was pretty limited, and that

41:00

meant that the vehicle had

41:02

a lot of variables reduced, right?

41:05

It cut back on the types of conditions

41:07

and routes and situations the car

41:09

might encounter, and thus

41:12

made the problems of

41:14

having an autonomous car slightly

41:17

less complicated. There are still opportunities

41:20

for complications, but you've drastically

41:23

reduced them because you've reduced the variables.
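At its simplest, a geofence is just a "stay inside this boundary" check. Here's a toy sketch of the idea — a circular fence around a made-up center point, not how any particular company actually draws its service area:

```python
import math

# Hypothetical service area: a center point and a radius in kilometers.
CENTER_LAT, CENTER_LON = 37.422, -122.084
RADIUS_KM = 3.0

def inside_geofence(lat: float, lon: float) -> bool:
    """Haversine distance from the center; True if within the service radius."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(CENTER_LAT), math.radians(lat)
    d_phi = math.radians(lat - CENTER_LAT)
    d_lam = math.radians(lon - CENTER_LON)
    a = math.sin(d_phi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a)) <= RADIUS_KM

# A requested destination outside the fence would simply be refused.
print(inside_geofence(37.425, -122.080))  # True: a few hundred meters from center
```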

41:25

Well, the vehicle used an electric

41:27

motor that was good for about one hundred miles

41:30

of driving per charge, and it boasted

41:33

a top speed of twenty five miles per hour. So

41:35

this little car would only really be suitable for transportation

41:38

in restricted situations, such as the

41:40

campus of a big company like I don't

41:42

know, Google. It wasn't intended

41:45

as a practical vehicle for widespread adoption,

41:47

but rather another iterative step

41:50

towards fully autonomous cars. The

41:52

robotaxi vision is one that tends

41:54

to be the most common across the autonomous

41:56

car space. That's largely because the technology

41:59

used to give cars autonomy — you know, the

42:01

sensors, computers,

42:04

robotic systems, that kind of stuff they

42:06

don't come cheap, and a vehicle would

42:08

cost significantly more than a

42:10

manually operated vehicle a traditional

42:13

car, So most experts agree

42:15

that the future of autonomous cars, at least

42:18

in the near term, will

42:20

be in fleets that are operated by companies

42:22

like Uber or Lyft. They will

42:24

be ride-hailing vehicles or robo

42:26

taxis, and they will take passengers to

42:28

their destinations, and then those

42:31

cars will then move on to pick up their next

42:33

fare, or they'll return to some sort

42:35

of HQ for recharging or

42:37

maintenance or whatever. It's unlikely

42:39

that we're gonna see autonomous vehicles offered up

42:42

for private ownership right away for the

42:44

most part, due to the prohibitive

42:46

expense of this additional technology.

42:49

The Google experiment pointed out both

42:51

the advances of the tech and the

42:54

limitations of autonomous car technology.

42:57

Yeah, the car had no controls, which

42:59

is what you would expect only if you had

43:01

a level five autonomous car. But

43:04

it also had very strict geo

43:06

fencing restrictions and operational

43:09

restrictions, so it couldn't go very fast,

43:11

it couldn't venture very far, it wouldn't

43:13

likely encounter unusual situations.

43:16

So because of that, it wouldn't be

43:18

level five anyway, because you've

43:20

limited the scenarios

43:23

where it would be operating in the first place. It

43:26

would not be driving into all the different situations

43:28

that a human driver would encounter. A

43:31

truly autonomous vehicle would need to be

43:33

able to handle everything, all

43:35

sorts of unpredictable situations. The average

43:37

person isn't likely to encounter a truly

43:40

unusual experience on any

43:42

given drive, right, It's not

43:44

like if you drive down the road you're going to

43:46

see every single outlier.

43:49

That's very unlikely. However,

43:52

when you have a collective three trillion

43:54

vehicle miles traveled per year, you're

43:57

bound to get some pretty extreme

44:00

situations somewhere in those

44:02

three trillion miles. So you might have a

44:05

person who has to drive through a dangerous environment,

44:07

like maybe mudslides

44:09

are coming across a road, or when

44:11

people were evacuating parts of California

44:13

that were affected by wildfires, or

44:15

there might be you know, animals in

44:17

the road. There could be people in

44:19

the road. Weather effects

44:22

can be unpredictable, and they can change driving

44:24

conditions rapidly. There are all

44:26

sorts of things that humans encounter every

44:29

year, with varying degrees of

44:31

success and maneuvering around or

44:33

through them. And if we actually

44:35

do see autonomous cars take up more

44:37

of the car landscape, those autonomous

44:40

cars are also going to encounter those situations

44:42

too. It's just a matter of the odds, you

44:44

know. And there are a lot of unanswered

44:47

questions about how these cars are going

44:49

to deal with those situations when they arise,

44:52

and that includes the famous trolley problem

44:54

dilemma. Now, in the classic trolley problem,

44:57

you're presented with a hypothetical situation

45:00

in which a trolley is out of control. It's

45:02

moving down the tracks, uh, and it cannot

45:04

stop. So if you do nothing,

45:07

if you do not act, the trolley

45:09

will continue down the track and it's

45:11

going to hit a group of five people.

45:14

There's no doubt it will kill those five

45:16

people. However, you're next to a lever,

45:18

and if you pull that lever, you will send

45:21

the trolley down a side track, so

45:23

it will miss the five people, but it will definitely

45:25

hit and kill one person. So

45:28

if you do nothing, five people die, But

45:30

if you act, one person dies.

45:32

So does making the choice to pull the lever

45:35

amount to murdering that one

45:37

person? Did you just choose to kill

45:39

that person. Does doing nothing mean

45:42

that you've murdered five people or does

45:44

it just mean that you allowed five people to die?

45:46

Is there any meaningful difference between those two

45:48

things. Well, these are all questions

45:50

and ethics, but with autonomous cars it

45:53

gets into less hypothetical territory. You

45:55

have to actually start to answer these questions.

45:57

Cars may very well encounter

45:59

situations in which there is no

46:02

way to avoid injuring or

46:04

killing someone. So in those

46:06

cases, what do the cars do? You

46:08

know, how do the cars choose

46:11

which person is to be put at

46:13

risk? How do they decide what

46:15

action to take? Do they try to protect the people

46:18

who are inside the car at all costs,

46:20

so in other words, yeah, we're gonna make this decision

46:23

which will protect the people who are inside the car.

46:25

But anyone else? They are

46:27

fair game. Or do they try to protect people

46:30

who are outside the car who maybe don't

46:32

have the benefit of the car's other safety

46:34

features. Maybe you build it into

46:36

an autonomous car that the people

46:39

inside the car are allowed

46:41

to encounter a bit more risk because

46:43

your thought is, well, the inside of the car

46:45

is very safe, so we want to make sure we

46:47

protect, say, a pedestrian or bicyclist.

46:50

We don't want the car to hit them because

46:52

they will suffer way more damage than

46:55

the people inside would. So we're going

46:57

to make that decision. That's a possible choice

46:59

too. But these are not necessarily answered

47:01

questions. They are questions that are being answered

47:04

as people are designing these vehicles. One

47:06

benefit that autonomous cars might

47:09

have is that organizations overseeing

47:11

them could, at least in theory, use

47:14

the collective information across an

47:16

entire fleet of autonomous cars

47:18

to improve performance of each vehicle

47:20

within that fleet. So if

47:23

one car were to encounter a really

47:25

unusual experience, engineers

47:27

could take the data from that experience

47:30

and tweak the behavior of all the cars

47:32

across the fleet. So when

47:34

one individual encounters something

47:37

new, everyone learns

47:39

from that experience.
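In the abstract, that fleet-learning loop is easy to sketch. Everything here is hypothetical — made-up names, not any company's actual system — just the shape of the idea: one car reports an edge case, the shared model gets a new version, and every car pulls the update:

```python
from dataclasses import dataclass, field

@dataclass
class DrivingModel:
    version: int = 0
    known_edge_cases: set = field(default_factory=set)

class FleetOperator:
    """Hypothetical central service: one car's lesson becomes every car's lesson."""

    def __init__(self) -> None:
        self.model = DrivingModel()

    def report_incident(self, description: str) -> None:
        # One vehicle encounters something new...
        self.model.known_edge_cases.add(description)
        self.model.version += 1

    def latest_model(self) -> DrivingModel:
        # ...and the whole fleet picks up the updated behavior.
        return self.model

operator = FleetOperator()
operator.report_incident("mudslide partially covering the lane")
print(operator.latest_model().version)  # 1 -- now shared fleet-wide
```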

47:41

It's sort of like the Borg in Star Trek — it's

47:44

a collective, and that's a big advantage

47:46

over human beings, right? Because when it comes to

47:48

humans, the person who experienced

47:51

something, they might learn from

47:53

that experience, but that

47:55

learning, that knowledge doesn't automatically

47:58

spread across the population in general.

48:01

So in that way, autonomous

48:03

cars can have a big advantage over human drivers,

48:05

if that is used properly.

48:08

On the flip side, when it comes to something

48:10

as potentially deadly as a vehicle,

48:13

it's pretty cavalier to say, well, the

48:15

cars will learn as they go, and we'll apply

48:17

that knowledge to all the vehicles. They'll get better

48:19

the longer they drive, because

48:22

if learning also includes accidents

48:25

that could potentially result

48:27

in injuries or fatalities, that's

48:29

a really steep price to pay for

48:32

knowledge. And we're seeing more companies

48:34

develop vehicles that have no manual

48:37

control systems. You know, Google came out with

48:39

theirs in 2014, but that's

48:41

not the only case of it. In January,

48:44

GM's autonomous car division, which is called

48:46

Cruise — originally it was an independent

48:48

startup, but GM gobbled them up a couple

48:50

of years ago — unveiled a driverless

48:52

car called Origin. And the Origin,

48:55

like Google's prototype, has no steering

48:57

wheel, has no accelerator, no brake pedal.

49:00

It has seats that all face inward.

49:02

They're kind of like — you know, imagine two benches

49:05

with backs, but the two benches

49:08

are facing each other, so the people

49:10

sitting in what would be considered the front

49:12

of the vehicle would have their backs to the

49:14

windshield and they'd be looking back

49:16

at the people sitting in the back seats, who'll

49:18

be looking forward. Uh. Now,

49:21

it's about the size of a crossover SUV,

49:23

and that means there's a pretty good amount of space inside

49:26

the vehicle. So while you are facing

49:28

the other folks — like, if you're in the front seat,

49:31

you're facing the folks in the back and they're facing you. Because

49:33

there's so much space, you're not likely to accidentally

49:36

kick each other or anything. It looks pretty roomy.

49:38

On top of that, the car has a cool

49:41

little keypad on the doors. And

49:43

the idea is that a production model

49:45

of this car would be used like a robotaxi.

49:47

So you would hail a ride on your smartphone

49:50

and this little robo car would come driving

49:52

up to you, and then it would give

49:55

you a multi-digit passcode.

49:57

You would get one on your app and you would

49:59

look at that passcode, and you would type the numbers into

50:01

the keypad and that would open the doors.

50:04

So that way, you know, some unauthorized

50:06

person wouldn't just jump into your car and

50:08

then go gallivanting off without you.

50:11

You would be able to unlock the

50:13

car yourself because you had a one-time-use

50:16

key code.
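
As a rough illustration of that one-time passcode flow, here is a short sketch; the names and the six-digit format are assumptions, since Cruise has not published how its system actually works.

    import secrets

    class RideUnlock:
        def __init__(self) -> None:
            self.active_codes = {}  # ride_id -> passcode

        def issue_code(self, ride_id: str) -> str:
            # A fresh multi-digit code is generated when the ride is hailed;
            # the rider sees it in the app and the car stores the same value.
            code = f"{secrets.randbelow(10**6):06d}"
            self.active_codes[ride_id] = code
            return code

        def try_unlock(self, ride_id: str, entered: str) -> bool:
            # The keypad entry must match, and the code is single-use:
            # it is discarded after a successful unlock.
            if self.active_codes.get(ride_id) == entered:
                del self.active_codes[ride_id]
                return True
            return False

    doors = RideUnlock()
    code = doors.issue_code("ride-42")
    assert doors.try_unlock("ride-42", code)      # doors open once
    assert not doors.try_unlock("ride-42", code)  # code is no longer valid
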

50:18

That's a decent concept for a working robotaxi, but the fact remains

50:21

that we haven't hit level five autonomy

50:23

yet. At best, we have limited level

50:26

four. Most of the vehicles we've seen in testing

50:28

can perform autonomously, but only

50:30

with pretty tight restrictions, like only along

50:33

specific predefined routes or

50:36

within very strict geofencing,

50:38

or at particular times of the year or

50:40

even particular times of the day.
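
A toy sketch of that kind of gating, with an invented bounding box and service window; real deployments use far richer operational-design-domain checks than this.

    from datetime import time

    # Hypothetical service-area bounding box (lat/lon) and service hours.
    GEOFENCE = {"lat": (33.70, 33.80), "lon": (-84.45, -84.35)}
    SERVICE_WINDOW = (time(9, 0), time(17, 0))

    def may_engage(lat: float, lon: float, now: time) -> bool:
        # Autonomous mode is only allowed inside the geofence...
        inside = (GEOFENCE["lat"][0] <= lat <= GEOFENCE["lat"][1]
                  and GEOFENCE["lon"][0] <= lon <= GEOFENCE["lon"][1])
        # ...and during the permitted time-of-day window.
        in_window = SERVICE_WINDOW[0] <= now <= SERVICE_WINDOW[1]
        return inside and in_window

    print(may_engage(33.75, -84.40, time(10, 30)))  # True: in area, in hours
    print(may_engage(33.75, -84.40, time(22, 0)))   # False: after hours
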

50:43

Again, that helps reduce the variables

50:45

that the car might encounter on any given

50:47

day, and it gives it the best chance to operate

50:49

safely, but that really limits

50:51

how useful the cars are in practical

50:54

applications. For autonomous cars

50:56

to work as an alternative to manually controlled

50:58

vehicles, they need to run in pretty much all

51:01

the same conditions that regular cars

51:03

do without restrictions, and we just

51:05

aren't there yet, and we might not

51:07

be for several more decades.

51:10

The Prognos Research Institute

51:12

actually identified four factors that stand

51:15

in the way of autonomous vehicles. They

51:17

include technological maturity, which

51:19

is what I was just talking about. Infrastructure

51:21

development, so having you

51:24

know, cities that are designed

51:26

in such a way that they can allow for

51:28

autonomous cars. Then there's the inertia

51:31

of the fleet. This means that you

51:33

know, there's a ton of manually controlled

51:35

vehicles out in the world already, right? The

51:37

vast majority of cars that are out there

51:40

are manual control vehicles.

51:42

They might have some limited autonomy, but for the most

51:44

part, they're controlled by humans. It would take a

51:46

very long time before autonomous

51:48

vehicles represent a significant percentage

51:51

of the overall vehicles on the road, let

51:54

alone a majority. So it

51:56

will take many, many, many years to

51:59

wean off of human-controlled

52:01

cars and go to autonomous cars, barring

52:04

any legislation that outlaws

52:06

vehicles, or human

52:08

controlled vehicles, I guess I should say. And then finally

52:10

we have legal hurdles

52:12

to overcome: the regulations that

52:15

are going to be coming out around driverless

52:17

cars. We're seeing a lot of money

52:20

poured into research and development to push

52:22

the technological limits further

52:24

and to establish the foundation for truly

52:27

autonomous vehicles. But I wonder

52:29

if these various companies and their

52:31

investors are really in it for the

52:33

long haul, so to speak, because

52:35

I suspect it's going to take a pretty long

52:38

time to get to a point where we feel there's

52:40

really reliable, safe level

52:42

five autonomous vehicles in the world, let

52:45

alone a world in which governments have also

52:47

agreed and have caught up

52:49

and have defined the legal parameters

52:52

for the operation of these vehicles. Because

52:54

you know, it's one thing to prove the technology works.

52:56

That doesn't necessarily mean that technology will

52:58

be legal to operate, right?

53:01

Like, governments tend to move a lot more slowly

53:03

than technology does. So if investors

53:06

are willing to play the long game, then

53:09

I think their investments will ultimately pay

53:11

off. But it's going to take

53:13

a long time, which means

53:16

lots of repeated investments are going to be

53:18

required to keep these companies going, to keep

53:20

them innovating and improving

53:22

technologies. And meanwhile, there's

53:24

not going to be an actual market for them

53:26

to capitalize on outside

53:28

a few, you know, test programs that

53:30

don't really count, because there's

53:33

no way that the revenue they're generating

53:35

is actually eclipsing

53:37

the cost of operation. It's

53:39

got to be a money-losing proposition right

53:41

now in all the different test cases. At

53:44

scale, with fully

53:47

legal vehicles that are embraced

53:50

by the general public, sure, it could

53:52

work from a financial standpoint.

53:54

Right now, though, it's all just proof

53:56

of concept that hasn't

54:00

seen full fruition. Now, I still

54:02

believe in autonomous cars. I

54:04

still believe they will ultimately make the roads

54:07

safer and reduce the number of deaths

54:09

and injuries from car accidents. I

54:11

just think it's going to take a lot longer than

54:14

I had previously imagined.

54:16

And that's not necessarily a bad

54:18

thing. This is an important enough issue

54:21

that we have to make sure we get it right,

54:23

that we can deploy vehicles in ways that

54:26

make sense, that are truly safe,

54:28

that are ethical, and that

54:30

are in as ideal an implementation

54:33

as we can manage.

54:35

And we have to make sure

54:37

that it makes financial sense

54:39

too, right? We need to make sure

54:41

that it truly represents an affordable way

54:44

to get around that eliminates the need for stuff

54:46

like garages and parking

54:48

lots in dense urban centers. Those

54:50

areas could be reclaimed and used for

54:52

other stuff, and that stuff

54:55

might be far more productive than just being a

54:57

storage place for a car when it's not

54:59

in use. Personal ownership could

55:01

really be on a serious decline in that kind

55:04

of future, replaced with on-demand

55:06

car service, and the cars

55:08

that are in service would be used much more frequently

55:11

rather than just sitting idle and taking up space

55:13

for the vast majority of their existence. If you think about

55:15

your average car,

55:17

the amount of time you're

55:19

actually using it versus the amount of time

55:21

it's just sitting there doing nothing is

55:23

staggering, right.
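
A quick back-of-envelope makes the point, with assumed numbers: roughly an hour of daily driving for a privately owned car versus, say, sixteen in-service hours for a shared one.

    HOURS_PER_DAY = 24
    private_car_hours = 1   # assumed typical daily driving time
    robotaxi_hours = 16     # assumed in-service time for a shared car

    print(f"private car: {private_car_hours / HOURS_PER_DAY:.0%} utilized")
    print(f"robotaxi:    {robotaxi_hours / HOURS_PER_DAY:.0%} utilized")
    # private car: 4% utilized
    # robotaxi:    67% utilized
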

55:25

So if you're able to make more use of the vehicle,

55:29

then it's a more efficient

55:31

use of the technology. It's a better investment

55:35

for all the materials

55:38

that went into making that vehicle. So

55:40

you could argue, well, this makes more sense from

55:43

multiple perspectives if we're

55:45

able to make better use of this technology

55:47

and not just have it sitting someplace taking up

55:49

room. But

55:52

a lot of things have to fall into place

55:54

for that future to come true. I

55:58

think it's a future that

56:00

makes sense, but only if we can get the

56:02

tech just right. And before then,

56:05

what we're really risking is making

56:08

bad decisions that just make

56:10

it harder to get to the

56:12

right future. So we have to be

56:14

careful in how we're testing these things.

56:16

We have to minimize risk while

56:19

maximizing our ability

56:22

to learn things, which is a very

56:24

tricky thing to do, because ultimately, you do

56:26

have to start deploying autonomous cars

56:28

into populated centers, or

56:31

else all you've done is create something that

56:33

works really well in the lab, but

56:35

not well in the real world, and

56:37

that would be useless to us because most

56:40

of us don't live in a lab. I know

56:42

I don't, not since

56:44

two thousand fourteen, but

56:46

that's another story, guys. If

56:49

you have any suggestions for future topics

56:51

for tech Stuff, reach out to me.

56:54

You can find me on social media.

56:56

I'm on Facebook and on Twitter with

56:58

the handle TechStuff HS

57:01

W, and I'll talk to you again

57:04

really soon. Tech

57:09

Stuff is a production of I Heart Radio's How Stuff

57:11

Works. For more podcasts from I Heart

57:13

Radio, visit the I Heart Radio app,

57:15

Apple Podcasts, or wherever you listen

57:18

to your favorite shows.
