Learning from Good and Bad Outcomes
Released Tuesday, 31st March 2020

Episode Transcript
0:00

It's the middle of winter, a

0:02

blizzard hits. Jackie,

0:04

give us some blizzard sounds. Thank

0:08

you. There's two people, Sarah

0:10

and Doug, they both have a

0:12

trip planned to a destination, a couple

0:14

of hours away. However,

0:16

with this storm comes a

0:19

no unnecessary travel

0:20

warning due to icy road conditions.

0:24

Basically the government is saying you shouldn't be

0:26

driving right now, both

0:29

Sarah and Doug make the decision to go

0:31

on their trip anyways. So

0:33

they made the same choice to brave the elements

0:36

and go on their trip. Sarah

0:39

makes it to her destination safely.

0:42

Doug is in a ditch

0:44

halfway to his destination, waiting

0:47

on AAA to come tow him out.

0:49

Nope . Both of them

0:50

Chose to drive in this whiteout condition. Right,

0:53

Right. But okay. Let's put ourselves

0:55

into Sarah's shoes. Sarah's hanging

0:57

out with her friends. She feels like,

0:59

yo, I'm so glad I

1:01

made the choice to go. I made a good decision

1:04

to come on this trip because I made it right.

1:06

How do you think Doug's feeling? Doug's over here like,

1:08

why did I decide to drive in this,

1:11

idiot? I should have listened to the warning.

1:14

Now think about this though. They made

1:16

the same decision. Sarah

1:19

thinks it was a great decision and Doug

1:21

thinks it's a crappy decision, but it was the same

1:23

one.

1:24

So this story is an example of what's

1:26

called the outcome bias. And what's

1:28

happening here is that we end up judging

1:30

our process based on the outcome that it leads

1:32

to. Right? We

1:33

Assume it's like, oh, the quality

1:35

of my decision is based off what happened.

1:37

Right? Sarah, good decision, Doug

1:40

bad. But if you just

1:42

zoom out, they made the same. They made the exact

1:44

same decision. Okay. So we can see how outcome

1:46

bias is shaping how they

1:49

are judging their decisions. But this

1:51

is actually a bigger phenomenon that

1:53

affects all of us in big and small ways.

1:55

And when it comes to learning, this is

1:58

I think an essential tool to be aware

2:00

of. So that's what we're going to do.

2:05

Welcome to The Learner Lab podcast.

2:07

I'm Trevor Ragan. I'm Alex Belser.

2:10

Each week, we're going to explore a topic to help us

2:12

become better learners. If you're interested

2:14

in more, you can check out thelearnerlab.com

2:16

for videos, articles, and

2:18

more. Let's go.

2:30

It's February

2:32

1st, 2015, Super Bowl Sunday. The

2:35

Seattle Seahawks are on the one yard line.

2:37

27 seconds left down

2:40

by four. Russell

2:42

Wilson begins his cadence for

2:44

what will become one of the most infamous plays

2:47

in Super Bowl history.

2:59

[inaudible]

2:59

During the game, the announcers, weren't huge

3:02

fans of this call. It's up

3:03

To the punch and I'm sorry, but

3:05

I can't believe the call. I cannot

3:07

believe the call. You've got a few weeks.

3:10

It seems like everyone agreed on

3:12

one thing . One

3:15

of the dumbest calls offensively

3:18

in Super Bowl history. That was

3:20

the worst play call in super

3:22

bowl history. The worst play call I have

3:24

seen in the history of football. That

3:27

was the most idiotic. If I live to

3:29

be 200, I'll never see anything

3:31

as dumb in my life. The worst

3:33

play call in history.

3:36

I think in a big way, this

3:38

Seahawks super bowl and the Pete Carroll

3:41

goal line call is a huge example

3:43

of outcome bias. It's showing like it's

3:45

literally the outcome and result.

3:48

We're using that to

3:50

judge his decision and process.

3:53

So like the funny thing is that commentators

3:56

during the game and sort of the narrative

3:58

the next day was this was the worst decision

4:01

in football history, Super Bowl history,

4:03

worst play ever called, but

4:05

the stats show like we found two

4:08

great articles, one on FiveThirtyEight, where they

4:10

break down, it was actually a really smart

4:12

move. Like it was better time management

4:14

gives us more plays.

4:16

The reason that people were saying it was the worst

4:18

decision was because the outcome was poor. Over-weighting

4:21

The outcome. And then a good sort

4:23

of thought experiment to prove the point

4:25

is if they score on the

4:27

pass play, Pete Carroll is a legend,

4:29

right? So sneaky. So smart. This

4:31

is the greatest. So it's , it's the

4:34

same decision, but the way

4:36

we judge it, it's totally based

4:38

on the outcome.

4:41

This is also reflected in the scientific

4:43

literature. So there was an original study

4:45

done in the 1980s by two researchers

4:47

named Baron and Hershey. They

4:49

wanted to know how knowing the outcome impacts

4:51

our judgment of a process. So they

4:53

came up with a bunch of medical scenarios where

4:56

doctors were having to perform surgeries and

4:58

they would present people with these scenarios. But

5:00

what changed from person to person is that some

5:02

of the surgeries were successful and then other

5:04

ones were not, they ended up with the patient dying

5:07

and after receiving all that information, they had

5:09

people rate the quality of the decision. What

5:11

they found is that the positive outcome group rated

5:13

the quality of decision-making higher. Whereas

5:16

the people who had negative outcomes, it was seen

5:18

as poor decision-making. So it

5:20

was the same . It's exactly the same. The only

5:22

thing that was different was that the outcome changed,

5:24

one was poor and one was positive. Sarah

5:26

And Doug all over again. It's the Pete

5:28

Carroll example. It's like, because

5:31

it didn't work. It was a bad choice.

5:33

If it would have worked, it would have been a great choice,

5:36

outcome bias. And this happens,

5:39

but literally during sports, during a particular

5:41

play. The parent in the stands: I shoot

5:43

a three and miss , oh, bad shot. You got

5:45

to pass the ball. But if I make it, yeah,

5:48

Trevor, give it to him. Yeah. It's

5:50

like, we're putting all the weight into

5:52

the outcome. It's like , uh

5:54

, it's kind of a , a trap that we fall into

5:56

in different ways. But the problem is

5:58

it's sort of our default, but

6:00

if we're looking at just learning and performance

6:03

as a whole, like there's a lot

6:05

of traps we can fall into. One

6:07

could be this overconfidence.

6:10

And in the research, they call

6:12

it like conflating luck with

6:14

skill. So it's like, oh, I

6:16

drove during the winter storm warning and

6:18

made it, I'm a skilled winter driver

6:21

when the truth is like, maybe so, but

6:23

also you were lucky, right ? And then you flip

6:25

it, the next trip, if I end up in the ditch, it

6:27

doesn't necessarily mean I'm not

6:30

skilled, but it's like, well, luck

6:32

was at play. There's a lot of other things. It's messier

6:35

than this is all luck or this is all skill. Right.

6:37

And this is sort of also , this makes me think of like

6:40

a lot of people think that flying is more

6:42

dangerous than driving is. Sure. When in

6:44

reality, like flying is actually safer,

6:46

statistically. Yeah . Statistically speaking,

6:49

it's safer. But when we're driving,

6:51

we think that we have more control and we do have control

6:54

over our own car, but what we don't have control of is

6:56

everything else that's happening. Right? So we sort of conflate

6:58

luck and skill in that scenario. That's a great example.

7:01

Professional poker player Annie Duke wrote

7:03

a fantastic book called Thinking in Bets,

7:06

and early in the book, she talks about how in

7:08

the poker world, when people fall

7:10

into the trap of outcome bias, they

7:12

actually call it resulting. For

7:14

today's episode, we actually have a professional

7:17

poker player,

7:17

My name's Casey Yance, professional

7:19

poker player. And I got 53rd

7:21

place in the main event in 2012,

7:25

To talk to us about how resulting works

7:28

in the poker world and some different strategies

7:30

that we could use to avoid it.

7:31

This is a concept that , uh,

7:34

almost epitomizes poker. The outcome

7:36

bias where people just,

7:39

if they win, they won

7:41

because of pure skill. And

7:43

if they lost, it was because of

7:45

bad luck. And so you attribute

7:48

the results, you know, if I'm winning

7:50

it's because I'm just a skillful player

7:52

who made good decisions. And if

7:54

I lost, well, I got unlucky and I

7:56

just, you know, I didn't deserve it. So

7:59

In the poker world, this would be like

8:01

, uh, I go to Vegas with my friends.

8:03

I enter like a Texas Hold'em

8:05

tournament and I win. Right. So

8:08

I come home and it's like, oh, you're a great poker player.

8:11

It's like, I am a great poker player.

8:13

I should quit my job and go pro. Right. But

8:15

I think that's a great example of like maybe

8:17

some of it had to do with skill,

8:20

but there was also a lot of luck at

8:22

play. It's like getting the right
8:23
cards. Exactly.

8:26

But if I overweight

8:28

that outcome, that I won the tournament,

8:31

therefore I'm amazing.

8:33

This is that overconfidence. And

8:35

it could send me down the wrong path. That's

8:38

another big example. Like a small

8:40

one could be, I fell into this trap

8:42

in college. Um, I

8:44

would depend on like all night benders

8:47

where I'd study for a test and then I'd pass

8:49

the test. And so I'm just like, oh,

8:52

I can just rely on these benders.

8:54

I don't need to. In some classes

8:56

I wouldn't go to class because I'm like, well, I

8:59

can just like stay up all night the

9:01

night before the test and I can do good

9:03

Enough to pass. Right ? Like you've , you've sort of assumed that

9:05

because you were able to pass that test the first

9:07

time, then that's the process you should

9:09

stick with the next time and you don't have to go to class

9:12

to get

9:12

It. I think, one, that's

9:14

not sustainable. And it wasn't for

9:16

me. Two, it's messing me

9:18

up in my other classes. Three, I'm

9:20

definitely not learning as much as I can

9:22

from this approach. And so this, this

9:25

overconfidence trap in

9:27

a way, what happens is when we see

9:29

the good outcome, we don't really dig

9:31

into the process of like, well, was

9:33

the process actually optimal? There's

9:35

probably some things we did well, but there's also

9:38

some luck at play. There might be some things we could improve,

9:40

but if we overweight the outcome and

9:42

it's like, oh, that worked, I don't need to

9:44

dissect my process. Right . Think of all

9:46

the growth opportunities I'm missing, or

9:49

I just like totally depend on

9:51

this non-optimal process, which is

9:53

robbing me of learning , uh

9:56

, in these other classes and getting

9:58

the most out of the class I'm in. Right. Uh , so

10:00

that's kind of like chasing

10:02

the short-term outcomes and sacrificing maybe

10:04

these long-term things that I should be focused

10:07

on more, that's all a by-product

10:09

of weighting the outcome too much. Another

10:14

trap. I think we can call it rinse and

10:16

repeat syndrome. So it's like, okay,

10:19

we won , I discovered the

10:21

exact formula, change nothing.

10:24

And this happens a lot with like successful

10:26

coaches. It's like we won the state championship

10:28

in 1998. Therefore I

10:30

run the same plays the same way, the same

10:32

clipboard, like don't change anything.

10:35

So this rinse and repeat is, is

10:37

totally assuming that everything

10:40

in my process led to this outcome and

10:42

I shouldn't change anything. This makes me

10:44

kind of a more rigid and

10:46

less open to experimentation and innovation.

10:49

This happens in the corporate world.

10:51

Right? Well, cause this is sort of assuming that

10:53

there's a set recipe for success and

10:56

if no luck was involved, maybe yeah.

10:59

But we're not baking muffins

11:00

Here. Exactly. And we haven't talked about

11:02

muffins yet, but that'll make sense in

11:04

a second. Now this isn't

11:06

always about like the positive outcomes . So

11:08

there's traps in the negative ones. This is

11:10

sort of the underconfidence trap.

11:12

This is when sort of you do something,

11:14

it doesn't end up well. And so you stop doing

11:16

it.

11:17

It's like, I assume like, oh, that is not

11:20

the thing. It's like, oh, I'm trying to

11:22

, uh , interact more with people

11:24

or be more sociable

11:27

at networking events. I go talk

11:29

to someone one time and they don't respond

11:31

well, it's like, screw that. Right.

11:34

And it's even like, because oftentimes, you know,

11:36

conversations are never terrible, but it's like, oh, that was just

11:38

kind of awkward. Right . And we sort of assume

11:40

that because it was awkward,

11:42

my process was awkward and I'm just, I'm an

11:44

Awkward person. Right. Uh, that was a

11:46

bad decision. I shouldn't do it again. So

11:48

that's the underconfidence. The

11:50

most common trap of the underconfidence

11:53

is I just stop doing it. But

11:56

another one that I, I have

11:58

fallen into, and that you can see playing out

12:00

in the world, is: bad

12:02

Outcome meant everything

12:04

we did in the process was bad. It's like we

12:06

lost the game. We're terrible. We

12:09

need to relook at how we practice, the drills

12:11

we run, the plays we run. But the truth

12:13

of the matter is because luck is at

12:15

play. It's like, I guess

12:18

again, sorry to go back to poker. It's like,

12:20

look, I can play a hand

12:22

really well and win. I

12:24

could play a hand really well and lose,

12:27

I could play a hand like an idiot and win. And

12:29

I could play a hand like an idiot and

12:31

lose. It's like, luck is involved. It's

12:33

not a one-to-one good result

12:35

equals good process, or good process equals

12:37

good outcome or bad process equals

12:39

bad outcome. Right. Because there's luck involved.

12:42

And that's

12:43

Sort of like the big takeaway for me is

12:45

that we should be looking at the outcome as

12:47

a source of information, but it's not

12:49

the only information that exists.

12:51

Right. If the world

12:53

was absent of luck and

12:55

randomness, resulting is

12:57

a tremendous strategy. Right. So think

13:00

of like a game like chess, right? Yeah . There's

13:02

not as much luck involved

13:04

in chess because there's no hidden

13:06

information. Right. You know, the rules of the game, there's an

13:08

optimal strategy. And based on what my

13:10

opponent does, there's kind of a right or wrong

13:12

choice. And so , uh , kind

13:14

of a good model of showing how chess

13:17

doesn't involve that much luck would be,

13:19

look, none of us listening

13:21

could go beat a Grandmaster

13:24

of chess. Right . We just can't. Right . But

13:26

all of us listening could

13:29

beat the best poker player in the world heads

13:31

up on a couple hands, right

13:33

? Because there's luck involved. There's this

13:35

hidden information there's cards. We don't see.

13:38

There could be a 2% chance of me winning. And

13:40

I do, because of luck. That's not necessarily

13:42

the case in chess, right. Now, Annie

13:45

Duke does a great job of explaining this in

13:47

her, her book, uh, Thinking in Bets,

13:50

but she's like, life is more like poker

13:53

than chess because in

13:55

life just like poker, there's hidden

13:57

information. There's uncertainty, there's

13:59

luck. And when that's the case, we

14:01

have to avoid falling

14:03

into this trap of resulting or putting

14:06

too much weight into the outcome.

14:08

Right. Um, a good way

14:10

to think about it would be the more

14:12

luck involved, the less

14:15

weight we should give the outcome. The

14:18

less luck involved, the more that outcome

14:21

is a gauge on the quality

14:23

of

14:23

Our processes . And the reason behind this is because

14:25

it's reflective of what's actually happening.

14:27

So like in chess, if you lose,

14:30

that is pretty reflective of the

14:32

decisions that you made. Right? But in poker

14:34

you could make, you know, the best

14:36

decisions. Yeah. You can make optimal decisions and still

14:38

lose because there's luck at play. Exactly.

14:40

So we call this the muffin

14:43

to lotto spectrum. Great name,

14:46

trademarked. We're making hoodies. So if

14:49

I'm baking a muffin, I

14:51

have a set recipe. And

14:54

if I follow that recipe, I'm going to get

14:56

a pretty decent muffin, right.

14:59

Because there's very, there's really not

15:01

much chance or luck. You follow

15:04

the plan, you get the muffin, right. If

15:06

I bake the muffin and it's a disaster,

15:10

I can weight that outcome. And

15:12

it is a pretty good reflection that

15:14

there was something wrong in my process. So

15:16

it's like, whoops , added double

15:18

the salt. You have to
15:19
be off on something. There's something wrong with your product.

15:22

So in that case, this relationship

15:24

of the process and outcome are

15:27

closely tied together. We can weight

15:29

that outcome more. Right . Okay. So

15:31

that's one end of the spectrum. Very little luck,

15:34

put a lot of weight into the outcome. What's

15:36

on the other? On the other side, we have the lottery.

15:38

It's like, okay, I win the lottery. That's

15:41

a great outcome. But I think all

15:43

of us listening know it's like, that is pure

15:45

Luck, right? Like you chose some random numbers. I

15:47

can't go

15:49

Create a seminar and write a book of like, here's

15:51

how to pick the winning lottery numbers. It's

15:53

like, no, bro, you can't weight

15:56

that outcome. That is not a reflection

15:59

of a good process because it was pure

16:01

luck right there. Like you're not

16:03

going to come to me and be like, Trev heard

16:05

you won the lottery. How'd you pick your numbers?

16:07

But a lot of us actually do this. Like

16:10

we don't, we might not

16:11

Do that with a lottery. Like I think that example resonates

16:13

with people, but we do this a lot with, with

16:15

other things. Like we assume like whatever we did

16:17

to get to a certain outcome, we have to do that even

16:20

in scenarios where there's a lot of luck involved.

16:22

So to use this spectrum, I

16:25

think it's actually a great tool. It's the

16:27

more luck at play, the less weight

16:29

we give the outcome and the less we

16:32

can like use it to judge our process,

16:34

right ? When the luck is sort

16:36

of minimized, then that relationship

16:39

is more one-to-one and we can use an

16:41

outcome to judge process, right ? Bad muffin,

16:43

bad process, fix the process, better

16:45

muffin. What we're trying to do is

16:48

whatever it is that we're doing, it's

16:50

like, okay, where are we at on that spectrum? I

16:52

would say most things in life

16:54

that matter, and that we care about are

16:56

going to involve uncertainty, luck, and

16:59

hidden information, right ? So they're swayed

17:01

more towards this lotto

17:04

end of the spectrum. Right ? Which means we

17:06

have to be careful about over

17:09

weighting the

17:10

Outcome, right? Avoiding this outcome

17:12

bias. And just to reiterate the value

17:14

of this whole spectrum, the muffin lottery spectrum

17:17

is that it helps us sort of frame how

17:19

much we should be weighting our outcome relative

17:21

to the process. Absolutely.

17:23

And this is true in

17:26

sports, in life. I can

17:29

create an awesome resume

17:31

and really prepare for this job interview

17:34

and like really like put a lot of

17:36

time and energy into it and I could not

17:38

get the job. Right. Does that mean everything

17:40

I did in my process was wrong? No, you could

17:42

even

17:43

Have done a really great interview, right? Like you could have

17:45

had a great resume and a great interview

17:47

All just because of luck. It could be, there

17:49

was just someone better or the interviewer was

17:51

in a bad mood or you rubbed them the wrong way. There's a lot

17:54

out of my hands. Right . There's luck at play. And so

17:56

I can't, I can't go

17:59

muffin syndrome on this of like, oh,

18:01

I didn't get it. Therefore the recipe

18:03

was off. It's like, right. Could have been bad luck.

18:06

Right.

18:07

It's also worth noting that we can't just assume

18:09

because we didn't get the job that it was just bad

18:11

luck also. Right. Like we can't just, you know

18:13

, write our process off because it

18:15

Was a bad outcome. Oh, that's too far

18:17

down the lottery end. And it's like, didn't

18:19

get the job? Bad luck. Right.

18:21

There could have been some issues with your interview process or your

18:23

resume. Right. I'm sure

18:24

There's stuff that could be improved. And then

18:26

Sort of the message is that we need to be thinking

18:28

more about how much luck is involved

18:30

in whatever it is.

18:31

It's hard. The spectrum helps us. It's not binary

18:34

of it was all luck or

18:36

no luck. It's like somewhere in the middle.

18:38

And that's why this, this spectrum

18:40

is super valuable. Right ? It's we lost

18:42

the game. Does that mean everything we did

18:44

to prepare was bad? No, there's

18:47

probably some things we actually did well, but

18:49

when we overweight the outcomes , sometimes we don't

18:51

see the growth or the, these things

18:54

that we did well. And then you flip it. It's like, oh,

18:56

we won. But actually there was

18:58

some cracks in our process. We just

19:00

happened to get lucky and win the game. And

19:02

so again, we're just trying to be more

19:04

rational or objective about, okay,

19:07

this happened, let's weigh it

19:09

properly and start

19:11

to diagnose, not diagnose, but dissect

19:13

the process in an

19:16

objective, a more helpful way. And we're

19:18

never going to know it all. It's like, we're never going

19:20

to know exactly why we didn't get

19:22

hired for the job, but we can be

19:24

objective and look at some things that went well,

19:27

and things we can fix. We can help ourselves become more

19:29

aware that all of the information does

19:31

not exist just in the outcome. And we

19:34

can

19:34

Fall down those two slippery sides

19:36

of the spectrum of like, oh, bad luck or

19:38

everything I did was wrong. Right . Okay.

19:44

One thing we gotta be clear on. We're

19:47

not falling into this , uh

19:49

, Twitter cliche of it's all about

19:51

the process, not the outcomes. That's

19:53

not what we're saying. It's like, honestly,

19:56

the outcome is a great source of

19:58

feedback and information. It's a measuring stick.

20:00

We're just saying don't overweight

20:03

It. Right. We just need to be aware of those two things.

20:05

Absolutely.

20:06

Even when luck is at play, the

20:08

outcome is useful information.

20:10

It's like, Hey, we lost the game. Some

20:13

of it was luck, but some of it could have been process-related.

20:16

So we're not saying yes,

20:18

it's all the process, not the outcomes. It's like, no,

20:20

weigh it properly. Be objective.

20:23

Use it in a way that can inform

20:25

the process. But remember luck is at

20:27

play. That's all we're saying. Hopefully

20:29

we've done a good job of showing like positive

20:32

or negative. This can get in the way

20:34

of learning opportunities, overconfidence

20:37

trap, underconfidence trap, falling

20:39

into short-term thinking, doing things to get

20:41

these short-term wins, but sacrificing

20:44

long-term growth. Uh, I do something,

20:46

it doesn't work, I never do it again. Like this is all

20:49

robbing me of reps and experiences

20:51

and just learning opportunities left

20:53

and right. So now the question is

20:55

like, okay, now that we

20:57

know that the outcome bias is at play

20:59

and it's perhaps more powerful than we know,

21:02

how do we sidestep this ? Right. What do we do? One

21:05

step to avoiding outcome

21:07

bias is to be aware of it. But I think we can

21:09

do better than that. Right . What are some tools

21:12

to work with, to try to avoid

21:14

falling into

21:15

These traps? Yeah , I think the first one kind of piggybacking

21:18

on that idea of being aware of it is understanding

21:21

how much luck is at play in

21:23

whatever event or scenario we're taking

21:25

part in. Right.

21:26

And it's going to be hard to calibrate, but

21:28

just remember muffins and lottery.

21:31

We're operating in the middle of those two. Most likely

21:34

probably swayed more towards lottery.

21:36

Right . And that's a good gauge on, well, how

21:38

much weight should we put into this outcome?

21:41

And then another strategy for this is to increase

21:43

your sample size. Uh , yeah

21:45

. So oftentimes we might, like we're saying

21:47

we do something once and then we assume, oh

21:49

, um , you know, that's the outcome that is always

21:51

going to produce. Like, I'm never going to do that again. Overconfidence

21:53

or underconfidence. Another helpful way

21:56

of thinking about this is through the term regression

21:58

towards the mean, so as you increase

22:00

your sample size, you're going to get closer and closer

22:02

to like your true average. An example

22:05

of this can be seen in like the first few games of baseball,

22:07

right? You'll have a lot of players who might be hitting

22:09

like an 800 batting average, ridiculous

22:12

, or maybe something like a hundred, right. They're

22:14

not hitting well at all, but what's going

22:16

to happen is as you play more games, people are

22:18

going to get closer and closer to their true average. And more

22:20

people will be like 200 to 300.
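(A quick aside, not from the episode: here's a minimal Python sketch of this regression-toward-the-mean point, using made-up batting numbers. Every simulated player gets the same true skill, and the only thing that changes is the sample size; with a handful of at-bats the observed averages are scattered all over, and with a full season they bunch up around the true average.)

import random

random.seed(42)

TRUE_AVERAGE = 0.250   # assume every player's real skill level is .250
NUM_PLAYERS = 50

def observed_average(true_avg, at_bats):
    # Simulate a number of at-bats and return the observed batting average.
    hits = sum(1 for _ in range(at_bats) if random.random() < true_avg)
    return hits / at_bats

for at_bats in (10, 100, 600):   # tiny, medium, and full-season sample sizes
    averages = [observed_average(TRUE_AVERAGE, at_bats) for _ in range(NUM_PLAYERS)]
    print(f"{at_bats:>3} at-bats: lowest {min(averages):.3f}, highest {max(averages):.3f}")

# With 10 at-bats some players look like .000 or .500 hitters purely by luck;
# by 600 at-bats nearly everyone sits close to .250. The outcome only becomes
# a reliable gauge of the underlying process as the sample size grows.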

22:23

So you can't judge the quality

22:25

of the player after

22:27

the first two weeks. It's like, oh, batting 800, best

22:30

ever. It's like, that's ridiculous. It's like, we

22:32

need a bigger sample size, but

22:34

we have to use the same logic when it comes to

22:36

us. It's like, how are we judging the quality

22:39

of our process based off

22:41

one or two attempts. Okay. We

22:43

do approach the person at the networking event

22:45

and maybe it doesn't go so well. Right . Okay.

22:47

That was like one rep, right?

22:50

before I'm going to judge myself and

22:52

start shaming myself, go try it a little more

22:55

because maybe it was just like the wrong person at

22:57

the wrong time and in the wrong way.

22:59

Yeah . And I think this also works for overconfidence,

23:01

right? So if you, if you've done something just once

23:03

or twice and you do it really well

23:05

, uh, you might be overly inflating

23:07

your confidence in something, but if you increase

23:10

the sample size of it, you're going to reduce

23:12

that down. So it's a more accurate reflection of

23:14

your actual abilities.

23:15

The takeaway is simple. Get a few more

23:18

data points, like increase the sample size,

23:20

play more hands, go through the

23:22

interview process a few more times before

23:24

we completely throw our entire process

23:26

out the door or assume that we discovered

23:28

the secret sauce. So

23:34

step one, take into account luck

23:37

and understand like there's probably more

23:39

luck involved than we realize . Uh , tool

23:41

number two is get a bigger sample size, right?

23:44

Upside of that is getting a more accurate

23:46

reflection on our process. And two

23:48

, if you're thinking about learning, you're getting more reps.

23:50

And so like you're going to be increasing the skill

23:53

right? In the same process,

23:54

A third strategy that we can use to sort

23:56

of curb this outcome bias is

23:58

to just seek out feedback from people on

24:00

our process, on the

24:02

Process itself, the decisions you made, right

24:04

? The question I was most excited to ask

24:06

Casey was like, yeah, we know what resulting

24:09

is. We understand outcome bias, but

24:11

what are some strategies we could use to avoid

24:13

it? I think his advice was

24:15

not only relevant for poker players, but it's

24:17

something that we could all use. No matter

24:19

what we do.

24:20

The biggest thing is keeping an open mind to being

24:23

wrong. You have to be able to accept that, Hey,

24:25

maybe I was wrong. Maybe there's a different

24:27

way to play that hand

24:29

or maybe I could have done something differently. I

24:32

really think, you know, for me, I learn

24:35

by talking and playing. So

24:37

if you can get buddies and dissect

24:39

hands and just to be blunt

24:42

with yourself, be honest with yourself and say, yes,

24:44

maybe I made a mistake. Don't hold true

24:46

to, oh, well I lost,

24:49

I would've lost anyway. So yeah, there's always something

24:51

you can learn and the same holds true. Even if you

24:53

win. So even when you're winning, people

24:55

tend to think, oh, I won, I

24:57

played perfectly. I played well, that's

24:59

the same. You can fall into the same trouble

25:01

there where yeah. You may have won the hand,

25:04

but it wasn't a long-term winning

25:06

play. And that will catch up with you eventually.

25:08

So you need to be honest with yourself after

25:10

every time you play say, Hey, you know, go

25:12

through the big hands in your head, even

25:14

the ones you won and the ones you lost, and

25:16

think, did I play this

25:19

the best I could? Or did I get lucky?

25:21

Did I get unlucky?

25:22

This is what I saw, this is what I did. Check my line of

25:24

thinking here. Do you think I made the right choice

25:27

or not? I mean, they're breaking down these hands.

25:29

Like a sports team might watch film. Exactly.

25:32

So another way that we can elevate this feedback

25:34

technique is we can hide the outcome

25:36

wherever possible when we're trying to get feedback.

25:38

That's smart because we know the

25:40

person that I'm getting feedback from has their

25:43

own outcome bias.

25:44

Right , right. They're going to be focused on the outcome.

25:46

So if you think back to that original study where they

25:49

gave people the decision-making

25:51

process, and then they also gave them the outcome. So,

25:53

you know, do you conduct surgery here? Yes

25:55

or no. And then does it lead to good

25:57

or bad outcomes. They

26:00

Literally in the study, even

26:02

the groups where they were told, don't overweight

26:05

the outcome, we're just judging the decision.

26:08

They still did it. So even

26:10

when we were aware of it, we're

26:12

still

26:13

Going to fall into that trap . So what this looks like is if

26:15

I'm going to try and get some feedback on my process

26:17

from you, I should tell you, look, here's

26:20

the scenario, right? Like in the medical scenario,

26:22

if I'm choosing to operate, I need to tell you,

26:24

you know, here's all the information I had. This is

26:26

what I saw. And this is my decision. And

26:29

then ask you for feedback. And then

26:31

I can tell you the outcome. It's

26:33

Like , uh , dude, I just

26:35

went for this job interview. And

26:38

um, the interview went really well. It went really

26:40

well. I got this job. One part of

26:42

the interview was this.

26:45

Um, and this is what I said, what do you think about

26:47

that? You're probably going to be like, good answer.

26:50

Cause you got the job. Maybe it wasn't

26:52

right . Could have been better. Right . So hide the outcome.

26:55

And I know that's not always easy to do, but I

26:57

think if we want valuable feedback,

26:59

it's the smart approach, definitely.

27:03

Another strategy that a researcher from Harvard

27:05

Business School, Francesca Gino, talks about

27:07

is this term of counterfactual thinking. And

27:10

this is just sort of yeah.

27:11

Big words, but really it's just the idea

27:13

of

27:14

Envisioning like alternate realities. So what

27:16

would, what are other outcomes that could have happened?

27:19

So in the Pete Carroll scenario, right? When we talk

27:21

about what would have happened, if

27:23

they score, if they scored, then does that

27:25

make it a good decision

27:26

Or make him a legend? Right. Great strategy.

27:29

And out-of-the-box thinking, yeah. Innovative.

27:31

If we realize that when

27:33

When our judgment of

27:35

the process, of the decision-making process,

27:37

changes based on the outcome, then

27:40

we're resulting

27:41

We're resulting into the outcome. Whereas

27:43

If we don't change our judgment of that decision

27:45

based on the outcome, then it's probably a fair

27:47

assessment.

27:48

We go back to the parent watching the basketball game

27:50

, uh, the good

27:52

shot, bad shot. Shouldn't only

27:54

be, did I make the shot or miss the shot?

27:56

It's, was I open? Did

27:58

I shoot with rhythm? Was it in the right

28:01

like time and place of the shot

28:03

clock or whatever it may be. It's a good

28:05

shot. It's like, we want you to shoot that.

28:07

Right . That's a high percentage shot in the right time

28:10

and place. Good shot. Right . And that doesn't

28:12

change if it doesn't go in,

28:14

if we're really approaching

28:16

this situation, avoiding the outcome

28:18

bias or on the flip side,

28:21

I take a terrible shot early in the shot clock

28:24

while there is a defender on me. It just so

28:26

happens to go in. Right. That doesn't mean

28:28

you should do that . Exactly. It's like, dude, great.

28:31

that it went in, happy for you, not

28:33

a good decision. And if we do too

28:35

much of that in the long run, we're going

28:37

to get worse outcomes. So it's like, again,

28:41

counterfactual thinking, is that what you call it? Exactly.

28:44

It's the, the thought experiment

28:46

is if the outcome was different, would it change

28:48

my judgment of the process? In

28:51

most cases, yes, because we're operating with

28:53

the outcome bias, and what we're trying to do is sidestep that. Exactly.

28:59

The filter we use when we create an episode is,

29:01

will this help us become a better learner?

29:04

I think the answer for this topic is absolutely

29:07

Yes. A couple of things

29:09

to keep in mind: this is a skill. Like

29:11

any skill, we get better through practice. We're never

29:13

perfect with it. All of us are going to still fall

29:16

into the trap of outcome bias, but now

29:18

that we're aware of it, we can practice sidestepping

29:20

it a little bit more. And honestly,

29:23

I think this is a powerful one for

29:25

leaders and learners. Like, this could

29:27

certainly help the way I approach

29:30

the, the failures, the mistakes, and

29:32

the good outcomes in life. And

29:34

as a leader, the way I talk about

29:37

mistakes, failures, and the good outcomes

29:39

with the people around me. So it's one of those

29:41

useful tools for the leader and the learner.

29:44

And I think it's actually very simple.

29:47

Um, and it's a powerful one to try

29:49

to figure out. Thank you so much

29:51

for listening. We'll be back next week.
