How psychology can help people make better decisions, with Lace Padilla, PhD, and Hannah Perfecto, PhD

Released Wednesday, 25th January 2023
Episode Transcript


All day, every day, we make decisions. Some are so small we barely think about them, such as what to have for breakfast, or which route will avoid the most traffic on the way to work. Others are more consequential, like whether to take a new job or how to spend or save our money. And some decisions can have life or death consequences, such as whether to get a vaccine or evacuate ahead of a hurricane. Over the past several decades, psychologists and other behavioral science researchers have become increasingly interested in understanding how people make decisions like these, why we so often make bad decisions, and how even seemingly small changes in the way that choices are explained and presented can make a big difference in the decisions that people make. So what have researchers learned about decision making? Why do people make bad decisions? Do bad decisions happen when people don't have enough information, or when they're overloaded with too much? How do behavioral scientists define a bad decision anyway? And how can decision researchers' findings best be deployed in the real world to make a positive difference in people's lives?

Welcome to Speaking of Psychology, the flagship podcast of the American Psychological Association that examines the links between psychological science and everyday life. I'm Kim Mills.

I have two guests today. First is Dr. Lace Padilla, an assistant professor of cognitive and information sciences at the University of California, Merced. Dr. Padilla studies how people use data visualizations to make real world decisions with life or death consequences in areas including hurricane evacuations and vaccine uptake. Her lab's mission is to help people make the best possible judgments about their health and safety by developing and testing new ways to visualize and communicate complex data.

Our second guest today is Dr. Hannah Perfecto. Dr. Perfecto is an assistant professor of marketing at the Olin School of Business at Washington University in St. Louis, where she studies how consumers make decisions. Much of her work also focuses on improving research methods, including designing decision making studies that are more likely to replicate in real world settings.

Thank you both for joining me.

Thank you for having us.

Thanks so much.
So I just mentioned how behavioral science research can help people make better decisions, but the term better can be a subjective term. What makes something a bad decision? How do you define good versus bad in your research? Dr. Padilla, let's start with you.

I think different groups of people define good and bad decisions differently. And, you know, there are some people who might think that a good decision would be the best computational decision: if we had all of the information available, we could, you know, calculate the most optimal decision. But the truth is that humans don't function like computers. And from my standpoint and my work, I think of a good decision as someone using all the information available to them to the best of their ability. In contrast, we can compare that to a bad decision, and those are a little bit easier to identify. I often see bad decisions as clearly someone misinterpreting or misunderstanding information that was presented to them in a way that would ultimately lead to some type of error in their reasoning.

You two come to the topic of decision making from very different angles, so let's talk a little bit about each of your research backgrounds. Dr. Perfecto, in your research, what's the main thing you're trying to understand about how people make decisions?
That is a great question. I would say the work that I do tackles this issue from a broader lens, for better or for worse, than Dr. Padilla's. I spend a lot of my time making sure that the work that we're all doing in this field of decision making research is able to actually have a useful impact out in the world. With the work that we're doing, we're not learning about how people make decisions just for the fun of it. The hope is that in the end, we're actually able to improve people's lives down the road.

One area that I'm focusing on right now in some ongoing work is realizing that a lot of the ways that we've previously studied decision making are pretty abstract. We're using very hypothetical designs, sometimes a little bit strange, a little bit artificial. And that's done so that we can have a really clear understanding of what is contributing to somebody's decision. But out in the world, the world is not clean, the world is not so tightly controlled. And so something that I'm looking at is that in designing those studies, we've used a lot of positive stimuli. When we're trying to learn about risk preferences or dealing with uncertainty, we ask people, do you want this gamble with only upsides, or this gamble with smaller upsides? And so we can certainly see if someone's willing to go for a bigger, riskier outcome versus play it safe with a smaller positive outcome. But both are still great. You still get something. You still have a chance at something in the end. And I have found, in my work and others' as well, that those types of more positive decisions are the ones we're more likely to make errors in. When things are going well and things are feeling great, you're more likely to just go with your gut and go with that initial response. Sometimes that's fine, but a lot of times that then leads us to, as Dr. Padilla said, misinterpret the information we're dealing with, make errors in evaluating those outcomes, and land on an outcome that is suboptimal for our situation. So I'm sort of assessing the magnitude of that issue, going back and looking at some things we thought were super duper robust, really big effects, that may not actually be as robust out in the world, so that when we're actually trying to implement our findings, we will see results.
And Dr. Padilla, what's the main thing that you're trying to understand? How is your research different from Dr. Perfecto's?

My research is maybe different from most people who study decision making, honestly, in the fact that I like to examine how we can change the information that people are presented with to help them make better decisions. And I specifically study data visualizations. So I'm looking at creating forecasts with uncertainty and visualizing that data to try to help people understand the risk that they might be under. So that's a, you know, wholly different angle that I bring to it.

So what are some of the main reasons that people make bad decisions?
That is such a good question. It is what hundreds of people have spent decades trying to get a good grip on, and hundreds more will in the decades to come. I think both of us can speak to one or two particular reasons where people might go wrong. I touched on one already, where folks might go with their gut a little bit too readily. With that aspect of the positivity or negativity of what you're dealing with, people might not realize the extent to which that can influence how likely you are to make a snap decision versus sit with it for a while. So I would say one main part of the problem is that people just sometimes go too fast. You know, it feels right, they go with what they see, and then they end up somewhere they don't want to be, whereas if they had taken a little bit more time and sat with it, they wouldn't have made those errors. And when I say more time, I don't mean sleep on it, come back to it in a few hours. I mean like thirty seconds. Ten seconds. Even in that amount of time, we readily find that folks can, and by we, I mean not just me, but decision making researchers more generally, find that people can recognize that they might have misinterpreted something, or they're coming at the problem from the wrong perspective, or even just that they made a mistake. And so by just taking a second and checking yourself, people could be much better off.
Yeah, I really agree with that. And I think that ties nicely into the way that I think about it, which is that, you know, throughout human evolution, we have been able to make some pretty fast snap decisions that worked out fairly well for us. We are now in a very complicated world with lots of things that there's no reason we would have evolved the ability to, you know, calculate, like long term financial stock projections or any of the very complicated long term things that we have to make decisions about. So for that reason, our intuitions sometimes fail us. If we can slow down and try to activate a more analytical approach, that can sometimes be very useful. And in addition to that, I'd say my general refrain is that the way information has been presented to us isn't easy to understand either. It is hard for the average person to look at probabilities and forecasts and understand what they mean and make effective decisions. And it's not their fault. I think there's a lot more that can be done to make that information easier for all people to understand, and a lot of people are working on that particular problem. So I think it's a dual issue of, you know, we have some strategies that might not be working for us, including making these gut decisions, and also the information that we're working with is very complicated, probably more complicated than it necessarily needs to be.
But isn't some of the problem that we as a society are not as math literate or probability literate as we should be? For example, when people hear that a vaccine is ninety percent effective, there are people out there who say, well, but that's not good enough, what about the other ten percent? And then they don't get the vaccine. I mean, what is wrong with us that we can't understand that ninety percent is a pretty good number?
Yeah. People always mention that, you know, wouldn't it be great if we all could be more math literate, and I agree. However, I do think that's not the only solution, because the American population is fairly well educated, and we're not the only group of people who need to interpret this information. If you think about the people who are most in need of decision support, they likely have the least amount of education. So I think in some ways blaming people for not being more literate isn't fair. I think that people want to make their best decision, and they might be in a circumstance where they didn't have access to certain types of education, and that shouldn't mean that they have poor health outcomes. I think it's our responsibility as scientists and good citizens of the world to ensure that we're not negatively impacting people simply because they didn't have access to high quality higher education.
Well, let me ask, what are some of the effective strategies for helping people to make good decisions? And what are some of the strategies that don't work, and that might even backfire by driving people to do exactly the opposite of what you're trying to convince them to do? And maybe this is the place where I ask you both to explain some terms that are used in your research, choice architecture and libertarian paternalism, for example.
Yeah. I mean, with choice architecture, one could describe it, and I don't know if Dr. Padilla sees herself this way, but you could imagine framing the information, visualizing the information differently, as a form of choice architecture. So when you think of the choice architect, it is the person who is structuring how the decision is phrased. It doesn't fall out of the sky. Someone has to decide, are we gonna frame the decision positively? Like, do you want to increase your contributions to retirement? Or more neutrally, what do you want to do with your upcoming elections, do you want to increase or decrease? Should increase come first? Should increase come second? When should you mention how much it's going to increase? All of these decisions have to be made by somebody, and that is the choice architect. And the way in which all of these little decisions that the person framing the actual decision needs to make impact what people actually do in the end, that would be choice architecture.

I think libertarian paternalism comes from a particular way of being that choice architect, of creating those decisions. It's saying, as the choice architect, I know what answer you should pick. I don't wanna force you to pick it, but I'm gonna make it really easy for you to pick it and hard for you to not pick it.
And so are those, then, methods that help people make better decisions?

Well, that's a thorny question. It depends on the benevolence of the choice architect. It's certainly a controversial question these days of who has the best idea of what is best for people. With the development of these terms, choice architecture and libertarian paternalism, there was a big celebration about getting more people to save more for retirement. That's great if you have the money to save for retirement. But some people who might not be in such a good financial position might be able to do better for themselves with that money now, instead of being funneled through defaults, where certain options are preselected for you and you need to do work to do something else. They might not understand, like what Dr. Padilla is talking about, they might not be able to figure out how the decision has already been made in some way for them. And so whether that's the right move, or maybe we've gone too far, is certainly a hot topic these days.
Dr. Padilla, I want to ask you, because you study some very consequential decisions that people have to make, such as how to decide whether to evacuate when you're faced with the threat of a hurricane. And I know you've come up with some data visualizations that help people, but I mean, what are weather forecasters doing wrong that drives people to make bad decisions when they are perhaps in the path of a hurricane?
This kind of ties into what Dr. Perfecto was just mentioning about who is the decider of what the right decision is. The approach that I take is, I don't know if everyone should evacuate. You know, I'm a cognitive scientist. But I do know when someone misinterprets something or misunderstands information being presented to them. So I can at least identify when there is a massive failing in how people are interpreting what they're presented with. I kind of take the standpoint that if I can at least reduce some of the errors, then, you know, I'm helping in the process. I'm not sure if I'm making the decision better, but I'm certainly removing some of the errors. And we can do that with data visualizations.

Part of what has happened historically is that people didn't appreciate the importance of how data visualizations were displayed to people. There are some types of visualizations, like the cone of uncertainty, which kind of starts at a point and grows, the width of the cone grows, and it is intended to show the mean path of the storm. It is a sixty six percent confidence interval around the mean predicted path. Well, most people don't know what a sixty six percent confidence interval is. And so instead they see it as the storm growing in size over time, because they see a small point and it gets bigger. So, reasonably, they think the storm is growing in size. That is, I think, a visualization that hadn't been tested for the last thirty years that it's been in use, and so no one really appreciated that it was causing massive confusion and misinterpretation of what the storm was going to do. More recently, we've been studying these in a careful way and studying alternatives to those classic visualizations that at least don't have those same misinterpretations. Again, I'm not trying to make someone evacuate or not. I'm just trying to help them understand where the storm is gonna go successfully, to empower their decision making.
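To make that construction concrete, here is a minimal illustrative sketch, in Python, of how a cone like this can be drawn as an interval that widens with forecast lead time. The track coordinates and the error-growth numbers below are invented for illustration; they are not taken from any real forecast product or from the research discussed in this episode.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical forecast: mean predicted path of a storm center over 5 days.
# All numbers here are made up for illustration.
hours = np.arange(0, 121, 12)          # forecast lead time in hours
mean_lon = -80.0 + 0.05 * hours        # mean predicted longitude
mean_lat = 25.0 + 0.04 * hours         # mean predicted latitude

# Track uncertainty grows with lead time. A cone of uncertainty is sized so
# that roughly two-thirds of historical track errors (about a 66% interval)
# fall inside it; it describes where the storm center may go, not how big
# the storm is.
cone_radius_deg = 0.3 + 0.015 * hours

fig, ax = plt.subplots()
ax.plot(mean_lon, mean_lat, "k-o", label="Mean predicted path")

# Draw the cone as a widening band of circles around each forecast point.
for lon, lat, r in zip(mean_lon, mean_lat, cone_radius_deg):
    ax.add_patch(plt.Circle((lon, lat), r, color="tab:blue", alpha=0.2))

ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_aspect("equal")
ax.set_title("Cone widens with forecast uncertainty, not storm size")
ax.legend()
plt.show()

The design point is the one made above: the shaded band widens because confidence in the predicted track drops at longer lead times, not because the storm itself is expected to grow.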

Let's talk for a minute about the dichotomy between time perception and decision making. If you're making a decision about what you're gonna wear to work tomorrow, that's one thing. But if you're trying to make a decision about how we save the planet, you know, what do we do to stop global warming, people have a problem understanding something that is so far out in the distance that they don't know how to make any kind of decision about it. What can we do to get people to better understand how to make those kinds of decisions effectively at this point in time?
You were hinting at it a bit, Kim, of sort of making that really far off, abstract thing much more concrete and feeling real here. There's some cool work by Hal Hershfield that talks about sort of imagining yourself farther along. And so he does cool stuff like aging people's faces in photos and saying, this is you in fifty years. Now what are your thoughts about what's gonna happen in fifty years? And you're like, oh, that's me. I'm gonna be there. It feels more concrete, and so we think about it as feeling sooner and evaluate it like we would a closer in time event. So taking steps, like you hinted at, to make that feel real now could certainly be beneficial.

So it's personalizing it.

Yeah, so that is one way to do it. So certainly, you could think of time. There's this whole literature on construal level theory that talks about how things can feel very abstract or concrete. And that abstract versus concrete continuum can emerge in time, like things far away in time versus close in time. But it can also emerge interpersonally, like things that matter to me versus things that matter to people farther away from me, or with concepts that are less related to me. So psychology is very related to me, economics is maybe next closest, but theoretical physics is much farther away from me. I don't do anything related to theoretical physics. And so bringing it closer to me on any of those dimensions can help make it more concrete. Bringing myself into that situation can help.
Dr. Padilla, thoughts on how to help people visualize things that are so far in the future, but help them make the right decisions now?

Yeah, I think one of the things that data visualizations can do is to remove some of that abstraction. Similar to what Dr. Perfecto was indicating, we can show what the floodplain around your home will look like in fifty years, and you can use that information, you can just see it, to help you make decisions now about getting flood insurance. Or we can show you what it would look like if we did forest restoration in this area and how we can reduce the risk over the long term. And you can just see it with data visualizations in ways that you wouldn't be able to before. So I think making things more concrete, less abstract, especially in these long term future projections, is really one of the powers of data visualizations that has just recently been considered as a resource.
So with the advent of the Internet, we now have access to huge amounts of information in an instant, whereas in pre-internet days, if we wanted to make a decision regarding, say, which car to buy or what college to attend, the research was very labor intensive. You might have to go to a bunch of dealerships, you talk to your friends and neighbors, maybe you go to the library and look at books. Now, you know, everything is immediately available online. I'm wondering, is that making decision making easier for people, or more difficult? Do we know?
I know Barry Schwartz, a longtime psychologist at Swarthmore and Berkeley, talks about how, when we have so many options available, there's the work on choice overload, whether having so many makes me feel like I can't choose and I feel less happy with what I get. That evidence is mixed. It's still an ongoing debate, something that's hard to study in the lab but that probably does exist out in the field. But where Barry Schwartz comes in is saying, if we have a lot of options, it feels like we have a lot of potential, like we should be able to find the perfect thing. I'm in marketing, so I would couch it as products and whatnot. But if I'm looking for a college, I should be able to narrow it down. I have all these forums with students I can talk to, I have lots of resources available at the click of a mouse to be able to learn about the school. So I should feel really confident. But all that information doesn't perfectly solve the problem for us. And so we might feel less satisfied with what we come up with in the end, because I feel like I should know exactly what I want, I feel like I should find everything, but I don't. And so now I'm less happy with what I get.
Dr. Padilla, easier or harder?

I think more complex would be my answer to that. I think what happens, partly, is that, you know, we are information foragers. We go online and we kind of search for information, and the way we do that is biased. We're looking for things that affirm our own beliefs most of the time, so we have a tendency to find things that affirm our own beliefs. And within that, we're exposed to lots of misinformation, and being able to identify what misinformation is is very challenging, especially since you're more likely to believe something is true if it matches your own beliefs. So we have a whole new set of problems that we haven't, you know, developed any skills at combating, which makes the whole thing very complicated. And I think, for me, it really brings this new concept of trust to the forefront. Now we have to evaluate the information that we're taking in and think about how much we trust it. And trust is a whole complicated area of exploration that doesn't necessarily correspond with how effective or useful information is. Oftentimes we trust things that don't always help us make the best possible decision. At least that's what my work has been finding, that they don't always go together. So I think that's a new problem that the Internet is presenting to us: we now have to evaluate trust, and we have to forage for this information ourselves. And there are certain people who are profiting off of trying to get us to trust things that aren't necessarily true.
So is artificial intelligence going to save us, if we can use artificial intelligence to sift through a lot of this data and maybe identify the things that are fake? You know, we're confronted with a lot of information, as you just said, Dr. Padilla, that just isn't accurate. What role will AI be playing going forward, do you think?

I mean, it's really gonna be playing a massive role. Maybe not as much in our own personal decisions, but certainly decisions that are made on an institutional, government level, kind of high impact decisions. And I feel like I'm an outlier in saying that AI can be very useful. Every time I talk with colleagues about this, I'm the one who thinks AI could be good. There's this general apprehension to trust AI, to kind of give our decisions over to artificial intelligence, partly because there's a whole history of AI also making biased decisions, because these systems are made by people, and people, you know, have biased input data and biased algorithms and so forth. I think that's a very real problem, and I'm very excited that there are many ethicists who are working on those particular issues. But I do think that we should start to get comfortable with a future, one that is not far off, where many decisions will be curated in some capacity by AI. Rather than having thousands of things available to us, there'll be some type of decision scaffolding that AI will be doing for us. And that's the future that I see, which I think could be a good one if, you know, there's a lot of people involved in the AI development process, and not just certain groups of people that don't necessarily represent all of the interests of the people making the decisions.
Dr. Perfecto, do you share those thoughts, or do you come down on a different side of the question?

Yeah, I think it's that last if that is the most important component. Because, as Dr. Padilla has said, and like we were talking about with choice architecture, people are the ones making the decisions, whether it is in terms of choice architecture, actually forming the decision that is being made for you in that moment, the actual thing you're gonna see when you're electing your retirement plans, or whether they are farther upstream, creating the thing that will eventually help create a decision scaffolding, as Dr. Padilla said, to then present me with a decision and its options down the road. So I think that's a really big component to consider.

But I think everyone acknowledges Dr. Padilla's main point that it's happening. AI and algorithms are constantly increasing their presence in our lives and in our decision making every day. And so there's a lot of work ongoing in the field of decision making on the academic side investigating how people evaluate algorithms, how and when people shy away from or shun algorithms completely in certain contexts, what happens when they make mistakes, how do people respond to that, and how we can improve people's perceptions of them. Because oftentimes the algorithm is going to help you make a better decision, with the big if being that it's made properly and has the right inputs. And we want people to be excited and willing to go with this good decision that the algorithm is helping them make. So there's a lot of cool work ongoing by people learning about ways to facilitate that adoption of algorithms in people's decision making.
So let me ask you both, what are the big questions you're working on now? Dr. Padilla, what's your research looking at right at this moment?

I'm really interested in the question of how to facilitate trust and what that means, partly because I study a lot of forecast visualizations. There are interesting questions about how much uncertainty is useful to show, particularly in facilitating trust. You can imagine, with the flooding in California right now, if there was a forecast that said, you know, every day this month there is some potential of flooding, that wouldn't be incorrect. But in some ways it would feel like a cry wolf effect, like it's saying there'd be flooding too often. So what you really want is a forecast that gives you just the right amount of information to make your decision, and not so much uncertainty that you can't, you know, make your decision with it. So I'm looking at trade offs and trying to identify the sweet spot in how much uncertainty we wanna show people, and how that can support both trust and performance. I think there are certainly thresholds there that haven't been carefully identified.
And Dr. Perfecto, what are you looking at these days?

I teased it a little bit earlier. I'm right now spending a lot of time getting a sense of whether and how we've been creating these more abstract studies in the lab, and whether and how that influences the likelihood of success down the road. How closely is the lab matching the real world? Since, in the end, that's what we really care about, helping out the real world. And in doing that, I am having lots of fun going through lots of long standing literature, identifying common mistakes made, rectifying those mistakes, and getting a sense of the impact that might have in terms of when a particular error or bias might be made out in the real world.
And for our listeners, any words of wisdom? I think, Dr. Perfecto, you talked about not rushing to make decisions, to even just take a few seconds to think. Dr. Padilla, any other ideas that you can offer to our listeners? You know, more practical advice on how to make better decisions.

Finding situations in which you can play around with the data can start to give people an intuitive sense of some of the uncertainties and variability in the data. I think what tends to happen is that people in the general public just get shown a forecast on the news, and they just have to decide, do I trust it, do I not, what do I do with it? And that's a very one-sided path for the information. There are some places you can go online that allow you to interact with the data and give you the opportunity to say, what if I extend this forecast a little, or what if I change the parameters of this forecast? Then you can kind of get a sense of what the different aspects are that affect this particular model, for really any type of natural disaster or, you know, biological disaster. They have these for COVID, for fires and hurricanes and so forth. So if you're wanting to make a better decision for those types of things and you sincerely care about it, consider getting a little bit more comfortable with the data yourself and finding those types of sites that empower people with data they can play around with. That could help give people more intuition about what the forecast will do and the uncertainty that goes into those forecasts.
And Dr. Perfecto, last word, any practical advice?

In addition to hold on a second, hold on ten seconds, and think about it, I would say to make sure you are bringing the right tools to the task. If it's a very difficult decision, then you should really take some time and sit with it. If it's a really easy one, like, what cereal am I gonna have, maybe don't. You know, we only have so much time here on this Earth. Just grab the Cocoa Puffs, whatever. You can take the specialty cereal tomorrow. And make sure that you are answering the question that is being asked. So if you are in the supermarket and you're trying to decide what to get, the easiest question is, what do I want to buy? A harder question is, what should I be buying? What is the best, what is the healthiest option, what did my doctor say that I should get? That's a harder question to answer, maybe one you don't want to answer as much as what do I want. And so making sure that you are not actually substituting a different, easier question for a harder one can also be helpful.

Well, I wanna thank you both for joining me today. This has been really interesting and enlightening. I appreciate your thoughts, and I'm gonna grab the Cocoa Puffs.

Thank you. Enjoy. It was great talking with you.

Thanks so much for having us.
For previous episodes of Speaking of Psychology, you can visit us on our website at www.speakingofpsychology.org, or you can find us on Apple, Stitcher, or wherever you get your podcasts. And if you like what you hear, leave us a review. If you have comments or ideas for future podcasts, you can email us at speakingofpsychology@apa.org. Speaking of Psychology is produced by Lea Winerman. Our sound editor is Chris Condayan. Thank you for listening. For the American Psychological Association, I'm Kim Mills.
