Episode Transcript
0:02
Welcome
0:14
to the Psych Podcast, a podcast
0:16
where we discuss everything
0:18
intro psych with my friend,
0:20
my colleague, Paul Bloom. Welcome Paul.
0:23
Good to talk to you again, David. Today we
0:25
are talking about a difficult topic, Paul, one that
0:27
is near and dear to my heart. Important,
0:30
urgent, stressful. It's a topic that
0:33
you, Paul, could have not
0:35
included in your book and nobody would have said
0:37
anything. I think that's right. It was a conscious
0:39
decision to put it into my book. There's a couple
0:41
of aspects of it, but the primary aspect
0:44
that we'll talk about isn't even about psychology. It's
0:46
about a lot of contemporary science in general. But
0:49
I do think it's important and I do think
0:52
it's kind of an elephant in the room
0:54
where I could have skipped it, but then
0:56
people would say, well, you made a choice skipping
0:58
it and maybe it was an irresponsible choice. So what we're
1:00
referring to is what has been dubbed
1:03
the replication crisis in psychology.
1:05
But as you point out in other fields
1:07
as well, and I guess
1:10
simply put, it is the growing
1:12
realization, the sort of quick onset
1:14
realization within
1:17
psychology, especially in some fields
1:20
of psychology, that many of the studies
1:22
that we thought
1:25
were contributing to the scientific
1:27
body of literature actually can't
1:29
be repeated. They can't be replicated.
1:32
So a paper, for instance, gets published
1:34
once and 10 years later,
1:36
people try to do the same experiments that were
1:39
presented in that paper. But over
1:41
and over again,
1:42
people have an inability to replicate
1:44
the basic result. That's exactly it. So
1:47
there are sort of two ways to approach the crisis.
1:50
One is what you're talking about, which is a
1:52
lot of the findings, a lot of the findings that people
1:54
who are listening to us now say, oh, that's a finding
1:56
from psychology. I heard that. Don't seem to
1:58
replicate. And it doesn't seem to replicate.
1:59
because it wasn't true.
2:02
And so that's one part of it, that observation.
2:04
There was a study in science
2:07
that found that about 40% of
2:09
studies are successfully replicated when
2:12
you try to replicate them. And that's part of it. And
2:14
then the other part is people have realized
2:17
why they
2:19
failed to replicate. And the
2:21
answer is that the original studies
2:23
were done in such a way that they
2:25
artificially inflate
2:27
the chances of getting a positive result.
2:29
And we could talk about the different ways in which they do that.
2:32
One simple thing is publication bias. If you
2:34
make a practice of only publishing studies
2:36
that work,
2:37
you kind of skew things to give
2:39
a feeling that more things work than actually do.
2:42
There's something which comes under different names.
2:44
P-hacking is the standard name.
2:47
One statistician, Andrew Gelman, calls it the Garden
2:49
of Forking Paths, which as a
2:51
Borges fan you should appreciate. Absolutely.
2:54
Do you want to take a shot at describing what P-hacking
2:57
is? Sure. P-hacking is the practice
3:00
of using various tactics or
3:02
techniques that lead you to
3:05
find and focus on positive results,
3:07
even
3:07
though they're embedded in a whole
3:09
host of negative results.
3:11
If you run 20 analyses, say
3:14
statistical analyses, chances are you might
3:16
find one that gives you a significant result
3:18
purely by chance. And researchers
3:22
were often inclined to search
3:25
until they found that one analysis, whether that
3:27
was by splitting their data set,
3:29
looking at certain questions and not other questions.
3:32
There are a lot of decisions that can be
3:34
made along the way, what people have referred to as researcher
3:36
degrees of freedom. That is, you can decide
3:39
which observations should
3:41
be included in your data set and which
3:43
shouldn't. You can decide whether to
3:45
look at your analyses, dividing it up
3:48
by male and female or by age
3:50
brackets. And once you start
3:53
engaging in all of those steps,
3:56
you're gonna see statistically
3:58
significant results
3:59
emerge, that's always going to
4:02
happen so long as you do enough
4:04
of that, even when there's absolutely
4:06
nothing in the data.
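To put a number on the "run 20 analyses" point, here is a minimal sketch of the arithmetic (an illustration added here; the code and numbers are not from the episode):

```python
# With 20 independent tests at the usual 0.05 threshold, the chance of
# at least one "significant" result from pure noise is 1 - 0.95^20.
import random

ALPHA, N_TESTS = 0.05, 20

print(1 - (1 - ALPHA) ** N_TESTS)  # ~0.64, analytically

# Monte Carlo check: under the null, p-values are uniform on [0, 1].
runs = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(runs)
)
print(hits / runs)  # also ~0.64
```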
4:08
That's a great summary. I mean, the issue, one way to think about it is like
4:10
drawing the bullseye after you fired the arrow. That's
4:13
a much more efficient way of describing
4:15
it. No, but I mean, going through it, it's worthwhile.
4:18
I mean, I tell a sort of parable in my
4:20
book about you discover that people
4:23
who work in
4:24
pink rooms do more creative work
4:26
than people who do work in white rooms. They say, wow,
4:28
that's a great finding. And it's true, it would be
4:30
an amazing finding if you found it, of practical
4:32
value, theoretical value. But
4:34
then suppose you found out that,
4:37
I'll tell it slightly different from my book, you found out that
4:39
the experimenters had like 20
4:41
conditions with all these different colors. And
4:44
then they broke them up into men and women
4:47
in different ages. And then it didn't
4:49
work. And then they tried it again. And then they tried it again.
4:51
And finally, they found out that people who
4:54
work in pink rooms are more creative. Well,
4:56
that's going to happen by chance.
4:59
It's kind of, I mean, an even simpler
5:01
example is they say, I'm amazing. I could
5:03
do three heads in a row. I said, well, that's really impressive.
5:06
And then I keep flipping my coin until I got three
5:08
heads in a row. I said, there, I got it. Well, that's not impressive
5:10
at all.
5:11
And even more distressing
5:14
is that
5:16
you could flip a coin a hundred times and
5:18
videotape that. And
5:21
you get three in a row. And
5:23
what you do is edit your video.
5:25
So you cut out
5:27
everything but that little streak. And
5:29
then you upload that to YouTube and say, look
5:32
at this amazing talent of Paul Bloom
5:34
to hit three heads in a row.
5:37
It's not that you didn't hit three
5:39
heads in a row, right? It
5:41
is absolutely the case that
5:44
there is this observation that there were
5:46
three heads in a row. What it means though,
5:48
is on
5:48
its own, that can't
5:50
be evidence of
5:53
anything because you have failed to
5:56
report that you did it a hundred times
5:58
and you got 50 heads and 50 tails.
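The edited-video trick is easy to quantify. A minimal simulation (our illustration, assuming 100 flips of a fair coin):

```python
# In 100 fair coin flips, a streak of three heads is all but guaranteed,
# so a video showing only the streak is evidence of nothing.
import random

def has_streak(flips, length=3):
    run = 0
    for f in flips:
        run = run + 1 if f == "H" else 0
        if run >= length:
            return True
    return False

runs = 100_000
hits = sum(
    has_streak(random.choice("HT") for _ in range(100))
    for _ in range(runs)
)
print(hits / runs)  # ~0.9997: nearly every 100-flip tape contains the "talent"
```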
5:59
And what you're showing is
6:02
that chance is uneven.
6:04
People sometimes say chance is lumpy. The distribution
6:07
of results clumps. So
6:09
you can selectively draw circles around
6:11
various results and if you only report
6:14
that, it seems like you have very powerful
6:16
evidence. And one thing to emphasize
6:18
in all of this is, I mean, the way
6:21
we're talking, we sort of, you may have an image
6:23
of psychologists rubbing their hands together and saying, how
6:25
are we gonna cleverly go through the data and
6:28
figure out some results so we can get things published,
6:29
knowing that this
6:32
is a bad practice. But for
6:34
the longest time,
6:36
maybe people thought there was something a little bit
6:38
wrong about this. I quote some critics who
6:40
say, people usually think it's wrong, like
6:42
it's wrong to jaywalk and now we're discovering
6:44
it's wrong, like it's wrong to rob a bank. But
6:48
back in the day, and I was as guilty
6:50
of this as anybody else, it was kind of what
6:52
you expected to do. You do a big study,
6:55
you look at the data, you and your students, you and
6:57
your collaborators, you look at the data, you say, well, what
6:59
worked? What effects do we get? Okay,
7:02
there's no overall effect, but gee, if you
7:04
just look at the, you know, developmental study, you look
7:06
at the 10
7:06
year olds, there's something really interesting
7:08
going on here. And it's like you're looking at data,
7:11
discovering findings. And
7:13
then later on when you write the paper,
7:16
things kind of transform a
7:18
bit in your mind and you kind of get
7:20
the feeling you predicted those weird
7:22
effects you found.
7:24
And at no point did you say,
7:26
I'm gonna publish something which probably isn't true,
7:28
you really believe this stuff. But
7:31
if we fail to take into account the fact that
7:33
always looking at the data in multiple
7:35
ways increases our odds of a false positive finding,
7:38
we're gonna get ourselves in all sorts of trouble. And we got
7:40
ourselves in all sorts of trouble. We certainly
7:42
did. And yes, I
7:44
think it's absolutely important to
7:47
distinguish malice, malicious intent
7:50
from just error. And I
7:52
will say, like I was
7:54
trained in a very specific way to do
7:56
that, to look for results. There's
7:58
a naive, and I'm sorry for saying naive,
8:00
belief here, by which I mean,
8:02
even I had it, that when you collect
8:05
data and you have a dataset, that
8:07
there are results
8:10
in there, that there are
8:12
findings in that dataset
8:15
that have to be uncovered. And
8:17
that what running statistical
8:20
tests on all of the data is
8:22
doing is, well, there might be a needle in this haystack,
8:24
let me find it. So that when you
8:26
finally get a significant statistical
8:29
result, you say, aha, that's
8:31
where it was. And I think
8:34
there is this sense that,
8:35
well, how could it have given me a significant
8:38
finding if that weren't true?
8:41
And what's really being missed is how
8:43
statistics work and how chance works.
8:45
I think that's exactly right. If you run enough tests,
8:48
you're going to get something by chance. That's
8:50
right. And I think we should be careful
8:52
here since with the exception of ourselves, I don't
8:54
want to point to anybody and
8:56
present them as bad examples. I think there's,
8:59
I think one thing we talk about is there's a little bit too much public
9:01
shaming about failures to replicate and so on,
9:03
which has some bad consequences. But
9:06
as an example, without giving a name, there's a
9:08
colleague of yours, not within the psychology
9:10
department who put up on some sort of social
9:13
media a parable. And the story was
9:15
he had some data and then one
9:17
student looked at it and gave it a cursory look and says, well, our experiment
9:19
didn't work. And the second student looked at it really
9:22
hard and did more analysis
9:24
and says, no, if you
9:26
look at this
9:26
and look at this, look at this, we have a really cool
9:28
finding. And as he told the parable, the
9:30
second student was the hero. The first student
9:33
was the layabout, the lazy person, the person who didn't
9:35
try hard enough. And then it
9:37
turns out a lot of people said, that's exactly
9:40
wrong. The second student is
9:42
doing something which is bound to get
9:44
you results that are not real. I can't
9:46
emphasize enough, this is the way
9:48
in which many of us were trained, myself included.
9:51
While I can be blamed
9:53
for not taking steps
9:56
to
9:56
be less ignorant about statistics,
9:59
it wasn't due
9:59
to anything that I thought was wrong. Same.
10:02
So we have a body of work that
10:05
for good reason people are now going back to
10:07
and trying to see if we are a
10:09
bit more strict in our procedures,
10:13
will we still get this result? That's right.
10:15
And so there's been a lot of reform.
10:17
And sometimes the replication
10:20
crisis is talked about in the context of social psychology.
10:22
And for a couple of reasons, there's some evidence
10:25
that there's more replication failures
10:27
in social psychology than some other fields,
10:29
but also, and this is more to the credit of the
10:32
field. Social psychologists have been upfront
10:35
in identifying the problem and
10:37
developing measures to address it. So
10:40
one measure is preregistration.
10:43
This is sort of the equivalent of calling your shots in
10:45
pool or equivalent of actually drawing
10:47
the bullseye before you shoot the arrow, where
10:49
you say ahead of time, we expect
10:52
this finding.
10:53
This is what our theory predicts. That
10:55
doesn't preclude you from looking at other things as
10:57
long as you're honest about it. But this is our
10:59
main finding.
11:00
And putting that
11:02
publicly so that reviewers could see this before
11:04
you run the experiment. And the logic
11:07
here, which I think is basically right. I think there's been a
11:09
controversy, but I think it's basically right, that
11:11
by doing so, you discipline yourself
11:13
to not p-hack.
11:15
And so if you find a finding you're looking
11:18
for, people could be confident that
11:20
this is a real, more or less a real effect
11:23
and not the result of you looking through a hundred possible effects
11:25
and then choosing the one that worked at random. One
11:27
of the things that has made this
11:29
possible is the existence of websites like
11:32
the Open Science Framework or aspredicted.org,
11:36
where you can publicly issue a prediction.
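A small sketch of why calling your shot matters (our illustration, using the fact that under the null hypothesis a p-value is uniform on [0, 1]; treating 20 analysis paths as independent is an idealization, since real analyses are correlated):

```python
import random

ALPHA, RUNS, PATHS = 0.05, 100_000, 20

# A preregistered lab runs the single test it called in advance.
prereg = sum(random.random() < ALPHA for _ in range(RUNS))

# A "forking paths" lab tries 20 ways of slicing the data and
# reports the best p-value it finds.
forking = sum(
    min(random.random() for _ in range(PATHS)) < ALPHA for _ in range(RUNS)
)

print("preregistered false-positive rate:", prereg / RUNS)   # ~0.05
print("forking-paths false-positive rate:", forking / RUNS)  # ~0.64
```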
11:39
Yeah, that's right. Let's talk a little bit more
11:41
about social psychology. As you
11:43
say, a lot of the reform came from social psychology
11:45
and
11:45
a lot of the failures to replicate have come from social
11:48
psychology. So that's concerning. It's
11:50
also the case, though, that a lot
11:52
of the social psychology that
11:54
has been done in the last 20, 30 years
11:58
is work that's a lot easier to
11:59
do in the sense that collecting data
12:02
presents less of a challenge
12:05
in the sense that it's often easier
12:07
to collect data in social psychology
12:09
because we're not dealing with babies or clinical
12:12
populations or non-human
12:14
primates. We can go
12:16
and use a service like Amazon's Mechanical
12:19
Turk and run a quick study
12:21
and have the results in an afternoon.
12:23
So it makes sense that social psychology
12:26
might discover it first. Yeah, that's right.
12:28
I think there are certain problems, reasons
12:30
for the replication failures in social psychology
12:32
that wouldn't arise in other fields, which
12:35
is, I'm going to run this by you. You tell me if
12:37
you think that this is right. But a lot of
12:39
social psychology hypotheses, for
12:42
one thing, there's a strong desire to
12:44
have sexy findings, really cool paradoxical
12:47
findings. Sexy here has nothing to do with sex, but
12:49
rather just the appeal of a finding, the
12:51
excitingness of a finding. That's right.
12:54
If you do a study on moral blame
12:57
and you find that we tend to be more and more
12:59
angry at people who commit murder
13:01
than people who commit shoplifting,
13:04
that's not very interesting. That's not going to get
13:06
published in Science or Nature because it kind of makes
13:08
sense. On the other hand, if you find,
13:10
I'll make up something, that you're more likely
13:12
to blame somebody if they're wearing the color red
13:14
than the color blue,
13:16
or more likely to blame somebody if you just
13:18
took a hot shower than a cold shower. Well, that's
13:20
really interesting. You wouldn't have predicted that.
13:23
And that's the kind of thing that gets in
13:25
the top journals and gets on TV, gets...
13:28
Gets press coverage. Gets press coverage. And
13:30
there's also a political aspect to it
13:32
where sometimes a lot of social psychologists
13:35
have kind of political agendas and
13:37
this tilts the kind of finding that you want
13:39
to get. So just as one example
13:41
a lot of people discuss, it's kind of
13:43
a lot easier to publish a paper that finds
13:46
bias of some sort, racial bias,
13:49
sexist bias, and so on, than one that doesn't.
13:52
That also kind of skews
13:53
the field in ways that may lead to some
13:55
problems. And chances are
13:57
in the field of visual perception, you're not...
13:59
likely to encounter that kind of
14:02
resistance to a finding if it doesn't
14:05
meet somebody's ideology. That's
14:07
right. And you might also say that other fields are so boring
14:09
anyway, that, you know, there's not
14:11
that problem. As you say in your chapter, I mean,
14:14
there's a reason that social psychology is
14:16
interesting to people because it deals
14:18
with a lot of the problems that we
14:20
face in our everyday
14:24
lives, in our public and political lives,
14:26
as a nation, as a society, as a group
14:29
of humans on the earth.
14:29
Social psychology addresses some of the
14:32
most complex and important
14:34
problems that there are, which
14:36
leads me to another reason why social
14:39
psychology findings might
14:41
have a harder time replicating. And that is that
14:44
we're often studying something
14:46
that has a level of complexity that
14:50
some of the, for lack of a better term,
14:52
the lower level areas of psychology
14:54
don't have, where if I want to
14:56
understand why people vote
14:59
conservative or liberal, there are so
15:01
many things in the world that would influence
15:03
that.
15:04
Things like where you grew up, who
15:06
you married, what your income
15:08
level is, maybe something genetic,
15:11
something about your personality. There's a ton,
15:13
a ton of things that might influence that. And
15:15
so finding anything that's
15:18
reliable, that's going to hold up across
15:20
a bunch of studies, I think can
15:22
be tricky. Yeah.
15:23
Yeah. And maybe trickier than a
15:26
study on a visual illusion or a study
15:28
on memory. And it may be tricky for
15:30
reasons that you might have studies
15:33
that fail to replicate due to no
15:35
fault of the experimenter, but because
15:37
the world has changed in certain ways. So, so,
15:40
you know, I would think, assume the basic properties
15:42
of, say, color vision or short-term memory
15:45
are... that's true for us. Is that true for hunter-
15:47
gatherers? I mean, it's interesting. There
15:49
may be, there may in fact be some cultural effects
15:51
and so on, but for the most part, you can say this
15:53
is studies
15:53
of things that are universal. There's
15:56
some classic studies, for instance, findings about when
15:58
you get people to think about old people.
15:59
You find their behavior changes in
16:02
certain ways, consistent with how they think about old people. And
16:05
some of these studies,
16:06
like a decade later, don't give you
16:08
the same result. And you could say,
16:10
oh my gosh, those first studies were poorly
16:12
done. And, you know, maybe. But it's also
16:15
possible that our stereotypes of old
16:17
people have changed.
16:18
And so the fundamental architecture of
16:20
how we form stereotypes, how we think about them, could
16:22
be the same across all humans. But the
16:25
specifics change.
16:26
Any study that asks you what
16:29
you think about, say, the American political parties, Democrat
16:32
and Republican, if you had been doing
16:34
a study in 1940 versus 1960 versus 1980,
16:37
it's going to yield very different results because the world
16:39
has changed. That's right. They're going
16:42
to be what you might call historical effects, which
16:44
is what you're talking about. For instance, we've
16:46
documented, because we have data now
16:48
that spans 15 years or so, we've
16:51
documented that bias against homosexuality
16:54
has gone down over time. Because
16:56
we have continuous data
16:59
over all those 15 years, we can
17:01
see that that happened. If you ran
17:03
it in 1998 and you ran it now and you got a difference,
17:05
you might think, well,
17:09
the researchers just did a bad job that first
17:11
time around. But that's not the case. That's
17:14
exactly right. And so there's
17:17
all sorts of reasons. I think the replication crisis
17:19
is real, but we're talking about reasons why a
17:21
failure to replicate doesn't necessarily
17:23
show the original finding didn't exist. We
17:26
also got to account for the fact that sometimes replications
17:28
are poorly done. We're presenting
17:31
this as if this is just yet another topic,
17:33
but I think we'd
17:35
be missing something if we didn't talk a little bit about
17:37
how emotional this issue is for many people.
17:39
Careers have
17:42
been destroyed
17:44
by accusations of shoddy science.
17:48
And I think if you asked a hundred psychologists,
17:50
you get a hundred different opinions over, you know, these
17:53
people, they should have their careers destroyed, or no, these are
17:55
a bunch of bullies and people being
17:58
savagely unfair.
17:59
And this has really
18:01
upset the field in a lot of ways. Absolutely.
18:04
In fact, it's a bit difficult to
18:07
talk about it without keeping in mind
18:09
the various people who might listen to us talk
18:11
about it and have very different opinions because
18:14
of the experience that they have had with
18:16
this sort of sweeping set of reforms and
18:19
the accusations of shoddy science and all
18:22
that. It's as much a sociological phenomenon
18:24
as it is one of scientific progress.
18:27
Maybe I can talk a little bit about the experience
18:30
that I had watching this whole thing
18:31
unfold. It started for
18:34
me with a colleague of mine at
18:36
Cornell who pretty soon
18:38
after I arrived at Cornell, published a
18:40
paper
18:41
showing
18:43
some small but he
18:45
claimed reliable effects that
18:47
extra sensory perception was real. So
18:49
this is Daryl Bem, this is not a secret
18:52
and I'm not speaking ill of him. He published
18:54
the paper, he believed it. He
18:56
made all of his materials and data
18:59
publicly available. He wasn't trying to hide
19:01
anything. I remember being faced with
19:03
this real dilemma. I did not
19:05
believe that this was true. I
19:08
in fact firmly believed that
19:11
a social psychologist couldn't upend
19:13
the laws of space and time
19:15
by showing that 53% of the time some people could
19:18
predict the future better than chance. I
19:20
remember thinking to myself, well, this is a
19:23
weird dilemma
19:25
to be in because as a field,
19:27
we have sort of agreed
19:28
on a set of statistical
19:31
tests, methodological procedures
19:34
that count
19:35
as legitimate. And
19:37
here Daryl Bem has shown using
19:40
those very methods that there
19:42
is a result that I have so much reason to
19:44
believe is not true. I'm not sure
19:46
what to do about
19:47
this paper. And sure
19:49
enough, many people count that as sort of- Patient
19:52
zero. Patient zero, exactly,
19:55
of the replication crisis. I remember
19:57
that it was published in I think the top journal of
19:59
social
19:59
psychology, JPSP, Journal of Personality
20:02
and Social Psychology. And I remember a
20:04
lot of people were furious and thought the journal
20:06
should not have published it.
20:07
Yeah. I personally thought that it shouldn't have been published
20:10
as well. I thought that there
20:12
was so much more reason to believe in the laws
20:14
of space and time that,
20:17
that I would go against it. But, but I
20:19
have very, very smart
20:21
colleagues who, who disagree with me. And now
20:24
looking back, it may be that this was an
20:26
important thing to publish because of the,
20:28
the result that it had. There's a point made by the
20:31
philosopher David Hume and others, which is the
20:33
extent to which you believe a claim depends
20:36
on sort of your, what sort
20:37
of prior beliefs it would shake
20:39
up. So with regard to miracles, you know, if
20:42
I would trust you. So if you said you had
20:44
orange juice for breakfast today, I say, well, sure. But
20:47
if you told me you have the ability to levitate and
20:49
you were flying around your apartment, your house
20:51
yesterday, I would, I would
20:53
say, well, I normally trust you, but that's so outlandish.
20:56
I have to say, you're lying or you've gone crazy. It's
20:59
much more likely to me that you're
21:01
lying or gone crazy than you've acquired the
21:03
powers of flight. It's much more
21:05
likely that
21:07
something went wrong in Daryl Bem's study
21:09
of some sort than that people
21:11
can tell the future. But now, having
21:13
said that,
21:14
you now have to, in some way... the reason why
21:17
this was patient zero is it forces you to say, well, his
21:19
analysis and his methods were no different
21:21
than any other paper published
21:24
in this prestigious journal. So if
21:26
you don't trust them, now you have to question
21:28
those analyses and those methods. And that's
21:30
the only conclusion that I could come to. Luckily
21:33
there were smarter psychologists than me
21:35
who published a paper cleverly entitled
21:37
False-Positive Psychology, sort
21:40
of a play on positive psychology,
21:42
showing exactly how they
21:44
could manipulate data, manipulate analyses
21:47
in order to find something that
21:50
could not be true. So showing
21:53
for instance, that an experimental manipulation
21:55
of listening to one song versus
21:57
another caused the
21:59
average age of a participant to
22:02
decrease. The song was "When I'm 64."
22:06
And I think the finding was listening to that made
22:08
you younger. Made you actually younger. And
22:11
then they said, since, you know, well, you're probably not gonna believe
22:13
that. So now you have to question, you know, how we
22:15
got that conclusion. That's right. What
22:17
they demonstrated systematically was what we were
22:20
discussing before, that whenever you have
22:22
the ability to make certain choices in your
22:24
analyses, carve up the data in
22:26
ways that you may not have said you
22:28
were going to,
22:30
all fairly innocuous steps,
22:32
you can actually arrive at a
22:34
conclusion that statistically seems
22:37
robust, but that
22:39
cannot in fact be true. And when this happened,
22:42
we keep saying this, it shook up the field
22:45
tremendously and continues to. So
22:47
this morning,
22:48
I'm on Twitter and somebody publishes
22:51
this analysis and this set
22:53
of replications saying that yet another
22:56
well-known psychological phenomenon apparently
22:59
doesn't replicate. Apparently it's not
23:01
real. And I don't know if I believe it or not. I
23:04
think sometimes people are too quick to reject these, but
23:06
it's reached a point where
23:09
when we're writing a book on psychology
23:11
or lecturing to intro psych students,
23:13
we have to sort of ask, for just about
23:15
everything, okay, does
23:18
this hold up? How much can we trust this? Yeah,
23:21
that leads me to a question that I wanted to ask
23:23
you, which is, has
23:26
this
23:27
so-called replication crisis changed
23:29
the way that you teach? I have found
23:32
that I'm sort of forced to
23:34
make some judgment calls about
23:37
what to talk about
23:38
in an introduction to psychology
23:40
class. It's even my concern now,
23:42
Paul, that we have gone too deep into
23:45
the weeds for our audience.
23:47
How much should our students know about
23:49
this? So I think there's two ways
23:51
to take your question. One way in which I
23:53
think for both of us the crisis has changed how we
23:56
teach is that, I'll put it in my terms,
23:58
I used to talk about some studies saying,
23:59
This is going to seem unbelievable to you, but
24:02
it's true. And then I talk
24:04
about the study. And now for some
24:07
of those specific studies, I no longer do that
24:09
because in fact it probably isn't true. And so,
24:11
you know, a lot of my psychology
24:14
class has gotten smaller in
24:16
some ways where a lot of experiments just I don't talk
24:18
about because I don't believe them anymore. So
24:20
that's one way, which I think has affected us. The
24:22
second question, which is harder is
24:25
when you're teaching intro psych, how much do you go into
24:28
the weeds, as we're doing now?
24:32
And I tend to not,
24:34
I tend to want to teach the students
24:37
about what we know about the
24:39
mind. And since there's just so many
24:41
hours we have, I spend less
24:43
time on the debates within the
24:45
field of where we've gone wrong and how our practices
24:47
could be improved. Not because I mean, it's interesting
24:50
enough for us to talk about it here. I just
24:52
think I'd rather be telling students, here's what we
24:54
know about memory. Here's what we know about sexual desire.
24:56
Here's what we know about schizophrenia rather
24:59
than focusing too much on the metascience
25:01
of how we know it and maybe how we got it
25:03
wrong in the past.
25:05
We want to communicate true things to
25:07
our students. And in part, I
25:09
take it as my responsibility
25:12
to distinguish true
25:14
from false. And so the replication crisis
25:16
has affected me in as much as I have
25:18
shifted my standards for what counts
25:21
as evidence and what doesn't. And so now
25:23
I may talk about
25:24
different studies than I used to talk
25:26
about, just like you. I've discarded discussion
25:29
of some studies entirely because I've lost faith
25:32
in whether they're true or not. But
25:33
that doesn't mean that I have to jump into
25:36
a discussion of replication
25:39
any more than I would have to jump into a discussion of particular
25:42
statistical techniques that are used in the field. That's
25:44
right. That's right. I don't talk
25:46
about statistics or experimental design either
25:48
very much, only the amount they need
25:51
to know to follow the studies. There
25:53
are people we both know who would say, look, this
25:56
matters a lot more than we're talking about.
25:58
The replication crisis shows that
25:59
scientific
26:01
psychology has been built on a house
26:03
of sand. And I think
26:05
you and I, since we're this
26:07
far into this discussion, it's pretty
26:09
clear that we don't believe that. And
26:12
I think each of us could list several
26:14
dozens of established
26:17
findings and interesting theories
26:19
and real discoveries from our field that
26:22
have stood the test of time, that have survived the
26:24
replication crisis. And that's
26:26
kind of what I want to focus on in my course.
26:29
There is a bit of schadenfreude,
26:32
I think, sometimes when people hear
26:34
about psychology not working
26:37
or not being reliable. Because I think
26:39
that there are a lot of people who doubt whether
26:42
or not psychology can be a real
26:44
science
26:45
for less well thought out reasons.
26:48
Maybe they think that there is something special about
26:50
humans that we don't lend ourselves nicely to
26:52
scientific investigation, or maybe because
26:55
they hold dualistic beliefs. They don't believe that
26:57
human behavior can be treated
26:59
as caused in the same way that weather
27:02
can or the motion of billiard balls
27:04
can. A lot of reasons why people might suspect
27:08
psychology. But I
27:11
firmly believe that the human mind
27:13
is a product of natural
27:15
causes and
27:17
to the extent that we can study
27:20
natural causes and effects we can
27:22
build a science of psychology and I think we
27:25
have plenty of evidence that that is the
27:27
case. It's just harder.
27:29
It's just way more complicated
27:31
than a lot of things that we study. The weather
27:33
doesn't even begin to offer
27:36
a proper comparison to the complexity
27:38
of the human mind but it's a good one
27:41
to point to every once in a while when people want
27:43
some higher level of prediction
27:45
than they're getting from psychology. That's right
27:47
and
27:47
I think when people focus
27:49
a lot on the sort of sexier claims, sexier
27:52
in the sense of intuitively interesting and
27:54
wow, I couldn't believe that, falling apart,
27:57
I
27:57
think that that's true.
28:00
But we shouldn't lose track of
28:02
the real foundational discoveries we've
28:04
made. So just, you know, just to take one example
28:07
of many and many and many, one
28:09
of the interesting findings of psychology is how our memories
28:12
of the past can be distorted by the questions
28:14
we're asked, can be distorted by our expectations and
28:16
beliefs. And that's rock
28:19
solid. And we could list hundreds of other
28:21
ones. We can also list cases where there's
28:23
interesting theoretical debate, where
28:25
you just don't know the answer
28:27
to them, but the data is clear
28:29
enough.
28:30
Yeah. You know, I like to talk
28:32
a little bit about replication
28:35
when I talk about psychology as a science.
28:38
One reason that I do mention it is
28:40
that I think that it has become
28:42
enough of a topic of discussion for
28:44
an educated general audience that
28:47
I feel the need to say something about it,
28:49
to preempt questions or
28:51
other concerns that students might have
28:54
if I didn't bring it up. But one of the things I
28:56
really want my students to have
28:58
by the end of the course is some
29:01
ability to distinguish
29:03
good from bad when it comes to psychological
29:06
studies. But one clear rule of thumb
29:10
that I think everybody should have for
29:12
all sciences is that
29:14
no single study
29:17
really ever offers enough
29:19
evidence to convince you of anything.
29:21
There are so many reasons
29:24
you can doubt the results of a single study.
29:27
But to go to the example that you gave, that
29:30
memories can be distorted. That
29:33
has been established by hundreds and hundreds
29:35
of studies. That's right. I
29:37
think that's an excellent rule of thumb, which is when
29:40
you read in the newspaper or
29:42
online or something like that, some scientists
29:44
discover that, you know, taller
29:47
men like the color purple or whatever, just
29:50
say, OK,
29:51
you know, I'm not going to change my worldview
29:53
on this. Wait until
29:55
there's 10, 100, 1000 studies supporting it.
29:58
And then I could believe it.
29:59
A single study almost never
30:02
provides enough information to change
30:04
what they call your priors, your expectations
30:06
about the world.
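In Bayesian terms (a framing we are adding here, with invented numbers), a single study supplies only a modest evidence factor, and what you end up believing depends heavily on where you started:

```python
# Posterior odds = prior odds x Bayes factor.
def posterior_prob(prior_prob, bayes_factor):
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

BF = 3  # a single tidy study might shift the odds by a factor of ~3

# For a mundane claim you considered a coin flip, that's persuasive:
print(posterior_prob(0.50, BF))   # 0.75

# For an outlandish claim like precognition (prior, say, one in a
# million), the same study barely moves you:
print(posterior_prob(1e-6, BF))   # ~0.000003
```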
30:09
And when you have a single study, say showing something that seems
30:11
kind of crazy, it's also important to remember
30:13
that a single study failing to
30:16
replicate that is also one
30:18
study. That's right. Just because it
30:20
came after doesn't necessarily mean that it's
30:23
better. Of course, a
30:24
lot of times there is some
30:26
care to collect more data than the
30:28
original study or to fix some problems,
30:30
but it's not always the case. So what
30:33
you might have when you have one study that fails
30:35
to get replicated is
30:37
two studies, one that worked and one that didn't.
30:39
That's right. And you have to do a little bit of work
30:41
to figure out which one might be true.
30:44
And I think the best course of action
30:46
is to wait until other people jump
30:48
in and try to do it as well. Yeah, the
30:51
order shouldn't matter.
30:52
You know, if you'd respond differently if
30:54
the one that worked came second
30:57
rather than first, you're thinking of it the wrong
30:59
way. There are just two studies. Throw
31:01
them in the mix and see how things work out.
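One standard way to "throw them in the mix" is a fixed-effect meta-analysis, which weights each study by its precision regardless of which came first. A minimal sketch with invented numbers, added here as an illustration:

```python
def combine(studies):
    # studies: list of (effect estimate, standard error) pairs;
    # weight each by 1 / SE^2 and pool
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

original    = (0.40, 0.15)  # found an effect, noisily
replication = (0.02, 0.08)  # larger study, found almost nothing

print(combine([original, replication]))  # ~(0.10, 0.07): order doesn't matter
```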
31:03
You know, I don't talk about replication in my course,
31:06
but maybe I should. I think you're making kind of a case
31:08
for it. Another reason
31:10
to talk about it is it nicely
31:12
illustrates some aspects of this, of
31:15
how science in general and psychology
31:18
in particular is a human endeavor.
31:20
It's not done by gods. It's not
31:22
done in some sort of abstract way. A
31:25
lot of the reasons for the problems of psychology
31:27
are because psychologists, they're ambitious.
31:30
They have personal ambitions. They want to get well known.
31:32
They want to get tenure. They want to get jobs. They
31:34
want to get into graduate school. They want to get positions.
31:37
And there's tremendous pressure to make
31:39
an impact.
31:40
And this isn't just, of course, not
31:42
just psychology. It's everything. It's
31:45
certainly every science. And so
31:47
the replication crisis and how we responded
31:50
to it is a good illustration of how
31:52
a sort of society can check
31:56
reasonable normal human
31:58
impulses at the door of
31:59
science. And I firmly believe
32:02
that most sciences suffer from
32:04
these problems in a way that will
32:07
emerge.
32:08
Who knows how replicable, I don't
32:10
know, biology is? I don't
32:13
know. Well,
32:14
Nature did this, I don't remember
32:16
the numbers offhand, but did a review
32:18
of cancer research. And
32:21
an extraordinary number of studies in that field
32:23
don't replicate because there's, because again, there's
32:25
various incentives there, including a lot of financial
32:27
ones to talk about your studies and make
32:29
them work.
32:30
That's right. You talked about publication bias a little bit
32:33
in the beginning of this conversation. And
32:35
I think sometimes of the field of nutrition,
32:38
I'm just... Oh God, yes. Not
32:40
to diss nutrition, but false positive findings
32:42
pop up there all the time. So imagine if
32:45
you ask
32:46
a thousand people,
32:48
all of the things that they eat every
32:50
day, and then you get them
32:53
to list all of the various diseases
32:55
or maladies that they've had,
32:57
and you toss it all into one
33:00
big statistical analysis and you find that people
33:03
who eat broccoli sleep better. That
33:06
might get published and you might even get
33:08
a headline, eat your broccoli, you'll sleep better. And
33:11
that is insane
33:14
making. It drives me insane
33:17
to see something like that because for
33:19
one, it conflates correlation
33:22
and causation. We don't know
33:24
what the direction of causality might be
33:26
there. But even
33:29
more importantly, it might not be a finding at all because
33:31
again, if you've done a
33:33
hundred analyses, you're going to find
33:35
a handful of them that are significant.
33:39
And if you had no way of distinguishing
33:41
beforehand which ones you thought were going to come
33:43
out, then you're
33:45
simply drawing circles again around targets.
33:48
That's true. That's true. I mean, one way,
33:50
I'm not the first to put it this way, but if you did a huge
33:52
study on broccoli eaters versus non-broccoli
33:55
eaters and found no difference. And then you
33:57
said, well, let's look at it in terms of astrological
33:59
signs.
33:59
Let's first look at the Sagittarius
34:02
and the Leo and the Capricorns. You
34:05
know, you wouldn't be surprised if you found, you know, oh my
34:07
God, Pisces, you get an effect.
34:09
And then you can publish, you know, if you're Pisces,
34:11
eat broccoli. That's right. And,
34:14
but there's, of course, if you, if you ask the question
34:16
enough times by chance, you'll get the answer.
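A minimal simulation of exactly this fishing trip (our illustration; the "sleep scores" below are pure noise, so any hit is spurious):

```python
import random, math

def two_sample_p(a, b):
    # two-sided p-value from a normal-approximation two-sample test
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

hits = []
for sign in ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo", "Libra",
             "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]:
    eaters = [random.gauss(0, 1) for _ in range(50)]  # broccoli eaters
    others = [random.gauss(0, 1) for _ in range(50)]  # everyone else
    if two_sample_p(eaters, others) < 0.05:
        hits.append(sign)

# With 12 looks at noise, roughly half of all runs net at least one "effect."
print(hits)  # e.g. ['Pisces']: a headline waiting to happen
```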
34:18
That's right. This episode
34:20
of Psych is sponsored by GiveWell.
34:23
When you give to charity, how much impact will
34:25
your donation actually have?
34:27
This question can be hard, if not impossible
34:30
to answer. Most charities just can't
34:32
tell you how your money will be used or
34:34
how much good it'll accomplish. You
34:36
may know it could theoretically help a cause, but
34:38
how, or more importantly, how much? If
34:41
you want to help people living in poverty
34:43
with evidence-backed, high-impact charities,
34:46
we recommend you check out GiveWell. GiveWell
34:48
spends over 40,000 hours
34:51
each year researching charitable organizations
34:54
and only directs funds to a few
34:56
of the highest impact evidence-backed
34:57
opportunities they've found. Over 100,000
35:01
donors have used GiveWell to donate more
35:03
than $1 billion.
35:05
Rigorous evidence suggests that these donations will
35:07
save over 150,000 lives and will improve the
35:12
lives of millions more. Using
35:14
GiveWell's research is absolutely free. GiveWell
35:17
wants as many donors as possible to make informed
35:19
decisions about high-impact
35:22
giving. They publish all of their research and
35:24
recommendations on their site for free. There's
35:27
no need to sign up, no need to give any
35:29
personal information. They want you to have it. They
35:32
allocate your tax-deductible donation
35:34
to the charity or fund you choose
35:37
without taking a cut. So
35:39
if you've never donated to GiveWell's recommended
35:41
charities before, you can have your donation
35:44
matched up to $100 before
35:46
the end of the year or as long as matching
35:48
funds last. To claim your
35:50
match, go to GiveWell.org and
35:53
pick PODCAST and enter Psych
35:55
at checkout. Make sure that they know you heard
35:57
about GiveWell from Psych to get your donation
35:59
matched. Our thanks to GiveWell
36:02
for sponsoring this episode of Psych.
36:05
So we're talking throughout and in
36:08
every case we're talking about well-intentioned people, kind
36:10
of doing their best, but being either
36:12
ignorant of what they're doing or sometimes
36:15
being motivated by certain very reasonable
36:17
motivations. And then there's the question, which we just got
36:19
to talk about, however briefly, of total fraud.
36:22
Where people are making up their data. And I'm
36:24
curious what you think the proportion is.
36:26
So we know it's not zero. There have been famous
36:30
cases. There's one famous case in social psychology
36:32
of someone who got caught
36:35
doing many studies,
36:37
well-known studies, and just literally
36:39
making up the data. And
36:42
psychologists always ask each other this.
36:44
I'm curious what you think. How frequent do you think that
36:46
is? Perhaps I'm a bit naive here.
36:49
The example that you're referring to of Diederik
36:51
Stapel, who made up dozens
36:53
of entire papers with purely
36:56
fabricated data, I think that's, I
36:58
would put money on that. That's very rare. If
37:00
only because you're entering a level of
37:03
risky behavior that is, I don't know, I
37:05
think that the chance of getting caught doing that
37:08
might be pretty high. What concerns
37:10
me are the little ways in which people might
37:12
lie. And I would still classify it as fraud,
37:15
right? I'm not talking about innocent mistakes or naive
37:18
mistakes of method or or
37:21
statistics, but rather somebody
37:24
who realizes that,
37:27
well, if I
37:28
exclude these three
37:31
in this condition, my
37:33
numbers all of a sudden look just like
37:36
I predicted them. And so they'll remove them.
37:38
Even worse, maybe add some numbers.
37:41
Then I think you're getting into some more
37:44
common territory, but I'd still think it's single
37:46
digit percentages. That's right. Stapel
37:48
was an extreme, extraordinary,
37:51
blatant fraud. Once people noticed it,
37:53
it was very, very clear what had happened. But
37:56
then there's the question, which is every scientist in
37:58
our field has been there,
37:59
where you're looking at the Excel data sheet with
38:02
all the numbers there and everything and doing analyses. And
38:05
you realize that at any point one of us
38:07
could say, well, it's almost working.
38:10
If I switched this number from a
38:12
five to a six, all of a
38:14
sudden it would work and there's not a soul in the world that
38:16
would know I did it. And I'd like to think that
38:18
there's enough moral and cultural
38:20
forces saying, that is forbidden.
38:23
You must not do that. That very
38:25
few people do it.
38:26
My sense also is,
38:28
and this I don't have any data on, is that when
38:30
somebody does it, they'll do it again. And
38:32
again, this sounds like some sort of, I
38:35
don't know, Martin Scorsese film or something, but
38:37
they've fallen or applaud a Breaking
38:39
Bad. But once you do a little sin like that,
38:41
the rest of the sins come easy. Right.
38:44
It's like a mob movie where the first time
38:47
a guy has to whack somebody, they throw up,
38:49
but then the next time it's easier. The
38:52
third time they're kind of enjoying it. That's
38:54
right. And in that story, Stapel is the equivalent of Scarface.
38:56
Right.
39:00
So, you know, I don't have any
39:03
evidence for my conjecture. It
39:05
is conjecture. But I will say
39:07
that one of the things that has
39:10
happened in the past few years as a result
39:12
of
39:13
both
39:14
p-hacking and questionable research practices
39:17
and outright fraud is that
39:19
there have been people who have developed techniques
39:22
to look at data and
39:24
sniff out suspicious
39:26
data to see whether or not
39:29
it's the kind of data you would actually find in the real
39:31
world.
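One concrete example of such a technique, which the hosts don't name but which fits the description, is the GRIM test (Brown and Heathers): for whole-number data, a true mean can only be a multiple of 1/n, so some published means are arithmetically impossible for the stated sample size. A minimal sketch:

```python
def grim_consistent(reported_mean, n, decimals=2):
    # For n integer-valued responses, the true mean is (integer total) / n.
    # Check whether any nearby integer total rounds to the reported mean.
    grain = 10 ** -decimals
    center = round(reported_mean * n)
    return any(
        abs(total / n - reported_mean) <= grain / 2 + 1e-12
        for total in (center - 1, center, center + 1)
    )

print(grim_consistent(5.19, 28))  # False: no 28 integers average to 5.19
print(grim_consistent(5.18, 28))  # True: 145 / 28 = 5.1786, rounds to 5.18
```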
39:33
I think making it part of the culture of science to make your data openly
39:36
available, to
39:37
responsibly keep track of the
39:40
sort of chain of evidence, the data
39:42
from the time it's collected all the way to the time
39:44
that it's posted publicly. These
39:46
are all, I think, the kind of structural
39:49
things that will make it less so
39:51
that we have to rely on any individual's moral
39:54
judgment because that's always a bit risky
39:56
and more on the fact that it
39:58
would be difficult to...
39:59
to get away with this. That's right. And
40:02
with regard to P-hacking and other things, I think
40:04
a cultural change as well, which I think has
40:06
largely happened, where if you look
40:08
at one of my papers, you reanalyze the data
40:10
and you say, no, you did the study wrong, you
40:13
analyzed it wrong, you don't get the finding
40:15
you say you get. And
40:17
I say, you're right, that result,
40:19
I'm glad you caught that. That should be a good thing.
40:21
Absolutely, yeah. And I think to the credit
40:23
of our field, a lot of people have said about their previous
40:26
work, I've looked at this more and I
40:28
no longer believe this result, and a lot of people
40:30
just say, that's wonderful, you're acting
40:32
like a true scientist. And that can be very difficult
40:35
when you have built a career on a particular
40:37
set of hypotheses that
40:39
you've tested over dozens of papers,
40:42
and somebody points out the flaws,
40:45
it's very easy to get defensive about it. It's
40:48
very easy to think of your papers and your studies
40:51
as your child, or an extension
40:53
of you, and defend it to the death.
40:56
But I think if I've contributed in any
40:58
way to obfuscating the truth, I
41:01
prefer that to be known. And I
41:03
prefer to know it, after all, I undertook those
41:05
studies because I'm curious about that specific question.
41:08
If it's not true, I would like to know. Well,
41:10
maybe you're a better person than me. I
41:13
do have that attitude, I do wanna know the
41:15
truth, but
41:15
my first reaction, if
41:18
somebody says, oh, your study is, particularly
41:20
if they say it rudely, or derisively,
41:22
your study is nonsense, is to get defensive.
41:25
Not only do I wanna know the truth, I also
41:28
wanna be well paid and have a job and
41:30
be well respected in the field.
41:32
Yeah, and there are concerns about your
41:35
legacy, your contribution. I think that's
41:37
totally fair. I don't mean to say that I won't
41:39
get defensive, because I'm sure that I would. You
41:42
sometimes get requests to share
41:44
your data. My heart does sink
41:46
a little bit, because it means that that's probably,
41:49
it's somebody who doesn't believe my original finding, and
41:51
wants to catch me in
41:53
something. You treated that, and I do too, like an IRS
41:56
audit, which is, oh, how
41:58
nice, people from the
41:59
IRS are going to analyze my finances and
42:02
make sure I did my taxes right. How
42:04
good it is to get that all nailed down. That's
42:06
true. I wouldn't want to have paid the wrong
42:09
amount of tax. That's right.
42:11
What if I paid too little? They're going to correct
42:13
it all and maybe fine me. So it
42:17
is difficult. And I think we have to work very
42:19
hard on working on our attitudes
42:21
about this. It's been at some level
42:24
a difficult correction, but it's just
42:26
been a really wonderful
42:28
thing. We've sort of, largely, been
42:31
working very hard to clean up our act. There are
42:33
all sorts of details that are still being
42:35
worked out. But if there's one thing
42:37
I can see in my students is a
42:39
completely different attitude
42:42
toward transparency in
42:44
scientific practice, a different attitude
42:46
toward things like sharing data.
42:49
Absolutely.
42:50
Making all of your materials
42:53
available. And I
42:55
hope, I think, even not
42:57
feeling too defensive. So should
42:59
we shift gears and spend a few minutes talking
43:01
about a problem in psychology that
43:04
some people believe to be even worse than the replication
43:06
crisis?
43:07
Yes. The weird problem. The weird
43:09
problem. The term was thought up
43:11
by Joseph Henrich and his colleagues,
43:14
the term WEIRD, and it's an acronym meaning
43:16
Western, educated, industrialized,
43:19
rich, democratic. It was published originally
43:21
in your journal. In my journal, Behavioral and Brain Sciences,
43:24
I was the editor and handled the paper called
43:26
The Weirdest People in the World. The idea
43:29
here is that people in
43:31
these societies, Canada, United States, New
43:34
Zealand, Australia, France are in some
43:36
ways psychologically different from the rest of the
43:38
world. That's sort of an empirical claim to make. But
43:40
whatever you think of that, what's
43:42
what's really clear is that
43:45
they are wildly overrepresented
43:47
in our psychological science. So
43:50
one statistic that
43:52
Henrich and his colleagues say
43:54
is that a randomly
43:57
selected American undergraduate is
43:59
more than four
43:59
thousand times more likely to be a research
44:02
participant than a randomly
44:04
selected person from outside the West. So
44:07
what other people have said is, we should take
44:09
our journals, like Journal of Personality
44:12
and Social Psychology, and just change
44:14
it to Journal of American Personality
44:16
and Social Psychology, and change
44:18
the journal Cognitive Development to cognitive development
44:22
among highly educated undergraduates
44:24
and the children of those same families
44:27
in the West. And
44:28
of course we don't want to do that. We say, no, I'm
44:31
not studying memory in Americans.
44:34
I'm studying memory. I'm studying vision. I'm
44:36
studying stereotyping. I'm studying mental
44:38
illness. But the
44:41
critique is, since we almost exclusively
44:44
study from this narrow subpopulation,
44:46
we are unequipped to tell whether
44:49
we're getting universals or we're getting cultural particulars.
44:52
I have mixed feelings about this
44:55
critique because on the one hand,
44:57
I think it is
44:58
a no-brainer to
45:01
try to collect data
45:03
from a diverse
45:05
set of participants. And it's absolutely
45:07
true that the field has been
45:09
built on 20-year-old
45:12
college sophomores at universities
45:15
that have a psych department.
45:17
Surely
45:18
there are limitations in what you can
45:20
conclude from studying just
45:23
those people. So I want to take that
45:25
seriously, and I believe it to be true.
45:27
On the other hand, there is
45:29
a way in which some processes
45:32
I really do believe to be universal. But
45:35
I don't think that what you need to do, if
45:37
you want to know whether a visual illusion
45:40
works, is I don't need to study
45:43
college undergraduates, old people
45:46
living in assisted living
45:48
homes, and young
45:50
Incan children.
45:52
It seems as if there are
45:55
some things that we might
45:57
be able to know a priori
45:59
are not going to be that
46:02
changeable across people. Now, of course, Henrich wants
46:04
to say that some of the things that you thought
46:06
were universal, here's some evidence that they're
46:08
not universal. But I'm not
46:11
sure I agree with all of what they say. I think
46:13
I agree with you regarding both aspects of your
46:15
claim. So I also think that
46:18
a lot of what you find among
46:21
rich, educated, even undergraduates
46:23
at universities will
46:26
also show up
46:27
everywhere in the world because you're capturing facts
46:30
about humans. And it's no different
46:32
than if you were to study aspects of their biology
46:35
or their genes or their physiology. But
46:37
having said that,
46:38
I have to admit it's sort of an empirical
46:41
claim and you're not gonna know what's right until you look
46:43
at other populations. So, you
46:45
know, it's very tempting for me to say, and I do think
46:47
that the fundamentals of how we perceive
46:50
the world are just part of the
46:52
human architecture and don't differ. Henrich
46:55
and his colleagues will say, you know, visual
46:57
illusions, for instance, work in some
46:59
cultures and not others. And
47:02
I think we have a really vibrant argument of
47:04
whether that's true. And I actually
47:06
think that the universal side has more evidence for
47:08
it, but we won't know for sure until
47:11
we have a broader sample. And sometimes
47:13
when you do these broader samples, those are really
47:16
interesting, supporting both universals
47:18
and particulars. So studies, for instance, of
47:20
sex differences in certain
47:22
aspects of who you find attractive
47:25
find universals among every culture
47:28
ever studied.
47:29
These are striking universals. But
47:31
at the same time, they also find
47:34
that cultures differ in certain ways.
47:36
So I share your sort
47:39
of priors, your biases
47:41
about how things are gonna work out, but we can't know for
47:43
sure
47:44
until we do broader studies.
47:45
You're absolutely right. And so maybe the burden
47:49
is on the researcher who makes a claim
47:51
about universality, which is not always
47:53
the case. This is another misunderstanding, but
47:55
I'll get to that in a second. But if you
47:57
say we have demonstrated...
48:00
this universal process in
48:03
the way that humans use stereotypes. The
48:05
burden is on you to test
48:07
it in... It doesn't mean you have to test it in every
48:09
population in the world, but
48:11
maybe one or two other ones could
48:14
at least falsify that claim. Suppose
48:17
that I study something that I truly believe is a robust phenomenon across
48:20
all human beings. The storage
48:22
capacity of short-term memory.
48:24
And I find that somewhere
48:26
between five and ten words are easily remembered.
48:29
If I claim that it's universal and I say, well,
48:33
let me give this a shot. Let me study
48:35
it in undergraduates in Argentina.
48:38
And I fail to find it there.
48:41
Then
48:41
boom, now at least you have the
48:44
ability to... Because you
48:46
can never say this is truly
48:48
universal in
48:50
any confident way unless you test it across all people.
48:53
And nobody's going to test it
48:55
across all people. But you can at
48:57
least make some hypotheses about where you
48:59
ought to find it and where you don't. That's right.
49:02
And I think if you test enough
49:04
people from enough of a range,
49:07
you can at some point reasonably extrapolate
49:10
to the rest of the world. So if you tested people from a hundred
49:12
different cultures in
49:15
a hundred different countries
49:17
and get the same result, and then I come up
49:19
to you and say, well, you haven't tested
49:21
elderly people in Tucson, Arizona.
49:24
You can say, well, there's no reason to expect that to be different. That's
49:26
just basic scientific generalization.
49:29
If you find that some blood pressure medication
49:31
lowers
49:32
people's blood pressure and you test
49:34
it across a hundred diverse populations,
49:37
and it always does that, you
49:39
can say, oh, it seems to work for humans.
49:41
It seems to work for people.
49:42
To be even more fair to
49:45
the WEIRD hypothesis, the stuff that
49:47
I study, I should look more at
49:49
other people outside
49:51
of the US, for instance, because when
49:53
you're studying things like moral
49:55
judgment or emotion, we have
49:58
good reason to believe
49:59
that that might actually differ. And
50:02
you might wonder whether
50:04
or not, for instance, the fact
50:07
that people seem to have strong intuitions,
50:10
that
50:10
the intention behind an action matters
50:13
more than the consequences of that action. It
50:15
could be that what's happening
50:17
is I have a failure of imagination
50:21
to
50:22
predict that in some cultures,
50:24
consequences might beat out intentions.
50:27
Yeah, that's a great example. And
50:29
for that kind of example, you probably want
50:32
to go beyond cross-cultural studies. So there's
50:34
no law saying that psychologists
50:37
are forbidden to look at history. I sometimes find
50:39
psychologists make claims about
50:42
universals that are falsified by anybody who
50:44
would read the Bible or
50:46
read the Iliad. Or,
50:48
actually, more to the point, about non-universals,
50:51
where they say, I've heard psychologists
50:54
make claims that some of our biases,
50:56
call them racial biases, are a modern
50:58
invention that one only finds
51:01
in the West. And they say, really?
51:03
Have you read books from
51:05
a thousand years ago? No, this
51:08
stuff's all around. And so
51:10
there's history. There's looking at development
51:12
to see how things emerge in kids. I to some
51:14
extent think if you find it in
51:16
babies, that's a good argument for
51:18
universality, though I think in some ways
51:20
there's some exceptions. And then for some of the
51:22
things, not what you're talking about regarding prejudice,
51:25
but for some other things, you could look at other
51:27
creatures, other animals.
51:29
Like our friend Laurie Santos's
51:31
work looking at
51:33
financial decision-making in capuchins.
51:36
Yes, that's right. That's right. So if, and
51:38
again, often these things provide
51:41
surprising force for people to argue for universals.
51:44
If somebody says, oh my gosh, that capacity
51:46
we're interested in seems to be a product of Western
51:49
culture since 1900, and you find it in dogs,
51:53
you're probably wrong. And
51:56
I think a lot of the evidence from history, from cross-cultural
51:59
studies, from development,
51:59
points to universals. But having
52:02
said that, there have been some surprises.
52:04
So one surprise for instance is
52:06
that you think of hemispheric differences.
52:09
Language is typically located in the left
52:11
part of the brain, face recognition typically
52:14
located in the right part of the brain. And there's
52:16
some evidence that this is not universal,
52:18
but to some extent influenced by
52:20
reading. And
52:21
when you learn to read, it creates what's called
52:23
a letterbox in the brain. And
52:25
that it sort of accentuates hemispheric
52:28
differences. And I don't know
52:30
if I think that that's right or wrong. I
52:32
think I need to look closer at that, but I think it's possible
52:35
that even things having to do with sort of fundamental
52:37
neural architecture could be affected
52:39
by culture in interesting ways. Maybe
52:41
here's a way to end
52:43
a discussion like this. And
52:46
I'm curious about your thoughts, but
52:48
I would tell anybody who has
52:51
gotten this far
52:51
into listening to this podcast
52:53
and reading your book and consuming
52:56
behavioral science in general, that
53:00
all conclusions, all scientific conclusions
53:02
are usually tentative. And what
53:04
you really want to do is look
53:07
at an entire body of evidence. You
53:09
want to educate yourself enough to know
53:12
when something is clearly wrong with
53:15
the conclusions of either the researchers
53:18
or more often the way in which
53:20
the press release or
53:21
the press talks about a study.
53:25
Kind of train yourself to sniff out some of
53:27
the problems like we've talked
53:29
about p-hacking or publication
53:32
bias, selective reporting, all that stuff. And
53:36
look for multiple studies that
53:38
would support that conclusion.
53:41
And I think to end with what we were just talking
53:43
about, the more evidence
53:45
that you have that this is true in
53:48
not just one small sliver of the population,
53:50
the better you can feel about
53:52
concluding that it's a universal.
53:55
So all this is good stuff, I think,
53:58
as a consumer of science, which we...
53:59
increasingly have to be. Yeah, I
54:02
think I'm largely
54:04
a developmental psychologist and I've
54:06
raised two sons, and many people ask me:
54:09
how has your understanding of developmental psychology
54:11
affected how you parent? And I
54:13
mean one way is I know enough about
54:16
the field to know that I shouldn't listen to individual
54:18
studies telling me how to raise my kids. That's
54:20
right. So some way there's a skepticism
54:23
but there's also, as you say, there's an optimism.
54:26
In some areas we have large bodies
54:28
of studies painting a really interesting picture
54:30
of what's going on. And I also
54:32
think just in general we have the tools now
54:35
of better science and you know
54:38
cross-cultural and historical studies
54:40
and so on to really really help
54:42
us learn more.
54:43
Absolutely. There we go. That's
54:46
an optimistic ending. That is. Thanks,
54:48
Paul. Thank you.