285 - What Do You Mean? - Celeste Kidd (rebroadcast)

Released Sunday, 14th April 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

Time for a quick break to talk about

0:02

McDonald's. Mornings are for mixing and matching at

0:04

McDonald's. For just $3, mix and match two

0:07

of your favorite breakfast items, including a

0:09

Sausage McMuffin. Welcome

0:33

to the You Are Not So Smart podcast. Episode

0:53

285. Are

1:11

you familiar with the internet debate over

1:14

whether or not a hot dog is a sandwich? I'm

1:17

familiar. That

1:24

is the voice of famed

1:26

psychologist Celeste Kidd. She

1:28

is a cognitive scientist and developmental

1:31

psychologist who is well known for

1:33

her research on human curiosity and

1:35

human certainty, how brains

1:38

develop knowledge. My

1:40

name is Celeste Kidd. I am

1:42

an assistant professor in psychology at

1:44

UC Berkeley. And my

1:46

lab is the Kidd Lab. We study human belief

1:49

formation. Kidd

1:54

studies how we acquire and

1:56

conceptualize information, how we form

1:59

beliefs around those concepts and

2:01

in general makes sense of the

2:03

torrent of information blasting its way

2:05

into our nervous systems via all

2:07

the senses and how that affects

2:09

our development how the development of

2:11

the mind proceeds from childhood all

2:14

the way to the day we

2:16

find ourselves in an argument about whether

2:18

a hot dog is a

2:23

sandwich. So

2:27

what do you think? Is a hot dog

2:29

a sandwich? This is one

2:32

of those debates that became the central

2:34

topic of discussion in our marketplace of

2:36

ideas a few years back, one

2:38

of those viral moments that spread

2:40

all across the internet and then

2:42

into homes and house parties. If

2:45

you missed out on this, you

2:47

really missed out on a mid-

2:49

2010s cultural milestone, because everyone chimed

2:52

in on this, from Meryl Streep

2:54

to Matt Damon to Better Homes

2:56

and Gardens to Reader's Digest to

2:58

the National Hot Dog Council, and

3:01

yes, there is a National Hot Dog Council. The

3:03

Today Show ran a piece about it, and

3:05

so did The Verge and USA Today and

3:07

Parade and The Guardian. Even

3:11

Supreme Court Justice Ruth

3:13

Bader Ginsburg chimed in on this,

3:16

saying, quote, you tell me

3:18

what a sandwich is, and then

3:20

I'll tell you if a hot dog is

3:22

a sandwich. Stephen Colbert,

3:24

who was interviewing her, replied

3:27

to that definition by suggesting the definition

3:29

of a sandwich was two pieces of

3:31

bread with any filling in between, as

3:34

long as the filling was not also

3:36

bread. Ginsburg then

3:38

asked if that included a

3:40

roll cut open but not

3:42

completely. He said yes, since

3:45

many Subway sandwiches fit that definition,

3:47

and the two agreed that yes, hot

3:50

dogs were sandwiches. And soon

3:52

the Merriam-Webster dictionary

3:54

came down definitively on

3:57

this topic, stating that yes, a

3:59

hot dog is a sandwich because,

4:01

quote, the definition of

4:03

sandwich is two or more slices

4:05

of bread or a split roll

4:08

having a filling in between. And

4:10

they didn't mention if bread could be that filling, but

4:13

we have to assume maybe so. However,

4:16

many others disagreed with all

4:18

of these sources. Anthony

4:20

Bourdain said the bread in a

4:22

hot dog was just a ballistic

4:24

delivery system for the meat and

4:26

its toppings. And therefore, it was

4:29

not a sandwich in his view. Eric

4:31

Mittenthal, the president

4:33

of the National Hot Dog

4:35

and Sausage Council concurred, telling

4:38

Allrecipes.com, quote, a

4:41

hot dog is not a sandwich. If

4:43

you go to a hot dog vendor and you say, give me

4:46

a sandwich, they're going to look at you like you're crazy. It's

4:48

just culturally not the same as

4:50

a sandwich. And he

4:53

added, quote, in essence, it

4:55

boils down to a hot

4:57

dog is its own unique

5:00

item that exceeds

5:03

the sandwich category. It

5:05

breaks itself free of

5:08

the sandwich category. You

5:10

saw that debate. What did

5:13

you think of

5:16

it? Going, like, first of all,

5:21

what is your opinion? And then what do you think of the

5:24

public discussing the area of your

5:26

expertise? What is my opinion?

5:29

Is a hot dog a

5:32

sandwich? I don't

5:34

have a strong opinion on that one.

5:36

But I love

5:38

that that's a question. I love hearing

5:40

people talk about it. I love seeing

5:43

how confident people are that whatever

5:45

their intuitive judgment is, is the

5:48

right thing. And I love seeing people argue

5:51

with each other and be sure they're going to convince

5:53

the other person when, like, I've seen these

5:55

kinds of debates before, and I know minds

5:58

are very rarely changed for this one.

6:04

I love that debate in

6:07

part because it demonstrates not only

6:09

are people's concepts not aligned, people

6:12

are not good at

6:14

representing that variability. People are

6:16

generally expecting that whatever

6:19

concept they have in mind the other person will

6:21

share. If you think that a hot

6:23

dog is a sandwich, you are sure that

6:25

everybody else should think a hot dog is a sandwich

6:27

and you are offended. If they do not, it

6:29

feels wrong. It feels wrong

6:32

and seeing that play

6:34

out in this debate is

6:36

the most fun part of that debate for

6:38

me. As Kidd just mentioned, our

6:41

concepts don't always align and that's

6:43

what we're going to talk about

6:45

in this episode with psychologist Celeste

6:48

Kidd: semantic disagreements.

6:50

The notion of talking past

6:53

each other. Kidd has a new

6:55

paper which details her research

6:57

into just how misaligned we

6:59

are conceptually speaking, but

7:01

also how unaware we are of how

7:03

misaligned we are, which results

7:06

in an unsupported confidence in

7:08

the notion that when you

7:10

are discussing just about anything

7:12

with just about anyone, the

7:14

odds that you share the same subjective

7:17

idea, notion, concept, mental

7:20

model, schema of what you are discussing

7:22

are very, very low. For

7:25

instance, in the paper they found that

7:27

when one person says the word penguin

7:30

out loud in a conversation, the

7:32

odds that the other person, the

7:34

person listening, is imagining the same

7:36

concept, the same penguin as the

7:39

speaker, the odds are around 12%.

7:41

Yes, there's a nearly 90% chance that

7:46

the last time you discussed penguins

7:48

with another person, you

7:51

weren't really discussing the

7:53

same idea. And if we aren't

7:56

sharing the same penguin in our

7:59

penguin discussions, then

8:01

imagine what happens when

8:03

we discuss politics or religion

8:06

or philosophy or anything

8:09

even slightly more abstract

8:11

than penguins. But

8:14

before we get into that, Kidd had something

8:16

else to add to the sandwich hot

8:19

dog debate. Have you ever...there's work where

8:21

they ask questions that are even weirder than like

8:23

is a hot dog a sandwich. I feel like

8:25

it's a pretty reasonable question. You

8:27

can ask people weird questions like what's

8:30

a better hot dog? Is it a

8:32

snake or a table? And

8:35

that's a weird question to ask because it is

8:37

like a snake is not a hot dog, a table

8:39

is not a hot dog. But

8:41

you can ask that and people will answer...they

8:44

don't like answering it, but they'll answer it

8:46

and they'll all say, snake, a snake is

8:48

a better hot dog than a table. That

8:51

says something very interesting about our conceptual systems.

8:54

It says something about them being probabilistic. It

8:57

says something about the compositional nature of

8:59

them. So presumably a snake is a

9:01

better hot dog than a table,

9:04

maybe because it involves living

9:07

matter, which is kind of gross. It's like meat,

9:09

it's also the same shape as a hot dog.

9:12

But again, it's weird that we

9:15

can answer that question and we can all

9:17

be surprisingly aligned on agreeing that a snake

9:19

is a better hot dog than a table. I

9:24

find these debates enjoyable

9:28

in part because of all of the

9:30

things they call attention to that we

9:32

don't understand about human cognition. I love

9:34

seeing people get very passionate. Again, that

9:36

says something about the

9:39

strengths of our intuitions that are not

9:41

right about when I use the word and

9:43

you use the word, people really feel like

9:46

we should be activating the same concept. Even in

9:48

the face of evidence that we don't, as

9:50

people are split on that one, it's

9:52

really hard for us to wrap our head around

9:54

that. It feels like it's wrong. Celeste

10:04

Kidd is a professor at

10:07

Berkeley and heads up the Kidd

10:09

Lab, which carries the torch of

10:11

psychologists like Jean Piaget, Maria Montessori,

10:14

and Lev Vygotsky.

10:17

In her lab, she and her

10:19

colleagues conduct all sorts of experiments

10:21

to better understand how our minds

10:23

engage in learning and create knowledge

10:25

and conjure abstractions and interpret ideas

10:27

and conceive and make sense of

10:29

concepts and schemas and models of reality. They

10:32

use really cool stuff like eye

10:34

tracking and they develop apps and they measure

10:36

all manner of human behavior and communication. It's

10:38

an incredible world. I

10:42

was very excited to get a chance

10:44

to spend some time with Kidd because

10:46

her lab recently published a paper that

10:48

is exactly the thing I was looking

10:51

for right now at this moment because

10:53

I'm in the middle of researching,

10:55

I'm in the middle stages of researching

10:57

my new book about what the word

10:59

genius really means, which has me exploring

11:02

linguistics and cognition and intelligence and creativity

11:04

and so on. In

11:06

particular, I'm fascinated with how we

11:08

articulate the ineffable and how we

11:10

come to agree or disagree on

11:12

concepts like what is a

11:15

genius, what does that word mean.

11:17

But also, Kidd's new paper is a great

11:20

topic to discuss as a sort of

11:22

epilogue to the project we just finished

11:24

here on the podcast, the three-part series

11:27

where I interviewed authors of recent books

11:30

about how to have better conversations and

11:32

better disagreements with people who see the

11:34

world differently, which is the topic of

11:36

my most recent book, How Minds Change.

11:39

Those were the three episodes before this one and as

11:41

it turns out, Kidd's work has

11:44

a lot to say about that topic from

11:46

a cognitive science perspective and

11:49

Kidd's newest paper puts in my

11:51

mind the final nail in the

11:53

coffin of something called the information

11:55

deficit model, one of those old

11:58

ideas about scientific communication that

12:00

ironically has not stood up to the scientific

12:03

analysis applied to it. In

12:05

that model, there's this concept called

12:08

knowledge deficit, which posits

12:10

that many of the

12:13

problems faced by civilizations arise

12:15

from a lack of access

12:17

to scientific information. And if more

12:19

people had more access to scientific

12:21

evidence, if they just knew the

12:24

facts, then they'd stop being

12:26

so unscientific and superstitious and cult-like

12:28

and extremist and so on, we

12:30

would become less polarized

12:32

and more engaged politically. It's

12:35

a great idea. It's a wonderful dream. But

12:38

the last 100 years of psychological

12:40

research on this topic has revealed

12:42

that the more education you

12:44

gain on any topic, the

12:46

better you become at rationalizing and justifying

12:48

your existing beliefs and attitudes, regardless

12:51

of their accuracy or harmfulness. And

12:53

the better you become at working

12:55

towards your goals, however

12:58

motivated they may be, however

13:00

motivated your reasoning may

13:03

become. And that's what we're going

13:05

to talk about in this episode. All

13:07

that and more after

13:09

this commercial break. Welcome

13:21

back to Money Mania, where we're

13:24

talking tax season. And what to

13:26

do with that refund? If

13:29

you really want your tax refund to have

13:31

a positive impact, I'm bullish on Ohio's

13:33

529 plan. I always

13:36

say invest in something you love.

13:39

Well, there's nothing we love

13:41

more than our kids, so invest

13:43

in their future and let your

13:45

tax refund help. For as little

13:47

as $25, you can get started

13:49

today at collegeadvantage.com. and

14:00

on delivering your product or your services, the

14:03

more margin you have and the more

14:05

money you keep. Okay, that seems pretty

14:08

obvious, but with higher expenses on materials

14:10

and employees and distribution and borrowing and

14:12

everything else, even the price of bananas

14:14

and cookies and tires for your car,

14:18

you wanna reduce costs and you

14:20

wanna reduce headaches. Everything

14:22

costs more and smart businesses know

14:24

this and that's why they're graduating

14:26

to NetSuite by Oracle. NetSuite

14:28

is the number one cloud financial

14:30

system. It brings accounting and financial

14:33

management and inventory and HR, all

14:35

of this into one platform, which

14:37

gives you one source of truth.

14:40

With NetSuite, you reduce your IT costs

14:42

because NetSuite lives in the cloud with

14:45

no hardware required, so you can access

14:47

it from anywhere. It

14:49

cuts the cost of maintaining multiple

14:51

systems because you've got one unified

14:53

business management suite and you're improving

14:56

your efficiency by bringing all of

14:58

your major business processes into

15:00

one platform, slashing manual

15:02

tasks and errors. More

15:05

than 37,000 companies have already done this. They've

15:07

already made the move to this. So

15:09

you can do the math here. You'll

15:11

see how you will profit with

15:14

NetSuite and get this, now through

15:16

April 15th, NetSuite is

15:18

offering a one-of-a-kind

15:20

flexible financing program. To take

15:22

advantage of this through April 15th, go

15:26

to netsuite.com/notsosmart.

15:29

Do that

15:32

right now. It's

15:34

NetSuite, N-E-T-S-U-I-T-E, netsuite.com/notsosmart.

15:37

One more time, netsuite.com/notsosmart.

15:41

And

15:49

now we return to our program. Hey,

16:06

One of the things that I love

16:08

most about psychology and cognitive science in

16:11

general is that much of the work

16:13

up until now has been taking concepts

16:15

from philosophy and putting them to the

16:17

test, quantifying them,

16:19

seeing if they hold up to scientific

16:21

scrutiny. In the realm

16:24

of epistemology, the philosophical discipline that

16:26

asks, how do we know stuff?

16:29

How do we know anything? And what does

16:31

it even mean to know things? What

16:33

is the nature of knowledge itself?

16:36

is the question of how

16:38

people's concepts align. That

16:41

is, when I debate politics with you, are

16:44

my concepts of democracy and taxation

16:46

and national defense and so on,

16:48

are they the same concepts as

16:51

yours? But it goes much deeper

16:53

than that. Your concepts of bananas

16:55

and biscuits and beach volleyball, are they

16:57

the same as mine? Are

16:59

your concepts of love and justice and truth,

17:02

are they the same as mine? Here's

17:05

Celeste Kidd. This has been studied

17:08

in various ways over a very

17:10

long period of time. As long as people

17:12

have been trying to communicate with other people,

17:14

they've wondered why is this so hard? Why

17:16

is this so hard? See, there's a

17:19

chance that when we meet to discuss

17:21

an issue, we're just going to

17:23

talk past each other while assuming

17:25

we're not doing that. That we're

17:27

truly connecting and that when we can't get

17:29

on the same page, when we disagree, we

17:33

assume all sorts of other things about the nature

17:35

of our disagreement when it could be as simple

17:38

as our concepts are not aligning. This

17:40

whole idea, it's old, it's ancient. So

17:43

understanding and overcoming all of this could

17:45

be a true breakthrough. But in psychology,

17:47

with the scientific method applied to the

17:49

matter, many of our

17:52

past attempts have, in Celeste Kidd's

17:54

words, stumbled because we don't really

17:56

understand the compositional nature of most

18:00

concepts. In other words, we can't

18:02

yet open the black box of the

18:04

skull and peer into the

18:06

subjective experience of consciousness to measure

18:08

and quantify and map out the

18:11

differences between one person's mental models

18:13

and another's. We know there

18:15

must be variation in our personal

18:18

concepts, in our subjective representations, in

18:20

our interpretations, but all of that

18:23

must be measured indirectly. Until

18:25

now, there hasn't really been a great

18:28

way to do that. There have been various

18:30

attempts to look at to what degree we

18:32

are aligned with one another. One of my

18:35

favorite examples of an attempt to do this

18:37

was Bill Labov. Bill Labov was

18:39

interested in the ways in

18:42

which people's concepts aligned or

18:44

they didn't. He picked

18:46

the domain of cups and bowls because this is

18:49

a domain in which actually you can have pretty

18:52

good intuitions about which dimensions might matter

18:54

for where you draw this category boundary.

18:57

If you think about what makes a cup a cup or

18:59

what makes a bowl a bowl, it's not

19:01

really about the absolute size. It's more about

19:03

the relationship between the height and the width,

19:05

the shorter and squatter and the wider it

19:08

is, the more people tend to say that

19:10

that's a bowl. The taller and skinnier it

19:12

is, the more they tend to say that

19:14

that's a cup. Then there's

19:16

a couple other features that you can add

19:18

that will bias people to think it's a

19:20

cup or a bowl. If it has a

19:22

handle, that's more likely to be assigned the

19:24

cup category than the bowl category. Then this

19:26

is really cool. He also manipulated the context

19:28

in which you show people pictures of possible

19:30

cups and possible bowls. If

19:33

you add a food context, people will accept things

19:35

as a bowl for longer than they will if

19:38

there's not food present. Bill

19:41

Labov is a great example of someone

19:44

who was thinking about conceptual variability and

19:46

was coming at it from a compositional

19:48

perspective. He was trying to figure

19:51

out how do we represent these concepts? How

19:53

can you explore alignments if you don't know

19:55

how the concept is represented? You

19:57

can't really. That's basically the impediment that

20:00

people ran into when they were thinking

20:02

about these questions. So for cups and bowls,

20:04

what his work showed is that there is

20:06

quite a bit of alignment at the center

20:08

of the category. There's more disagreement at the

20:10

edges and you can

20:12

manipulate the category boundary. It's not just based

20:15

on the height and the width of the

20:17

bowl. It's also the context in which it's

20:19

occurring. So he studied this, but

20:23

he was not able to,

20:25

he wasn't really focused on trying to

20:28

quantify how much variability there is. That

20:31

was a question people had asked,

20:33

but had difficulty making progress

20:35

on because of all that

20:37

we don't know about the compositional nature

20:39

of concepts. So, like cups and bowls,

20:41

heights and widths are relevant. But how

20:44

do you represent a dog? How do you represent

20:46

the concept of love? How do you

20:48

represent even a table, even

20:51

for concrete objects, it's not clear how

20:54

we're doing that. There's various proposals

20:56

on the table. There's a lot

20:58

of work trying to look specifically

21:00

at that part of the problem.

21:03

Eleanor Rosch famously proposed prototype theory, by

21:05

which she suggested that how you're representing

21:07

these things is you take all of

21:09

the particular instances of a cup or

21:11

a bowl or a table, you integrate

21:14

those, but what you're storing and what

21:16

you're using in order to judge whether

21:18

something is cup-like or bowl-like is a

21:20

prototypical example of a cup or a

21:22

bowl. So there've been a lot of proposals

21:25

over the years for how we represent concepts,

21:27

for what the compositional nature is.

21:29

If we don't have that solved, it's

21:32

hard to see how you could measure

21:34

to what degree concepts align when we don't even

21:36

know how to imagine how

21:39

they're represented. So Celeste Kidd and

21:41

her colleagues in their new paper

21:44

titled, Latent Diversity in Human Concepts,

21:46

that's what they set out to do

21:48

with their research to solve some of

21:51

these problems, to understand how these became

21:53

great cocktail party questions. Hot dogs, are

21:55

they sandwiches? Is a hot dog more

21:57

a snake or is it more

21:59

a cat, or more a table? And in so

22:01

doing, they discovered some really

22:04

unexpected things, one of which

22:06

involves how surprisingly misaligned we

22:09

all are when it comes

22:11

to the subject of penguins.

22:14

Tell me if I'm wrong here. The probability

22:16

that two people selected at random

22:18

will share the same concept about

22:20

penguins is around 12%.

22:25

That sounds about right. It's about that. More often

22:27

than not, when you think of penguins, when I

22:29

think of penguin, we are not activating the exact

22:31

same concept. When you think about penguins,

22:34

and when I think about penguins, and dear listener,

22:36

whoever you are, whatever

22:38

you're thinking about when it comes to penguins, there's

22:41

an assumption that if we all sat down and sort

22:43

of talking about how much we like penguins or how

22:46

cute they are, that we all have the same thing

22:48

happening in our brains. And your

22:50

research suggests that's

22:52

about a 12% chance that that's actually

22:55

happening. Very likely, it's super certain

22:58

that we are not

23:00

actually having the same subjective

23:03

experience of feeling, thinking, imagining,

23:05

and remembering penguins. That

23:07

is correct. There's further

23:09

evidence of exactly why

23:12

in the paper. So we

23:14

did a second study

23:17

where we looked to see, given that we

23:19

can now quantify that your penguin and my

23:21

penguin are probably not the same penguin concept,

23:24

why are our penguin concepts different?

23:27

So what we are not talking about when we talk

23:29

about differences in the concept is differences

23:31

in the context. So we're not talking about like,

23:34

maybe you're thinking about a cartoon penguin, and maybe

23:36

I'm thinking about a penguin I saw at the

23:38

zoo, controlling for the context, your

23:40

overall concept of penguin is fundamentally

23:42

different from mine most of the

23:44

time. And when we

23:46

go back and then try to figure

23:49

out why, we can do that by

23:51

looking at agreement or disagreement across people

23:53

in terms of the features. So

23:56

I can tell you that there are some

23:58

aspects of penguin concepts along which

24:00

people are very aligned. People

24:02

agree that penguins are flightless.

24:04

They agree that penguins are

24:07

cute. They agree

24:10

that they are birds.

24:12

They agree that they

24:14

are not furry. Where

24:17

people differ in terms

24:19

of their concepts are along features like

24:21

the weight of a penguin. So people

24:23

disagree about whether a penguin is

24:26

heavy or light and people are pretty

24:28

much split on that one, which is

24:30

very interesting because it's clear

24:33

in that case that that variability is

24:35

in part coming from a lack

24:38

of experience getting to pick up a penguin.

24:40

If you pick up a penguin, you would

24:42

know something about the weight of a penguin.

24:44

Really to have a good

24:47

sense of how heavy or light penguins are,

24:49

you want to pick up a bunch of

24:51

different penguins with a bunch of different rays.

24:53

People, when you ask them, are split on

24:55

penguin weight because some people have a very

24:57

strong intuition that they should be heavy because

24:59

of the way they look when they're

25:01

wobbling; they're not flying. Maybe they're not

25:03

flying because they're heavy. Other people have

25:05

a strong intuition that penguins should be light

25:07

because they are birds and they have bird

25:09

bones. So this is another thing

25:12

you can bring out at a party when you're trying to

25:14

divide people. I want to go to your cocktail parties where

25:16

you're asking people contentious questions about hot dogs and whether they're

25:18

sandwiches. I was like, do you want to further divide your

25:20

cocktail party? You can ask them whether they think penguins are

25:22

heavy or light. You're invited. You're invited. It

25:28

cannot be overstated just how much

25:30

semantic disagreement factors into our

25:33

social, legal, and personal conflicts.

25:35

So understanding scientifically the origins

25:37

of those disagreements is vital

25:40

and important work. Semantic cognition

25:42

is understanding the meaning of

25:45

things. I guess simply put,

25:47

there is substantial variability

25:49

in terms of how people are representing

25:51

different concepts, what they understand them to

25:53

mean. When I say a word and

25:56

you hear the word, when you say the word

25:58

and I hear the word, what the

26:01

paper is showing is that we have

26:03

a tendency to overestimate the degree to

26:05

which our concepts align. The

26:07

paper presents a new method for

26:09

quantifying the degree to which our

26:12

concepts do not align and

26:14

it turns out even for very common concrete

26:17

objects. Yeah, our concepts are misaligned more often

26:20

than they are aligned. In Kidd's

26:23

most recent research, she and her

26:25

team found at least 10 to

26:27

30 quantifiably different

26:30

variations of even

26:32

our most common nouns. And I

26:34

love this because I've been

26:36

interviewing all sorts of experts on this topic

26:39

on how difficult it is to agree

26:42

on definitions and how difficult

26:44

it is to define some

26:46

words in particular, especially the

26:48

abstract ones, what linguists sometimes

26:50

call conceptual shells. Words like

26:52

genius and morality and intelligence

26:54

and curiosity and life in

26:56

the biological sense. And I've

26:59

learned that for each one of those words

27:01

and for many others, a huge part of

27:03

what goes on in any scientific discipline is

27:06

participation on every conceivable level

27:09

in a debate over just what those words

27:12

mean exactly. I've

27:14

been told that one of the quickest ways

27:16

to derail a conference or a meeting

27:18

or any gathering of academics is to

27:21

write a word on a whiteboard or

27:23

put up on a slide, something

27:25

like genes or species or

27:27

measurement, and then suggest

27:30

before we continue, let's

27:32

agree on what that word means exactly.

27:36

This rarely goes well and can send

27:39

people into an hour-long debate until they

27:41

finally go, what are we doing? Psychologist

27:43

Andy Luttrell told me that much of

27:46

psychology, especially in the early decades, was

27:48

mostly just a debate around definitions

27:51

and he pointed me toward a

27:53

book by Kurt Danziger titled, Naming

27:55

the Mind, How Psychology Found Its

27:57

Language, which is about this very

27:59

thing. So, I asked

28:02

Kidd about how all of that might

28:04

have influenced her decision to get into

28:07

this line of work. I

28:10

think it started for

28:12

me with wanting

28:15

to understand what was true. My

28:17

background was actually not in

28:20

science. I didn't grow up thinking I was going to be a

28:22

scientist. I thought I was going to be an investigative reporter. I

28:25

had Associated Press reporters in my family, and

28:27

I thought the highest calling was going out,

28:30

finding out what's true in the

28:33

world, especially discovering truths that are

28:35

reflective of people

28:37

that don't usually have a voice. You

28:40

find what's true, you tell people

28:42

what's true, and the world changes,

28:44

was my view of things. I

28:48

went to college. My first majors were computer science

28:50

and journalism. I thought I was going to do

28:52

that. It doesn't take much

28:55

interacting with people about

28:57

a topic that you care about

29:00

before you realize it's really hard

29:02

to align in terms of values,

29:04

in terms of concepts. I

29:07

think a lot of the inspiration for my

29:09

work was arguments I had with editors over

29:12

the years about, like, no, this is important.

29:14

Also, you've edited my piece, and this is

29:16

not what I meant. You

29:18

still have the right idea. I'm

29:21

very interested in what

29:23

are the psychological limits to

29:26

how close we can get to objective

29:28

truth in the world. How

29:30

can we get better access to what

29:32

is true through coordination with one another?

29:35

That necessarily involves having conversations with people

29:38

and trying to align in terms of

29:40

what conceptual representations you're thinking of. You're

29:42

trying to, when you talk to somebody

29:45

and coordinate with them, get

29:47

the same image and concepts and

29:49

ideas in their minds as what you were

29:51

thinking. That's really hard. I

29:54

was interested in this piece

29:57

in exploring what... uh

30:02

exactly how hard of a problem

30:04

is that like it feels like it's very hard

30:06

uh, how much are people aligned? Um, uh, this

30:08

is an age-old question and uh yeah

30:10

we wanted to, we wanted to see. You had

30:12

this beautiful chart, I'm looking at it right

30:15

now, and I love, like, visualizing

30:17

this. Like, there's this sort

30:19

of nucleus of things

30:21

we would all kind of say

30:23

are penguin-ish, penguin-like, but

30:26

as you move away from

30:28

that nucleus into the electron cloud

30:31

of penguin-ness, it's very, very, like, people

30:33

will not agree with all the other

30:35

concepts that we may have about this

30:37

thing. And I, well, I dig this

30:40

because you show this for

30:42

every concept like whales salmon barack

30:45

obama uh people

30:47

have sort of a small

30:50

number of things that we would

30:52

agree upon as the features

30:55

of this concept, and

30:58

then we have a much larger range of things we

31:00

would not agree about. And

31:03

if I'm thinking correctly,

31:05

that leads to an assumption that

31:07

my gigantic word-cloud association

31:09

super-network, whatever metaphors we need to

31:12

make sense of this... I

31:14

have this intuition that mine must be much

31:16

more similar to yours than it actually is. And

31:19

I'm wondering about the

31:24

two people who work in... if somebody

31:26

who works in a zoo in San Diego

31:28

and somebody who works in a zoo in

31:30

Atlanta, Georgia, who both deal

31:32

with penguins every day, and

31:35

a veterinarian who treats penguins specifically,

31:37

if that is such a person...

31:40

I would imagine their overlap is much

31:42

greater, right?

31:45

Yeah, that is a prediction you would expect. That

31:47

is not something that we test in the paper,

31:50

but there's a lot of other

31:53

evidence that the beliefs

31:55

that we form are a byproduct

31:57

of the experiences that we have.

32:01

This is a really simple

32:03

but really profound aspect of

32:05

human psychology. The

32:08

world is super big and you're forming

32:10

beliefs about the big wide

32:12

world necessarily on a very

32:15

sparse sample. That's

32:17

a subset of it, a really small subset.

32:19

So because all people are living very different

32:21

lives and having very different sets of experiences,

32:23

they're taking very different samples and you'd expect

32:26

that that is the origin point

32:28

for why there is variability. I

32:30

would add though that while

32:33

you would expect two people who

32:35

work at two different zoos to have

32:37

similar concepts of penguins, two people, even

32:39

if they're raised in the exact same

32:41

environment, I

32:43

would not expect them to have

32:45

exactly the same concepts. I sometimes use

32:47

the very dramatic example

32:49

of conjoined twins. Two

32:52

conjoined twins, even though they

32:54

are physically in the same space all

32:56

of the time, they have two different minds, two

32:58

different sets of eyes. They are

33:00

going to be sampling differently even though

33:02

they're in the same physical environment.

33:05

We know that the way in

33:07

which people sample from the world follows

33:10

from the samples that they've accumulated

33:12

previously. And especially for people that

33:14

are spending a lot of time

33:16

together (again, conjoined twins is a

33:18

dramatic example), they tend to specialize.

33:20

So if you live with somebody,

33:22

people report acquiring very specialized knowledge

33:24

so that one person will know

33:26

what trash day is. The

33:29

other person will know something else about how to

33:31

clean the gutters and when one person

33:33

is out of town, the other person might be

33:35

surprised that they don't have some set of knowledge

33:37

and it's because it doesn't make sense to have

33:39

redundant knowledge. People specialize in that way. Even

33:42

two people that were physically in the same

33:44

space all of the time should be taking

33:46

slightly different samples from the world and I would not expect

33:48

we have exactly the same concepts as a result of that.

33:58

Let me briefly run through the paper as we talk

34:00

about it. All right, so you

34:02

got about 3,000 people, nearly 3,000 people, and you divided

34:06

them in half. I'm looking at my notes.

34:08

Yeah. And then you, what did you do with

34:10

those people? Yeah, so

34:13

we got around the problem that other

34:15

people had had in trying to quantify

34:17

how much concepts align. The primary

34:21

impediment to doing that in

34:23

the obvious way you think to do it first

34:25

is like, well, let me understand something about the

34:27

compositionality. That's hard. We can't do that. So

34:29

we skipped that step and

34:31

instead had people

34:34

make similarity judgments across

34:36

common concepts in two

34:39

domains. So the two domains we chose

34:41

were common animals. And then

34:43

we wanted something that was like common

34:46

animals. We want something that's animate, but

34:48

we want something about which people may

34:50

have different

34:52

types of judgments. We want something kind

34:54

of moralistic. So we also took US

34:56

politicians and had people make

34:59

similarity judgments in that domain too. Each

35:02

person was assigned to one target. So let's

35:05

say you have chicken. You make

35:07

similarity judgments like what's most similar to a chicken?

35:09

Is it a dolphin or is it a finch?

35:12

And we do that for

35:15

all, we do all of those pairwise comparisons

35:17

for each of the concepts that we're testing.

35:20

What you get out is a vector

35:24

of responses about what you rate to be

35:27

most similar to, for example, a

35:30

chicken. And the intuition

35:32

there is if it's the case that we

35:34

all have crudely pretty much the same

35:37

concept, all of your similarity

35:39

judgments should be the same also. So this allows

35:41

us to get at your conceptual

35:43

representation without knowing the details of what

35:45

the composition of your concept

35:48

is. So that's the hack that we

35:50

use in order to get at this. Once

35:53

we have those response vectors, we

35:55

can now perform clustering over them in order

35:57

to figure out, for whatever we're

35:59

testing, how many latent versions

36:01

of each concept exist in the population.

36:04

And so we can see things like for

36:07

this concept, we have two distinct clusters. For

36:09

this concept, we have ten distinct clusters. And

36:13

what we found was that there's a substantial amount

36:15

of variation. There are quite a

36:17

number of different clusters for

36:19

both the more

36:22

concrete concepts and for the more abstract ones.
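The pipeline Kidd describes (collect each participant's pairwise similarity judgments as a response vector, then cluster those vectors to count how many latent versions of a concept exist) can be sketched in Python. Everything below is a hypothetical illustration, not the paper's actual data or code: the participants, the two-group structure, and the fixed cluster count are all invented assumptions.

```python
import numpy as np

# Hypothetical data: each row is one participant's vector of pairwise
# similarity judgments for a single target concept (e.g. "chicken"),
# one value per comparison pair. Two synthetic subpopulations stand in
# for two latent versions of the concept.
rng = np.random.default_rng(0)
group_a = rng.normal(0.8, 0.05, size=(50, 30))
group_b = rng.normal(0.2, 0.05, size=(50, 30))
X = np.vstack([group_a, group_b])

# Minimal k-means (Lloyd's algorithm) with k=2, seeded with one row
# from each end of the matrix so this toy example converges cleanly.
centroids = X[[0, -1]].copy()
for _ in range(20):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])

# The two latent "versions" of the concept come back as two clusters
# of 50 participants each.
print(np.bincount(labels))  # → [50 50]
```

In a real analysis the number of clusters would itself be estimated (the paper reports different cluster counts for different concepts), rather than fixed at two as in this sketch.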

36:25

So you have these people, you've divided

36:27

them up, you've had them

36:29

talk about animals. You also have this

36:32

other thing where you had them list

36:34

ten adjectives and rate features.

36:37

I love the idea that seals are not feathered,

36:40

but they are slippery. This is something most people

36:42

think. But not everyone agrees on whether

36:44

seals are graceful. Some people see

36:46

them as graceful and some do not. And

36:50

then with Donald Trump, you had like most

36:52

people agree that this is a man who

36:54

is not humble and that he is wealthy.

36:57

But they disagree quite a bit over whether or not he is

36:59

interesting, which is I love how nebulous the

37:02

term interesting is. What

37:04

did you what did you learn from this?

37:07

Did this these these two phases of the

37:09

research, what stuck out

37:11

to you? Yeah, so

37:13

there's a lot that we learned

37:16

and a lot that was surprising. I'll

37:19

start with the first set of things. So

37:21

doing the clustering over people's similarity

37:23

judgments showed us that there's a substantial amount

37:25

of variability in both concrete

37:28

and abstract concepts. There's

37:30

a second part of that first experiment, which

37:32

is we also ask people for their judgments

37:36

on how likely they thought somebody was to

37:38

share their concepts. And based on

37:40

that, we wanted to know

37:42

are people well calibrated to the likelihood

37:45

that somebody is aligned with them? And what we found out

37:47

from that is that they are not. When

37:50

you are activating a concept, when you are

37:52

saying a word, you generally expect

37:54

that someone else will share that concept, will

37:56

activate the same concept when you say the

37:58

word. You

38:01

think that's more common than it actually is.

38:03

Most of the time, as we talked about,

38:06

someone else is not activating the same concept, even for

38:08

the same word in the same context. So

38:11

that was useful. The second

38:13

analysis where we ask people for features

38:15

over the super set of words. So

38:17

we ask people to generate a bunch

38:19

of features for seal

38:22

and for penguin and for all of the

38:24

other words that

38:26

we have in our animal sets. Then we

38:29

ask a separate group of people to rate

38:32

whether or not each feature applied

38:34

to each concept. And that's

38:36

how we get those plots that look like kind of

38:38

word clouds. Something

38:40

interesting in those is that tells us something

38:43

about why we have distinct clusters. So some

38:45

things stand out like people disagreeing about whether

38:47

or not penguins are heavy or light. But

38:51

something else that was really interesting to us in

38:53

those plots is that for

38:55

the politicians, there's

38:57

not just information in terms of what

39:00

features are contentious and what features are

39:02

agreed upon. There's also information

39:04

in the distribution of features. So some

39:06

of the politicians have quite

39:09

a lot of features that are in the middle, where there's

39:11

a lot of disagreement about a lot of features. An

39:13

example of someone like that is Biden. People

39:16

really are not on the same page. Most

39:18

of the features are towards the middle. You

39:20

have very few features that everybody agrees do

39:23

or don't apply. For

39:25

Obama, Obama is often held up

39:28

as an example of like a

39:30

polarizing politician. There

39:33

are very few features in the middle, which

39:35

was surprising to me when I first saw it.

39:38

Obama, everybody agrees, is

39:42

honest, he's intelligent, he's

39:44

hardworking. People are on the same page about that.

39:47

They also agree on

39:49

the features that do not apply to him.

39:51

There's very little in the middle. So what

39:54

this means is that although

39:56

people feel very different ways

39:59

about Obama, that

40:02

disagreement about whether

40:04

or not you like Obama is not

40:06

originating from people disagreeing

40:08

about what Obama is or isn't. People

40:11

are generally on the same page and are

40:14

more aligned in terms of their conceptual understanding of Obama.
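The feature analysis described in this stretch of the conversation, where a feature is "in the middle" when raters split on whether it applies, can be sketched with toy numbers. The ratings below are invented for illustration; the "seal" features echo the examples from the interview but are not real data.

```python
# Hypothetical ratings: 1 means a rater says the feature applies to
# "seal", 0 means it does not. Invented numbers for illustration.
ratings = {
    "slippery":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "feathered": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "graceful":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
}

for feature, votes in ratings.items():
    p = sum(votes) / len(votes)        # fraction who say it applies
    contention = 1 - 2 * abs(p - 0.5)  # 0 = unanimous, 1 = evenly split
    print(f"{feature}: applies={p:.1f}, contention={contention:.1f}")
```

Features near the ends of the scale (slippery, feathered) form the shared nucleus; features near the middle (graceful) are where concepts diverge. In this framing, a politician whose feature profile piles up in the middle is one the population does not share a concept of.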

40:17

That is super fascinating. Think

40:19

of the information deficit hypothesis, right?

40:21

That our disagreement lies in that

40:24

we don't have all the facts. You

40:27

just need some more of the facts and then

40:29

we both see this the same way. Yeah, I

40:31

think that's a great demonstration that it's not that

40:33

simple. One reason why we

40:35

might disagree is because we're activating different concepts.

40:38

We disagree about what's true, but

40:41

that's not the only reason we can disagree. We can

40:43

be on exactly the same page and still have disagreement

40:45

for other reasons. That is amazing. So, if

40:47

I'm hearing you correctly, in

40:49

your study, most people have a pretty

40:53

similar, as similar as you

40:55

can get considering all the things we discussed, concept

40:58

of... Well, I'll put

41:00

it this way. They

41:03

agree on what Obama is and isn't. Yes.

41:07

But when you ask... But they don't agree on...

41:10

Considering this is what Obama is, they don't necessarily

41:12

agree on how to feel about that. Yeah, that's

41:14

right. And whereas

41:17

Biden is interestingly

41:21

inverted in a way, we don't all agree on what

41:23

Biden even is. Yeah, that's right. Yeah,

41:26

that's exactly right. We have not to date

41:28

used this method to chart changes in

41:30

alignment or opinion over time, but this

41:32

is a method that you absolutely could

41:34

use to do something like that. You

41:36

could use this to see

41:38

if people's concepts become more or

41:40

less aligned over time. You

41:43

could also use this potentially to see

41:45

if some kind of intervention got people more

41:47

on the same page. It has

41:49

potential use beyond what we've used it

41:51

for in this paper. Your paper gives me

41:54

new material for this

41:56

because I love the idea that, oh, we

41:58

can very much... agree

42:00

completely. We can both have PhD-level

42:02

understandings of an issue, but

42:05

completely disagree. We agree on what is,

42:07

but we don't agree on anything else.

42:09

I think that's incredible. I love it.

42:12

This feels like the haymaker to the

42:14

information deficit hypothesis. Yes.

42:18

Okay. Before we go into the final segments of

42:22

this episode, to sum up: Kidd's

42:30

work

42:32

has supplied ample evidence to suggest that

42:34

there is tremendous variation in our internal

42:36

concepts of reality itself. We

42:38

may share words and we may share the

42:40

same dictionaries and textbooks, but thanks

42:43

to all the variations in our

42:45

experiences and differing levels of ignorance

42:47

and expertise and a vast array

42:49

of variable cultural influences, the

42:52

likelihood, even if we share the

42:55

same language, if we share the same country, the

42:57

same hometown, the

42:59

same household, the

43:02

likelihood that your concept of something

43:04

like, say, penguins is

43:06

identical to mine is

43:09

extremely low. Also,

43:11

her work suggests you can think of a

43:13

word and the concept it refers to as

43:16

a sort of atom-like

43:18

entity. There's a nucleus

43:20

of shared ideas surrounded by

43:22

an electron cloud of variations.

43:25

Some words have more variations than

43:27

others, but no word

43:29

is without a lot of variation.

43:31

So that means even for words

43:33

with broadly shared definitions, there's

43:36

a near zero chance that any one of us

43:38

shares an identical copy of that core

43:41

concept that the word

43:43

generates when conjured

43:45

up and contemplated by any

43:47

one particular brain. And here's

43:49

the most important aspect of this research for me.

43:52

The evidence suggests we

43:54

are all mostly unaware of this

43:57

variation and not just unaware. believe

44:01

the opposite is true. We

44:03

erroneously believe most other people

44:05

share our ideas, our

44:08

concepts and models and mental imagery, our

44:11

semantics. And as

44:13

the paper says, quote, this

44:15

points to one factor that likely

44:17

interferes with productive political

44:20

and social discourse. This

44:23

presents such an existential thing for me.

44:26

It's like to accept what you've got

44:28

here, your research and like your expertise

44:30

is to accept that we're living in

44:33

a grand illusion. It's a best

44:35

guess. Yeah, but like it's all

44:37

these separate subjective realities working

44:40

together. Right. And

44:43

there's this grand illusion that we're

44:45

all living in the same shared

44:48

space, but we are very divided

44:50

by our conceptual understandings. And

44:53

yet we still seem to manage somehow. How

44:56

people work together to create the technology for us

44:58

to be having this conversation. Somehow

45:02

they created the academic institution in which

45:04

you are employed. Yeah. What

45:06

are your thoughts on, like, if there's

45:09

so many concepts that you found just in

45:11

this one paper...

45:13

one brain is interacting with another brain and

45:15

they do not have identical concepts about the

45:17

things they're discussing. How are we managing to

45:20

get anything done if we're all living in

45:22

these private universes? So that's

45:24

a really, really great question that

45:26

I love thinking about. And I

45:28

don't think we have a full answer.

45:31

We're talking, we're using a learned

45:35

symbolic system, which

45:37

is English, which is allowing us

45:39

to at least crudely align our

45:42

ideas with one another. We

45:45

don't know exactly to what

45:47

degree there may be variability, but

45:49

every time we get closer to

45:51

looking at it, like in this paper, we look at

45:53

just concrete nouns in

45:55

our paper, we find there's way more variability

45:57

than we'd expect. So what follows?

46:00

from that is that I should expect that we are not

46:02

aligning as well as it feels like we're aligning

46:04

when we're talking. And

46:06

yet we're able to get stuff done. You're going

46:08

to record this podcast episode. We're going to leave

46:10

this conversation with new ideas that we didn't have

46:12

before that we got from one another. They

46:15

may not be the ones that the other one intended for us

46:17

to get, and that's an important thing to appreciate. It

46:20

could be the case that not being

46:22

perfectly aligned in terms of the

46:24

way that we think about the

46:26

world could benefit us in some

46:29

ways. One of the things

46:31

that I really, really love about

46:34

humans as a species is that we're

46:36

very weird. We have curiosity.

46:39

Other animals, other primates, other species

46:41

have curiosity. But human curiosity

46:45

is fundamentally different

46:47

in that we take it to extremes.

46:50

If you think about what we spend our

46:52

time doing in a day, it is

46:54

indulging our curiosity. We find out stuff

46:56

that we don't need, obviously, for

46:58

getting food or for reproducing. We spend a lot

47:00

of time doing that kind of thing. We're

47:03

willing to pay money for information sooner rather

47:06

than later. I'll pay $5 for an episode

47:08

of something to see how a story resolves

47:11

rather than waiting to see it free a month

47:13

later. We are very, very curious. Our

47:16

curiosity, the way that it works,

47:18

is we tend to want to

47:20

build on the ideas and concepts that

47:22

we already have some basis in. When

47:25

you're little, you don't know anything. Everything

47:27

is great. Anything you find on the carpet is very

47:30

entertaining for a baby. That's

47:32

because they don't have any base knowledge.

47:35

Once they start learning stuff about the world, the first

47:38

stuff they learn biases them to want

47:40

more information on those same topics, to a

47:44

degree that is not true in

47:46

any species that I know of.

47:48

We specialize such that we

47:50

have people that know just

47:52

about the costuming

47:54

of Victorian

47:56

era blacksmiths. We

47:59

have people that specialize in very, very specific things. I

48:01

mean, you have people that know all about

48:03

baseball statistics just from the 1950s from New

48:05

York. That is a byproduct

48:07

of the way our curiosity systems function.

48:10

And what that means

48:12

is two things. Number one,

48:14

individually, we're not very useful for

48:16

surviving in the world. One person

48:19

is not good at surviving in

48:21

the world by themselves. But two, as a

48:24

population of people, we

48:27

have way more breadth than other species in terms

48:29

of what we know as a group. So if

48:31

you have a bunch of different members of a

48:33

population that all specialize in different things, you

48:36

bring them together. Our strength is

48:38

in the variability in terms of our knowledge

48:40

and our concepts. We can do incredible

48:42

things. We can make spaceships. And we can shoot them

48:44

off to the moon. We can send a person

48:46

to the moon. We can build laptops. We

48:48

can do science. We can have this podcast.

48:51

Our strength as a species, I would argue, is because

48:53

of the variability in our knowledge. And

48:55

part of the variability in that knowledge is

48:57

also the variability in our concepts. So it

49:00

may be a feature instead of a bug.

49:16

That is it for this

49:18

episode of the You Are

49:20

Not So Smart podcast. Celeste

49:22

Kidd's website is kiddlab.com. That's

49:25

with two d's, k-i-d-d-l-a-b.com. Her

49:27

Twitter account is at Celeste Kidd,

49:30

C-E-L-E-S-T-E, K-I-D-D.

49:34

And over at

49:36

Berkeley, it's psychology.berkeley.edu/people

49:38

slash Celeste dash

49:40

K-I-D-D. For links

49:42

to everything that we talked about, including

49:44

the research paper, which is titled Latent

49:46

Diversity in Human Concepts, head

49:49

to youarenotsosmart.com. Or check

49:51

out the show notes inside your podcast player. There will

49:53

be links to stuff in there too. My

49:55

new book, How Minds Change. You can find links

49:58

to How Minds Change in the show notes. right

50:00

there in your podcast player. And

50:02

you can go to the homepage for How Minds Change. You

50:05

can find a round table video with a group of persuasion

50:07

experts featured in the book. You can read

50:09

a sample chapter, download a discussion guide, sign

50:11

up for the newsletter, read reviews. You

50:14

can also check out some of the podcasts that I've been on promoting

50:16

it. And I should

50:18

note that I did the audio

50:20

book for it and it was really fun and

50:22

I enjoyed doing that. I don't promote that enough.

50:25

For links to all the past

50:28

episodes of this podcast, go to

50:30

Stitcher, SoundCloud, Apple Podcasts, Amazon Music,

50:32

Audible, Google Podcasts, Spotify, or youarenotsosmart.com.

50:35

Follow me on Twitter at David

50:37

McRaney. Follow the show at Not

50:39

Smart Blog. If you'd like to

50:42

subscribe to the newsletter, it's over

50:44

on Substack. The title of the

50:46

newsletter is disambiguation. We're

50:49

also on Facebook slash YouAreNotSoSmart over there.

50:51

And if you'd like to support this

50:53

operation, help make it better, help pay

50:55

for transcription and other features, help pay

50:57

for me to fly to places and

50:59

interview people, go

51:02

to patreon.com/youarenotsosmart.

51:05

Pitching in at any amount will get you the show

51:07

ad free, but the higher amounts

51:09

get you posters, T-shirts, signed books, and

51:11

other stuff. The opening

51:13

music that's Clash by Caravan Palace.

51:16

And the best way to support the show is just tell everybody

51:18

you know about it. If any episode

51:20

really landed for you, send

51:22

them a link to that. And check back

51:24

in about two weeks for a

51:26

fresh new episode. Time

51:39

for a quick break to talk about McDonald's.

51:41

Mornings are for mixing and matching at McDonald's.

51:43

For just $3, mix and

51:45

match two of your favorite breakfast items,

51:47

including a sausage McMuffin, sausage biscuit, sausage

51:49

burrito, and hash browns. Make it even

51:51

better with a delicious medium iced coffee.

51:53

With McDonald's mix and match, you can't

51:56

go wrong. Pricing and participation may vary,

51:58

cannot be combined with any other. offer

52:00

a combo meal. Single item at regular price.
