Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
0:00
The idea of a machine
0:02
that can read your mind sounds more like
0:04
science fiction than actual science.
0:07
After all, what could be more private and
0:09
inaccessible to the outside world
0:11
than what goes on in your own head? But
0:13
in recent years, scientists have been
0:15
coming closer to making this fantastical-
0:17
seeming idea into reality. They've
0:20
developed brain scanning tools and methods
0:22
that can interpret brain activity and
0:25
from that activity, decode
0:27
many aspects of what people are thinking
0:30
in essence reading their minds. So
0:32
how does this technology work? How
0:35
do scientists translate patterns of brain
0:37
activity into thoughts? What
0:39
kinds of thoughts can they decode? How
0:42
advanced are these methods, and what
0:44
are the limitations? What research
0:46
questions can they help scientists to answer?
0:49
What practical and moral questions does
0:51
this research raise? And where might
0:54
it be going in the future?
0:57
Welcome to Speaking of
0:59
Psychology, the flagship podcast
1:01
of the American Psychological Association that
1:03
examines the links between psychological science
1:06
and everyday life. I'm Kim
1:08
Mills. My
1:11
guest today is doctor Kenneth Norman,
1:14
a professor of psychology and neuroscience and
1:16
Chair of the Psychology Department at Princeton
1:19
University. In his lab,
1:21
the Princeton Computational Memory Lab,
1:23
he and his colleagues develop new methods
1:25
to analyze brain scans. They
1:27
use those methods to study learning and
1:29
memory by decoding people's
1:31
thoughts as they learn and remember. He
1:34
has published more than one hundred research
1:36
papers and his work has been funded by the National
1:38
Institutes of Health and National Science
1:40
Foundation, among others. Doctor
1:42
Norman also teaches an undergraduate class
1:45
called fMRI Decoding: Reading
1:47
Minds Using Brain Scans. And
1:50
he has won several awards for his mentoring
1:52
and teaching. Thank you for joining me today,
1:54
doctor Norman. I'm looking
1:55
forward to talking about this fascinating
1:58
research. I'm very happy to be
1:59
here. Thanks. I
2:01
mentioned in the introduction that the whole topic
2:03
can sound like science fiction or even
2:05
a parlor game for that matter to people
2:07
who aren't familiar with it. So let's start
2:10
by getting some basic grounding in the
2:12
science. Can you explain in
2:14
a fairly nontechnical way how
2:16
this works? When we say you're
2:18
working on reading minds, what does
2:20
that mean and what are the tools that you're using?
2:23
The core idea is that different thoughts
2:25
correspond to different patterns
2:28
of neural firing in our brains. So if
2:30
we want to be able to decode people's thoughts, we
2:32
need to be able to pick up on, for example,
2:34
the difference in the pattern of neural firing
2:36
associated with you thinking about a
2:38
strawberry or a frog or a
2:41
tree or what have you. And so the
2:43
way we do this is with MRI
2:46
machines. And so when people talk about functional
2:48
MRI, basically, we're talking
2:50
about tuning an MRI machine
2:53
to detect brain
2:56
activity. And the way we do that is
2:58
by tuning it to detect levels of
3:00
blood oxygen. And
3:03
basically, the idea is that parts of the brain
3:05
that are more active use
3:07
up more oxygen from the blood. So
3:10
if we've tuned the MRI machine to
3:12
detect that, we can get a sense of which
3:14
parts of your brain are more active than other
3:16
parts of the brain. Right? And so
3:18
it is a very
3:20
indirect measure of neural firing.
3:22
Right? We're not picking up on the electrical
3:24
zapping of neurons. we're
3:26
picking up on this kind of
3:29
blood flow correlates of
3:31
neural zapping. And so
3:34
it's really an open question
3:36
whether the signal
3:38
we're getting out of this specialty
3:41
tuned MRI machine is gonna be
3:43
too blurry to detect
3:46
the sort of nuanced differences
3:49
in neural firing or whether it will
3:51
be resolved enough to be
3:53
able to do that. And so that's sort of the game that
3:55
we've been playing has been trying to figure
3:57
out whether this imperfect
3:59
correlate of neural firing can
4:02
pick up on these
4:04
relatively subtle neural patterns. And
4:07
to cut to the chase, the answer is, it
4:09
does, surprisingly well. Right?
4:11
And so the way that we
4:13
do brain decoding is basically ask people
4:16
to think about one thing
4:18
so we can ask them to think about frogs.
4:20
Right? And we take a bunch of
4:22
snapshots of their brain activity with the
4:24
MRI machine while they're thinking about frogs. And
4:26
then we could ask
4:28
them to think about lizards, and we could take
4:30
a bunch of snapshots of their brain activity
4:33
when they think about lizards. And then we
4:36
feed those snapshots, the
4:38
frog snapshots and the lizard snapshots
4:40
into a computer
4:42
program that has
4:44
basically been optimized to try to
4:46
find the differences in
4:49
the patterns of brain activity, sort
4:51
of which brain areas are more or less active
4:53
when you're thinking about frogs or
4:55
lizards. Right? That's
4:57
what a pattern classifier is.
5:00
Right? It's a machine learning algorithm
5:02
that is basically
5:05
trying to find opportunistically
5:08
any difference in
5:10
the patterns that you feed it. And so
5:12
if there's a reliable difference in the frog
5:14
pattern and the lizard pattern, it'll
5:16
find it.
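A rough sketch of the training procedure described here, written as a minimal nearest-centroid classifier. This is a simplified stand-in for the machine-learning algorithms actually used in these analyses; the "snapshots" are invented four-voxel patterns, not real fMRI data.

```python
# Minimal "pattern classifier" sketch: average the labeled brain
# snapshots per category (frog vs. lizard), then label a new snapshot
# by whichever category average (centroid) it lies closest to.

def centroid(snapshots):
    """Voxel-wise average of a list of snapshots."""
    n = len(snapshots)
    return [sum(voxel) / n for voxel in zip(*snapshots)]

def distance(a, b):
    """Euclidean distance between two voxel patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labeled):
    """labeled: dict mapping category -> list of snapshots."""
    return {label: centroid(snaps) for label, snaps in labeled.items()}

def classify(model, snapshot):
    """Return the category whose centroid is nearest to the snapshot."""
    return min(model, key=lambda label: distance(model[label], snapshot))

# Invented 4-voxel snapshots taken while thinking about each animal.
training = {
    "frog":   [[1.0, 0.9, 0.1, 0.0], [0.9, 1.1, 0.2, 0.1]],
    "lizard": [[0.1, 0.2, 1.0, 0.9], [0.0, 0.1, 0.9, 1.1]],
}
model = train(training)
guess = classify(model, [0.8, 1.0, 0.0, 0.2])  # a new frog-like snapshot
```

Real pipelines use cross-validated classifiers over many thousands of voxels, but the idea of opportunistically finding reliable differences between the two sets of snapshots is the same.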
5:17
So if you and I were both put into an
5:19
MRI machine and we're told to
5:21
think about frogs, would our patterns
5:23
look look essentially the same?
5:25
They would look kinda
5:28
similar. Right? But
5:30
not identical because you and
5:32
I might have had different life experiences with
5:35
frogs and lizards. Right? And the way that
5:37
our brain represents
5:39
things is a function of our personal experience.
5:41
But the idea is that our
5:44
experiences will have
5:46
been similar enough that
5:49
there should be some transfer between
5:52
my brain patterns and your brain patterns. And
5:54
concretely, what that means is that
5:56
if you
5:58
train one of these pattern classifier algorithms
6:01
on the frog versus lizard distinction
6:03
in my brain. Right? It'll
6:05
do the best job at decoding my
6:08
frog versus lizard thoughts. Right?
6:12
But it'll still possibly
6:14
be above chance, right,
6:17
at detecting your frog
6:19
versus lizard thoughts. I mean, the most striking
6:21
example to me of of similarity across
6:25
people's brains was
6:27
a former postdoc in my lab who's
6:29
now faculty at Johns Hopkins. Janice
6:32
Chen, when she was here at Princeton
6:34
ran a study where she
6:36
scanned people while they watched an
6:38
entire episode of the Benedict Cumberbatch
6:41
Sherlock TV series, and then
6:43
she had them just recall it in
6:45
the scanner while their brains were being scanned.
6:47
And she did this for a bunch of people.
6:50
And what she found, which was totally
6:52
amazing to us at the time, is
6:54
that you could train one
6:57
of these pattern classifiers to
6:59
decode which scene of the
7:01
TV episode a
7:04
person was watching. So she
7:06
trained it to decode, based on one person's brain,
7:09
which scene of the episode it was. And
7:11
then she showed that it did an incredibly good
7:13
job at transferring
7:15
to other people's brains. Right? So
7:17
the code that's
7:19
being used to represent a particular
7:22
scene in this TV episode, which seems
7:24
like a fairly fine-grained thing, appears
7:26
to be common across people, which
7:28
is not what we expected.
7:31
FMRI
7:31
has been around for several
7:33
decades, at least since the early nineties.
7:35
What's changed in recent years to
7:38
make your work possible? There are
7:40
two
7:40
things that could have changed. Right? One thing
7:42
is sort of the quality of the
7:44
signal coming out of the machine, and the other
7:46
thing is how we analyze the data.
7:48
And so both of those things have changed, but
7:50
the main innovations
7:52
that have made this kind of thought
7:54
decoding possible are on the analysis
7:57
side. Right? We're much more
7:59
sophisticated in
8:01
how we kind of chew on and analyze the
8:03
data than we
8:05
used to be. Some of it is just
8:07
like computers are faster and
8:11
people have developed kind of
8:13
better pattern classification algorithms. But
8:15
but part of it is we've just sort of conceptually,
8:17
we approach it in a different way.
8:19
And so just to
8:22
illustrate that, say
8:24
that I want to figure out which out of
8:26
all possible animals you're
8:28
thinking of. Right? There are a lot of
8:30
different animals. Right? There are hundreds
8:32
or thousands, right, of different kinds of
8:34
animals. And so one way to approach that
8:36
problem is I could try to
8:38
train a separate thought decoder
8:40
for every kind of animal.
8:42
Right? So I could have you think about bears
8:44
and then think about not bears. Right?
8:47
And we try to find differences in
8:49
those bear snapshots and not bear
8:51
snapshots, and then we have a bear decoder. And then I
8:53
could do the same thing with sharks
8:55
and whales and beavers
8:58
and, you know, marmots and
9:00
what have you. And you can get sort of one
9:02
decoder for every kind of
9:04
animal, but it's very laborious. Right? We'd
9:06
have to train, you know, thousands
9:09
of decoders and no one wants
9:11
to sit in the brain scanner,
9:13
right, for as long as it would take to train
9:15
thousands of decoders. But the
9:18
alternative approach is
9:20
instead of training one decoder per animal
9:22
type, we can think that
9:24
animals have different attributes. Right?
9:26
They can be big or small. They can be furry or not
9:29
furry. They could live on land or in water.
9:31
Right? They could be dangerous or not
9:33
dangerous. Right? And
9:35
The idea is that we could train a different
9:37
decoder for each of those attributes. Right?
9:39
So we could sort of take brain snapshots while you're
9:41
thinking of big animals and small animals,
9:44
right, or land animals or
9:46
water animals. Right?
9:48
And the idea is
9:50
that the number of
9:53
dimensions along which
9:55
animals vary. Right?
9:57
Is,
9:59
you know, it's more than three.
10:01
Right?
10:02
But it's less than a thousand.
10:05
Right? So
10:05
if you think about, like, the game
10:07
Twenty Questions. Right? The reason that
10:09
it's twenty questions and not a
10:11
hundred questions. Right? It's because you
10:13
don't need a hundred
10:15
questions to figure out what someone's
10:17
thinking of. Right? And that's another way of
10:19
saying the number of dimensions along which animals
10:22
vary is probably on
10:24
the order of tens or a
10:26
hundred or something. So instead of
10:28
training a thousand decoders or ten thousand
10:30
decoders, one for every animal, you can
10:32
train ten decoders or one hundred decoders,
10:34
like one for every dimension. Right? And
10:36
the idea is that if you have, like,
10:38
ten of these dimension
10:40
specific animal decoders,
10:42
right, you can ask someone to think of something,
10:44
and then you feed
10:47
that data into each of these dimension-specific
10:49
animal decoders. And I can think, oh, Kim is
10:51
thinking about a
10:53
big ferocious land animal
10:55
that's furry and brown or
10:57
something. Right? And then
10:59
that gives me a pretty good sense of
11:01
maybe it's a bear. Right?
11:04
And so the
11:06
core principle, again, the the conceptual
11:09
innovation is that if
11:11
we come up with the
11:13
right set of underlying
11:15
dimensions and train decoders for those
11:17
dimensions, then you can decode
11:19
the whole space. Right?
11:21
You know, the
11:23
system that I just described to you is
11:25
like a general-purpose animal
11:27
decoder. Right? It'll
11:29
tell you, for
11:32
the animal that Kim is thinking of, you
11:34
know, sort of where it sits along each of
11:36
the relevant dimensions. And that's probably
11:38
gonna be enough for me to guess what the animal
11:40
is. Right? And people have applied
11:42
that strategy to,
11:44
for example, decoding which
11:46
of all possible nouns
11:49
someone is thinking of. Right? And
11:51
the idea there is
11:53
again, the number of dimensions you
11:55
need or, like, you know, questions
11:57
you need to ask to pinpoint what noun
11:59
someone's thinking of,
11:59
people have figured out it's maybe a
12:02
couple hundred. And
12:03
so you can think of any
12:06
noun as sitting, like, as sort of
12:08
a point in a couple-hundred-
12:10
dimensional space. And
12:12
so we can look at your brain
12:14
activity and figure out where your
12:16
thoughts are in that couple hundred dimensional
12:18
space and that gets us very close to
12:20
figuring out what noun you're thinking of.
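The attribute-decoder idea can be sketched the same way: decode a few dimensions, then find the nearest known item in that attribute space. The attribute values and the tiny animal table below are invented for illustration, not taken from any real decoding study.

```python
# Dimension-based decoding sketch: rather than one decoder per animal,
# assume separate decoders each report a value on one attribute
# dimension (big?, furry?, lives on land?, dangerous?); we then guess
# the animal whose known attribute vector is closest to the decoded one.

ANIMALS = {
    #           big  furry land  dangerous
    "bear":   (1.0, 1.0, 1.0, 1.0),
    "shark":  (1.0, 0.0, 0.0, 1.0),
    "beaver": (0.0, 1.0, 0.5, 0.0),
    "lizard": (0.0, 0.0, 1.0, 0.0),
}

def nearest_animal(decoded):
    """Return the animal whose attributes best match the decoded values."""
    def dist(name):
        return sum((a - d) ** 2 for a, d in zip(ANIMALS[name], decoded))
    return min(ANIMALS, key=dist)

# Pretend the four attribute decoders reported: big, furry, land, dangerous.
guess = nearest_animal((0.9, 0.8, 1.0, 0.9))
```

With tens or hundreds of such dimensions instead of four, the same nearest-neighbor lookup covers thousands of items without training a separate decoder for each one, which is the point of the Twenty Questions analogy.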
12:21
What are some
12:22
of the practical uses that researchers
12:25
are exploring for this? One that's
12:27
gotten a lot of media attention is
12:29
using it to communicate with locked in
12:31
patients, people who are not able to
12:33
communicate with the outside world,
12:35
but may still be conscious and
12:37
thinking. Is that research still
12:39
ongoing? And are there other
12:41
practical uses people are interested
12:43
in? My
12:44
colleagues Martin Monti,
12:46
Adrian Owen, and several others
12:48
have worked in that particular
12:50
situation. The very clever
12:52
strategy they came up with
12:54
was, if you have a locked-in
12:56
patient, they tried
12:58
to come up with mental
13:00
activities that they thought would activate
13:02
very different parts of the brain. Right?
13:04
So the idea is that the part of
13:06
the brain that's activated by
13:10
thinking about, like, playing tennis
13:12
Right? And all the movements involved in playing
13:14
tennis is very different
13:16
from the part of the brain that's
13:19
activated by thinking about
13:21
the layout of your house. Right?
13:24
And so they wanted
13:26
to come up with instructions that would
13:28
elicit thoughts that were really different from
13:30
each other. Right? And then they
13:32
would ask these locked-in patients to, like,
13:34
imagine playing tennis or imagine
13:36
the layout of your house, and they would see
13:39
these differences. Right?
13:41
Basically, the the playing tennis
13:44
pattern would come to life when
13:46
this person who can't move
13:48
or speak was asked to think about
13:51
tennis and vice versa for
13:53
thinking about the layout of the house. So
13:55
that's the strategy they've used there and
13:57
they've used it to very good effect to show that
13:59
people, you know, they didn't know
14:01
whether these people had
14:03
the ability to follow instructions or do things like
14:05
that, and they showed they could, which is obviously
14:08
incredibly important. That's a very
14:10
far cry from having
14:12
one
14:13
of those patients, for example,
14:15
be able to type with
14:17
their brain. But the idea is that
14:19
this is sort of a general theme
14:21
of how we've made progress in
14:23
the field is that you don't need
14:25
to do extremely fine
14:28
grained decoding to be
14:30
able to get insight into
14:32
what people are thinking. So you
14:34
can sort of set up a scenario
14:36
where these very
14:38
sort of
14:38
crude differences are actually
14:41
informative about what's going on. And
14:42
so another example
14:45
of something that we're trying to do in the
14:47
practical domain that
14:49
leverages these sorts of crude
14:51
differences is neurofeedback.
14:55
Right? So the idea there is
14:57
that we can
15:00
take someone while they're in the scanner,
15:02
right, and try to decode
15:04
their thoughts and
15:06
then use that to train
15:08
them to do something better. Right?
15:10
And so an example of this
15:12
that I really like is Megan deBettencourt,
16:14
who was a grad student here at
15:16
Princeton in my lab, was very
15:18
interested in how to train people
15:20
to do a better job of paying attention.
15:23
Right? And so the idea is you're doing some
15:25
boring task and your thoughts drift, right,
15:27
or you're a truck driver,
15:29
right, and you're spacing out.
15:31
Right? And that could be very dangerous. Right?
15:33
And and you you might space out
15:35
and then land yourself, like, in a
16:37
ditch, right, because you weren't
16:39
paying attention to the road because you got tired. So
15:41
it'd be really good to find a way
15:43
to train people to do
15:45
a better job of paying attention, or at the very
15:48
least, sort of notice when they're
15:50
starting to space out. Right?
15:51
And so the task Megan designed
15:54
involved having them do this sort of
15:56
very boring button pressing task
15:58
where we showed them a display where there
16:00
were faces on the screen,
16:02
and they were also sort of
16:04
in a ghostly kind of way superimposed
16:06
on those faces, pictures
16:08
of scenes. Right? So people are looking at
16:10
these composite displays of
16:13
faces and scenes. And we would tell them,
16:15
like, just pay attention to the
16:17
faces.
16:17
Right? And press a button whenever you see a
16:19
female face and ignore the scenes.
16:21
And we'd have people do this boring button
16:23
pressing task. Right? And so Megan would be
16:25
analyzing their brain activity
16:27
and using one of these decoding
16:29
algorithms she
16:30
could figure out the moment
16:33
that their attention started
16:35
to drift to the scenes that they were
16:37
supposed to be ignoring.
16:39
Right? And the reason to use faces
16:41
and scenes is because we knew
16:43
just like playing tennis
16:45
and thinking about the layout of your house,
16:47
faces and scenes elicit really distinct
16:49
patterns of neural activity. And that
16:51
gives us like a very high
16:53
degree of sensitivity to the exact
16:55
moment where people started
16:57
to process the scenes that they were
16:59
supposed to be ignoring. And then because
17:01
we're doing this decoding in the
17:03
moment. Right? In real time, while people are
17:05
in the scanner, we could change
17:08
the display the moment
17:11
that we detected this
17:13
inattentiveness. And what Megan decided to do very
17:15
cleverly is that
17:17
the moment she detected
17:19
with this real time brain analysis that people
17:21
were starting to attend to the scenes,
17:24
she made the faces less
17:26
visible. She made the task
17:28
harder. And so the idea there is
17:30
you're trying to sort of amplify
17:32
the attentional lapse. Right? So the
17:34
moment that your brain activity starts to
17:37
float toward what you're not supposed to
17:39
be doing, she made it really
17:41
salient to people that they were messing up,
17:44
by making the task they were supposed to be doing
17:46
really hard. So the idea is
17:48
that if we do that, we're
17:50
gonna get people to
17:52
notice before they would have otherwise,
17:54
right, that they're spacing out.
17:57
Right? And the hope there is that we would
17:59
make them more sensitive, right, to these
18:01
potential lapses, and they'd sort of learn to detect
18:03
that better, and they'd be less likely
18:05
to have their attention sort of float
18:07
all the way off in the future. And
18:09
she demonstrated in a Nature Neuroscience paper that was
18:12
published in twenty fifteen that
18:14
training people in this kind of closed loop
18:16
setup where you amplify their attention
18:19
lapses makes them better able, right,
18:21
to sustain their attention over time.
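The closed-loop rule in that study reduces to a simple feedback update: when the decoder says attention has drifted to the scenes, turn the face visibility down; when it returns, turn it back up. The numbers below are a simulation of decoder outputs, not real-time fMRI, and the step size and bounds are arbitrary choices.

```python
# Closed-loop neurofeedback sketch: each decoded time step reports
# evidence for "face" (task) vs. "scene" (distractor) processing.
# When scene evidence wins, face visibility drops (the task gets
# harder, amplifying the lapse); otherwise it eases back up.

def update_visibility(visibility, face_ev, scene_ev,
                      step=0.1, lo=0.1, hi=1.0):
    """One feedback step; visibility stays clamped to [lo, hi]."""
    if scene_ev > face_ev:       # attention drifting toward the scenes
        visibility -= step       # make the faces harder to see
    else:                        # attention back on the faces
        visibility += step
    return max(lo, min(hi, visibility))

def run_block(evidence_stream, visibility=1.0):
    """Feed a sequence of (face, scene) decoder outputs through the loop."""
    history = []
    for face_ev, scene_ev in evidence_stream:
        visibility = update_visibility(visibility, face_ev, scene_ev)
        history.append(round(visibility, 2))
    return history

# Simulated run: attention slips for three steps, then recovers.
trace = run_block([(0.8, 0.2), (0.3, 0.7), (0.2, 0.8),
                   (0.4, 0.6), (0.9, 0.1)])
```

The participant sees the stimulus change within a second or two of the decoded lapse, which is what makes the lapse salient enough to learn from.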
18:24
You know, which was a big advance,
18:26
and I'll just mention, as
18:29
a sort of way of building on that, we started
18:31
to run studies. This
18:33
is my grad student Anne Mennen, and
18:35
we're collaborating with Yvette Sheline,
18:37
who's a depression researcher at
18:40
UPenn to sort of
18:41
apply the same technique
18:45
to help people with
18:47
depression learn to sort of
18:49
unstick their thoughts from
18:51
sad mental states. Right?
18:53
So one of the
18:54
most salient symptoms of
18:57
depression is once a
18:59
depressed person starts thinking about
19:01
something sad, they have a hard
19:03
time unsticking their
19:05
thoughts, right, from this
19:07
sad mental state. They sort of ruminate
19:09
and ruminate on these
19:11
negative things. And so
19:13
the modification we made to Meghan's task
19:15
is very simple. Right?
19:17
It's basically the same task I just described
19:20
except we made the faces sad.
19:22
Right?
19:22
And then we told people to
19:25
attend to these pictures
19:27
of emotionally neutral scenes and
19:29
make simple judgments about the scenes,
19:31
like, is it an indoor scene or an
19:33
outdoor scene? Right?
19:33
And we told them to just ignore
19:36
the faces. But what we see when
19:38
we put people in the scanner and have
19:40
them do this task is that their
19:42
thoughts start to drift toward the sad faces. And
19:44
once that happens, they get sort of stuck
19:46
on these sad faces. And so
19:48
what Anne Mennen did in this
19:51
study is basically the
19:53
second that we detected with
19:55
these brain decoding algorithms that
19:57
they were attending to the sad faces,
19:59
we made the sad
19:59
faces really visible. And so again, the idea is to
20:02
make it really salient that
20:04
their thoughts had sort of
20:07
drifted away from the
20:09
scenes toward
20:11
the sad faces, with the goal
20:13
of making them more sensitive
20:15
to the sort of moment when
20:17
this lapse is happening, with the idea that
20:19
they could use that to get better at catching
20:21
themselves before, you know,
20:23
their mental state has
20:25
gotten all the way into
20:27
this sort of pit of sad thoughts that
20:29
it's hard for them to get out of.
20:32
Right? And so we're running experiments
20:34
now to see if that
20:37
training process basically helps
20:39
them
20:40
get unstuck.
20:41
So it sounds like it's becoming
20:43
both a diagnostic and
20:46
a therapeutic tool, in a sense. And I'm
20:48
wondering then are you moving
20:50
toward, say, identifying people
20:53
with maybe disordered thinking
20:55
or violent thoughts and
20:57
then being able to maybe replace
20:59
those thoughts with healthier
21:02
concepts. Yeah,
21:03
I mean, two things there.
21:06
One
21:06
of them is that diagnosis
21:08
is tricky
21:10
just
21:11
because, you know, I mean, this is something
21:13
again, I mainly do basic
21:16
research on learning and memory, so I'm getting a little bit
21:18
out of my wheelhouse now, but I think that I
21:20
can say with some degree of confidence
21:22
that
21:23
one of the changes in how people
21:25
think about mental health in
21:27
different conditions, right, is that
21:30
they're very complicated. Right? Like
21:32
depression is not just one thing and it
21:34
overlaps a lot with anxiety and overlaps with a
21:36
lot of other disorders. Right?
21:38
And so the idea is that the
21:40
people who have depression, right,
21:42
that's diagnosed in some way, it's a heterogeneous
21:45
group, and it overlaps with a lot of
21:47
other groups and that makes it sort of
21:49
hard. People are working very hard to do this
21:51
diagnosis. Right? But it
21:53
makes it hard. That's one of the most
21:55
challenging things to do with
21:57
brain data, right,
21:59
is diagnosis. And
22:01
so that doesn't mean it's impossible. It
22:03
just means that it's a hard
22:05
problem, right, that that people are working
22:07
on. But I think that
22:10
this space that neurofeedback
22:12
belongs to of
22:15
therapy. Right? or
22:18
just promoting learning more broadly,
22:21
right, where you've, you
22:23
know, got a group and
22:26
they've got some particular way
22:28
of thinking. Right? Like,
22:30
in depression, these sort of sticky negative
22:33
thoughts and
22:33
you want to help
22:36
them learn to
22:38
control these negative thoughts and to pull themselves
22:40
away from that, then I
22:42
think that these sorts of brain decoding tools are gonna
22:44
be very, very useful.
22:46
I think they give us a
22:49
really powerful window into
22:52
how a person is
22:55
thinking in a particular moment.
22:57
Right? And in
22:59
ways like this neurofeedback setup
23:01
that I
23:01
just described, we can try
23:04
to choreograph
23:05
experiences for
23:08
them that will
23:09
help them learn to do things
23:12
differently. And this applies
23:14
both to clinical
23:15
populations, but also like
23:19
education. Right? The idea is
23:21
that, you know, what it
23:23
means to learn a subject in the
23:25
course you're taking is
23:27
you
23:27
learn to organize your
23:30
thoughts, right, in a way
23:32
that adheres to
23:34
the ground truth
23:36
of how things are and isn't
23:38
scrambled up. To concretize
23:40
this, like, say you're taking a course in computer
23:42
science. Right? If
23:44
you think about computer science, right, and what it
23:46
means to learn computer science is basically
23:48
to learn sort of which concepts go
23:50
together and which concepts
23:52
don't.
23:52
So you learn that
23:55
if then statements, right, are
23:57
a way of sort of controlling
23:59
the flow of sort of
24:01
what happens in a program. And for
24:04
loops and while loops are also a
24:06
way of controlling the
24:07
flow. But you know, variables
24:10
are something different. Right?
24:12
And so there are all these
24:14
new terms you have no idea what they mean,
24:16
and you learn sort of like these
24:18
terms, like if then and while and for
24:20
loops go together and these other
24:23
terms don't. Right? You sort
24:25
of learn what coheres and
24:27
what's different.
24:28
Right? Or if you're
24:29
learning about animals, right, you learn
24:31
which animals, right, are dangerous and
24:33
which animals aren't.
24:35
The idea is that we can use
24:37
these different brain
24:40
decoding measures to sort of get a window
24:42
into what concepts
24:45
people think are similar,
24:47
what things people think at
24:49
a particular moment, go together and
24:52
which concepts people think don't go
24:54
together. Taking
24:54
this computer science example,
24:57
again, if we've got two concepts that
25:01
go together and we look at
25:03
some computer science students' brain and we
25:05
see that the pattern of
25:07
brain activity associated with those
25:09
concepts are really different, then
25:11
we know there's some learning to do. Right?
25:13
We wanna, you know, devote extra
25:16
effort to helping them understand that
25:18
these things go together, and so
25:20
we can give them some, you
25:22
know, lesson on how things
25:24
go together and we could see whether that
25:26
lesson is successful by scanning
25:28
their brain after the lesson to
25:30
see if those concepts that should
25:32
go together, elicit similar
25:35
patterns of brain activity. So you
25:37
don't have
25:37
to take tests anymore. Right? Your professor puts
25:39
you in an fMRI
25:41
and says, oh, you didn't really learn this. Right.
25:44
So
25:44
the reason that I was using
25:46
this computer science example is that a
25:48
a post doc in my
25:51
lab who now works at the
25:53
computer company, Snap. Meir
25:55
Meshulam did
25:56
exactly that
25:57
study. So he took computer
25:59
science students at Princeton who were
26:02
taking introductory computer science,
26:04
and he scanned
26:06
them multiple times over the
26:08
course of the semester,
26:10
and he looked at
26:12
the brain patterns evoked by different computer
26:14
science concepts and sort of how similar or
26:16
different they were. And
26:18
basically, what he showed is that
26:20
he could figure out
26:22
how well a student
26:25
knew a concept by
26:27
basically comparing the similarity
26:29
of patterns in the student
26:31
to the similarity of patterns in the
26:33
teaching assistant. And if this
26:35
student had similar patterns
26:38
when the
26:38
teaching assistant had similar patterns, and
26:41
different patterns when the teaching assistant had
26:43
different patterns,
26:45
that predicted that they
26:47
would do well on tests of
26:49
those concepts. Right.
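The student-versus-TA comparison follows the logic of representational similarity analysis: compute pattern similarity for every pair of concepts within each brain, then correlate the two similarity profiles. Here is a toy version with invented three-voxel patterns; a high final correlation means the student groups the concepts the way the expert does.

```python
# Toy representational-similarity comparison: pairwise pattern
# correlations within the TA's brain and within the student's brain,
# then the correlation between those two similarity profiles.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def similarity_profile(patterns):
    """Concept-pair pattern correlations, in a fixed pair order."""
    names = sorted(patterns)
    return [pearson(patterns[a], patterns[b])
            for i, a in enumerate(names) for b in names[i + 1:]]

# Invented 3-voxel patterns for three course concepts.
ta = {"if": [1.0, 0.2, 0.1], "while": [0.9, 0.3, 0.0],
      "variable": [0.1, 0.9, 1.0]}
student = {"if": [0.8, 0.1, 0.3], "while": [0.9, 0.2, 0.2],
           "variable": [0.2, 1.0, 0.8]}

# High alignment: the student's similarity structure matches the TA's.
alignment = pearson(similarity_profile(ta), similarity_profile(student))
```

Comparing similarity structures rather than raw patterns sidesteps the fact that two brains never share an exact voxel-to-voxel correspondence.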
26:50
Right. So what you just
26:52
said, you know, you could use a
26:54
brain scanner instead of the
26:57
test. Of course, it's,
26:59
you know, by comparison,
27:01
much more expensive. Right? And -- Right.
27:03
-- it, you know,
27:05
is much more cost-
27:08
effective to just give the person
27:10
a paper-and-pencil test.
27:12
I think that the point I wanted
27:14
to make is that there are a
27:16
lot of circumstances with
27:19
brain decoding where,
27:21
you know, it would just be
27:23
easier to ask the person
27:26
to say what they're thinking,
27:28
right, or to give them
27:30
a normal exam question,
27:32
right, rather than trying to do this fancy
27:35
brain decoding thing. And I
27:36
think that the situation
27:40
in which these brain decoding
27:42
methods really shine is,
27:44
you know, there are
27:45
a lot of scenarios where there are
27:47
interesting things going on and
27:49
it's not
27:50
feasible to just ask
27:52
the person what they're thinking. Like,
27:54
for example, we know
27:57
from hundreds of studies that there are
27:59
really important
28:00
things having to do with learning
28:02
that happen when you're asleep.
28:04
So
28:05
studies have shown that if people
28:08
learn, you know, they study
28:10
something for a test and then they
28:12
sleep and then they wake
28:14
up, right,
28:14
they actually,
28:16
like, forget less. Right? If
28:18
they had certain kinds
28:21
of sleep during the interval between studying
28:23
and testing. So something is
28:25
happening, right, when
28:25
you're asleep. Right? Your memories are
28:28
getting strengthened or maybe they're
28:30
getting reshaped. And it's
28:32
this incredible deep
28:34
sort of cool puzzle to try to
28:37
understand what
28:39
exactly is happening? But you can't ask
28:41
someone what they're thinking
28:43
about when they're asleep -- Right. -- because they're
28:45
asleep. Right? You know,
28:47
maybe you could wake them up Right?
28:49
And ask them, like, what were you
28:51
dreaming about? Right? And sometimes people can
28:53
tell you, but you only get the sort of fading wisps
28:55
of people's thoughts. But
28:58
you
28:58
know, one really cool
29:01
application
29:03
here for
29:04
brain scanning might be to
29:06
decode
29:06
what people are thinking when they're asleep.
29:09
Right? So the idea is, you know,
29:11
that people who have
29:13
built theories of what
29:14
might be happening during sleep basically say that
29:17
your brain is composing
29:19
for you a kind of playlist
29:22
right,
29:22
of things that it thinks
29:25
it's important for you to learn
29:27
about. Right? So the idea
29:27
is when you're awake, stuff
29:30
happens and and some experiences get
29:32
marked as being important. Right?
29:34
And they get put in this playlist and then
29:36
your brain sort of loops through them
29:38
when you're asleep. And as a
29:41
result of this looping
29:43
through these things that were marked as being important,
29:46
you learn more about these
29:48
things. Right?
29:48
And we wanna know what's on
29:50
the playlist. So that's
29:52
that's a great example of something that
29:54
we can do with brain decoding that
29:56
you can't do just with asking.
29:58
And so Anna
29:59
Schapiro, who's a former
30:02
grad student in my lab, who's now a
30:04
professor at UPenn,
30:06
had a brain imaging study. It
30:08
actually, I guess, in this study, she wasn't
30:10
looking at when people are asleep. She was looking
30:12
at sort of what people think about when they're
30:14
just kind of spacing out. Right?
30:16
The idea is this sort of looping
30:20
through concepts
30:20
that were marked as
30:22
important happens when you're spacing out, when
30:24
you're awake, in addition to when you're asleep,
30:27
and she wanted to get some
30:29
insight into, like, what's on the
30:31
playlist, right, of
30:33
things that you think about
30:35
when you're spacing out. And how does that relate to
30:37
the learning experience you just had? And
30:39
she used brain decoding and and
30:41
got this really cool result. It
30:43
sort of makes sense that the
30:46
concepts
30:46
that
30:47
people struggled
30:49
with the most when
30:51
they were trying to learn this
30:53
new thing that she was teaching them about, were
30:55
the ones that appeared the
30:58
most on their sort of mental
31:00
playlist when they were spacing out.
31:02
Right? Which is
31:03
very adaptive. Right? You don't want
31:05
to spend your time thinking about stuff you
31:08
already know. You want to spend your time working through the
31:10
stuff that you don't
31:12
know very well. Right? And she was
31:14
able to get very tangible
31:18
clear evidence for this
31:20
idea using
31:22
brain decoding. Let me
31:23
ask you a long-term question,
31:25
which is whether the goal
31:27
is some kind of a universal thought
31:30
decoder so that there'll be an
31:32
absolute lexicon of what we
31:34
know thoughts consist of?
31:36
And and is this science fiction or is
31:38
this something that's in the realm of the
31:40
possibility? It's sort of like decoding
31:42
the human genome.
31:44
I
31:44
mean, right, it's a very exciting possibility, and
31:46
I think we're very close to
31:48
having some kind of a universal
31:51
thought decoder. And, you
31:53
know, it uses the principle I
31:55
described earlier, which is that,
31:57
you know, we think that thoughts lie
31:59
in these, what we call, sort of low
32:01
dimensional spaces, which is just
32:02
like saying that you can ask
32:05
a couple
32:07
hundred questions to sort of
32:09
zoom in on what
32:12
particular thing people are thinking about and you
32:14
can make a decoder, you know, for each
32:16
one of those questions. And
32:18
then we can do that now, and that's
32:20
a universal thought decoder.
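That "couple hundred questions" framing maps naturally onto regression into a low-dimensional semantic space. Below is a toy Python sketch of the idea on purely synthetic data: the three "question" axes, the concepts, the fake scans, and the choice of ridge regression are all illustrative assumptions, not the actual pipeline described in the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semantic space: each concept is a point on a few "question" axes,
# e.g. is it alive? is it big? is it a tool? Real decoders use hundreds
# of such dimensions; three keeps the sketch readable.
concepts = {
    "dog":    np.array([1.0, 0.3, 0.0]),
    "house":  np.array([0.0, 1.0, 0.0]),
    "wrench": np.array([0.0, 0.1, 1.0]),
}

n_voxels = 50
W_true = rng.normal(size=(n_voxels, 3))  # hidden brain-to-semantics map

def scan(concept):
    """Simulate a noisy voxel pattern for a concept (pure fiction)."""
    return W_true @ concepts[concept] + 0.1 * rng.normal(size=n_voxels)

# Fit one linear readout per semantic dimension with ridge regression,
# a common regularized choice for this kind of decoder.
X = np.stack([scan(c) for c in concepts for _ in range(20)])
Y = np.stack([concepts[c] for c in concepts for _ in range(20)])
ridge = 1.0
W_hat = np.linalg.solve(X.T @ X + ridge * np.eye(n_voxels), X.T @ Y)

def decode(pattern):
    """Project a pattern into semantic space; return the nearest concept."""
    point = pattern @ W_hat
    return min(concepts, key=lambda c: np.linalg.norm(concepts[c] - point))
```

The decoder never has to learn every concept separately: any candidate concept with known coordinates in the semantic space can be compared against the predicted point, which is what makes the "universal" framing plausible.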
32:23
But it doesn't work
32:25
anywhere near perfectly.
32:27
It's like a
32:27
very blurry thought
32:30
decoder. We can
32:33
tell
32:33
where
32:34
your thoughts fit
32:36
in like this three
32:38
hundred
32:38
dimensional space. Right?
32:41
But there's like a
32:43
huge cloud of uncertainty around
32:45
our estimate of what you're thinking about,
32:48
right? And with
32:49
functional MRI, that has to do with
32:52
just intrinsic
32:54
limits
32:54
on the resolution
32:57
of the technique. As I said earlier, right? We're
32:59
not measuring the zapping of all the neurons
33:01
in your brain. We're measuring
33:03
this blood
33:04
flow thing that's
33:06
loosely coupled, right,
33:08
to the neurons zapping. It's
33:10
both blurring space. Right? So
33:12
we can't tell exactly which neurons
33:15
are activating, right, just sort of
33:17
which millimeter, by
33:19
millimeter, by millimeter, cubes of
33:21
your brain are most active.
33:22
Right?
33:23
And it's also blurring time.
33:25
Right?
33:25
This blood flow thing
33:27
that we're measuring unfolds slowly
33:30
relative to the very precise zapping of
33:32
neurons. Right? And
33:32
so that blur
33:35
is not something we're going to be able
33:37
to fix.
33:38
You know, my my
33:39
favorite example of this when I teach is,
33:41
I think, like, two thousand and
33:43
eight or somewhere around then. This
33:46
Newsweek has a headline that's, like, mind reading
33:48
is now possible. Right?
33:49
Which sounds terrifying. And then the the
33:51
subheading is like, people can
33:54
now tell with seventy percent
33:56
accuracy, whether you're thinking
33:58
about pliers or a
33:59
wrench.
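A headline like that corresponds to a plain binary classifier over voxel patterns. Here is a self-contained toy version in Python on simulated data; the labels, voxel counts, and noise level are made up, and the noise is tuned by hand so accuracy lands well above chance but far from perfect, in the spirit of the result being described:

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels, n_train, n_test = 100, 40, 200

# Two true activity patterns, heavily corrupted by noise so decoding
# ends up well above chance but nowhere near perfect.
patterns = {"pliers": rng.normal(size=n_voxels),
            "wrench": rng.normal(size=n_voxels)}
noise = 8.0

def scan(label):
    """Simulate one noisy brain scan of someone viewing `label`."""
    return patterns[label] + noise * rng.normal(size=n_voxels)

# "Train" a nearest-centroid decoder by averaging noisy example scans.
centroids = {label: np.mean([scan(label) for _ in range(n_train)], axis=0)
             for label in patterns}

def decode(pattern):
    return min(centroids,
               key=lambda lab: np.linalg.norm(pattern - centroids[lab]))

correct = sum(decode(scan(lab)) == lab
              for lab in patterns for _ in range(n_test))
accuracy = correct / (2 * n_test)
```

Guessing would give 50%; with this much noise the decoder does clearly better than chance but still makes plenty of mistakes, which is exactly the gap between "above chance" and "mind reading" discussed next.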
34:00
Right? And, like,
34:01
that's
34:02
cool. But that's,
34:05
like, not the mental picture
34:07
that comes to mind when you read mind
34:09
reading is now possible. And
34:12
I think that should
34:14
be kind of reassuring
34:16
to people who are
34:18
worrying, for example, about abuses
34:21
of this sort of
34:24
technique. And if
34:26
I had,
34:27
like, one practical
34:30
point to make, right, is that
34:32
there's just this enormous gap
34:34
between perfect decoding and
34:36
above chance decoding.
34:39
Right?
34:39
And so when we published papers in
34:42
scientific journals saying that we're doing mind
34:44
reading, well, what we mean is we're
34:46
above chance, we
34:47
have some insight.
34:49
Right? But, for example,
34:51
you know, everyone I think now
34:53
is using these speech
34:56
to text things on their
34:58
phone. Yeah. You talk
34:58
to your phone and it transcribes it. And the
35:00
idea is that if the speech to
35:03
text thing is wrong even
35:06
like once every
35:08
fifteen seconds, right, which would
35:10
be, like, ninety nine percent
35:12
accuracy in the words it's
35:15
transcribing, it's super annoying. And
35:17
that goes back to what I was saying. You know, people
35:19
are thinking when can we have,
35:21
like, people who are
35:24
locked in just kind of type their thoughts with fMRI? And,
35:26
you know, I think people could and
35:28
are actually working on
35:30
developing successors
35:32
to fMRI, right, that might be
35:35
more precise. Right?
35:38
Less noisy. But with
35:40
current techniques, we're not even
35:42
close to, you know,
35:43
ninety percent thought decoding
35:46
accuracy or, you know, we're
35:48
above chance. So
35:49
so I think, you know,
35:51
in sum, we already have a
35:53
universal thought decoder. It just
35:56
doesn't work super well.
35:58
If people tried to use it as, like, a
35:59
product, they'd be
36:01
really annoyed. Well, plus then you need
36:03
to have
36:03
an MRI machine, right,
36:06
which is hundreds of thousands of dollars and it weighs a lot and,
36:08
like, you're not gonna have one in your living room
36:10
like a television set. That's right.
36:12
But
36:12
there are other you
36:14
know, techniques for non invasively measuring brain
36:16
activity, like there's a thing called EEG,
36:19
right, which is electroencephalography,
36:21
which is measuring you
36:23
know, electrical fluctuations on the scalp,
36:26
which is even blurrier
36:28
than fMRI,
36:28
but is much
36:31
cheaper. A lot of the commercially
36:34
available brain computer interface
36:37
headsets use EEG.
36:39
Right? And the
36:41
idea is
36:42
that you're not
36:44
gonna be able to type with your brain
36:47
with EEG, but it's very feasible
36:50
with
36:50
EEG to detect,
36:52
like, how attentive you are.
36:54
like, are you in a focused or unfocused,
36:57
attentional state? Or even sort of relatively
36:59
crude, brain decoding things, like,
37:01
are you thinking about
37:04
a face or a house. You
37:06
know,
37:06
and people in my lab have used EEG. So
37:08
Nicole Rafidi, who was an undergraduate
37:11
in my lab, way
37:14
back when, used
37:17
EEG to sort
37:19
of measure how
37:21
much competition there was between
37:24
different memories. Right? So
37:25
sort of how hard people were
37:28
working to try to
37:30
remember foreign
37:32
language vocabulary. Right?
37:33
And the idea is that that's very useful because
37:35
that tracks pretty well how
37:37
well you've learned something.
37:40
Right? So the idea is that once you learn something,
37:42
your brain zips right to it.
37:44
Right? And wrong things don't
37:47
come to mind. But when you're still kind of
37:50
uncertain and you're just beginning
37:52
to learn, for example, a foreign language,
37:54
lots of wrong things do come
37:56
to mind. Right? So if we
37:58
have this sort of cheap and
37:59
easy EEG correlate,
38:02
right, of the extent to
38:04
which wrong things are coming to mind,
38:06
that tells you how well you've learned something. And the
38:08
idea is that there
38:09
could be maybe some
38:12
closed-loop flashcard
38:14
system that uses this
38:16
EEG correlate of how
38:19
quickly and easily the memory is coming
38:21
to mind to to sort of know
38:23
which flashcards to show you. So if it detects there's a
38:25
lot of competition from this
38:27
EEG signal, then it'll keep showing
38:29
you that flash card
38:32
But if it sees that your brain is zipping to the right
38:34
thing, which we can do with EEG,
38:36
then it'll know that that memory's,
38:38
you know, it's cooked. Right?
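A closed-loop scheduler like the one being described can be sketched in a few lines of Python. Everything here is hypothetical: `read_competition` stands in for the EEG-derived competition score, and the threshold and toy scores are arbitrary. Cards whose memories still show lots of competition get re-queued; cards that come to mind cleanly are retired.

```python
from collections import deque

def run_session(cards, read_competition, threshold=0.3, max_shows=100):
    """Show flashcards until each memory comes to mind cleanly.

    `read_competition(card)` stands in for the EEG competition signal:
    high means wrong answers are still coming to mind for that card.
    """
    queue = deque(cards)
    shows = []
    while queue and len(shows) < max_shows:
        card = queue.popleft()
        shows.append(card)
        if read_competition(card) > threshold:
            queue.append(card)  # still contested: show it again later
        # else: the memory is "cooked", so retire the card
    return shows

seen = {}
def fake_eeg(card):
    # Toy stand-in for the EEG score: each extra viewing of a card
    # lowers the competition from wrong answers.
    seen[card] = seen.get(card, 0) + 1
    return {"la maison": 1.0, "le chat": 0.1}[card] / seen[card]

history = run_session(["la maison", "le chat"], fake_eeg)
```

In this toy run, the hard card ("la maison") keeps getting re-queued until its simulated competition drops below threshold, while the easy card is shown once and retired.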
38:41
You don't need to show that flash card
38:43
anymore. So that's another
38:45
important lesson: there's a lot
38:47
you can do with coarse-grained
38:49
brain decoding. And the
38:51
depression
38:51
neurofeedback thing I mentioned
38:54
earlier, that's
38:56
using coarse-grained brain decoding. Right? Are you attending to the face or
38:58
the scene? But we think we can do
39:00
really powerful things
39:04
with that simple distinction. And so I think
39:06
that we've been playing with brain
39:08
decoding for a couple
39:10
decades
39:11
now. And, you
39:13
know, I think at the beginning, we had this idea of, like, anything is possible.
39:15
And we'll just see. You know, we'll use
39:17
all of our fancy machine
39:20
learning decoding algorithms, and
39:22
we'll see what we can decode. And now I
39:24
think with appropriate humility, we
39:26
know the kind of limits on the
39:28
technique, and we can try to tailor
39:30
the applications of the technique to the
39:33
limits that we understand. Well,
39:35
Dr. Norman, I want to thank you
39:37
for joining me today and telling us about
39:39
your amazing research. It's really quite fascinating. Thank
39:42
you. You're very welcome. It was fun talking with
39:44
you. You can find
39:46
previous episodes of Speaking of Psychology
39:48
on our website at WWW
39:50
dot speaking of psychology dot org or
39:52
on Apple, Stitcher, or wherever you get
39:55
your podcasts. If you have comments
39:57
or ideas for future podcasts, you can email us at
39:59
speaking of psychology at APA
39:59
dot org. Speaking
40:02
of psychology is produced by
40:04
Lea Winerman. Our
40:06
sound editor is Chris Condayan. Thank you
40:08
for listening. For the American Psychological
40:11
Association, I'm Kim Mills.