Episode Transcript
0:00
The introduction in English starts
0:02
around minute 3. Hello,
0:11
I'm from
3:00
the Volta Intrutsum. Today
3:02
we are diving into an enlightening
3:04
conversation with Tom Chatfield, a British
3:06
author and tech philosopher. Tom
3:08
is the author of several books
3:11
focused on improving our thinking in
3:13
today's tech-dominated world, including bestsellers like
3:15
Critical Thinking and How to Think.
3:18
He also teaches these skills to
3:21
a wide range of audiences, from
3:23
schools to corporate boardrooms, and he
3:25
has recently designed a successful online
3:28
course on Critical Thinking and Decision
3:30
Making for the Economist's Education. In
3:33
his most recent book Wise Animals,
3:35
Tom explores our relationship with technology,
3:38
examining the lessons that our ancestral
3:40
past may hold for our present
3:42
challenges. In this very
3:44
thought-provoking conversation, we discussed Tom's advice
3:47
for how to think more critically
3:49
in today's complex world, we talked about
3:51
strategies to combat the influence of cognitive
3:53
biases in our mind, a term that
3:55
many of you probably know from the
3:58
work of the late Daniel Kahneman, and
4:00
the importance and also difficulty of
4:02
challenging our assumptions. We
4:05
also talked about how to better think
4:07
collectively and the importance of creating trust
4:09
in order to be able to have
4:11
open conversations and also
4:13
some techniques Tom advises in order
4:15
to generate deep discussions and good
4:17
decision making in all contexts. In
4:20
the final part we turn our
4:22
focus to Tom's latest book and
4:25
I asked his view on two
4:27
big impacts technology is currently having
4:29
in our society. One of them
4:31
is the change that mass interactive
4:33
media is bringing to traditional democratic
4:36
structures, exacerbating polarization and eroding public
4:38
trust in institutions. And the
4:40
other one, which is probably very close
4:42
to home for those of you who
4:45
have young children, is the troubling rise
4:47
of what many experts refer to as
4:49
the epidemic of mental illness among children
4:51
and teenagers driven by the increasing role
4:53
of social media in their lives. It
4:56
was a fascinating conversation, so let me
4:58
leave you with Tom Chatfield. Tom
5:06
Chatfield, welcome to Coreaty C'est In Graz. Thank you
5:09
very much, it's great to be
5:11
with you. I'm a big fan of your work.
5:14
I think I knew it before finding your book
5:16
on critical thinking. Then I
5:18
came across your work in a course you
5:20
put together for The Economist on critical thinking
5:23
and future skills, and then your new book,
5:25
Wise Animals, which we'll also be talking about.
5:28
Let's start. I'm very curious about this. What happened
5:30
so that you ended up writing books on critical
5:32
thinking, technology, and the like? What
5:35
happened? What's your background? How did you end up here? So
5:39
I guess I've always wanted to be a
5:41
writer since I was very young and I
5:43
started out writing fiction and poetry
5:45
and drama and things like that, but
5:47
I also did my Masters and my
5:49
Doctorate and taught a bit at Oxford
5:52
in literature and a bit of philosophy. I was
5:56
very geeky. I also love
5:58
computers and maths and things like that, and it became very clear to me that all the stuff I cared about — the culture, the ideas, the morals, these big debates — was being mediated through new forms of technology. That technology was touching upon all aspects of human life. And it also, I think, became equally clear to me that a lot of the time the way technologies are talked about and discussed is not very rich, is not very philosophically informed, is not very thoughtful, is quite shallow. So I became almost obsessed with the question: what does it mean to think about technology? What does it mean to grapple with the modern world using all the tools that philosophy and social science and literature and so on have given us? And so, in my work, I wrote my first book about video games, one of my obsessions, and began to collaborate with academic publishers and social scientists, and I kind of made my way towards critical thinking as this interdisciplinary place where you write and talk about what it means to think well and rigorously, and
7:10
to do so together. And I guess it struck a chord for me — I'm very passionate about it — and for me it unlocks the digital world for people. It takes the resources we have at our fingertips today, and if you help people develop thinking skills, then it equips people to make the most of the potential of the digital age rather than be manipulated. So yes, I've almost become obsessed with this: I've written quite a few books about it, I've taught various courses, I work with companies. I sort of do everything I can, really, to explore this theme of, well, how can we be more thoughtful and rigorous in the way we engage with the modern world, and the skills we need to thrive in it. Interesting, because your
7:59
path is not very unlike mine, although our backgrounds are different — mine is in economics — but for me critical thinking was also the natural place where I landed, although I didn't think of it that way in the beginning. I think it's because of what you just described: it is both kind of the meta-skill which is at the basis of thinking about everything, and it's increasingly important as technology gets more developed and you have an overload of information, you have AI, you have social media, you have fake news. So in a sense it was already important, and it gets even more important now. Yeah, absolutely. And to some degree I
8:34
think you've hit the nail on the head there. There's a tendency for people to focus on the features of technology — what can the technology do, what are its capabilities — and that's great, that's fascinating. But even more important is the question: well, what are the features of human minds, what are our strengths and weaknesses? And I think cognitively we are both very brilliant and very vulnerable. Particularly in the work of psychologists like Danny Kahneman and a lot of others — you know, great work around biases and heuristics and behavioural economics and so on — there has just been such rich recent research illuminating both what makes us vulnerable and how we can overcome these vulnerabilities. When I think about AI technology, the great question for me is not what the technology can do, not
9:21
what it means to be obsessed with its features, but how we can try to make sure that we are robust against manipulation, that we take a close interest in reality, in what's really going on, that we're not distracted and deceived and bewildered and disempowered. So when I write books, when I work on courses, they have these two aspects: on the one hand, building up skills — logical reasoning, the scientific method, research; on the other hand, understanding our vulnerabilities — resisting haste, bias, mental shortcuts, manipulation. It's those two things, for me, that go together to define critical thinking in the twenty-first century. Focusing on the first one — you've been asked this question before, and you have thought about it for quite a while — would you define critical thinking as a soft skill or a hard skill? I think
10:20
I guess soft, in some ways. I like 'soft' because it suggests to me that it is not so much a science, and that it's as much a kind of attitude of mind. It's a mindset. I think it begins with humility, and what I mean by that is just the realization that individually our minds are very small, very limited, and the quest to become less deceived is something we do together, that we do incrementally. The great danger is that we move too quickly towards generalizations and false conclusions, that we overestimate our knowledge. Kahneman has this great saying: he says that at the root of so many errors in cognition is the idea that what you see is all there is. You just have a look, because you've got stuff at your fingertips — and all of us, of course, we do this. We know a tiny amount, and we assume that tiny amount can answer all questions, and the great battle is to work with other people to get out of that mindset.
11:20
And instead to become investigators of reality together. So for me, although there is a lot of science in there, it's fundamentally a soft skill, because it begins and ends with self-knowledge, and it weaves self-discipline with the discipline to not just leap to conclusions, to control your own cognition and direct it mindfully in the service of reality, of truth, of understanding.
11:48
Well, Kahneman and Tversky and other researchers were behind these conclusions. They developed that model which I think is widely known, that of our minds ranging between System 1 and System 2. System 1 is sort of the automatic one, the one that's based on heuristics, and System 2 is the one hopefully we are both using now, in which we are thinking deliberately and consciously about something. And then from System 1 we get biases, which is when it's not only automatic but our reasoning is also biased. I wonder what's your view on why we have these biases, because it's relatively easy to know why we have this automatic reasoning: it's much more efficient in terms of resources, it's much faster not to have to think consciously about something, it just works automatically based on past patterns. But the biases, these systematic deviations from rationality — for me it's not that obvious where we get them from. That's right. And
12:49
I was lucky enough to know Danny Kahneman a little bit, too. He collaborated with you on your critical thinking skills testing before his death, actually? Ah — yes, yes, I did some little bits of kind of work with him, not a lot, but I was lucky enough to speak to him about this. And of course his work on the two systems, his models, is putting, you know, a new analysis on very old ideas. And the way for me to think about a bias is that a bias is a misfiring heuristic, and what I mean by that is: a heuristic is a rule of thumb, a mental shortcut. That's really important when you're evolving, when you're trying to survive as an animal. You need to be able to act in a fast, decisive, appropriate way. If you see something that looks a bit scary or threatening or dangerous, you've got to run away from it; you can't spend your time analyzing it. If something smells bad or looks unfamiliar, you want to avoid it and then ask questions later. And
13:48
so it's always been the case that fast, efficient, fairly reliable actions are what keep us alive, and then slow, considered analysis is what we do afterwards: the aim there is to develop strategies over time, collectively, and humans are supremely good at that. But of course the conditions we live in now look nothing like the conditions we evolved in, and so a heuristic that was quite useful a hundred thousand years ago — for example, if someone or something looks unfamiliar, be cautious — can now lead us into all kinds of problems. Because if you're advertising something, you can use this trick of familiarity to create a halo, or a halo effect: a beautiful person driving a beautiful car, with a symmetrical face. Or, conversely, a stereotype — a very crude generalization about someone who looks different to you — encourages you to treat them as an enemy, or to think of them as different to you in negative ways. And maybe that would have kept you safe a hundred thousand years ago, but of course it's a very
14:58
very poor way of interacting with fellow human beings in the modern world. And so I guess the key is that a lot of the time mental shortcuts are good and useful: choosing what you have for lunch, choosing what you wear, you know, choosing what to do today, how to have fun. But — and I'm going to quote Danny again on this — the trouble is, when faced with a difficult question, a lot of the time we answer an easy question instead without noticing the substitution. So
15:26
here's a hard question: who would be the best next President of the United States? What's the best plan for the future of the European Union? What's a good tax scheme? These are very hard questions, very difficult questions, very slow to investigate. Here's an easy question: which of these two people looks more attractive and like a strong leader? Which of these people has the more symmetrical face? Which of these people is more familiar to you? These are very easy questions, and unless we try really hard, we tend to swap the easy questions for the hard questions and answer those instead. That was very efficient in evolutionary terms; it's very dangerous in the modern world. So it's incredibly difficult — even for smart people, it doesn't really matter how smart you are — incredibly difficult to slow down and interrogate your own emotions. And in a way you have to begin with the fact that all of us will have strong instant feelings, and the challenge then is to negotiate with them, rather than pretend we can become perfectly rational, which is a meaningless idea. And why is that a
16:36
bad heuristic now, do you think? Is it because we live in larger groups, because we live in a more diverse society? So — I guess it's a bad heuristic partly because it leads us into error. So, for example, the
16:53
question of which tax scheme will be better at preventing poverty or incentivizing work — that's an empirical question about complex systems, and it doesn't have an intuitive answer. My gut feelings about tax are worthless, you know, because the question is mathematical and economic and abstract. My gut feelings about running away from lions are probably quite good, and my gut feelings about people I know — whether I can trust someone I know very well — are perfectly good. And so the trouble is, there are situations in which our gut feelings are valid and useful, and that broadly is when we know a person or situation well, when we have relevant expertise. But outside of that, our gut feelings are often worse than useless. My gut reaction to a tax scheme, to a politician, to a charity appeal, to where money should be spent — in all of those areas our intuitions are worse than useless, and so we absolutely have to try to pause and
17:49
come up with a context that helps guide our intuitions. And there are ways we can do this. So, for example, we can try to come up with a better question. Rather than saying, okay, what do you like about tax — do you like the idea of having a very low tax so hard work just gets rewarded, or do you think a very heavy tax is not a good thing — instead you could say to people: okay, here's a visualization of different tax schemes; it's going to show you, with visuals, the distribution of wealth in a society; it's going to draw you a graph. And then you might, for example, look at this graph, at these visuals, and say: well, actually, this scheme really penalizes people here, or this scheme really penalizes people who do this. So we can try and come up with better framings that allow our intuitions to function better, that allow us to think better.
18:37
And that's a big question, right? Ultimately, when it comes to technology — whether it's artificial intelligence or just a graph — the key question for me is: does this technology elevate our cognition and encourage us to make a better, more reality-based decision? Or does it try to manipulate us and push us towards a decision not on the basis of evidence or reason, but on the basis of politics or propaganda or exploitation? The difficulty, I think,
19:09
as you mentioned, though — the way Kahneman describes it is that your heuristics are never perfect; of course, that's why they are shortcuts. But they are more or less good heuristics: they work well when the environment is predictable enough and you have enough time to gain that experience, right? And in most of our current environments, especially the complex ones, it's quite difficult to have that. So what you're suggesting makes a lot of sense: that we frame things in a way that lowers the cognitive overload, in a sense that enables people to think about things rationally. Exactly. Kahneman has really simple advice that I think is really good: you have limited time and limited attention. So when you're facing a complex or important question, and when you don't have enough information, or good enough information, to answer accurately,
19:59
pause and seek
20:01
cognitive reinforcements. It's
20:03
a golden rule, slow down and
20:06
look for something or someone that
20:08
will help you make an evidence-based
20:10
decision and this partly also
20:12
means embracing uncertainty. We saw this during
20:14
the pandemic when a lot of
20:17
the time, understandably, in a situation
20:19
of fear and horror
20:22
as well as hope and learning, a
20:24
lot of the time, people were terrified by
20:27
uncertainty, politicians and citizens
20:30
and they desperately wanted a simple
20:32
story to hang onto and answer. It's
20:34
this, it's that, this works, this
20:37
won't work and in some ways
20:39
the most important message which was
20:41
emotionally very difficult was it is
20:43
uncertain and the most important thing
20:45
we can do is seek
20:47
more evidence, is act fast and
20:49
preventatively in this area but in this area,
20:51
keep our minds open. It
20:53
may be that we think wearing a
20:55
mask is very important and then it
20:57
turns out it's not that important compared
21:00
to other things. It may be
21:02
we think the virus spreads by contact and
21:04
then we learn it spreads through the air and
21:07
so on. It's incredibly difficult but
21:09
important to keep an open mind
21:11
and to follow the evidence and to try not
21:14
to get sucked into turning it
21:16
into an ideological or tribal
21:18
battle. That's the enemy of good thinking.
21:20
Suddenly, so many questions that
21:23
are so important these days become
21:25
matters of identity and allegiance and
21:28
that is absolute poison for critical thinking
21:31
because it causes people to judge them
21:33
by tribal allegiances
21:35
rather than by evidence
21:38
and open mindedness. And you
21:40
were mentioning uncertainty
21:42
and that's one of the things we have a very
21:45
hard time getting our heads around.
21:47
So actually it's one of the topics
21:49
I cover in one module
21:51
of my critical thinking program, on causal explanations,
21:54
because we deal very
21:56
poorly with randomness and uncertainty. So if
21:58
what you were suggesting, which
22:00
makes a lot of sense, had been emphasised by
22:03
both politicians and practitioners during the
22:05
time of Covid, it would
22:07
have been very difficult for that message to
22:09
pass, as it was very difficult. So saying,
22:11
we actually don't know this, so this is
22:13
still an open question. We'll see. So this
22:15
is the most reasonable response
22:18
that we don't know yet. It's a very difficult
22:20
message emotionally. If you want to seem strong as
22:22
a leader, if you want to give people hope.
22:24
So if we think about uncertainty,
22:27
social media makes it really
22:30
obvious just how
22:32
difficult it is to communicate
22:34
uncertainty effectively. If I
22:36
want to go viral on, you know,
22:38
X or TikTok or whatever, saying
22:41
something like, it's complicated, no
22:44
clear answer, we need to seek more evidence,
22:46
I'm not quite sure. That's pretty much the
22:49
least viral thing I can say. Whereas
22:51
saying, wow, there's this compelling evidence, there's
22:53
this conspiracy, there's this story, we've got
22:56
to do this. These calls to action
22:58
will get a lot of attention. And
23:00
so we have a very direct conflict
23:03
between content
23:05
and ideas that are emotionally arousing
23:09
and the mindset of truth
23:11
seeking. And I
23:13
think there's a huge challenge for technology
23:15
and leaders today, which is to
23:17
be eloquent and persuasive, but
23:20
at the same time to admit to uncertainty and
23:22
to change minds, to talk
23:24
in a strong way about I'm
23:26
not sure. And again, in terms of advice,
23:29
I think one of the most powerful and
23:31
useful things that you can say, at work,
23:35
online in your everyday life is, I'm
23:37
not sure. What do you think?
23:41
What evidence is there? What's the other side of the story? And
23:43
it can be very difficult to do
23:46
that because it doesn't look like strength
23:48
or leadership or decisiveness, but it's really
23:50
valuable. Yeah. And on social
23:52
media, do you think there's a way to square
23:54
that circle? Do you see someone doing that? Yeah,
23:57
of course. Interestingly, I
23:59
think. One of the great
24:02
gifts of social media can be the
24:04
trust that can exist between a person and
24:06
their audience. When
24:08
you see how a lot of
24:10
young people use things like YouTube,
24:13
you go to follow someone who's
24:15
a great gamer, who's a great
24:17
streamer, who's passionate about a product,
24:19
who's passionate about engineering. In
24:22
these areas, actually, because
24:24
people's reputation and relationship with their followers
24:26
is so important, they will often say,
24:28
it's really complex, let me take you
24:31
on a journey. It's
24:33
fascinating how complex this is. Did you know how
24:36
crazy the history of Minecraft is? Did
24:38
you know how difficult it was to hand-calculate
24:41
how to get a rocket to the moon?
24:43
Did you know that rocket science is
24:46
really complicated, that building, and so on?
24:48
So there's a lot of great ways
24:50
that you can draw people into the
24:52
story of complexity, the story of uncertainty.
24:55
And I think trust is the key in
24:57
areas that people care about and trust people
24:59
in, where people
25:02
have got skin in the game, where
25:05
the communication is about where you're
25:07
accountable to your audience. We
25:09
have a specific audience, right? That's
25:11
right. So if I'm an
25:13
influencer writing about even fashion, and I
25:16
just talk absolute nonsense, no
25:18
one's going to trust me as a good
25:20
advisor. If I'm writing about video games or
25:22
science, my audience are not going to follow
25:24
me if I turn out to be
25:27
wrong. But of course, there's a
25:29
lot of games online that are just played for
25:31
clout, that are just played for impact, that are
25:33
just where the game people are
25:35
playing is not about truth, is not about
25:37
understanding. It's about influence, it's about
25:40
persuasion, and so on. It
25:42
all comes down to storytelling, really. And for
25:44
me, then this is
25:46
helpful, because actually we
25:48
can tell really good stories that are
25:50
also true, if we are
25:53
willing to bring an audience with us. If
25:55
we have confidence that our audience
25:57
are not idiots, and this is another thing, right?
26:00
You can't be patronizing. If you
26:02
treat people like idiots, if you have a low opinion of them, you
26:06
fail to understand why people
26:08
arrive at wrong beliefs or why people disagree
26:10
with you. Often, actually,
26:12
there are very good reasons. For example,
26:15
vaccination. A lot of
26:17
people distrusted the vaccination for
26:20
good reasons. I think misguided reasons, but
26:22
a lot of people in America and
26:25
elsewhere felt that they were
26:27
being lied to or patronized by the
26:29
government, that the government doesn't
26:31
have their best interests at heart when it
26:33
comes to healthcare. That's quite true in
26:35
America. In a lot of countries, people are
26:38
treated with contempt. And until you
26:40
understand why people are deeply distrustful of what
26:42
politicians say, you can't then
26:44
attempt to persuade them or take them on a
26:46
journey. You can't even argue with
26:48
them. So if they distrust the
26:50
source of information, you can't even have
26:53
an argument or discussion with them because there's no
26:56
common ground even at the basic level. That's
26:58
right. And the common ground is so important.
27:01
When you want to talk to someone, you're
27:03
not enemies playing a
27:05
zero-sum game. Maybe you are sometimes
27:07
in some fields, but generally
27:10
speaking, starting with the idea
27:12
that other people have good
27:14
reasons for their beliefs that
27:16
are good and compelling to them and
27:18
trying to understand those reasons, that's a
27:20
powerful way of building coalitions
27:22
and finding, you know, saying, look, we all
27:24
want to get through
27:27
this. We're not necessarily enemies.
27:29
There's a common ground here. Now, we
27:31
can't always do that, of course, but
27:34
the ability to build coalitions with diverse
27:36
views, incredibly important in
27:38
politics, in government, in healthcare
27:41
and society, you know, critical
27:43
thinking is not about being a clever person
27:45
who's always right. That's a very
27:48
bad thing to aspire to be, because
27:51
ultimately, you're aspiring to be
27:53
divisive and self-righteous and
27:55
privileging your own cleverness over
27:57
engagement with other people. You know, I think, for me,
28:00
The great art is bringing as
28:02
many people together as you can
28:04
and is cultivating mutual respect and
28:06
trying to build forms of understanding
28:09
together. Or — and because you never know if you're right or not. And actually it's interesting, because we've touched on around three or four issues in our conversation so far: we talked about the limitations of our minds, we talked about complexity and how the world is both much more complex than the world we evolved in and is getting much more complex nowadays with technology, and in the same way you mentioned trust and how it makes it much more difficult for us to find common ground. And I don't know if you felt this when you were writing your book and then putting together these programs, but
28:42
I found myself concluding that the classic critical thinking toolkit is not especially helpful. For instance, deductive arguments are completely useless in the real world. So the classic argument — all men are mortal, Socrates is a man, therefore he is mortal — it's like the classic deductive arguments: when you live in the real world, they don't tell you anything which is not already in the premises, which is not in the raw material, so to say, right? They don't tell you anything which is not in the reasons to begin with. And the real world is about our biases, it is about having common ground with others, and it is about dealing with complexity. That makes it almost impossible to conclude anything on that deductive basis, you see. That's right, but I would put it
29:24
a different way, which is that I think these tools are very valuable, but they come in after you've done a lot of work. So a deductive argument tells you what is already present in your premises; it spells it out. And that can be very useful. For example, if you said, you know, 'I don't trust vaccines because they're unnatural' — now, you can spell out deductively certain things from that. You can say: okay, so what you're saying is that anything unnatural is untrustworthy. Therefore, wearing clothes
30:00
is unnatural. So that's
30:02
bad. Or living in a house is bad.
30:05
Are you saying that? Is that what
30:07
you mean? Or alternatively, if it's natural
30:09
for humans to wear glasses and clothes
30:11
and use antibiotics and go on airplanes,
30:14
then it's probably natural for us to
30:16
make vaccines too. Is that
30:18
what you mean? So you can
30:21
use the tools of deduction to see
30:24
where your premises lead you and then
30:26
investigate whether your premises are or aren't
30:28
good. But you're right that actually...
30:30
And also Tom, sorry to interrupt. And also
30:32
the example you were pointing out is a
30:34
good example of a bad inference. So the
30:37
premise might even be okay, but the inference
30:39
to the leap from the premise to the
30:42
conclusion is wrong. That's right. And of course,
30:44
inference in a way is a form of
30:46
hidden premise. All
30:48
inferences are in a sense
30:51
a premise about what follows and what
30:53
the logic of the situation is. And
30:55
what we're asking people to do, and I
30:58
think this is the crucial point, is
31:00
not, as you say,
31:02
seek perfect confidence through deduction.
31:04
It's to go back and
31:07
reconsider their premises and assumptions.
31:10
And so with critical thinking, today,
31:12
I put a lot
31:14
of emphasis on the groundwork. What
31:16
are your fundamental beliefs? Why
31:19
do you believe them? What are
31:21
other people's fundamental beliefs? Are you
31:23
really sure about those? What are
31:25
your feelings? How are your
31:27
feelings getting in the way here? What else
31:29
is going on? And so you're right, there's
31:31
other stuff, the classical work
31:33
around deduction and induction
31:36
and even abduction. I think
31:38
it's very valuable, but
31:40
I think we need to pay a great deal of
31:42
attention to the stuff that comes before
31:45
that, which is actually often
31:47
just a matter of observation, like thinking
31:49
harder. Okay, so yeah, we're saying unnatural
31:52
things are bad, but what do we mean by
31:54
that? Well, often what people actually mean is, I
31:57
don't trust the motives.
32:00
of some people creating new stuff and selling it,
32:02
like pharmaceutical companies or tech companies. And you can
32:04
then say, well, okay, well, that might
32:06
be a reasonable thing to say, but
32:09
it's not about natural or unnatural. To
32:11
some degree, it's about the grounds
32:13
that you have for trusting individuals.
32:16
And then the question is, well, what evidence
32:18
would persuade you? Because again, you
32:20
drive a car, for example, you live in
32:22
a building, you probably want to
32:24
have a qualified surgeon perform
32:26
an operation on your child, rather
32:29
than someone from 300 years ago, with
32:32
a rusty saw. So
32:34
we get to much more interesting stuff.
32:36
But yeah, it's about pausing and thinking
32:38
and, and not just going off
32:40
on this kind of deductive goose chase,
32:42
where, as you say, you pretend
32:44
that you possess certainty. And
32:46
in my experience, and I bet in yours too,
32:49
asking questions works much better than going head
32:52
on against the argument. But it requires some
32:55
training because it goes, I think it goes
32:57
against our natural instincts, which is much more
32:59
to go head on and disagree. But if
33:01
you start asking questions, as in the example
33:03
you just gave, suddenly find
33:05
out that people didn't exactly mean that
33:07
or they were not that confident in
33:09
the reasons themselves, right? Exactly, exactly. And
33:11
of course, in order to ask questions
33:14
in good faith, you need
33:16
there to be, if you like, trust in
33:18
the room. So when I work
33:20
with companies with organizations, a
33:22
lot of the time, the question is one of kind
33:24
of culture and atmosphere. Everyone says, Oh, you can ask
33:26
any questions you like, but often in a meeting or
33:29
a boardroom. In fact, there's a lot of
33:31
fear and a lot of ego. So how
33:33
do you really establish that kind of
33:35
trust? And there are if you
33:38
like tricks, you can do with this. One
33:40
famous one is the example of a pre-mortem,
33:42
which I'm sure you and
33:44
your listeners may have heard of. It's a
33:46
very simple trick. It just says, let's pretend
33:50
that it's two years in the future
33:52
and our big project has gone
33:54
wrong. It's a disaster. And
33:57
now we're all going to have a competition,
33:59
a game. Where we talk about
34:01
why it went wrong. It's a game,
34:03
but it gives you permission to talk about uncertainty
34:06
and failure. You say: okay, yeah, no, our
34:08
new product went wrong because it turned out
34:10
that nobody liked it it turned out it
34:13
was too expensive It went wrong because our
34:15
competitors released a better one it went wrong
34:17
because we got sued and
34:19
so suddenly you Create
34:22
permission for people to challenge the
34:24
narrative to talk about assumptions. There's
34:26
lots of ways of doing this Yeah, I love
34:28
that too, the pre-mortem, and there's another one I
34:30
don't know if you've seen that in practice, which
34:33
is it's similar actually it's
34:35
that obligation to dissent So whenever
34:37
there's a meeting you cannot leave
34:39
without someone expressing an opposing opinion
34:41
exactly, right? Yeah, the devil's advocate
34:43
and that's of course one of
34:45
the oldest Philosophical exercises
34:47
I'm aware of, that
34:50
of arguing both sides. It goes
34:52
all the way back to ancient Greece, and
34:54
indeed other disciplines, you know and India
34:56
and elsewhere, in Vedic thought even, where
34:59
the idea is that until you know
35:01
the other side of
35:03
the argument? You don't even know
35:06
your side of the argument. So you swap
35:08
sides, you play devil's advocate. In the modern
35:10
world You might say you stress test an
35:12
idea by subjecting it to critical
35:14
scrutiny. It's a very powerful idea We
35:17
see it in science all the time in
35:19
science, you know, you start off with a
35:21
null hypothesis where
35:23
your starting assumption is
35:26
that the thing you're investigating is false
35:28
or not true and then you have
35:30
to try and prove that it's not
35:32
just chance, that there is a real
35:34
effect you discipline yourself to do that
35:36
and it's such an important mentality Because
35:38
everybody wants to say oh, you know,
35:41
you can say anything you like, but
35:43
actually unless you put a framework
35:45
in place that really demands
35:47
that people stress test ideas and speak
35:49
up. It won't happen by magic.
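(A minimal illustration, not part of the conversation: the null-hypothesis discipline Tom describes can be sketched in a few lines of Python. The groups, names and numbers below are invented purely for the example; the point is only that you start by assuming there is no real effect and then check how often chance alone would reproduce what you observed.)

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes for a control group and a treatment group (made-up data).
control = np.array([4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.2, 4.5])
treated = np.array([4.9, 5.2, 4.6, 5.5, 4.8, 5.1, 5.0, 4.7])
observed_diff = treated.mean() - control.mean()

# Null hypothesis: the group labels make no difference. Shuffle the labels many
# times and count how often chance alone gives a difference at least this large.
pooled = np.concatenate([control, treated])
n_treated = len(treated)
hits = 0
n_permutations = 10_000
for _ in range(n_permutations):
    rng.shuffle(pooled)
    if pooled[:n_treated].mean() - pooled[n_treated:].mean() >= observed_diff:
        hits += 1

print(f"observed difference: {observed_diff:.2f}")
print(f"p-value under the null: {hits / n_permutations:.4f}")
# Only when chance rarely reproduces the effect (a small p-value) do we start to
# believe it is real; the starting assumption is always that it is not.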
35:52
Oh yeah, yeah, in most cases. By
35:54
the way, have you — you've probably tried
35:56
one of those debate models in which
35:58
you're forced to defend the opposite side.
36:00
And it's a great experience in that I have
36:03
a friend, she
36:05
participated in one of those ones, and
36:08
she had to defend China's dictatorship in some
36:10
other issue. I don't remember which one, but
36:12
then she was so persuasive that she won
36:14
the debate. She had
36:17
a very bad conscience by the end of it. It's
36:19
a great exercise. I do this with children and I've
36:21
written books for children as well. Interestingly,
36:26
when I'm doing this with companies
36:28
or schools or universities, I
36:30
often start with something very low stakes
36:32
to get people into that playful mindset.
36:35
And so I'll encourage people to come
36:37
up with an idea they disagree with,
36:39
but about something unimportant, like a movie,
36:42
like a film, or a food
36:44
they hate, or something like that. Because if
36:47
it's unimportant, if it's
36:49
not ideologically charged, then
36:51
people are very comfortable playing with it. And then
36:53
you take it to something more important. And
36:56
the more I work around this with
36:58
people running workshops and designing courses, and
37:01
I chair organizations as well, and
37:03
so on, the more
37:05
I feel there's the psychological work. I
37:07
do work around meditation and mindfulness and
37:09
breathing and listening. Because actually,
37:12
controlling your emotions and
37:14
finding comfort with discomfort
37:16
is so important. And
37:19
in so many politically and
37:21
economically charged atmospheres, none
37:23
of this happens. And people don't
37:25
want to admit that they're scared
37:27
or angry or uncomfortable or afraid.
37:30
But even very senior people often are.
37:33
And again, working with people on that
37:35
can be a very powerful way of
37:37
giving permission to think, if we don't
37:40
have that permission, that time, that
37:42
space, none of the good stuff
37:44
can happen. Those are two very important points,
37:46
I think. First, if you don't have that space, none
37:48
of the good stuff happens. And that's the one that
37:50
you made previously, that if you don't create
37:53
that space on purpose, it won't happen
37:55
by itself, or very seldom will it
37:57
happen by itself. Absolutely. And you know,
38:00
it can be really hard to do in organizations, and in governments indeed, you know, to do both of those things: to get permission and to build the right kind of spaces. And
38:12
again, this is where we can get back to
38:14
the evidence about cognition. There are
38:16
things people are bad at, like probability — we're just not very good at that intuitively — but there are things people are good at, like rank ordering, for example. So if I give people a whole bunch of numbers and say, you know, which are the preferred outcomes, which looks better for the organization, a lot of the time people will be very loss-averse, or they'll be overly persuaded by large numbers, or they'll find it very hard to compare different kinds of statistics and so on. But if you say to people, rank these different outcomes in order of preference, people will often be very good at coming up with the first, second, third, fourth and fifth choice. So you can then tap into valid intuitions. But the
39:01
key question we keep coming back to is that a lot of the time, on social media or in technological contexts, the design of the system is all about emotion and impact rather than evidence and cognition. So take something as simple and everyday as email. The way an organization uses email is often set up to force people to send like a hundred messages a day, really fast, really stressed, to prove they're working. That's a really, really bad way of using human minds and human time. But lots of organizations just do it by default: it's sending emails at two in the morning, or it's sending a hundred emails to prove that you're doing your job. That's going to get really bad results. You get really bad thinking, lots of fear and anxiety. Every time you send an email, you create an email for someone else to answer. Much better sometimes to have... So Amazon do this quite well, actually, whatever you think of them as a company.
40:05
They have very good meetings and presentations: arguments expressed not in PowerPoint but on, say, one side of a sheet of paper; discussions where a team has to make an argument and a counter-argument, which is not about individuals showing off; an emphasis on ideas and evidence; a ban on lots of emails and lots of attachments,
40:25
and so on. So these
40:27
protocols, how you communicate, how you share
40:29
information, how you run your meetings, have
40:32
a huge effect in terms of the
40:34
quality of the thinking you get and
40:36
potentially the errors you avoid. It's
40:40
amazing — though natural — that
40:42
we use a lot of the same examples
40:44
and use cases because I also love that
40:46
example of Amazon in which rather than having
40:48
presentations, you have a document and that
40:52
policy they have and they're not the only ones that you get
40:55
into the room and you spend the first 15
40:57
or 30 minutes reading the document
40:59
rather than talking straight on without
41:01
reading what's actually the matter under discussion,
41:04
which is something that happens in
41:06
most organizations, of course, precisely because
41:08
you had to spend all that time writing emails
41:10
and answering emails. Often you didn't
41:12
have time to fully read the email that
41:15
the meeting is about, so you end
41:17
up going to the meeting without having prepared and that
41:19
system forces you to prepare already
41:21
inside the room, which I think is a great overview.
41:23
That's right, that's right. And actually, I
41:25
see an analogy with some schools that
41:28
I visited, some religious schools where
41:30
you start the day with a couple
41:32
of moments of silence. You
41:34
start a lesson with a moment of silence
41:36
so that before a question is answered, people just
41:39
sit with it for a
41:41
minute and small pauses
41:43
and silences can be
41:45
incredibly powerful just for
41:47
30 seconds. 30 seconds before
41:50
anyone speaks, we're just going to sit
41:52
with this question for 30 seconds. Even
41:55
that as an intervention can be
41:57
transformative sometimes. And again, it's mandatory,
41:59
right? Yeah, not optional. Yeah, not
42:01
optional. Again, you have to make it mandatory. Interesting.
42:04
So that's one I hadn't seen before. I've done some work with, and read books by, a friend called Robert Poynton, who wrote a book called Pause, and I've taught at business schools with him and so on, and he very much uses ideas like this about the power of pauses. And the
42:25
point of a pause is that
42:27
it's only by pausing that you can think
42:29
twice When we're
42:31
thinking critically, we're thinking not once but twice
42:35
We're having a thought and then we're saying hang on
42:37
a second. Is my thought Accurate
42:40
and useful and honest or
42:43
is there more I need to know without
42:45
a pause None of
42:47
the other stuff happens Even
42:49
in in conversations even the one we're having
42:52
right now, so it's often useful to pause
42:54
for a few seconds to think But
42:57
it's very unnatural to us. I think so
42:59
that's why, for instance — I forget the exact name of these sounds we make, you know, when we are talking and we are thinking about what we want to say, and I say 'so, Tom, you know', and I do this — these fillers, I use them, and everyone uses them, kind of unconsciously, to sort of feel the space, feel the air, so that no one steps in. And the same thing goes for answering questions
43:20
right away So if you ask me a question
43:22
and I spend three or four seconds thinking about
43:24
it. I Fear
43:26
that either you will go on speaking or
43:28
someone else Well in this case not but
43:30
in real life someone else will step in
43:32
and talk so we are sort of trained
43:35
not to do that But that training
43:37
goes against good thinking as you
43:39
were saying. That's right. And sometimes
43:41
we have to unlearn Some
43:44
of the habits I think a lot of it does
43:46
come back to trust and to
43:48
shared values and you can write
43:50
these things down You can have a protocol you can
43:53
just say look We're not going
43:55
to interrupt. We're gonna have a
43:57
two-second pause after someone has spoken or
43:59
again, on a couple
44:01
of boards, one very simple
44:03
thing I do when the question
44:05
is important is A, ask everyone to read
44:08
it in advance and prepare some thoughts, but
44:11
B, is to go around the table and
44:14
ask everybody in turn to
44:16
just make a contribution and
44:18
everybody else to listen just
44:21
to make sure that the room isn't
44:23
dominated by the most confident. As
44:27
you say, silences can be
44:29
very unnatural and uncomfortable, but
44:31
I think something almost magical can happen if
44:34
you welcome silences and normalize them,
44:37
which is that they can become fertile,
44:40
they can become a place
44:43
where people show each
44:45
other respect, people think twice,
44:47
people generate ideas, people embrace
44:49
a different rhythm of
44:52
talk that isn't just performative
44:54
that isn't just performing confidence
44:57
and decisiveness. Again,
45:00
it's something I try
45:02
and model, I suppose, in a lot of the
45:04
work I do. I don't
45:07
use a PowerPoint, I don't even
45:09
use prepared notes sometimes,
45:12
just so that people look, I want to get you
45:14
thinking and talking and listening, I want
45:16
to get one person to talk to another, and
45:18
then that person to speak back what
45:20
they think they've heard, and just
45:23
to spend some time talking about what they think
45:25
they've heard, what they think they've
45:27
understood or misunderstood, clarifying — this
45:30
active listening technique. And
45:32
it can be revelatory
45:35
because people realize that
45:37
they don't have much of this kind of
45:39
listening and time and attention in their lives,
45:42
and that if you deploy it
45:44
selectively, it's very powerful. It's
45:47
quite brave not to use PowerPoints,
45:49
I find. Brave in the sense that you
45:52
have nothing to hold on to if you
45:54
suddenly forget what you were saying. But
45:56
also, so this part is obvious, but
45:58
I think sometimes it creates a sense
46:00
of uneasiness on the part of the audience. If you
46:03
go there without PowerPoints, people
46:05
generally think: okay, is he sure what he's talking
46:08
about? What will this be
46:10
about? Is it
46:12
only him talking? Some people find, in my experience,
46:14
find it strange. Yes, that's right.
46:16
And I think often what
46:19
I need to do for people is tell them quite
46:21
a lot in advance about what I'm going to do
46:23
so that they're confident: there's
46:26
structure there and exercises there and content
46:28
there. It's not in PowerPoint form, it's
46:30
in my head. But
46:32
I do say to people, and I've been giving
46:35
talks and running workshops
46:37
and working around this for more
46:39
than 10 years now, and
46:41
I've got more confident at being, I guess,
46:45
authentic or trying to be and saying, look,
46:47
I'm talking about thinking. So
46:49
what I want to do is get people thinking. I
46:52
want to have everybody in my audience doing
46:54
some thinking, pausing for 30 seconds,
46:56
having some dialogue in pairs, sharing
46:59
ideas. And in some ways,
47:01
I have less and less content and
47:04
more and more doing, because it's the doing. My
47:06
goodness, there's a lot of content out there. The
47:08
world doesn't need more content. The
47:10
world needs more thought. But
47:13
you're right that some audiences want
47:15
a PowerPoint. Some people want
47:17
the comfort of that. And
47:20
then it's finding ways of doing that that still
47:24
have a little bit of thoughtfulness in
47:26
them. You've
47:30
been teaching critical thinking to, I imagine, very different audiences. So
47:32
you wrote
47:52
a book and the book is by definition
47:54
to a general audience. I mean, you
47:56
don't know the specific audience or the
47:58
specific people reading the book, but
48:00
it's anyone. Then you're also, as
48:03
I understood, doing it at schools
48:06
and you're doing it in companies and
48:08
you put together that course with the
48:10
Economist that we talked about before. What
48:12
is your experience taught you about the
48:14
type of skills and the type of
48:17
challenges that people face in these different realities? So I
48:19
bet that the general public for
48:21
instance and the
48:23
corporate public have different challenges
48:26
in the sense that the
48:28
first is probably more concerned with societal
48:31
issues, politics, social media and
48:33
the second with more concrete
48:36
problems about improving decision making,
48:38
problem solving, making meetings
48:40
more efficient and whatnot. So I guess
48:43
the challenges themselves will be different, right?
48:45
They're very different but interestingly
48:48
the more I do, the more
48:51
I find that the same fundamental
48:53
kind of exercises and experiences
48:56
that work in schools with
48:59
12 year olds also work
49:01
in boardrooms for chief executives. That's
49:03
quite interesting. And so some
49:05
of the stuff we've talked about. So you do
49:07
the same things, that's interesting. I do the same,
49:09
in some ways I do the same things with
49:11
different content. I get people to
49:14
do active listening, to speak, to
49:16
listen, to pause. I
49:19
get people to argue different sides in
49:21
a debate. I get people
49:23
to try reframing questions in multiple
49:25
ways. I get people to try and
49:28
reflect upon their own assumptions. And
49:31
when it connects psychologically, ironically
49:34
enough I think for senior
49:36
executives or people in very senior
49:39
positions, being given permission to
49:41
play these kind of games and talk about
49:43
their feelings and ideas and think about
49:46
thinking. It can
49:48
be most powerful when it isn't just
49:50
full of business jargon
49:53
and when it isn't just about saying
49:55
what's the strategy for your organization, what
49:57
are your challenges, you know, what are
49:59
your threats and you know
50:01
opportunities and so on
50:04
but in some ways you know if
50:06
someone brings me in it's because they
50:08
probably want something different to
50:11
a kind of standard business school
50:13
or course so for me
50:15
the you know the holy grail is almost
50:18
you know the fundamental insight that works for everyone
50:20
but of course you use very different language you
50:23
use very different examples you have to
50:25
speak to someone's reality and so when
50:28
I'm dealing with teenagers I often talk
50:30
about social media and AI and work
50:32
when I deal with organizations I'll talk
50:34
about organizational strategies and
50:36
how different organizations run meetings and
50:39
so on but I
50:41
think the fundamental sort of human experience
50:43
of and there's that eureka
50:45
moment when someone does something
50:47
for themselves and it opens up a slightly
50:50
new way of thinking, gives them a slightly
50:52
new thinking tool — that's always the prize,
50:54
no matter who
50:56
the audience is. Yeah interesting so
50:58
the topic changes but the method
51:00
stays the same and
51:03
I guess in companies it might even add
51:06
more value, in a sense, because I
51:08
feel that when you work in a corporate
51:10
environment you are working with a
51:12
level of abstraction which helps to some extent
51:14
because we need that abstraction, we need concepts
51:17
to think about complex things but at
51:19
some point it stops helping because you're using
51:21
those concepts which are not challenged anymore so
51:24
they are kind of crystallized maybe
51:26
no one really knows what that means what the
51:28
strategy means or the strengths mean, or
51:30
the segment of the market or the customer, whatever, so
51:32
suddenly you don't know what it means and then you go
51:35
in I guess and get them to to
51:37
paraphrase what the other said to start
51:39
questioning assumptions and suddenly you get those
51:41
eureka moments. That's exactly right you know
51:43
big organizations will tend to have a
51:46
lot of unexamined assumptions and
51:48
they have to because you know you have
51:50
to assume a lot an organization
51:52
is a huge kind of machine a huge
51:55
structure and it can be a very good and
51:57
a very successful machine and that's partly of
51:59
course the power of having an external speaker
52:01
and an external perspective. It
52:04
gives people, again, permission to challenge and
52:06
change, or just be honest or just
52:08
listen. I think most
52:10
consultants, most people who work with different
52:12
organisations will say the same thing, which
52:15
is sometimes when you just talk to people at length
52:17
and let them express
52:19
the uncertainties and
52:21
try and explain to someone else what's
52:24
going on. That's when
52:26
these little insights coalesce. Again, there's a
52:29
very simple trick here, which I've written
52:31
about and used. Explain
52:33
like I'm five, explain like I'm
52:35
six. Imagine that I'm a smart
52:39
seven-year-old and I know nothing
52:41
about your organisation at all, but I'm really
52:43
interested. You've got to explain to me
52:45
what you do in your job, but
52:47
you've got to make it work for a
52:49
seven-year-old. You can't just say, well,
52:52
I'm a VP of communications and
52:54
I manage a cross-platform strategy, taking about
52:57
conventional and new and emerging media. You
52:59
can't say that. You have to say,
53:01
well, my job is
53:03
to try and tell the world about what
53:05
we do. You have
53:08
to use ordinary language. Actually,
53:11
I come back to, I have a great memory
53:13
of being at a conference with Danny Kahneman,
53:15
among other great people a few years ago.
53:18
My memory
53:20
of him is he sat on the
53:22
back row and every now and then he just raised
53:24
his hand and he said, I'm
53:26
very sorry. I just don't
53:28
really know what you're talking about. What
53:31
do you mean? And there's someone
53:33
very distinguished at the front talking about, for
53:36
example, their research or their
53:38
AI strategy. He'd say, yeah, but why
53:41
are you doing that? What do you mean? You've
53:43
just said your strategy is
53:45
to reinvent workflows by integrating
53:47
AI. What does that mean?
53:50
Why do you want to
53:52
do that? And
53:54
of course, he had such a reputation
53:57
that these questions were taken seriously, but these
53:59
very simple questions. are very powerful. Yeah, it
54:01
reminds me, I don't know if you read about that,
54:03
but it reminds me a lot of what Richard
54:05
Feynman used to do. Oh absolutely,
54:08
yeah. Right, so you get
54:10
people who went to his office and
54:12
they suddenly started making, posing these very,
54:15
very basic questions that almost startled
54:17
the other person, but suddenly
54:19
as they went, the other
54:21
person would eventually,
54:24
usually, find out that there was some
54:26
gap in the reasoning, and you wouldn't get
54:28
to that without those questions. Absolutely right. Feynman's
54:30
a wonderful example. I think he was someone
54:33
who physicalized things as well. He
54:35
would play with the ball, throw
54:38
stuff around, make things. Anything that
54:40
helped you see more clearly was
54:42
good. He obviously made great discoveries
54:45
in quantum electrodynamics and the fundamental
54:47
physics, but also made some huge
54:49
contributions to the biological sciences
54:53
and around other ideas that he took an interest
54:55
in. I think having someone
54:57
ask these questions and the only agenda
54:59
is trying to understand better. That's the
55:01
crucial thing, right? That's critical thinking in
55:04
a nutshell. Your agenda is
55:06
just trying to really understand a bit better
55:08
what's going on. You're not trying to look
55:11
clever. You're not trying to score points.
55:13
Maybe at that point in time, you're not even trying to make a
55:16
profit, down the line you are, but
55:18
at that point in time, it's just like, what's really going on? What
55:20
do we really do here? What is
55:22
your job? What are you really doing?
55:25
What are you really worried about? These
55:27
questions are very powerful. You can't spend your whole
55:29
life answering them or asking them, but to
55:32
be able to have those questions in the room
55:35
can be transformative for individuals
55:37
and organizations. Yeah, because you'll
55:39
find out that there were
55:41
some assumptions that were not
55:43
questioned and that were actually
55:45
essential. Always. I was getting
55:48
excited. There's a lovely example from
55:50
SpaceX, say, who put a huge amount of
55:52
money and effort into
55:54
trying to catch elements
55:57
called fairings that were
56:00
shed by their rockets. They're
56:02
like covers, but very expensive covers
56:04
that would drop down when
56:06
the rocket went up and they'd fall into the ocean
56:09
and be wasted. And they're costing like a million dollars
56:11
a pop. So you're wasting millions of dollars every time
56:13
you fly a rocket, even though you're reusing the main
56:16
units of it. And they had these
56:18
genius ideas about massive robot boats with
56:20
nets and things like that to try
56:22
and catch these fairings and
56:24
reuse them. But the rate was really low. It
56:26
was really difficult and in bad weather and so
56:28
on. And then they took a few steps back
56:30
and said, well, hang on a second, what are
56:32
we doing here? We want to
56:35
reuse these things and not waste them. It will
56:37
save us millions. So we're trying
56:39
to catch them in massive robot nets. But
56:41
that's crazy. That's not working. What if we
56:43
just spent a bit more money and made
56:45
them waterproof? What if we just made
56:48
it so that these things drop into the ocean and rather
56:50
than be lost and get spoiled, they
56:52
just float, the key components
56:54
are protected, and we scoop them up. And
56:57
they did that. And suddenly, they've got
56:59
basically 100% reuse rate, because they just
57:03
took a step back and said, what are we trying
57:05
to do here? We're not trying to build a robot
57:08
space component catcher. We're
57:11
just trying not to waste stuff. So let's just build
57:14
it differently. So it floats. Yeah, that's an
57:16
interesting story. And it resonates with me because
57:18
I think we have difficulty
57:22
with adjusting our behavior
57:24
in a manner that's not incremental. So
57:27
usually what we do is we
57:29
adjust incrementally, so something changes slightly from one situation to
57:31
the other. Sometimes what's
57:33
required is a complete change. And
57:35
that's very unnatural to
57:37
us, I find. Absolutely. Yeah, in a way,
57:39
it's part of a sort of a sunk
57:41
cost fallacy, which is, you know,
57:44
you put a certain amount of time and effort into
57:46
a solution or a product or an idea. And
57:48
then, you know, your mission becomes
57:50
to put more effort into it; you
57:53
can't let it go. Whereas
57:55
actually, the most important lesson might
57:57
be, you've discovered that that doesn't work.
58:00
And what you've learned is you need to try
58:02
a different approach if you're going to get there.
58:05
And of course, science is often like that. You
58:07
put huge amounts of effort into
58:10
trying to make Einstein's
58:12
equations work at the
58:14
level of fundamental particles. You
58:17
really, desperately want it to work: God doesn't play dice.
58:19
You do not want there to
58:21
be a random or unpredictable component
58:23
in fundamental physics. So you try
58:26
and you try and you try
58:28
to save Einstein's equations. And
58:30
then eventually the experiments force you to admit it:
58:32
you take a few steps back
58:34
and, yep, uncertainty is there in
58:36
the fabric of the universe. You
58:39
have to embrace the uncertainty. You can't get rid of it. It
58:42
seems to be fundamental. You have
58:44
to change your whole notion of what is fundamental. And
58:46
there's lots of wonderful writing about
58:49
paradigm shifts in ideas. And
58:51
I guess the essence of that, Thomas
58:53
Kuhn and others wrote about the fact
58:55
that you have incremental discoveries, as
58:57
you say, that update a model.
59:00
But eventually, at some point, a
59:03
new way of thinking comes along when
59:05
the old way of thinking can
59:07
no longer explain and hold all the
59:09
new knowledge. Suddenly
59:12
a whole new paradigm, a whole new way
59:14
of thinking emerges because
59:16
you need that paradigm shift to
59:18
deal with all the
59:21
new things you've learned about the world. And
59:23
it gets unsustainable to keep using the old
59:25
model. That's right. But the
59:27
old model is still good up to a point.
59:29
We still use Newton's equations on a large
59:32
scale. We still use Einstein's equations. But we
59:34
also need Feynman and Bohr
59:36
and Schrodinger and others' equations to
59:39
do different things. You don't just throw away
59:41
the old, but we find new things and
59:43
new ways alongside it. I'm curious
59:45
about the dynamics you do, both
59:47
with the general public and in
59:49
schools and in companies. How
59:52
large a number can you have in
59:54
those meetings in the room? Or
59:57
to ask in another way, what's the ideal number of people for
59:59
those dynamics to work well?
1:00:02
I'm asking this because I find that if
1:00:05
the audience is too large, and
1:00:07
by too large, I mean over 10 people,
1:00:09
it starts getting difficult to
1:00:11
have a single conversation which is organized at the
1:00:13
same time. For me, I guess again,
1:00:16
I almost reframe it, I see
1:00:18
it the other way around. How can I have
1:00:20
something that's valuable for any number of
1:00:22
people? So I
1:00:24
love working with groups of 10
1:00:27
or 20 people. But I also do
1:00:30
classes and master classes in person for people like
1:00:32
200, 300 people, where
1:00:34
I'll get people organized in groups and
1:00:36
get them doing exercises and kind of
1:00:38
trust them with it. And then online
1:00:41
or events wise, I've done talks for
1:00:43
thousands of people, or even broadcasts for
1:00:46
audiences of millions of people, eventually.
1:00:48
And the key for me
1:00:50
is asking, well, how can you give a
1:00:52
meaningful experience to this number
1:00:54
of people? What's the thing you need to
1:00:56
do? And I guess I probably fail quite
1:00:59
often. It's harder with more people. But very
1:01:01
broadly, I try to find a way to
1:01:03
kind of trust people with
1:01:05
ideas, to give them something and then trust
1:01:07
them to make something of it. So
1:01:10
with a very large number of people, you just have
1:01:12
a pause, you have a silence, you just say, look,
1:01:14
I want you to take 10 seconds right now to
1:01:17
think about this, to
1:01:19
reflect upon this point. I want you to
1:01:21
go away and do this. I'm going to tell you this story.
1:01:23
With smaller numbers of people, but still hundreds, you can say I'm
1:01:25
going to split you into groups. And I want you to talk
1:01:27
to each other about this. Yeah, they
1:01:29
do it themselves. Do it themselves.
1:01:31
But you're right that ultimately, you know, with
1:01:33
a couple of dozen people, you can do
1:01:35
something a little more magical. And
1:01:38
you can kind of have a room and have
1:01:40
people really in the moment talking to each other
1:01:43
and working together and all
1:01:45
hearing each other. And that isn't
1:01:47
possible with larger numbers of people. Or
1:01:49
that interestingly, when I do things online,
1:01:52
I find group chat can be very powerful, because
1:01:55
actually, people can't hear many voices at
1:01:57
the same time, you know, it just becomes a shouting
1:01:59
match. But with a group chat, you
1:02:02
can say to people, you know, okay, we're going
1:02:04
to reframe this question, we're going to come up
1:02:07
with as many different ways as we can of
1:02:09
defining a word like fairness, or efficiency, these words
1:02:11
that seem so simple, but they'll actually turn out
1:02:13
to have 30 or 40 or 50 different meanings.
1:02:15
And not the same to everyone. Not the same
1:02:18
to everyone. You say to an organization, okay, so
1:02:20
here's your core values, you're an organization, and you're
1:02:22
all about delivering value for customers. So
1:02:25
each of you is now going to type, right there in the
1:02:27
group chat what you think that means in a sentence. And
1:02:29
you're going to get 60 different sentences and some
1:02:31
of those sentences are going to be contradictory. And
1:02:34
this thing that people thought we're all
1:02:36
about, you know, helping our clients
1:02:39
be more efficient. So we're all
1:02:41
going to write down what we think the word efficient means. And
1:02:43
lo and behold, it means 50 different things.
1:02:47
Yeah. But what I find strange and a bit
1:02:49
annoying there is that I think vagueness
1:02:52
in the corporate world has some appeal.
1:02:54
So if you deliver a
1:02:56
talk about excellence or greatness or
1:02:58
whatever other vague words we can
1:03:01
think of, if you do it with
1:03:03
the right rhetoric, so if you do it with
1:03:06
right enthusiasm and with good delivery,
1:03:09
it often persuades people more than something
1:03:11
which explains better and in concrete terms
1:03:13
a concept. In other words, it
1:03:16
seems that vagueness works towards
1:03:19
persuasion in the rhetorical sense. I think
1:03:21
that's right. Vagueness is very safe
1:03:24
because you can't be pinned down on it. If I said my
1:03:27
vision for this company is to be
1:03:29
bold and to be brave and
1:03:31
to be beautiful. What does that mean? Like
1:03:34
you can't call me on it, and you
1:03:36
can't disagree with it either. No one can, right? If
1:03:38
I said my vision for this company is to deliver
1:03:40
a minimum of a 15% increase in turnover and
1:03:45
profits and to be employing 25% more
1:03:47
people and to be shipping 10% more
1:03:50
product, very obviously you
1:03:52
can call me on that. Also if you're vague,
1:03:55
people are free to project their own ideas
1:03:57
onto it. I think that's right.
1:04:00
And Feynman is very useful here, particularly
1:04:02
his work in education. He worked on an education
1:04:04
board and, to the horror of
1:04:06
everybody on the board, he read all
1:04:08
the textbooks. Yeah, I read about that. And
1:04:11
he made a lovely distinction between clarity
1:04:13
and false precision. Clarity
1:04:15
is being clear about what you mean. Clarity
1:04:17
is saying in a school textbook, when
1:04:20
the car drives along the road,
1:04:23
the surface of the road scrapes off little bits
1:04:25
of rubber from the wheel. And
1:04:27
very gradually, the rubber of the wheel
1:04:29
gets scraped away by the road and
1:04:31
the energy of driving. So you have
1:04:33
to replace it with a new wheel.
1:04:37
And that's clear. You're explaining
1:04:39
to some kids about the ideas underlying
1:04:42
friction. Whereas false
1:04:44
precision is saying, over
1:04:46
time, friction causes the erosion of wheel
1:04:48
rubber. That sounds really
1:04:50
scientific, but it doesn't explain anything. It
1:04:53
just sounds good. And Feynman made the point
1:04:55
that clarity is really important. And
1:04:57
that doesn't mean trying to be hyper precise. Again,
1:05:00
to be clear is to say, I
1:05:02
want to see our organization growing
1:05:05
year on year by more than our
1:05:07
competitors. It's nice and clear. Whereas
1:05:10
perhaps false precision is saying, I
1:05:12
want to see a minimum uplift of
1:05:14
2.2% above base
1:05:16
rate on a quarterly basis or something
1:05:19
like that. Talking about the
1:05:21
future in terms like that is meaningless, because of
1:05:24
the error margins in measurement and
1:05:26
so on. Sorry
1:05:29
to interrupt, but I think Feynman's point had a lot
1:05:31
to do with clarity
1:05:33
being associated with being
1:05:36
concrete. So talking about concrete stuff,
1:05:38
whereas that false precision
1:05:40
usually employs concepts which might
1:05:42
not be vague, but are
1:05:44
not necessarily known by
1:05:46
your interlocutors. Exactly right. There's
1:05:49
these sort of two dangers really that you're hyper
1:05:52
precise in a meaningless way, or
1:05:55
that you're vague and obscure. And as you
1:05:57
see, true clarity is somewhere in the middle.
1:06:00
Clarity is explaining
1:06:02
in kind of simple concrete language what
1:06:04
you're actually talking about. The
1:06:07
values of organizations are
1:06:09
a classic area for kind of vagueness and
1:06:11
hand waving because they're sort of designed to
1:06:13
make people feel good and be vaguely nice
1:06:15
and that's very safe. People
1:06:18
can project onto it, it makes people
1:06:20
feel warm inside, but actually I
1:06:22
think it can be very, very
1:06:24
undemanding in the worst sense because
1:06:27
we come back to the idea of
1:06:29
science that a theory or an idea
1:06:31
which can't be tested or disproved isn't
1:06:34
worth very much. You know,
1:06:36
if it explains everything, it explains nothing.
1:06:38
If I have a statement with which
1:06:41
no one would disagree, it's probably not
1:06:43
worth saying. I want our organization to
1:06:45
be inclusive and bold. Well, who doesn't?
1:06:47
Does anyone want their organization to be
1:06:49
timid and un-inclusive? Probably
1:06:52
not. I want my organization
1:06:54
to be a leader in its
1:06:57
field. Well, again, do I want my
1:06:59
organization to be a follower in its
1:07:01
field? And that simple trick of
1:07:03
saying, well, is the opposite of this statement
1:07:05
even vaguely credible? Is
1:07:08
this actually anything at all? But of
1:07:10
course it can be quite dangerous to say this. A lot of people
1:07:13
just want a bit of vague,
1:07:18
feel-good noise. Yeah, you
1:07:20
want a round, I don't know if this expression
1:07:22
exists in English, like you want round talk, like
1:07:24
in the sense of not
1:07:27
having angles or edges.
1:07:30
And it's not very different from politics.
1:07:33
It's not exactly the same, but you get whenever
1:07:36
you're talking to a broad
1:07:39
audience, which is diverse, you
1:07:42
end up, I think, having to
1:07:44
use that vague speech. Yeah, it's
1:07:46
a real danger. And it's
1:07:48
a strategy. I
1:07:50
do think that the kind of the
1:07:52
greatest examples of rhetoric, which obviously are
1:07:54
rare, combine inspiration
1:07:57
and persuasion with
1:08:00
real kind of rational content.
1:08:04
We think back to Martin Luther King's
1:08:06
speeches and things like that. Visionary
1:08:09
rhetoric, but also
1:08:11
specific, painfully specific,
1:08:14
concrete, rooted
1:08:17
imagery and ideas and
1:08:19
asks. That's a very good point. And also,
1:08:21
in a sense, his speech,
1:08:23
to take that example, does
1:08:25
what we've been talking about,
1:08:27
which is to question some assumptions which are
1:08:31
vague or imperfect or unrealistic. So it has
1:08:33
clearly that rational part and then on top
1:08:35
of it, obviously, has that
1:08:37
amazing rhetoric. So you have the two
1:08:39
components working with each other. That's right.
1:08:42
And I think we're not always
1:08:44
going to get that. We're talking about
1:08:46
one of the greatest orators in modern
1:08:48
history. But it's okay, I think,
1:08:50
to look at these examples and
1:08:52
realize that it's possible to aspire
1:08:55
to say something concrete and
1:08:58
meaningful and challenging to throw down an
1:09:00
actual challenge to people, you
1:09:02
know, rather than just to
1:09:04
play it safe. Yeah. Tom, you have
1:09:06
a new book out called
1:09:09
Wise Animals. Let's talk about it. I
1:09:11
meant to talk about it earlier in our
1:09:13
conversation, but it's your fault. You were too
1:09:16
interesting talking about this topic. So I
1:09:19
only arrived at it now. Well, it's better that
1:09:21
I ask you to pitch the book. You'll do
1:09:23
it better than me. Well, it's a book I
1:09:25
loved writing because in a way, it's just about
1:09:27
a lot of things that fascinate me. It
1:09:30
tells the story of
1:09:32
how human technology
1:09:35
evolved alongside humanity
1:09:37
and how the human story
1:09:40
is entwined with tools and
1:09:42
technological artifacts from before the beginning
1:09:45
of our species, of how
1:09:47
fire and flint tools and then much
1:09:49
later language and then much later literacy
1:09:51
and so on. These are not
1:09:53
just sort of optional extras.
1:09:56
These are absolutely entwined with
1:09:58
our humanity. The
1:10:00
key lesson for me could be really
1:10:02
summed up by the idea that there's
1:10:05
no such thing as a neutral
1:10:07
tool, that the technologies we collectively
1:10:10
make and use and develop have
1:10:13
all kinds of biases and
1:10:15
potentials and properties that push
1:10:17
us towards certain kinds of
1:10:19
action and understanding that shape
1:10:22
who we are and how we understand the world. So
1:10:25
when it comes to things like artificial
1:10:27
intelligence and autonomous systems
1:10:30
and indeed social media, we
1:10:32
need to ask ethical
1:10:35
and imaginative questions about
1:10:37
what we want from these systems.
1:10:40
We need to ask what it means to build
1:10:42
systems that serve the best in us, that
1:10:45
elevate our decision making, speak
1:10:47
to the themes we've been talking about
1:10:50
so far. And the
1:10:52
message for me is that technology doesn't
1:10:54
determine our fate, that collectively
1:10:57
over time, we have control, we
1:10:59
get to shape and
1:11:01
choose between different possible futures. We
1:11:04
get to build technologies that serve different
1:11:06
ideas and purposes. So what does it
1:11:09
mean for technology to serve rather than
1:11:11
subvert democracy? What does it
1:11:13
mean to build systems that help us make better
1:11:16
and more informed decisions rather
1:11:18
than trapping us in
1:11:20
intense emotions and delusions? What
1:11:23
does it mean to build infrastructure and
1:11:26
cities that are beautiful, that
1:11:28
are livable, that are compatible
1:11:30
with the health of our planet
1:11:33
rather than destructive of it? So in a
1:11:35
funny way, it's a bit of a crazy
1:11:37
book. It's a very personal book. But it's
1:11:40
a book about technological humanism
1:11:42
and what it means
1:11:44
to talk about technology richly
1:11:46
in terms of emotion and
1:11:48
cognition and philosophy and aesthetics,
1:11:50
as well as just engineering.
1:11:53
Yeah. And I'm very curious to
1:11:56
hear your opinion on these topics because
1:11:58
you, unlike me, have not only thought
1:12:01
about these topics but also done the research
1:12:03
on the history of our relationship to technology
1:12:05
and you have this interesting
1:12:07
expression that we co-evolved with technology which
1:12:09
I think is quite beautiful, especially in
1:12:11
that broad sense in
1:12:13
which you interpret the meaning of technology.
1:12:15
There we go again about concepts, right?
1:12:17
So that's a concept where the
1:12:21
meaning you give to it includes
1:12:23
also the early tools we've used
1:12:25
and even language. That would be
1:12:27
enough for a whole conversation or even more than one so
1:12:29
we don't have time for it. But
1:12:31
I'm curious to know your view on two big
1:12:33
topics and two big challenges I think we have
1:12:35
nowadays. One is the
1:12:38
challenge digital media and
1:12:40
social media brings
1:12:42
to democracy with tribalism that
1:12:44
you mentioned earlier, populism,
1:12:47
polarization. And the other one has to do with
1:12:50
social media and children and specifically
1:12:52
advice for parents like us who
1:12:55
we have to deal with children
1:12:57
growing up in this age. But let's
1:12:59
start with the first
1:13:01
which is a topic I'm very curious about.
1:13:03
I even wrote a book which is partially
1:13:06
about this, about the impact of fake
1:13:09
news, populism, tribalism and
1:13:11
how to deal with it. And to be honest,
1:13:14
I don't know the answer. So I know that
1:13:16
it's clearly a challenge that democracy is facing. So
1:13:18
the old model of democracy was not prepared for
1:13:21
the world of social media so that's clear. I
1:13:24
don't know yet what the future will look
1:13:26
like and especially how the transition to that
1:13:28
future will look like. So if you can
1:13:30
enlighten us, Tom, on
1:13:32
the path and the destination,
1:13:34
that would be very helpful. Well,
1:13:37
the one thing I feel
1:13:39
fairly confident about is that
1:13:41
it's a mistake to focus on technology
1:13:44
in the abstract. That actually
1:13:47
talking about social media doing things
1:13:49
to society as though it's a
1:13:51
sort of force out there that just makes stuff happen
1:13:53
is the wrong way of thinking about it. We
1:13:56
need to see how any technology is
1:13:58
embedded in a particular society. So
1:14:01
technology like social media is obviously
1:14:03
where people can be polarised, can
1:14:06
find voices, can be deceived, can
1:14:08
be manipulated, it can be used by populists, it
1:14:10
can be used as part of
1:14:12
vast stretches of surveillance, it can also be
1:14:14
used for great good and education and so
1:14:16
on. But all of
1:14:18
these things are embedded in particular
1:14:21
societies, in particular ways. And
1:14:23
when we talk to people about this, we
1:14:25
find again and again, that not that many
1:14:27
people just have their minds changed by social
1:14:29
media. It's more
1:14:31
that the things that they already want
1:14:34
to believe or fear, or are encouraged to
1:14:36
believe, they then seek out in social
1:14:38
media and it maybe amplifies these. So
1:14:40
I think you need to talk about
1:14:43
it in the context of people's
1:14:46
jobs, people's security
1:14:48
of earning, the levels
1:14:50
of equality and exclusion within a
1:14:52
society, its education and so on.
1:14:55
To some degree, you also need to talk
1:14:57
about the regulation of technology companies and the
1:15:00
ways in which monopolies are or aren't allowed
1:15:02
to exist and the ways
1:15:05
in which certain public goods
1:15:07
and assets are or aren't
1:15:09
exploited or undermined. So
1:15:11
ultimately, and this is a bit of
1:15:14
a non-answer, it becomes a question of
1:15:16
sort of governance and what
1:15:18
a society believes itself to be and
1:15:20
what it prioritises and the ways in
1:15:22
which it regulates. And
1:15:24
ultimately,
1:15:26
I think, social media can be
1:15:28
most useful when it highlights deep
1:15:31
sources of discontent and division
1:15:33
and exclusion. We look
1:15:35
across Europe, we see the rise of
1:15:37
the far right, of xenophobia
1:15:40
and isolationism, we
1:15:43
see a lot of fear about the future, we see a
1:15:45
lot of despondency, we see a lot of disinformation
1:15:47
and misinformation, but of course, for
1:15:50
me, I actually think it's a mistake
1:15:53
to empower governments to legislate
1:15:55
for what is or isn't true. That
1:15:57
seems to me to be very dangerous, it seems to me a
1:15:59
tool that would probably backfire, one that
1:16:02
is in itself tending towards
1:16:04
the totalitarian. What
1:16:06
does it instead mean to
1:16:08
reinforce structures of scrutiny, to
1:16:11
reinforce tolerance on a societal
1:16:14
level, to have an
1:16:16
education system, to have social
1:16:18
safety nets, to have systems
1:16:21
of justice that address
1:16:24
the concerns underlying
1:16:27
these fears? If people worry about
1:16:29
their jobs and their incomes and
1:16:31
their future and their planet and
1:16:33
their society, they're telling a government
1:16:35
something very important. But you
1:16:38
can't fight that battle in the realm of social media.
1:16:40
It's very easy to scapegoat it.
1:16:43
And I think, actually, I almost
1:16:45
end up thinking that the excessive focus
1:16:48
on technology is a red herring, is
1:16:50
a distraction from the deeper issues. So
1:16:53
this is often my answer when it comes to technology,
1:16:55
that we have to be a little suspicious of
1:16:58
the idea that the technology in itself
1:17:00
can either fix things or
1:17:02
make them worse and make them terrible. Interesting.
1:17:05
I have a different view there, I'd
1:17:08
say, because of course we don't
1:17:10
have the counterfactual here. So we
1:17:12
don't have the parallel universe in which social media
1:17:14
was not invented. But I
1:17:17
find the changes that we have had
1:17:19
in society and democracy so
1:17:22
big that I don't find any other cause
1:17:24
than the impact of social media.
1:17:26
And when I say social media, I'm
1:17:28
including also other platforms of internet, and
1:17:30
I'm including WhatsApp and so on. Of
1:17:33
course, again, I don't have the counterfactual, so I cannot
1:17:35
prove this. But I find, for instance,
1:17:37
I don't know what to think of this, but I
1:17:39
find the parallel with the invention of the printing press quite
1:17:42
persuasive. As you
1:17:44
know, when Gutenberg invented the printing press, suddenly
1:17:47
what it meant in practice – it was
1:17:49
not actually that sudden, because back then things
1:17:51
took a bit longer to have their effects –
1:17:53
but within a few decades books started
1:17:55
being printed and so the readers of books started becoming
1:17:58
more and more numerous,
1:18:00
far more in number than they used
1:18:02
to be. So the previous elites that were
1:18:04
basically a few hundred or a few thousand
1:18:06
people that had access to knowledge no
1:18:08
longer had the monopoly of that knowledge and
1:18:10
suddenly you have a lot of people reading
1:18:13
which was not of course the masses but
1:18:15
many more people than before and
1:18:17
that changed, that had a huge impact in
1:18:19
society and it led for instance to the
1:18:21
religious wars. And then after the
1:18:24
religious wars we actually got enlightenment which was much
1:18:26
better than what we had before but we had
1:18:28
to go through the religious wars and I feel
1:18:30
that we are kind of living through something similar. So
1:18:33
here I would agree with you
1:18:35
but I think the societal
1:18:37
change is the rise of mass interactive
1:18:39
media in general, not social media
1:18:41
in particular. Social media is
1:18:43
very significant but I think… So again the
1:18:45
concept was misleading.
1:18:47
Yes but I think the analogy with Gutenberg is
1:18:50
very close in some ways. Media
1:18:52
and mass interactive media, the mobile
1:18:54
phone as a technology, absolutely.
1:18:57
Gutenberg printing transformed
1:19:01
the relationship of the majority of
1:19:03
humanity with written history and records.
1:19:06
It had been a province of the elite through
1:19:08
most of history, through most of
1:19:10
geography and it became over
1:19:13
a few centuries the province
1:19:15
of the majority and now the
1:19:18
creation of mediated records is becoming…
1:19:20
something everybody does all the time and
1:19:22
not only that but also now
1:19:24
the creation of
1:19:28
records of media is arguably becoming
1:19:30
the province of machines,
1:19:33
of machine agents, of AIs and so on.
1:19:35
So this is huge, yes this is huge.
1:19:38
I just don't want to focus too much on
1:19:40
social media in particular but it's an immense revolution
1:19:42
and I don't think we can
1:19:44
fully understand it. I do think
1:19:47
that social media can point us
1:19:49
towards some of the dangers of course which
1:19:51
is about the spread of
1:19:53
emotionally impactful information at the expense
1:19:55
of truth, of
1:19:58
tribalism, the undermining of… common
1:20:00
bodies of knowledge and understanding, the
1:20:03
empowerment of dictators
1:20:05
and manipulators and surveillance of all stripes.
1:20:07
So yes, these are huge social trends,
1:20:10
but I guess I would want to
1:20:12
focus upon the bigger picture and
1:20:15
the mutual relationship. So as
1:20:17
you say, Gutenberg is interesting partly because
1:20:19
existing religious institutions tried to use
1:20:21
the Gutenberg press and presses
1:20:24
to spread the truth as they saw it, and
1:20:26
then Luther and others tried to
1:20:28
spread their truth greatly to the shock of
1:20:30
the Catholic authorities. You know, it was the
1:20:32
channeling of old conflicts in through new channels.
1:20:35
And I think it's this combination of the
1:20:37
old and the new. So I wouldn't dispute
1:20:39
that. Which is arguably what we are witnessing.
1:20:41
Yes, it is. Absolutely.
1:20:43
And I guess this is the thing you
1:20:45
see, I don't have, I guess,
1:20:48
a kind of a single answer to
1:20:50
it. You are a proper philosopher, you have
1:20:52
questions not answers. I have questions. This is
1:20:54
what philosophy does. We try to get people
1:20:56
to ask better quality questions. Although ironically enough,
1:20:59
I do greatly enjoy trying to provide concrete
1:21:01
policy advice and, you know, concrete ideas. I
1:21:03
think it's very important to come up with
1:21:05
specific recommendations. Last
1:21:08
question. What do you think
1:21:10
on the impact of social media on mental
1:21:12
health? As you know, there's a big discussion
1:21:14
around this; some people call it the
1:21:17
mental illness epidemic. Jonathan Haidt, who as we might know
1:21:19
is a social psychologist, has a new
1:21:21
book out called The Anxious Generation.
1:21:23
I was listening to a podcast with him the other
1:21:25
day. And he has this, I
1:21:28
mean, these troubling numbers that show that mental
1:21:31
health has deteriorated a lot, especially
1:21:33
since 2012, which is basically the
1:21:35
turning point when social media really
1:21:38
got big, and its impact was felt especially by
1:21:41
girls. And I have two daughters, although they
1:21:43
are still too young for that. But I talk
1:21:45
with friends who have children, especially those
1:21:47
who have girls, when they get even
1:21:50
sometimes pre-teens and they want to have their mobile phones,
1:21:52
it's a big challenge. They actually don't know
1:21:54
how to manage it, whether they should give it to
1:21:56
them, whether they should monitor it. Sometimes they want
1:21:58
to do things in a certain way, they want to
1:22:00
be more restrictive. But their children's
1:22:03
friends, or rather those friends' parents,
1:22:05
are not being equally restrictive, so they
1:22:07
have no option then but to adopt the same
1:22:09
behavior. It's a big challenge, I feel. As
1:22:12
a parent, I struggle with this. I
1:22:14
think it can be very, very
1:22:16
disconcerting to battle with
1:22:18
your children from a young age over
1:22:21
something that you feel may be making
1:22:23
them less safe, making them less well,
1:22:25
that may be powerful in ways that
1:22:27
they are ill-equipped to deal with. One
1:22:30
of the great challenges, I think, of
1:22:32
parenting and education is to try and
1:22:34
help young people grow into
1:22:38
happy digital citizens who can use
1:22:40
technology well, who can find balance.
1:22:42
I think the evidence is perhaps
1:22:44
more equivocal than Haidt's argument suggests,
1:22:46
and there's real evidence there. I'm
1:22:48
fairly familiar with it. He makes
1:22:50
a very strong case, but people like
1:22:52
Talakow and others have pointed out that some
1:22:54
of the evidence… Oh, you heard the
1:22:56
same podcast then. Yeah, around global
1:22:59
suicide rates and country
1:23:01
to country comparisons and so on, means that
1:23:03
the big picture is
1:23:05
unclear. I think very
1:23:07
broadly, there's a very strong
1:23:09
case for caution. There's
1:23:11
a very strong case for schools and governments
1:23:14
helping parents to put more
1:23:17
limits and barriers around use
1:23:19
of technology and to understand it better, to be less
1:23:21
afraid. I think what every parent
1:23:24
wants is to have good conversations
1:23:26
with their children about what they're doing. To keep
1:23:28
children safe, of course, you need
1:23:30
to know what they're doing, and they need to
1:23:32
trust you with that. I think
1:23:35
one of the simplest and greatest dangers is just
1:23:37
children having lots of private time on
1:23:40
devices, unsupervised by their parents, with
1:23:43
unfettered access to the internet. Not
1:23:45
because phones are inherently evil or
1:23:47
bad, but because the
1:23:49
amount of danger,
1:23:52
violence, manipulation and so on that can come
1:23:55
at a young person through the screen of
1:23:57
a phone is enormous. For me, perhaps the
1:23:59
most powerful point that Haidt and others have
1:24:01
made is that very broadly it's
1:24:04
easy today to give children far
1:24:07
too little risk and
1:24:09
independence in the real physical world and
1:24:11
far too much risk and independence in the digital
1:24:14
world. This balance is wildly
1:24:16
out of kilter: children should
1:24:18
be spending more time exploring
1:24:22
on their own physically, taking some risks,
1:24:24
doing stuff in the world, in
1:24:26
their towns, in their
1:24:28
neighborhoods and should be spending much less time
1:24:31
going into risky unsupervised areas of
1:24:33
the internet and being left alone
1:24:35
for hours with
1:24:37
unfitted access. But how do you do
1:24:39
that in practice? Because I haven't
1:24:43
reached that phase yet, but is there a
1:24:45
way in which you can give like partial
1:24:47
autonomy or partial privacy to kids? Because if
1:24:49
we look at the time in which we
1:24:52
both grew up, obviously there
1:24:54
was much less that we could do with the
1:24:56
internet, but we had full privacy. At least I
1:24:58
had full privacy in practice. I could do whatever I
1:25:00
wanted, in some part because
1:25:02
my parents didn't exactly understand at that
1:25:04
point how computers and the internet worked. But
1:25:07
now, as I see it, you can either not give
1:25:09
kids a mobile phone, or give them
1:25:11
a mobile phone and see what they are doing or give
1:25:14
them a mobile phone and not see what they are doing.
1:25:16
So you cannot give them
1:25:18
like partial privacy. So you can either monitor
1:25:21
them or not monitor them. Maybe
1:25:23
I'm being naive. I don't think you are. I
1:25:25
think it's incredibly difficult and I would just be lying
1:25:27
if I said I had a perfect answer. My
1:25:29
imperfect answer is that a mobile phone is
1:25:31
a physical object. And
1:25:34
very broadly speaking, you know,
1:25:37
children can have it for some time, for some
1:25:39
apps, but not
1:25:41
all of the time and not for all apps
1:25:44
and not for all services. It gets harder as
1:25:46
children get older. Yeah. What's your concrete advice? So
1:25:48
in terms of age, let's say I
1:25:50
was a parent, my
1:25:53
oldest daughter was turning 12 and
1:25:55
she wanted a mobile phone with access to social media.
1:25:57
Would you advise that? Or would you say I should wait until
1:26:00
she was 14 or 15? I would try
1:26:02
to suggest a mobile phone with no access
1:26:04
to social media. I would try to suggest
1:26:06
doing a lot of stuff on shared screens
1:26:08
and shared spaces, even on tablets. I play
1:26:11
video games with my kids and so on, who are 8
1:26:13
and 10. But already, you know,
1:26:15
it can be a losing battle. Things
1:26:17
like that. And I think trying
1:26:20
to have very clear time boundaries and
1:26:22
space boundaries around it. I don't like
1:26:24
the idea of private screens in bedrooms.
1:26:26
I don't like the idea of there
1:26:29
being just, you know, a lack of
1:26:31
variety in time. I think
1:26:33
it can be really hard to resist peer pressure. I
1:26:36
just try to keep talking to my kids about
1:26:38
what's going on and taking interest in it and
1:26:41
having conversations. But most importantly, I
1:26:43
think the ideal for me is
1:26:46
working with them to find kind of rules
1:26:48
and boundaries to understand that it's a contract,
1:26:50
that you're not enemies, that you're just trying
1:26:52
to find a way to balance
1:26:54
the positives and the negatives and make sure
1:26:57
that if there's bad or
1:26:59
worrying or upsetting stuff, they come to you. But
1:27:01
also that the home and the family can
1:27:04
be a place of safety. Because one of the great dangers
1:27:06
of a phone for all of us can be that there's
1:27:08
no time and no space in which we
1:27:10
are off and safe.
1:27:13
Perhaps the best concrete advice, which is hard to take,
1:27:16
is to model the behaviors yourself that you want to
1:27:18
see. That is, you yourself,
1:27:20
you know, put your phone away, turn it
1:27:22
off, put it in a box, don't sleep
1:27:24
with it in your bedroom, you know, use
1:27:26
different technologies. And I try, and often fail, to,
1:27:28
you know, drive out the bad with the good,
1:27:31
so to speak. I try to buy great video games
1:27:33
for my kids. You know, I
1:27:35
buy the Zelda games, the Mario games,
1:27:38
Minecraft is a wonderful game, try to
1:27:40
celebrate that rather than games
1:27:42
that are very exploitative based on, you know,
1:27:44
kind of freemium models. I try to
1:27:46
get them reading on the Kindle and using interactive
1:27:49
books and doing coding rather
1:27:51
than just being on YouTube
1:27:54
the whole time. But I also get it
1:27:56
wrong. And I just desperately hope that
1:27:59
I can provide enough balance
1:28:02
and love and support and options and
1:28:04
opportunities that my
1:28:06
children will come to me when
1:28:09
they are worried or sad and
1:28:11
that their home and their life will
1:28:13
have enough space and time
1:28:15
and safety in it that they
1:28:17
can do the learning they need to do. Yeah.
1:28:20
Tom, I'm mindful of your time so I'll let you
1:28:22
go. It was a great conversation. Thanks. I mean, I
1:28:25
wish you all success for all
1:28:27
your endeavors, especially your book Wise Animals,
1:28:29
which I encourage everyone who's listening
1:28:31
to read. I mean, as you understood
1:28:33
already, Tom has a very sharp mind
1:28:35
so that's my recommendation. Thanks, Tom. Thank
1:28:37
you so much. It's been my very
1:28:39
great pleasure. Thank
1:29:05
you.