Episode Transcript
0:00
Hey there! Did you know Kroger always gives
0:03
you savings and rewards on top of our
0:05
lower than low prices? And when you download
0:07
the Kroger app, you'll enjoy over $500 in
0:09
savings every week with digital coupons. And don't
0:12
forget fuel points to help you save up
0:14
to $1 per gallon at the
0:16
pump. Want to save even more? With
0:18
a boost membership, you'll get double fuel points
0:20
and free delivery. So shop and save big
0:23
at Kroger today! Kroger, fresh
0:25
for everyone. Savings may vary by
0:27
state. Restrictions apply. See site for details.
0:30
Get ready for family fun that's out of
0:32
this world at Big Air Trampoline Park
0:35
inside Fieldhouse, USA at the Polaris Mall.
0:37
With over 40 attractions included in
0:39
the admission price, there's something for
0:41
everyone. From birthday parties to toddler
0:44
time, and even daily deals like
0:46
Mega Jump Mondays and Wednesdays, 2-for-1
0:48
Tuesdays and family fun packages on
0:50
Thursdays and Sundays, Big Air
0:52
Columbus does it all. Big
0:55
Air Columbus, where the fun
0:57
never ends. Visit bigairusa.com/Columbus for
0:59
details. Hello
1:02
and welcome to Intelligence Squared, where
1:04
great minds meet. I'm head
1:06
of programming Connor Boyle. Coming up
1:08
on the podcast, Madhumita Murgia, the
1:11
writer whose work focuses on technology
1:13
and society, is here to discuss
1:15
her book, Code Dependent, a study
1:17
of how technology and AI, designed
1:20
with idealistic intent, is creeping into
1:22
our everyday infrastructure to have a significant
1:24
effect on our lives, and not always
1:26
for the better. Madhumita is AI editor
1:29
for the FT, and joining her in
1:31
conversation for this episode is Carl Miller,
1:33
co-founder of the Center for the Analysis
1:35
of Social Media at the Think Tank
1:38
Demos, and author of The Death of
1:40
the Gods: The New Global Power Grab.
1:42
Here's Carl with more. Madhu, very warm
1:44
welcome to you. Hi, lovely
1:46
to be here. So the premise of the
1:49
book, what made you really want
1:51
to focus on the kind of
1:53
human consequences I suppose of AI?
1:56
So I've spent over
1:59
a decade at this point writing
2:01
about AI, which feels
2:03
mad because I think my entire working
2:05
career has been 11 years.
2:07
So basically, from very early on
2:09
at Wired, I was
2:11
fascinated by these technologies. And back then, it
2:13
was very much sort of sci-fi-ish,
2:17
you know, the future of technology
2:19
kind of stories on brain machine
2:21
interfaces and so on. But
2:24
over the years, it's evolved into something
2:26
that's kind of embedded so much into
2:28
our daily lives. And because I've
2:30
been writing about it over that period, I
2:32
kind of felt like the stories I
2:35
was writing were evolving too. And
2:37
were going from kind of, oh, look at
2:39
this amazing, crazy fringe thing to, you know,
2:43
the DWP is using AI technologies to
2:45
decide who should get benefits, and
2:48
that feels like such a mundane application,
2:50
right? Or insurance companies are using this
2:52
to price, you know, what
2:55
your, you know,
2:57
premium should be for car insurance, so
2:59
that the whole story changed as the technology
3:01
evolved. And for me, I found
3:03
that what I was writing and what I was
3:06
reading about AI around the world,
3:08
it often was kind of so focused
3:10
on the magic of the
3:12
technology itself or how it works or what
3:14
it could do, because of
3:16
the sort of sci-fi aura
3:18
around it, that we were not
3:21
noticing how it was transforming
3:24
our everyday lives in kind
3:26
of often hidden ways. So when
3:28
I set out to write it, I wanted to do
3:30
something that no one else was doing, which was looking
3:32
at just the lives of ordinary people,
3:35
not the demigods that we
3:37
put on pedestals who are building these
3:39
systems, who are fascinating, of course, you
3:41
know, the innovators and the entrepreneurs, but
3:44
really to move out of the bubble of Silicon
3:46
Valley and look at where it's reached. You
3:49
know, the far flung corners unexpectedly
3:51
like Argentina or Kenya
3:53
or rural India, where, you know,
3:56
people are using these systems or
3:58
are being subjected to these systems. I then
4:00
wanted to go and find the story
4:02
and kind of tell it in a
4:04
very human way, also for audiences who didn't
4:06
know that they cared about technology, or maybe
4:09
thought that they don't, but who might
4:11
be like, wow, this is a part
4:13
of my life now. So
4:15
that was the motivating question,
4:18
the impulse that I started out
4:20
with. And you paint some
4:22
really vivid scenes throughout the book,
4:24
pen portraits of parts of it.
4:26
It strikes me at times like
4:28
something like a travelogue, in a way,
4:30
that you bring the cognitive
4:33
and the physical place into the
4:35
book and into the discussion. Was
4:37
that deliberate as well, to
4:39
kind of really try
4:41
and introduce the reader to
4:43
the actual places, be that
4:45
a barrio in Argentina or a
4:47
law enforcement context
4:49
in Amsterdam, where these technologies were
4:51
actually having impacts? Yeah, and
4:53
the reason for that is, I think,
4:55
often, you know, if you read
4:58
what's out there about AI, you
5:00
could be forgiven for thinking this is
5:02
all happening either in California or
5:05
in sort of very developed Western nations,
5:07
and so any time I talk about
5:09
the impact on people, I felt
5:11
like I needed to talk about where
5:14
they were living, what that looks like,
5:16
and how their communities and
5:18
their culture play into
5:20
how AI
5:23
is implemented. And I think
5:25
you can talk a lot, you
5:27
know, theoretically about the impacts of
5:29
AI on people, the ethical issues around
5:31
AI, but nothing brings it to life
5:34
better than going and meeting somebody
5:36
and kind of talking to a community about:
5:39
what did this mean for you? How did
5:41
it change your life? And
5:43
also, you know, to go to
5:45
places that you wouldn't expect AI to be, and
5:47
I wanted to kind of bring that home
5:49
to my writing. And we will meet
5:52
some of these people in a moment.
5:54
But overall, was it
5:56
your sense that as you stepped
5:58
away from the kind of clinical land and
6:01
the venture capital pitches and
6:03
the kind of the
6:06
ethical frameworks and into
6:08
these real settings, that it
6:10
had all got so much messier? So,
6:13
you know, as AI kind of
6:15
tangled with real life, you had
6:17
unintended consequences, you
6:21
know, a kind of human ambitions,
6:23
and sometimes, you know, like brazenly authoritarian
6:26
applications as well, but it all just
6:28
becomes so different from what
6:30
it might look like on paper or might
6:33
look like indeed in Silicon Valley. That's
6:35
exactly, that's the heart of the whole
6:37
book, I think, which is that
6:40
you can, you know, code something on
6:42
a computer and expect it to behave
6:44
in a certain way, but partly
6:47
because of the nature of AI itself,
6:49
which is that, you know, it's a
6:51
predictive engine, it's not just pure Q&A,
6:54
black and white. There are so many gray
6:56
areas in terms of how they kind
6:58
of output, whether that's images
7:01
or words with generative AI
7:03
or decisions with sort of
7:05
more statistical AI systems, you
7:07
know, it's messy, it's
7:10
not black and white.
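A minimal sketch of that point, in Python (the feature names, weights, and threshold here are hypothetical illustrations, not anything from the book or the episode): a statistical model emits a graded score, and the black-and-white answer only appears when a human chooses a cutoff.

```python
import math

def risk_score(features: dict, weights: dict) -> float:
    # Toy logistic model: weighted sum squashed into a 0-1 score.
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs, for illustration only.
person = {"truancy_rate": 0.3, "prior_contacts": 1.0}
weights = {"truancy_rate": 1.2, "prior_contacts": 0.8}

score = risk_score(person, weights)  # ~0.76: a gray area, not a verdict
flagged = score > 0.5                # the cutoff is a human policy choice
print(f"score={score:.2f}, flagged={flagged}")
```

The model never says yes or no; turning the gray score into a decision is where the messiness she describes enters.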
7:12
And in particular, as you say, when it's kind of
7:15
introduced into a human context where traditionally
7:17
humans have been in charge or are
7:19
the experts, you find
7:21
that, you know, actually unexpected
7:24
things happen, even
7:26
people with good intentions, you know,
7:28
who implement these systems, it
7:31
ends up having, you know, really harmful
7:33
results. So, you know,
7:35
you mentioned Amsterdam, that for me was
7:37
a really perfect example of, you
7:39
know, a city that was hoping to
7:42
do good, they introduced an AI system
7:44
to predict future criminals
7:46
amongst children, you know, which sounds
7:48
dystopian, but their goal was to find
7:50
families that needed help, that
7:52
needed guidance and to come in early
7:55
and kind of help these children, mostly
7:57
boys, to kind of stay on
7:59
the right track and to support them. But
8:02
really what ended up happening was it felt like they
8:04
had a target on their backs. You
8:06
know, they became much more aware
8:08
of and entangled with the police system.
8:11
You know, it was far more punitive
8:13
and coercive than kind
8:15
of protective, which is what was intended.
8:18
And so I think there are lots
8:20
of these examples of the messiness of
8:23
human nature and human society that
8:25
we can't predict until we start rolling
8:27
AI out and that
8:29
we need to be aware of as well. So just
8:31
staying with Amsterdam, because that was such a
8:33
fascinating case study. Was the moral
8:36
of the story there that the
8:39
prediction can become kind of self-fulfilling?
8:41
So in the very act of
8:43
these models being used to shine
8:45
a spotlight on some people rather than
8:47
others, actually made it more
8:49
likely for them to have more interactions with the
8:51
police and more kind of experiences
8:55
with enforcement authorities than
8:57
other people that hadn't been kind of
9:00
nominated by the algorithms
9:02
in the same way. Exactly that. Yeah.
9:04
So, you know, they were coming up
9:06
with these lists and these were lists
9:08
of mostly boys, majority
9:11
Moroccan immigrants in Amsterdam. So,
9:14
you know, in some ways a very
9:16
sort of specific type of list of
9:19
young people. And while,
9:21
you know, as I said, they were
9:24
supposed to support these families, many of
9:26
which, you know, were single-parent families,
9:28
single-mother families. Really, you know,
9:30
it became a
9:32
self-fulfilling prophecy, essentially, where
9:34
children who maybe were in a little
9:36
bit of trouble, there was some truancy,
9:39
maybe had committed some sort of low
9:41
impact, as they call them, crime,
9:43
started to feel like this was their
9:45
destiny. And they were constantly being pulled up
9:47
by police. They were recognised in public by
9:50
police who would kind of call out their
9:52
names. And all
9:54
of this fed into them feeling like they'd already done
9:56
something bad. And so they might as well do
10:00
more of it, which is if you've ever been
10:02
a teenager, it kind of feels like the obvious
10:06
outcome of that. And also
10:08
in some cases they became targets for drug gangs
10:10
and things like that because they knew that these
10:13
boys could kind of
10:15
get into trouble anyway and the police were
10:17
watching them and they sort of used that
10:19
to kind of force them into
10:22
committing crimes that they maybe otherwise wouldn't have.
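One way to see the self-fulfilling dynamic described here is a toy simulation (an illustrative assumption, not data from the book): two people with identical underlying behaviour, but the flagged one is watched more closely, so more of their behaviour is recorded, which in turn attracts more watching.

```python
import random

def recorded_incidents(flagged: bool, rounds: int = 10) -> int:
    # Hypothetical probabilities, chosen only to illustrate the loop.
    scrutiny = 0.6 if flagged else 0.1   # chance any misstep is observed
    recorded = 0
    for _ in range(rounds):
        misstep = random.random() < 0.3  # same behaviour odds for everyone
        if misstep and random.random() < scrutiny:
            recorded += 1                # only observed behaviour enters the data
        # each record draws more attention in the next round
        scrutiny = min(0.9, scrutiny + 0.05 * recorded)
    return recorded

random.seed(1)
print("flagged:  ", recorded_incidents(True))   # tends to be higher...
print("unflagged:", recorded_incidents(False))  # ...despite identical behaviour
```

Feed those records back into the next round of scoring and the list confirms itself.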
10:25
So yes, so I think often trying to
10:27
prophesy something through a predictive system makes
10:29
you feel like you have a fixed path. And
10:32
for something that's so changeable
10:34
as human behavior, especially like as
10:37
a child or a young person,
10:39
we all know how much you can
10:41
change between 16 and 26. It
10:45
did feel very punitive for
10:47
these children. So that's
10:50
kind of one of the unintended consequences I
10:52
think of using AI to make what
10:54
looks like a mathematical prediction about
10:56
somebody's life. Whilst we're on
10:58
the topic of young people's lives,
11:00
especially being shaped by these predictions,
11:03
tell us a bit about Argentina and
11:06
pregnancy predictions and what you saw there.
11:08
Yeah, that was so in some ways
11:10
related, right? But in such a completely
11:12
different part of the world. So
11:14
the system that algorithm at the
11:16
heart of that story was one
11:18
that was attempting to
11:21
predict teenage pregnancies
11:23
amongst families. And
11:25
this was in Salta, which is
11:27
a small town in the north
11:30
of Argentina, near the borders of
11:33
Bolivia. And the
11:35
system was kind of developed
11:38
in conjunction with Microsoft by a
11:40
local bureaucrat who had worked as
11:42
a data scientist. And he really felt there
11:44
was a problem there. There
11:47
was rising teenage pregnancies. And in
11:49
many cases, these girls ended up
11:51
having to start
11:54
working at a young age to support their
11:56
families and this kind of cycle of poverty
11:58
perpetuated. It was an issue
12:01
that existed that, yeah, social workers and
12:03
others had been trying to address. Often
12:05
it was found in
12:07
the barrios, amongst
12:09
socioeconomically disadvantaged populations who tended
12:12
to be from the
12:14
local indigenous population rather than the
12:16
kind of European
12:20
immigrants. And so there
12:22
were a lot of kind of
12:24
socio-cultural issues associated
12:26
with these teenage pregnancies. Also
12:29
remember, at the time,
12:31
until 2020, abortion was
12:33
illegal in Argentina; it's a very
12:35
Catholic community as well. So all
12:37
of this played into the fact that he
12:39
felt an algorithm to predict these pregnancies would
12:41
be the best way to tackle them because
12:44
then they could, similar to Amsterdam, target,
12:46
let's say, public resources towards the
12:49
families and help them, you know,
12:51
to prevent what he saw as this
12:53
negative outcome, and help families
12:55
to get jobs, help the
12:57
girls to be educated. But
12:59
having kind of painted a
13:01
picture for you of the sort of
13:03
small town, highly Catholic,
13:06
the socioeconomic divide between
13:08
indigenous and, you know,
13:10
non-indigenous populations, you can
13:12
see why this could play
13:14
out to be negative,
13:17
you know, in terms of the
13:19
consequences. And, you know, what
13:21
are you going to do, turn up to a family
13:23
and say your fourteen-year-old is going to be
13:25
pregnant? Like, how on earth are they going to cope with
13:28
that? So I travelled up there, I met with
13:30
this former bureaucrat to really understand
13:32
his motivations: how did he expect to roll
13:34
something like this out, how did he speak to
13:37
the families? And what I found was it
13:39
wasn't, you know, that sort of dystopian dark outcome
13:41
that you might expect, where he was
13:43
going around telling people they were going to get
13:45
pregnant. It was
13:48
something much more sad. It
13:50
may have been banal, but
13:52
equally disappointing, I think, which is
13:54
that nothing happened for these
13:56
people. Their lives were not improved in
13:58
any way. The algorithm
14:00
was sort of suspended when
14:03
he left government. Nobody did anything with
14:05
all the data that was collected. And really
14:07
the only people who seemed to benefit were
14:10
Pablo, the bureaucrat who's now gone on
14:12
to be a startup founder, selling this
14:14
idea to other Latin American and African
14:17
governments as a way to sort
14:19
of improve socioeconomic conditions. And
14:21
possibly Microsoft, that it gave them a bit
14:24
of a taste of working with public authorities
14:27
and kind of some experience in that area. So really
14:29
the people it was meant to benefit never saw
14:31
any benefits from it, along with
14:33
the kind of very problematic issues of trying
14:36
to profile girls who were likely to get
14:38
pregnant. So I think, again, lots of unintended
14:41
and social consequences there that
14:43
could have been much better anticipated,
14:45
I think. I think you were walking
14:47
through one of those neighborhoods with
14:49
an activist who says, the
14:52
problems are blindingly obvious. Yeah. We
14:55
need sanitation, we need nutrition, we
14:57
need education. We don't need an
14:59
algorithm to pinpoint individuals. In fact,
15:01
entire neighborhoods are vulnerable and it's
15:03
clear why they are. Exactly. And
15:05
she was not even an activist. She was literally
15:08
an older woman who had been living
15:10
as part of the Barrio community for
15:12
many years who felt a
15:14
personal responsibility to kind of
15:17
look out for her community. And yeah,
15:19
she walked with me through this neighborhood
15:21
with one of their local councillors. And
15:23
it was so clear to her what needed
15:26
fixing, right? The rains were coming,
15:28
the roads were in bad condition, young people
15:30
needed a place to gather. And
15:33
none of that was being addressed. Instead,
15:36
politicians were looking for shiny solutions
15:38
for what really just needs a
15:41
human; kind of, you need more
15:43
social workers, more support. So yeah,
15:45
I think for me, that was
15:47
the kind of contrast where
15:50
the promise of technology sometimes
15:52
it just doesn't match up to what you
15:54
actually need in reality. In
15:57
this particular use case, Madhu, which
15:59
no doubt we'll see more
16:01
of in the future: the use
16:04
of these kinds of analytics technologies
16:06
to profile individual risks in
16:08
a way to allow more targeted use of public
16:11
funding. Is there a
16:13
philosophical objection to this around the
16:16
way in which it kind of
16:18
actually removes individual autonomy and choice
16:20
and kind of turns people into
16:22
like floating probabilistic clouds of risk?
16:26
I think that yes, so
16:28
philosophically I think if
16:31
you know that you have a sweeping social issue,
16:33
we should be looking at it from sort of multiple
16:37
perspectives, whether that's social
16:39
workers, governments, educators,
16:42
all coming together to figure out how to fix
16:44
something rather than just bringing in data scientists
16:47
to target individual families. But
16:49
I think the other issue is also maybe
16:52
statistical systems can help under-resourced
16:56
say local councils
16:58
or other public institutions to figure out how
17:00
to deploy resources. I can see why that
17:02
could be attractive, but it's so important to
17:05
kind of figure out how you include the
17:07
actual communities there. So in both cases, the
17:09
biggest issue for the people I spoke to,
17:11
so in the case of Amsterdam, I spent
17:14
time speaking with one of the mothers of
17:16
two boys who were on this list. Her
17:19
issue was being completely excluded from the
17:21
discussion, right? She didn't know why her
17:23
children were on the list. She felt
17:25
like she was being told it was
17:27
her fault, that she
17:29
had this whole host of lawyers and
17:31
other sort of public workers
17:33
coming in and kind of making her feel
17:36
like she wasn't up to scratch at
17:38
her job as a mother and really cut
17:40
her out of the whole decision making. And
17:42
if you're trying to kind of strengthen children's
17:44
futures, how can you cut their mothers out?
17:48
And so I think so much of this is we
17:50
need to figure out if we're going to
17:53
deploy it in, you know, via government, how
17:55
do you include the people that it's predicting
17:57
things about, so that they feel
18:00
that they have a voice and some
18:02
agency? Actually, with Diana, the mother
18:04
in Amsterdam, you know when she eventually
18:06
kind of wrote to the mayor,
18:08
you know, they
18:11
provided a counsellor for her
18:13
who kind of worked with her rather
18:15
than against her. She actually found that
18:17
it was helpful ultimately to
18:19
be kind of assisted by this counsellor
18:21
from the city to kind of get her
18:23
family back on track. And she now
18:26
is of the opinion that it's not all bad,
18:28
but it's really about, you know, how you frame
18:30
the thing: is it a target on
18:32
your back? Have you done something bad that you're
18:35
meant to be punished for? Or is this genuinely
18:37
meant to be like a restorative justice thing?
18:39
In which case, you know, you need to
18:41
include these voices, the
18:43
people that you're using AI systems on.
18:45
Now, some of the first people
18:48
you write about are the people actually
18:50
doing the training, the kind of labeling
18:52
and cleaning of the data. So
18:54
tell us a bit about this
18:56
kind of global precariat, I think, is how
18:59
you describe them. It
19:01
struck me that there's just a tremendous amount of cognitive
19:04
elbow grease that actually goes into
19:06
training the models. It makes
19:08
me feel like automation is
19:10
a bit of a misnomer sometimes,
19:12
given how these models
19:14
come to be created. Absolutely. I think that
19:16
you know, we're all, you
19:18
know, maybe theoretically, intellectually aware of what
19:21
AI systems are built on.
19:23
I think we know that you
19:25
need huge amounts of data, but that
19:27
makes it feel very clinical and detached.
19:30
What is the data? Well, it's behavior.
19:32
It's human creativity in
19:34
words. The data training ChatGPT,
19:36
that's probably words that you
19:39
and I may have written or spoken. So
19:41
this data
19:43
isn't disconnected from the reality of
19:46
who we are and
19:48
what we make. And in
19:50
many of these cases this is,
19:52
you know, driving data, or
19:54
voice data from Siri or Alexa,
19:56
or images from Instagram, and
19:58
so on. And many
20:00
of these data labelers were labeling snippets
20:02
of text for chat GPT as well.
20:05
So this is all, you know, content
20:09
generated by us and that somebody
20:11
is sitting and labeling because without that
20:13
AI systems just don't recognize what they
20:15
are. You know, images need
20:17
to be labeled, videos, you know, need to be
20:20
analyzed. And there's somebody
20:23
who has to do them. And, you know,
20:25
this has to be done at a huge
20:27
scale So you need cheap labor ultimately to
20:29
do it. And so people have
20:31
turned to the developing world where you know, you can
20:33
pay a living wage, but
20:35
that living wage is much more affordable
20:37
than doing it in, you know the West And
20:41
people want digital jobs because
20:43
the alternatives that they have are things
20:46
like manual labor cleaning domestic work
20:48
Construction work and so on so
20:51
for me, you know, I wanted to
20:53
firstly show You know, what
20:55
is the work and who are the people
20:57
that are doing the work of building? You
21:00
know the bedrock of AI systems before
21:02
we even get to training the systems
21:04
and and deploying them, right? But then
21:06
also to figure out, you know, what
21:09
does this labor market look like? And
21:12
to try and you know, I'm not an economist
21:14
So for me again, it was from a journalistic
21:16
viewpoint of like how is this
21:18
changing people's lives? Is it for the better or
21:20
worse or what are the gray areas in
21:23
between right? And so I
21:25
went to Bulgaria I went
21:27
to Nairobi and to Buenos Aires To
21:30
look at three very different markets again To
21:33
see kind of what was the real
21:36
impact on the lives of the you
21:38
know Low-income populations that had been recruited
21:40
into this and I think
21:42
for me the results were mixed
21:44
and unexpected. In what way? Well, so
21:46
I think you know If
21:49
you report on this from you know, from a
21:51
kind of news perspective, you know Of
21:54
course, there's the question of are they being paid enough?
21:57
And they're doing the same digital work
22:00
someone would do in the US or the UK.
22:02
So why should they be paid any differently,
22:04
right? And then of
22:06
course, there's the people running these companies say,
22:08
oh, it would distort the local labor market,
22:10
you can't pay them too much. Because if
22:12
they're living in the slums of Kibera and
22:15
Nairobi, you can't suddenly be paying them more
22:17
than everybody else, it would change all the
22:19
local pricing, etc. I'm not sure that was
22:21
true. You know, I think that there
22:23
does need to be a complete sort
22:26
of redefining of like, what do
22:28
data laborers get paid, because they
22:30
are part of a pipeline of technology,
22:32
which is worth billions, if not trillions
22:34
of dollars coming out the other end.
22:37
And one of the lawyers I spoke
22:39
to in Kenya, who's fighting on behalf of
22:41
some of these workers, you know, she, she
22:43
kind of compared it to factory workers making
22:45
Gucci shoes, you know, they might be in
22:47
Bangladesh or the Philippines, who have no idea
22:49
that they're being what they're being paid is
22:52
going into making a shoe that will ultimately
22:54
be sold for $3,000 somewhere else. So I think that
22:57
needs to be much more transparency of
23:00
like, what does this technology ultimately
23:02
cost? And how can people doing
23:04
data work benefit from the upside
23:06
of this huge opportunity, you
23:08
know, in AI technology, but but
23:11
also, you know, on the flip side, I didn't
23:14
think it was as easy as just saying,
23:16
we are, you know, these people in the
23:18
developing world in Africa, and Asia, and Latin
23:20
America need to be paid more, and they're
23:23
being paid $2 an hour. And that's terrible.
23:25
I think it did make a hugely
23:27
positive difference to many lives, like I,
23:29
the people I spent time with, in
23:32
Nairobi and Sofia, you know, they were able
23:34
to put their children through
23:36
school, you know, pay for their
23:39
parents' medical costs; really, you know, their
23:41
lives were different to when they were
23:43
working in construction or unemployed or doing
23:46
domestic work. And so
23:48
there is something to be said that, you know,
23:50
they can do these jobs flexibly and at home
23:53
while learning digital skills and being part of this
23:55
kind of new AI revolution. So
23:57
that was, you know, like a bright spot for me.
24:00
But I think I concluded that it's not
24:02
enough to just give someone a job. It's
24:05
not charity, right? They're working in exchange
24:07
for money. We still need to push
24:10
for making sure that
24:12
they benefit from the
24:14
AI kind of explosion that will come
24:16
in business and industry. And currently, their
24:19
wages are hugely depressed when you look at
24:21
how much money tech companies are making at
24:23
the other end. So
24:26
yeah, I think that's a problem to be solved.
24:33
Intelligence Squared is a tight-knit team doing
24:36
big things. And it means we're always
24:38
looking for tools that can help streamline
24:40
managing tasks. That's why I want
24:42
to talk to you for a minute about
24:44
NetSuite. NetSuite provides cloud-based software to get things
24:46
moving. Maybe your business has been humming, but
24:48
you can feel things are falling behind a
24:50
little bit. Or perhaps your
24:52
team is getting snowed with manual tasks and
24:54
closing those books is taking forever. If
24:57
this sounds like you, you should
24:59
know these three numbers. 37,000, 25,
25:02
1. 37,000, that's the number
25:04
of businesses which have upgraded to NetSuite
25:06
by Oracle. 25, NetSuite turns 25 this
25:08
year. That's
25:10
25 years of helping businesses do
25:12
more with less, allowing them to close
25:14
their books in days, not weeks, and
25:17
drive down costs. And 1,
25:19
because your business is one of a
25:21
kind. So you get a customized solution
25:23
for all of your KPIs in one
25:25
efficient system with one source of truth.
25:27
It means you can manage risk, get
25:29
reliable forecasts, and improve margins. It's everything
25:31
you need to grow all in one
25:33
place. NetSuite is now making an unprecedented
25:35
offer to make more of
25:37
that kind of thing possible.
25:40
Right now, you can download
25:42
NetSuite's popular KPI checklist, designed
25:44
to give you consistently excellent
25:47
performance absolutely free at netsuite.com/squared.
25:49
That's netsuite.com/squared to get your
25:51
own KPI checklist. netsuite.com/squared. NetSuite.
26:00
That seemed to be maybe one of the
26:02
areas where you use kind of
26:04
power as a way of trying to understand the
26:06
whole situation and to reflect that,
26:09
you know, whilst it's actually
26:11
primarily poor people who are cleaning
26:14
the data, poor communities
26:16
often are most shaped by AI
26:19
applications at the moment. Poor
26:21
people who might have data passively gathered
26:23
on them, they have more touch points
26:25
with social services and other
26:28
kind of public institutions.
26:32
Whilst the models and the companies themselves
26:34
obviously represent some of the greatest concentrations
26:36
of wealth and power that we've ever
26:38
seen. Was
26:42
that something you saw more
26:45
broadly, I guess, across the various reporting that you
26:47
did and the places that you went? Yeah, I mean,
26:49
I think that was definitely an
26:51
overall thread to the story, which
26:53
was that, you know, when AI
26:56
is developed, implemented, regulated, it's
26:58
meant to be for the benefit of
27:00
many. But often the people who see
27:02
the benefits are those who are already
27:05
advantaged or in a position
27:07
of majority or kind of, you
27:09
know, in a position of privilege in
27:11
society. And often the harms that
27:13
we talk about with AI bias,
27:15
or, you know, the harms of,
27:17
you know, using deepfakes against women
27:19
on the internet. This is all
27:22
experienced by minorities or
27:24
kind of disadvantaged communities already. So, you
27:26
know, the inequities are just widening is
27:28
what I was seeing over and over
27:31
again. And while this
27:33
happens, the entrenchment of power
27:35
continues to scale up and increase, right?
27:38
We've gone from social media companies having all
27:40
of the data and kind of concentrating power
27:42
in that way through their algorithms
27:45
to, you know, maybe less
27:47
than half a dozen AI
27:49
companies that have the data,
27:51
the infrastructure and the know-how
27:54
to build these technologies. They're the only people
27:56
who really know how to test what's inside
27:58
them, and really
28:00
kind of who control how they'll be
28:02
implemented. And so I think I
28:05
definitely saw that sort of widening inequity. I
28:07
often felt that the harms were
28:10
primarily felt by not just
28:12
like socioeconomically disadvantaged people, but in
28:14
the case of say gig workers,
28:16
it was often migrants, you know,
28:19
who, who come to a new country and
28:21
who didn't have really any option, but to
28:23
work as say an Uber driver or
28:25
a delivery, you know, food delivery
28:27
courier. So it was often these people
28:31
who are being harmed by AI systems.
28:33
Well, was there anywhere that you
28:36
found where it kind
28:38
of struck you more that, oh my
28:40
gosh, AI really is improving lives and
28:42
the risks are being managed in ways
28:44
which really lean us towards being more
28:46
optimistic. I mean, healthcare, like
28:49
med tech, was that an area where
28:51
you thought like maybe, you know, it's
28:53
quite clear that the positives are outweighing
28:55
the negatives at the moment? Yeah,
28:57
I think for me, like the spots
29:00
of optimism are scientific
29:02
innovation and healthcare, two
29:05
areas where it just, you
29:07
know, this technology is ripe for innovation.
29:10
And I think, you know, I haven't written
29:13
very much in this book about scientific innovation,
29:15
but I, you know, in a
29:17
former life was studying to
29:20
be an immunologist. And
29:22
so I'm really interested in the kind of crossover
29:25
between health, technology and
29:27
kind of pushing forward the frontiers of
29:29
science. And there are some really
29:32
amazing examples we've seen with AlphaFold, for
29:34
example, that has come out of Google's
29:36
AI arm DeepMind, which is based
29:38
in London, a system that
29:40
can predict the structure of proteins, you
29:42
know, any protein
29:45
in the world, which helps them
29:47
to kind of develop new materials, whether that's in
29:49
pharmaceuticals or energy and so on, which I thought,
29:51
which I think is kind of really exciting.
29:54
And healthcare in particular, I write about
29:56
in my book, specifically the story of
30:00
an Indian doctor who works in a
30:02
very rural part of Western India, just
30:04
on the border, you know, a
30:07
few hours from Mumbai. And she
30:09
mostly works with local tribal populations. And
30:11
she's helping to train an AI system
30:13
that can help diagnose tuberculosis.
30:16
For me, this was just fascinating because,
30:18
you know, it's a really kind of
30:20
widespread illness, but it's a treatable and
30:22
a curable one. Yet people
30:24
are dying from it because of lack of access.
30:26
So there it just feels really
30:28
kind of a no brainer, right? If you can train
30:30
an AI system that can
30:33
go out into the kind of innards
30:35
of the country in mobile vans without
30:37
doctors to screen people, you know,
30:39
the downsides are pretty low. Either it's going
30:41
to tell you you might have it and
30:43
you don't, but, you know, at
30:46
least you're being seen to. Or it tells you you
30:48
don't have it, but in many cases you wouldn't have known
30:50
anyway. And it's just, you
30:52
know, it's a way to kind of
30:54
bring medical expertise to so many
30:56
people who currently just have nothing.
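Her "downsides are pretty low" reasoning can be made concrete with back-of-envelope numbers (the prevalence, sensitivity, and specificity below are hypothetical assumptions, not figures from the episode):

```python
# Outcomes per 1,000 people screened where there is otherwise no doctor.
prevalence  = 0.02   # assume 2% of those screened actually have TB
sensitivity = 0.90   # assume the tool flags 90% of true cases
specificity = 0.95   # assume it correctly clears 95% of healthy people

n = 1000
sick, healthy = n * prevalence, n * (1 - prevalence)

true_pos  = sick * sensitivity           # 18 cases caught and referred
false_neg = sick * (1 - sensitivity)     # 2 missed: no worse off than unscreened
false_pos = healthy * (1 - specificity)  # 49 healthy people sent for follow-up
true_neg  = healthy * specificity        # 931 correctly reassured

print(true_pos, false_neg, false_pos, true_neg)
```

Under those assumed numbers, the errors are extra follow-up visits or missed cases who would never have been screened at all, which is the asymmetry she is pointing to.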
30:59
I think that there are really
31:02
promising scientific results to show that
31:04
AI systems are really good at
31:06
being able to analyze scans for
31:08
kind of various cancers and other
31:10
illnesses, including COVID. And if we
31:13
can implement that in, you know,
31:15
a structured way across our healthcare systems
31:17
here in the NHS, where we're struggling with
31:19
a huge lack of radiologists. Anyone
31:21
who's kind of had
31:24
to live within our healthcare system
31:26
can see how it's creaking at the edges.
31:29
You know, this, I think this can, it's
31:31
more than a band-aid. I think it
31:33
can really kind of change how we
31:35
receive care. So for me, that's a
31:37
huge opportunity. Do you think
31:40
that the medical profession has wider
31:42
lessons that maybe other
31:44
industries should learn? I'm just reflecting that,
31:46
in addition to the
31:48
kind of technical promise that AI has,
31:51
it's also, in medicine, landing
31:53
in a context
31:55
which has a lot more
31:57
kind of institutional bounding around it,
31:59
you know, like. It's really clear
32:01
how clinical data should be used
32:04
and the privacy of patients is
32:06
really well defined and the responsibilities
32:08
of a doctor to the patient
32:10
are also well defined. It
32:12
just feels like there's a lot more guardrails
32:15
that will shape AI use in
32:17
medicine than for instance predictive policing
32:19
where it just feels like it's
32:22
a completely open landscape. In
32:26
that openness where there aren't really any
32:28
clear understandings, it's there that we'll see the
32:30
overreach, we'll see the
32:35
misaligned incentives and we'll see good
32:38
intentions being
32:40
swamped by the messiness of
32:43
application. Yeah, no, I think that
32:45
you're right. There
32:47
is the expectation of patient privacy and the protection
32:49
of data and things in healthcare which could be
32:52
really good lessons but also it's the human side
32:54
of it. We all understand
32:56
instinctively why we value human doctors.
32:58
Of course it's because they can
33:00
diagnose us and tell us how
33:02
to get better but it's also
33:06
having a person who can break news
33:08
to you that's really difficult or helping
33:10
you cope with something. Even
33:12
positive
33:14
health news, like pregnancy: I remember
33:17
having a really weird interaction with
33:20
a GP when I first found out I was
33:22
pregnant. It was a lovely happy thing but the
33:25
sort of delivery and the interaction left
33:27
me feeling really cold. So I think
33:29
we all inherently understand
33:32
why we value humans in the medical
33:34
process and I think one of the
33:37
things that Ashita Singh who's the doctor
33:39
that I write about said
33:41
which I think is really kind of
33:43
applicable beyond is she never saw this
33:45
in any way as a threat to
33:48
her or a replacement for her because
33:51
she knows her value as a doctor in
33:53
this community and what she can do but
33:55
she saw it as one of the many
33:57
tools in her toolkit alongside x-rays
34:00
or CT scans or whatever else,
34:03
you know, this was another great tool to
34:05
help kind of give her confidence or to
34:07
give less experienced doctors more confidence in the
34:09
news they were delivering and to kind
34:12
of increase access to care. And
34:14
I think that's how anyone implementing these
34:16
systems should see it as, you
34:18
know, we, you know, we can never
34:21
replace the expertise of humans that we've
34:24
built up, whether that's social workers who
34:26
kind of understand the communities they've been
34:28
embedded in, or, you
34:30
know, you know, criminal justice, you
34:32
know, defense lawyers or whatever it is
34:34
in any area of kind of human
34:36
expertise, we need to kind of, you
34:40
know, prop those up. We need to preserve
34:42
that alongside the systems and allow them to
34:44
kind of enhance what you do, and
34:46
bring that to more people rather than thinking of
34:48
it as a way to just kind of cut costs
34:50
or replace. And yeah,
34:52
I think healthcare is a way
34:55
for us to understand why humans matter. Well,
34:57
zooming out in the last
34:59
10 minutes that we have
35:01
to try and extract some
35:03
of the overarching themes, let's
35:05
talk about big tech. So
35:08
one thing I was really struck by in
35:10
all your stories is the seeming
35:13
like absence of public
35:15
sector AI capability. Like
35:17
there seems to be barely a government
35:19
in the world that can train or
35:22
deploy models like these. And
35:24
so is that changing the
35:26
role of the
35:28
AI companies in public life? It seems
35:30
like they're all over the
35:33
world in 1000 different ways, like
35:35
creeping closer and closer to government,
35:37
and more and more intimately interwoven
35:40
into the business of the state.
35:42
Absolutely. And I think for me,
35:45
the structure of the book was also done
35:47
to reflect this, where big tech is a spectre that
35:49
comes in across the book; they kind
35:51
of appear in these unexpected
35:54
places, you know, where you
35:56
didn't really expect them. I
36:00
think that was the point, to say we might
36:02
not realize it, but they're getting closer and
36:07
closer to providing the basic infrastructure
36:10
to government. So I spoke to
36:12
a Mexican data
36:14
activist, Paola Ricaurte. She's
36:17
a political scientist and activist. And
36:19
she was talking about how during COVID,
36:21
the Mexican government was reliant on
36:23
Google for their own data collection
36:25
of what was happening in the
36:28
country during the pandemic. Because
36:30
the infrastructure was provided by the
36:33
company, and they couldn't kind of
36:35
develop anything without them. And
36:37
similarly, you have AWS and Azure and
36:39
Google as well in India providing much
36:42
of the healthcare infrastructure. Again, I spoke
36:44
to social scientists, not
36:46
just activists, but kind of researchers looking
36:48
at the relationships between companies and governments
36:50
who said, it's so deeply embedded
36:53
that we don't think our governments could provide much
36:55
of the services they're doing without these companies
36:58
anymore. And I think we've all
37:00
become increasingly aware of how reliant we are
37:02
even personally on these systems for
37:05
interpersonal communication, for the work that
37:07
we do in our relationship with
37:09
our state. And
37:11
this was why the stories reflect
37:14
that increasing scale. And
37:16
yes, this is about power. And
37:20
I think big tech is increasingly
37:22
sort of playing the role of what
37:24
we would usually look to the state
37:26
for. And I
37:29
mentioned before about them having all
37:31
of the knowledge and resources required to
37:34
build future AI systems as well, which
37:36
also puts them in a position, just
37:38
in terms of the
37:40
knowledge that they have, where governments
37:43
are going to be reliant on them to provide this
37:45
stuff. I wrote
37:47
years ago now in 2019
37:49
about how increasingly we were
37:51
seeing fewer academics, independent academics
37:54
at universities working on AI
37:56
problems. Particularly, I don't
37:58
mean AI ethics problems or the social impact
38:00
of AI, I mean actual AI development,
38:03
academics who are funded to build these
38:05
systems and understand how they work and
38:07
try and break them. And
38:10
now here we are, five years
38:12
later, that problem has only increased.
38:15
Meredith Whittaker, who's the president of
38:17
Signal and has written a lot
38:19
about this, the concentration of power,
38:21
where we see now that you
38:23
really can't be an independent
38:26
academic funded by public grants and
38:28
have the infrastructure you need to
38:30
build a large language model. So
38:32
many, many academics we see now
38:34
do these dual roles where they work
38:36
part-time for companies, part-time at universities. And it's good
38:39
because you're learning from the companies,
38:41
you're spreading this back out into universities, but I
38:43
do think it says something about the direction of
38:45
who's going to hold
38:47
these... Never mind the companies
38:49
and the regulatory angle, but even just the
38:51
systems to account. Where is
38:53
the accountability if nobody understands how they
38:56
work outside of people
38:58
inside profit-making institutions?
39:02
And then we come to regulation. So yes, I
39:04
do think that we're seeing the companies
39:06
really kind of almost quasi-states at this
39:09
point. Many of them have more data,
39:11
money and power than many states do. And
39:15
well, let's talk about data. So
39:19
as companies become increasingly like states
39:21
or step in to supply services, AI
39:24
services to states, is
39:27
there a kind of grand swap happening? Because
39:29
it seems as they move
39:31
their AI capabilities into public
39:34
services around the world, the big thing
39:36
that states have is data
39:38
from citizens, kinds of data
39:40
that the AI companies, as
39:43
basically ads and platform
39:45
providers could never have dreamed of before, especially
39:47
health data, but civic data. Is
39:50
that the great swap that's currently happening? Are
39:52
public sector authorities around the
39:54
world basically swapping that data
39:56
on citizens to
39:59
power,
40:01
essentially to power the
40:03
models and the training which is needed
40:06
in order to actually deploy AI in
40:08
the way that they want? So
40:10
I think that it is going to
40:12
be very difficult for any company or
40:14
government independently, at least in the West.
40:17
I think China is different. They
40:19
hold a huge amount of
40:21
data that is crossed up and
40:23
connected both on local and global
40:25
levels. And they can build their
40:28
own technology, state-owned technology as
40:30
well. But if we are going
40:33
to procure AI systems in
40:36
Western governments, of course, you're
40:38
going to have to find somebody to
40:40
do that for you. And
40:42
we've seen that with the NHS, there have been many
40:44
attempts over the years to kind of use
40:47
that data in a way that can help people.
40:50
At the moment, Palantir has won a big
40:52
contract here in the UK. This is the
40:55
American data company that works
40:57
a lot for defense departments
41:00
around the world. It was initially
41:02
funded by a CIA grant.
41:04
They work in those areas and they
41:07
are now powering much of the
41:09
data infrastructure for the NHS.
41:11
So we are seeing through channels of
41:15
procurement, tech companies come in. But
41:18
I do think that the
41:20
work of the next few years
41:22
is figuring out how can governments
41:24
benefit from that expertise and
41:27
help citizens while also protecting
41:30
very valuable data in
41:32
a way that we can
41:34
all see some benefit from
41:37
it. I
41:39
don't feel hopeless about this. Yes,
41:41
we will have these five big
41:44
tech companies, cloud companies, involved in
41:46
many ways because they're forming the
41:48
infrastructure and backbone of
41:50
AI now or of the internet. But
41:53
I think there are ways to kind of
41:55
keep our data safe and that will
41:57
be the work that governments will have
41:59
to do moving forward. Well
42:01
final question Madhu, tell us a story
42:04
about counter power because we
42:06
haven't really spoken much about that but all
42:08
these other people aren't passive recipients of
42:10
all this are they and the book's
42:12
full of the inspiring
42:15
fascinating kind of examples of people
42:17
both with tech and without kind
42:19
of finding ways of you
42:21
know reclaiming autonomy and
42:24
independence and collective action.
42:26
So let's end with one of those. Yeah
42:28
well I'm glad you say that. So
42:30
I actually wanted it that way; the final
42:32
three chapters are actually around this theme
42:34
of resistance in
42:36
small and big ways. You
42:38
know the gig workers and how they fight
42:40
back you know by kind
42:42
of these little tricks that help them to kind
42:45
of compete against the algorithm and
42:47
twist it so that they can
42:49
kind of get the best job which you know which
42:52
are like wonderful and inspiring but kind
42:54
of on the highest level I spent
42:57
time with Maya Wang who is a
42:59
Chinese activist and she works for Human
43:01
Rights Watch but she's had to leave
43:03
Hong Kong where she was based and is
43:05
now in the US and she
43:07
was one of the or she was
43:09
the woman who uncovered
43:12
the data system
43:14
the algorithmic system that underpinned
43:17
the policing in the Xinjiang
43:19
state in China. So you know
43:21
Xinjiang region in China where
43:23
you have a high concentration of
43:26
Uyghur Muslims and it's
43:28
been reported widely now around the world that
43:30
there are these education or re-education
43:32
camps as the Chinese government calls them
43:34
where you know many Uyghur Muslims who
43:37
are trapped in a sort of dragnet
43:39
of surveillance are put into, to sort
43:41
of teach them to be Chinese, to
43:43
teach them the language. In
43:45
many cases you know people in these camps
43:47
have disappeared and you know they've
43:50
been called out as, you know, a huge human
43:52
rights issue and Maya was able to
43:54
find the app that was being used
43:56
by the police in the Xinjiang state
43:59
found, you know, by trawling the internet,
44:01
and essentially decoding
44:04
what exactly they
44:06
were surveilling, what are the various variables
44:08
that they are tracking about all of
44:10
these families; in many cases they haven't done
44:12
anything, they're just talking to relatives in
44:14
other countries and so on, and
44:17
finding a way to trap them in this
44:19
dragnet. And she is
44:21
really genuinely risking her
44:23
life to expose
44:26
how this ecosystem works
44:28
to decode this and
44:31
for her it's the
44:34
work that she feels compelled to
44:36
do. And
44:39
we've met quite a few times and talked about
44:41
does she feel she's actually making a difference against
44:43
this huge powerful institution
44:45
which is the CCP and
44:48
is it worth it? And she
44:50
said she does question this herself on
44:52
many days but for
44:54
her, she feels that it might feel
44:58
like it's just her but it never
45:00
has been. Whenever you have oppression and
45:02
surveillance and curtailment of human rights you
45:04
might think you're on your own but
45:06
there's actually a whole boatload
45:08
of people rowing in the same direction.
45:11
And so that for me was kind of the
45:13
most inspiring takeaway of this whole
45:15
thing that there are automated systems that
45:17
are curtailing our individual agency and kind
45:19
of increasing opacity of how things operate
45:22
in the world around us but we
45:24
can have a voice in
45:26
it, and we can. That's what we should
45:28
be doing over the next few years finding our
45:31
voices. Well, Madhu, thank you,
45:33
this has been totally fascinating. And
45:36
thanks everyone for joining us; we haven't been alone
45:38
either. So thanks
45:40
for tuning in. The book again is
45:42
Code Dependent: Living in the Shadow of
45:44
AI, and it's available
45:46
from your local bookshop. I'm Carl Miller and
45:49
you've been listening to Intelligence Squared. Thanks
45:52
for listening to Intelligence Squared. This
45:54
episode was produced by Isabella Somes and edited
45:56
by Tom Hall. If you want to keep
45:58
up with everything going on at Intelligence
46:01
Squared, sign up to the newsletter,
46:03
head over to intelligencesquared.com to get
46:05
the heads up on all our
46:07
live events coming up. Members can
46:09
also peruse over 20 years of
46:11
our back catalogue featuring some of
46:13
the world's great minds. That's all
46:15
over at intelligencesquared.com. Intelligence
46:27
Squared Podcast is the home of deep
46:29
dive discussion and lively debate. Join me,
46:32
Connor Boyle, four times a week as
46:34
we take you to the heart of
46:36
the issues that matter. From politics to
46:38
the environment, science to tech, plus a
46:40
whole lot of creativity and fresh thinking
46:43
from some very bright sparks. Think George
46:45
Monbiot, Bernie Sanders, Laura Ridgewood, Tim Spector,
46:47
Reni Eddo-Lodge, Naomi Klein and many more.
46:49
On the podcast where great minds meet,
46:51
join us to meet a few more
46:53
at intelligencesquared.com wherever
46:56
you get your podcasts.