Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
This message comes from NPR
0:02
sponsor Stripe. Tap to pay
0:05
on iPhone and Stripe can
0:07
help you grow your business's
0:09
revenue and reach through accepting
0:11
more in person contactless payments
0:13
right from an iPhone. To
0:15
learn how visit stripe.com/tap iPhone.
0:17
This is
0:20
the TED Radio Hour. Each
0:22
week groundbreaking TED talks. Our
0:24
job now is to dream big. Delivered at TED
0:27
conferences. To bring about the future we
0:29
want to see. Around the world. To
0:31
understand who we are. From those talks
0:34
we bring you speakers and ideas that
0:36
will surprise you. You just don't
0:38
know what you're gonna find. Challenge you. We
0:40
try to ask ourselves like why is it
0:42
noteworthy. And even change you. I literally
0:44
feel like I'm a different person. Yes. Do
0:47
you feel that way? Ideas worth
0:49
spreading. From TED and
0:53
NPR. I'm Manoush
0:56
Zomorodi, and today on the show
0:59
what's in a face? Do
1:02
you have any idea how long a face will
1:04
stay in your mind? Like once it's
1:06
in there is it there forever? I actually
1:09
don't know but I
1:11
mean as an example I have
1:13
come across faces that I remember
1:15
seeing from when I was younger
1:18
than 10. So. Oh wow. Yeah
1:20
so I have come across like
1:22
school teachers or classmates or people
1:24
I remember in my neighborhood
1:26
when I lived there as a child and
1:29
it's a lot harder to explain where I've seen
1:31
them. Oh so you're saying it's more
1:33
just a sense of familiarity? Yeah
1:36
and it's almost like this intuition or
1:38
this kind of pain and I know
1:41
like I am so confident I've seen
1:43
that face before. That happens instantly.
1:47
This is Yenny Seo, and
1:49
she is what's called a
1:51
super recognizer. So super recognizer
1:53
firstly I find that term
1:56
very cringy. Oh no. Basically
2:01
what it is is super recognizers are
2:03
people who are maybe in the
2:05
top 1 to 2% who
2:07
are very good at remembering faces
2:10
and I'm told that it's about 80% or so of
2:14
the faces that we see we remember so
2:16
it's very high in comparison to the average
2:19
person. Yenny
2:22
knew she was always good at recognizing
2:24
faces but about five
2:26
years ago she realised that her
2:28
ability was really unusual.
2:30
I just turned on the TV
2:32
and I happened to come across this
2:35
show about people with different abilities
2:38
and I saw this gentleman was
2:41
based in the UK and he was a police
2:43
officer and they made
2:45
him do a test where he was standing
2:47
in a really big train
2:49
station with lots of people going through.
2:52
You've got to try and find my
2:54
four actresses who have hidden themselves away
2:56
in the crowd or be wandering around.
2:59
And he was shown photos of I
3:01
think a handful of different faces and
3:03
he had to pick out the
3:06
people that he'd seen on the photo but the
3:08
trick was they would you know put a wig
3:10
on them or they'd be wearing a different hat
3:12
or glasses. Black leather jacket,
3:14
blue jeans, brilliant.
3:17
And he was able to I think pick them
3:19
all out. When I saw that
3:22
I just got goosebumps and I
3:24
just had this really strong confidence that somehow
3:26
I'd be able to do those tests.
3:28
Is it the lady in the black
3:30
jumper cream top and blue jeans? Brilliant.
3:33
Yenny took some tests online and
3:35
she did really well so
3:38
she got in touch with a researcher in
3:40
Australia where she lives who confirmed
3:42
that she was indeed a super
3:44
recognizer. I ended up visiting their
3:46
lab in Sydney and they put
3:48
some sort of eye-tracking sensor on me, so
3:50
they could see how my eye movements
3:53
worked when I was exposed to
3:55
a face and it's not
3:57
that I pinpoint on one feature. I
4:00
would not focus on the eyes or nose
4:02
or mouth or the shape of the face.
4:04
It's just the whole, the face as
4:06
a whole leaves kind of an imprint
4:08
in my head. So
4:12
have you ever found like your
4:15
ability useful then? Like,
4:18
other than like, you know, fun
4:20
party trick? Yeah, I mean, when
4:23
I was in uni, I worked at
4:25
a clothing store and we I
4:28
ended up catching a shoplifter because we
4:31
had a team meeting and there was
4:33
a particular shoplifter who would repeatedly
4:36
steal the highest priced item
4:38
in the store. And they
4:40
had this CCTV footage of her. It
4:42
was just this really grainy late-night
4:44
photo. And they showed it to us
4:46
during our team meeting in the morning and they stuck it
4:48
on the wall and I looked at it and I was
4:50
like, all right, I don't know if I'll be able to catch
4:52
that person. I didn't really think much about it. An
4:56
hour or so into my shift, that exact
4:58
person walked in and I just knew straight
5:00
away it was that person, even though the
5:02
photo was really grainy. I just knew. What
5:04
did you do? We had security
5:06
guards in our store. So I just had to
5:09
we were wearing walkie-talkie type of things. And I
5:11
just told them, yeah, she's here. She's just walked
5:13
into the store. So maybe you guys should go
5:15
have a chat with her. They
5:18
ended up catching her and then they had
5:20
to call the cops in. And so that
5:22
was my one crime fighting experience.
5:27
From what I understand, a lot of
5:29
super recognizers work in or work
5:31
with law enforcement. Is that or
5:34
in some kind of security capacity, is that
5:36
not something that you sort
5:38
of thought, well, you know, I could actually make money
5:40
off of this? I
5:43
mean, at one point, I think
5:45
I did consider it. But I
5:47
think it's still very new. And
5:49
the research in this area is
5:51
still developing. I know countries
5:54
like the UK, like their
5:56
police enforcement have started recruiting
5:59
officers who have that ability. But
6:01
I always thought it was a little
6:03
bit creepy that I don't know if
6:06
creepy is the right word but I
6:08
always thought that you know it would
6:10
be perceived as being a bit like
6:12
I was a stalker or something. Yenny
6:16
sees and processes faces in
6:19
an extraordinary way. But
6:21
technology is quickly surpassing her
6:24
superhuman abilities. Most
6:26
of us already use facial recognition
6:28
to unlock our phones and tag
6:30
people in photos. Governments,
6:32
law enforcement, and companies can use
6:34
cameras and algorithms to collect
6:36
and identify us. But
6:38
where will we draw the line? Today,
6:41
what's in a face?
6:44
Ideas about the promise and peril
6:46
in turning the human face into
6:49
an everyday digital tool for anyone
6:51
to use. I
6:57
was actually literally just today talking
6:59
to a facial recognition vendor. What
7:02
about? So they're in the middle
7:04
of filing a patent where artificial
7:06
intelligence or machine learning system will
7:08
look at your face and
7:11
determine how you feel. This
7:14
is Bloomberg tech columnist Parmy
7:16
Olson, which will allow them
7:18
to analyze the faces
7:20
of stock market traders and bond
7:22
traders to get a sense
7:24
of where the market is moving based
7:26
on the emotions shown on the faces
7:29
of these traders.
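To make that concrete: the pipeline such a vendor is describing would detect faces in a video feed, score each face with an emotion classifier, and average the scores into a sentiment signal. A minimal sketch follows; the emotion model and its label set are hypothetical stand-ins, not the vendor's patented system.

```python
# Hypothetical sketch: detect faces with OpenCV, score each with a
# placeholder emotion classifier, average into one "mood" vector.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

LABELS = ["fear", "neutral", "confidence"]   # illustrative label set

class StubEmotionModel:
    """Placeholder for a real trained emotion classifier."""
    def predict(self, face_48x48):
        return [1 / 3] * len(LABELS)         # a real model returns learned scores

emotion_model = StubEmotionModel()

def trading_floor_mood(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    scores = [emotion_model.predict(cv2.resize(gray[y:y + h, x:x + w], (48, 48)))
              for (x, y, w, h) in faces]
    if not scores:
        return None
    # One crude "market mood" vector: the mean score per label.
    return [sum(s[i] for s in scores) / len(scores) for i in range(len(LABELS))]
```

Whether averaged facial expressions predict anything about prices is, as Olson says, doubtful; the point is only that every component here is off the shelf.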
7:31
In a way, that sounds maybe a little bit innocuous, if
7:34
not a very odd, potentially
7:36
disastrous way to determine where the market
7:38
is going. I don't know that that would
7:40
work. But I think
7:43
the question is, well, what happens
7:45
when all these different vendors and
7:47
stakeholders have access to our faces
7:49
and can maybe get to a
7:51
point where they want to start
7:54
drawing inferences about
7:56
us based on our faces.
7:59
Now, even if making market decisions
8:01
based on the minute facial
8:03
expressions of day traders sounds
8:06
far-fetched, Parmy says
8:08
the basic technology behind it is
8:10
not. So these
8:12
systems are essentially trained
8:16
on millions and millions of actual
8:18
photos of people and
8:21
the more data they have, the more accurate
8:23
they can get. And
8:26
I think the concern is that
8:28
this technology is so widespread
8:31
and so actually not that difficult
8:33
to build. Some
8:35
of the technology is open source. There
8:38
are billions of images of faces on
8:40
the internet. It's relatively cheap
8:42
to do it.
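She's not exaggerating about the open-source part. The widely used `face_recognition` Python library (built on dlib) reduces a face photo to a 128-number embedding and matches two faces in a handful of lines; a minimal sketch, with the image filenames as placeholders:

```python
# Minimal open-source face matching with the face_recognition library
# (pip install face_recognition). Each detected face is reduced to a
# 128-dimensional embedding; small Euclidean distance means same person.
import face_recognition

known = face_recognition.load_image_file("known_person.jpg")   # placeholder path
unknown = face_recognition.load_image_file("cctv_still.jpg")   # placeholder path

known_encoding = face_recognition.face_encodings(known)[0]

for encoding in face_recognition.face_encodings(unknown):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    # 0.6 is the library's default match tolerance.
    print(f"distance={distance:.3f}  match={distance < 0.6}")
```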
8:44
Yeah, and it's so cheap that you have written about
8:46
how this software is now used pretty widely
8:49
in retail, even gas
8:51
stations, convenience stores. Yeah,
8:53
I think the main reason
8:56
that retailers want to use
8:58
facial recognition in their shops
9:01
is to actually look for unwanted
9:03
individuals. So
9:05
there was a chain of stores in
9:07
the UK that hired
9:10
a security system, a facial
9:12
recognition security system, to be
9:14
installed. Let's go to
9:16
Aylesbury, southern England, and
9:18
to a Budgens store. Parmy Olson
9:21
continues from the Ted stage. Now,
9:23
Budgens in this particular town has
9:26
been having trouble in the last few
9:28
years with people coming in and stealing
9:31
meat from their refrigeration aisle. So
9:33
a year ago, they installed
9:35
some new technology from a
9:37
company called FaceWatch. And through
9:39
their usual CCTV cameras, FaceWatch's
9:41
computer and software would scan
9:44
every single face that came into
9:46
the Budgens and match it up against
9:48
a watch list. Now,
9:51
this watch list is processed by
9:53
FaceWatch, and Budgens can also add
9:56
to it if they suspect someone
9:58
of stealing. I called
10:00
up the Budgens and asked how they thought it
10:02
was working. And the staff member there
10:04
told me that his phone gets
10:06
pinged up to 10 times a day
10:08
with an alert to say
10:11
that someone has walked in the store who
10:13
matches the watch list. So
10:15
if that happens, he might call the police
10:17
if it's an aggressive person or he might
10:19
just say, hey, you're on CCTV. And
10:22
actually it works pretty well, he said. He thinks
10:24
it's helped. But there's
10:26
a few concerns about FaceWatch. So first
10:28
of all, to get
10:30
on the watch list, you don't have to
10:32
be arrested and you don't have to be
10:34
charged by the police. There's no real legal
10:37
due process. And the
10:39
other thing is that to be uploaded onto
10:41
the servers of FaceWatch to be on
10:43
a watch list, you can
10:45
be on it for up to two years and
10:47
you won't be taken off.
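The overall shape of such a watch-list system is easy to sketch. This is a generic illustration reusing the embedding idea above, not FaceWatch's actual code; names, threshold, and retention logic are illustrative.

```python
# Generic sketch of a private watch-list matcher: every CCTV face is
# embedded and compared against stored entries.
import time
import face_recognition

TWO_YEARS = 2 * 365 * 24 * 3600
watchlist = {}   # name -> (embedding, time added); anyone "with clearance" can add

def add_to_watchlist(name, photo_path):
    image = face_recognition.load_image_file(photo_path)
    watchlist[name] = (face_recognition.face_encodings(image)[0], time.time())

def check_frame(rgb_frame):
    """Return the watch-list names matching any face in one CCTV frame."""
    alerts = []
    for encoding in face_recognition.face_encodings(rgb_frame):
        for name, (known, added) in watchlist.items():
            if time.time() - added > TWO_YEARS:   # the retention period described
                continue
            if face_recognition.face_distance([known], encoding)[0] < 0.6:
                alerts.append(name)               # this is what pings the staff phone
    return alerts
```

Note what the sketch makes visible: the only gate on `add_to_watchlist` is whoever calls it.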
10:51
So this is a security company that
10:53
relies on watch lists and
10:56
anyone with clearance, I guess that could
10:58
be a store employee, they could add
11:00
someone to the list and then what,
11:03
that information is shared. Yes,
11:05
that's right. Each store would have their own
11:08
watch list and they would share the watch
11:10
lists with each other. So you'd have an
11:12
even bigger watch list. And
11:14
yes, the people who are on these
11:16
systems, this is a private system. This
11:18
is not something where there's a court
11:20
order or warrant or anything like that
11:22
there. This is totally done privately by
11:24
a business. It's their own private watch
11:26
list that they've put together. So
11:29
for me, one of the underlying problems with
11:32
this kind of mass surveillance is
11:35
that sometimes the algorithms are wrong.
11:38
Right. When I talked to one of
11:40
the people who worked at one of these stores,
11:42
they said that about 25% of the time the
11:44
system was wrong. So
11:46
they would get the alert, get told that person
11:49
had walked in and they'd walk
11:51
around and they'd see actually it wasn't
11:53
that person. And so they had
11:55
to be really careful about trusting that
11:57
the system was correct. In the real
12:00
world, when the lighting isn't that good
12:02
and the image might be a little
12:04
bit grainy, not
12:06
surprisingly, the system was getting it
12:09
wrong one out of four times.
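The two figures quoted in this segment, up to ten alerts a day and a one-in-four error rate, are worth multiplying out:

```python
# Back-of-envelope arithmetic from the figures quoted above.
alerts_per_day = 10     # "his phone gets pinged up to 10 times a day"
error_rate = 0.25       # "wrong one out of four times"

false_alerts_per_day = alerts_per_day * error_rate
print(false_alerts_per_day)          # 2.5: two or three misidentified shoppers a day
print(false_alerts_per_day * 365)    # roughly 900 misidentifications a year, per store
```

And because stores pool their lists, each error can follow a person across the whole chain.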
12:11
Wow. And I can imagine someone thinking like, okay,
12:13
well, it's a grocery store. But
12:15
if you're talking about a
12:17
situation involving law enforcement, that
12:20
could escalate quickly, I would think. Yeah.
12:23
So a police officer has
12:26
a body cam with facial recognition
12:28
or a camera on their van
12:30
with facial recognition and they detect
12:32
someone. And if that person has
12:34
increased melanin in their skin or
12:36
they're black, essentially, then it is
12:38
more likely to make
12:40
a mistake in identifying that
12:42
person. And the reason is that the
12:44
databases that these facial recognition models
12:47
are trained on typically
12:49
have way more white
12:51
people than black people.
12:55
And so the system just isn't trained enough
12:57
on black people so it doesn't identify them
12:59
properly and makes more mistakes. And that has
13:01
happened. And it's
13:04
probably going to continue to happen, too.
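That disparity is measurable. Auditors (NIST's face recognition vendor tests among them) quantify it by scoring a matcher separately on each demographic group in a labeled evaluation set; here is a sketch of that bookkeeping, with illustrative data structures.

```python
# Sketch of a per-group error audit for a face matcher. `pairs` holds
# labeled evaluation data: two face images, whether they truly show the
# same person, and the demographic group of the pictured person.
from collections import defaultdict

def error_rates_by_group(pairs, matcher):
    errors = defaultdict(int)
    totals = defaultdict(int)
    for face_a, face_b, same_person, group in pairs:
        totals[group] += 1
        if matcher(face_a, face_b) != same_person:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# With a training set that skews white, a table like this typically shows
# a markedly higher error rate for Black faces, the pattern behind the
# wrongful arrests that have already occurred.
```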
13:07
When we come back, what are we willing
13:09
to stomach in a face tracking filled
13:11
future? On the
13:13
show today, what's in a face? I'm
13:16
Manoush Zomorodi, and you're listening to the TED
13:18
Radio Hour from NPR. Stay
13:21
with us. This
13:31
message comes from Apple Card. You earn
13:33
up to 3% daily cash on
13:35
every purchase. That's 3% on products
13:38
at Apple, 2% on all other
13:40
Apple Card with Apple Pay purchases,
13:43
and 1% on anything you buy
13:46
with your titanium Apple Card or
13:48
virtual card number. Visit apple.co/cardcalculator
13:50
to see how much you can
13:53
earn. Apple Card issued by
13:55
Goldman Sachs Bank USA, Salt Lake City
13:57
branch. Subject to credit approval, terms apply.
14:01
This message comes from NPR sponsor BetterHelp.
14:03
What's the right amount of socializing for
14:05
you? Maybe you thrive around people or
14:07
maybe you need more alone time. Therapy
14:09
can help you figure out your boundaries,
14:12
how to recharge, and your social
14:14
sweet spot so you can build
14:16
a social life that doesn't drain
14:18
your battery. Try therapy with BetterHelp.
14:20
It's entirely online, designed to be
14:22
convenient, flexible, and suited to your
14:24
schedule. Visit betterhelp.com/NPR today to get
14:26
10% off your first month. This
14:29
message comes from NPR sponsor Squarespace.
14:32
Kickstart or update written content on any website,
14:34
product description, or email with Squarespace
14:36
AI, generating instant personalized results that know
14:38
and show your brand identity. Explain what
14:41
your site is about, choose your tone,
14:43
and enter what you need to get
14:45
short or long-form text. No matter
14:48
the placement, Squarespace AI makes it
14:50
easier to go live, stand out, and
14:52
succeed online. Use code RADIOHOUR to
14:54
save ten percent off your first purchase
14:57
of a website or domain. This
15:00
message comes from NPR sponsor Chevron.
15:02
Methane management is a critical part
15:04
of achieving a lower carbon future.
15:06
Chevron is taking action to keep
15:08
methane in the pipe. They're committed
15:11
to evolving facility designs and operating
15:13
practices, and they've trialed over 13
15:15
advanced detection technologies,
15:17
including drones and satellites.
15:20
That's energy in progress.
15:22
Learn more at chevron.com/methane.
15:27
Before we get back to the show, I
15:29
want to ask you to please consider becoming
15:32
a member of TED Radio Hour Plus. You'll
15:34
get extra advice, stories, and
15:37
expertise from TED speakers every
15:39
other week and no ads
15:42
ever. And you'll
15:44
be supporting public radio.
15:46
Listener support is crucial
15:48
to keeping us going.
15:50
Go to plus dot
15:53
npr.org/TED, or give it a
15:55
try right in the Apple Podcast app. It's
16:00
the TED Radio Hour from NPR.
16:02
I'm Manoush Zomorodi. And on today's
16:04
show, What's in a Face? And
16:07
how companies will use them to make
16:09
money. We were just talking
16:12
to Bloomberg columnist Parmy Olson. Each one
16:14
of our faces has a faceprint,
16:16
a type of fingerprint, except instead of
16:19
the fingers, it corresponds
16:21
with the image of our face. And
16:24
there are lots of databases of people's
16:26
face prints out there on the internet.
16:29
And even if they aren't identifying us by
16:31
name, our faces can
16:33
be tracked, categorized, and profited off
16:35
of. For example, a
16:37
few years ago, Walgreens installed cameras in
16:40
some of their stores that
16:42
identified shoppers by age, gender,
16:44
and then displayed targeted ads.
16:47
So, like, do you look male
16:49
and around 20 years old? Here's an ad
16:51
for Sprite. Yeah, it's Sprite. And if you
16:53
look more like 50
16:55
and female, maybe a green tea.
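Stripped of the camera and the demographic classifier, the ad logic itself is just a lookup. A toy version follows, where `estimate_age_gender` is a hypothetical stand-in for whatever model the installation used.

```python
# Toy version of the targeted-ad logic described above. The classifier
# is a hypothetical stand-in; only the selection step is sketched.
def pick_ad(age: int, gender: str) -> str:
    if gender == "male" and age < 30:
        return "Sprite"
    if gender == "female" and age >= 50:
        return "green tea"
    return "default ad"

def estimate_age_gender(frame):
    """Placeholder for the camera's demographic model."""
    return 22, "male"   # a real system returns per-shopper estimates

print(pick_ad(*estimate_age_gender(frame=None)))   # -> "Sprite"
```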
16:58
Subaru has a driver monitoring... Car
17:00
maker Subaru's new vehicles use facial
17:02
recognition tech that creates user profiles.
17:15
So there is a casino in London
17:18
which has facial recognition cameras dotted around
17:20
all the different rooms. And
17:22
it uses that so that when high
17:24
rollers walk in to a certain room,
17:26
then the staff get an alert on
17:28
their phone, which gets sent to an
17:31
encrypted chat app they use called Wicker.
17:33
And then they notify each other like, oh,
17:36
so-and-so this hostess should go up because that's
17:38
the high roller's favorite. And we know that
17:40
they like this particular type of food and
17:42
this particular type of drink. And so they
17:45
can actually provide a better service and
17:47
they call it their white glove service.
17:50
And I remember asking the head of
17:52
security, well, are the
17:55
patrons actually a little bit put
17:57
off by that? And he said, not at all,
17:59
not a single... woman thinks
18:01
that that's even the slightest bit creepy, they just
18:03
see it as part of the service. It's
18:06
what they expect. And I think it's
18:08
a nice little allegory for just how this
18:10
kind of surveillance is going to eventually
18:12
come to serve the rest of us as
18:15
consumers, that the convenience will ultimately be
18:17
something that we just take
18:19
for granted and we won't worry too
18:21
much about the price that we're paying
18:23
with our privacy. And I
18:25
think that is just the way it'll go. I
18:29
mean, I guess I can see the
18:31
appeal if you're opting into a luxury
18:33
service. I mean, that just feels very
18:35
different than being tracked while you're walking
18:37
down the street or going into a
18:39
store. I just don't think that
18:41
that is something a lot of us would sign up for. Yeah,
18:44
I would say that facial
18:46
recognition definitely has become
18:49
a controversial subject. And I think
18:51
that's made it difficult for brands,
18:54
for advertisers who might be able to
18:56
benefit from using it to target their
18:59
brands at people. They've
19:01
had to take a step back.
19:03
Yeah, there are a number of
19:05
retailers now, I think this is
19:07
what you're referring to, who are
19:09
facing lawsuits for surveillance, for gathering
19:11
data on their customers without consent.
19:15
But really, like for now, it just feels like
19:17
whack-a-mole because we have not figured out as
19:20
a society what we think is okay and
19:22
what isn't okay when it comes to face
19:24
tracking. Yes, that's right. Companies
19:26
who develop these kinds of systems need to be very
19:29
careful. There needs to be more ethical oversight of
19:31
how these systems are developed. And right now, that's
19:34
only going to come from regulation, which is a
19:36
couple of years away, yes, but also campaign groups.
19:38
And there are some really good civil liberties groups
19:41
in the United States and Europe who are really
19:43
keeping an eye on this and just helping keep
19:45
companies on their toes. If there
19:47
wasn't the amount of kind of upset that
19:49
had been created around facial recognition, I think
19:51
there'd be a lot more advertisers using it
19:53
right now. But because people have
19:55
really rung alarm bells about it, then I think
19:58
that's made companies really just take stock and
20:00
just sit back and just say, okay, let's just
20:02
be a little bit more cautious about how we
20:04
use this. And I think that's a really good
20:06
thing. Given that
20:08
there still aren't major regulations
20:10
out there around facial recognition,
20:12
I mean, are we just
20:14
at the point of no return here? It is,
20:16
I mean, we can't go back. There
20:19
definitely needs to be more laws and
20:21
regulation, but we have sort of gone
20:23
past trying to force
20:25
companies to design algorithms in
20:28
a way that are safe
20:30
and ethical because the algorithm is already
20:32
out there. But there is a law
20:35
coming from the European Union called the
20:38
AI Act, and it actually
20:40
bans all forms of facial
20:42
recognition for surveillance by
20:45
police unless it's for trying
20:47
to combat terrorism. So that's
20:49
a pretty blunt rule.
20:51
And I mean, that's going to be the first
20:53
kind of comprehensive
20:55
legislation around the use
20:58
of artificial intelligence algorithms.
21:00
I think the issue with it is that
21:03
it is so broad. It's not just about
21:05
facial recognition. It's about all forms of AI.
21:07
So whether that's recommendation systems
21:09
on social media or
21:13
facial recognition, it covers a lot. And so
21:15
enforcing it, I think, is going to be
21:17
difficult. You
21:20
know, one of the reasons all this
21:22
tracking is possible is because we have
21:25
accepted the idea that cameras are
21:28
in our pockets all the time. They're on
21:30
our doors. They are all over public spaces.
21:32
And we're okay with
21:34
it. We are okay, largely, with
21:37
being surveilled. Yeah, there's
21:39
something like 20 million homes in the
21:41
US have a video doorbell. The
21:44
thing about Ring doorbells that
21:47
I think is really interesting is that actually the studies
21:50
that have been done about just
21:52
how effective these cameras are in
21:55
reducing neighborhood crime show that
21:58
the evidence is really flimsy. There's
22:00
actually not much evidence that they
22:02
do reduce crime, but
22:04
the big impact is on human
22:06
sentiment. So the owners of these
22:08
cameras feel a greater sense of
22:10
security and a greater sense of
22:12
control. But then on the
22:14
other hand, we also collectively come to accept
22:17
that our behavior is being watched. So yeah,
22:19
take that how you will. I think
22:22
we are just an increasingly surveilled society.
22:24
And I think people are just like
22:27
slow boiled frogs. We're increasingly accepting of
22:29
it because it's just what's happening for better
22:31
or worse. That's
22:33
Parmi Olson. She's a tech columnist
22:36
at Bloomberg. And you can see
22:38
her full talk at ted.npr.org. And
22:43
earlier, we heard from super recognizer
22:45
Yenny Seo, who works as a
22:47
translator in Australia. On
22:50
the show today, what's in a face?
22:53
Often, to understand how technology
22:55
will change our lives, we
22:58
just need to watch a movie, like
23:00
this one released in 2021. We're
23:03
going to watch a movie.
23:07
So
23:10
The Champion was a film shot and
23:12
made in Poland. So everyone's speaking, basically,
23:14
Polish or German. This is Mike
23:16
Seymour. He's a researcher at the University
23:19
of Sydney and works in the film
23:21
industry in special effects. And it's
23:23
a great film. It's a true story about one
23:25
of the first prisoners of Auschwitz who was a
23:27
boxer. Terribly moving story, but
23:29
of course, only in Polish or German.
23:32
And usually, when there's a foreign film
23:34
that wants to break into the English-speaking
23:36
market, there are three
23:39
options. Dubbing it, so we
23:41
get somebody else to voice over a different
23:43
piece of dialogue. But of course, the lips
23:45
aren't right, so it looks kind of odd.
23:48
Or there's subtitles. Or we have
23:50
the new version of what we
23:52
call facial reenactment. Facial reenactment.
23:55
It's a new technique that Mike and his
23:57
team used on The Champion. We
24:00
got involved as part of a team to convert
24:02
the entire film to English. So
24:06
now if you were to watch the film
24:08
in English, every actor speaks as if they'd
24:10
been shot in English. So
24:17
we've replaced effectively the actors' faces
24:19
with their own faces, saying
24:22
the lines in English. He
24:24
looks more like a small rooster
24:26
plucked of its feathers than a
24:28
champion. Okay, so Mike, when
24:31
I watch this English version, it's seamless.
24:33
It's like their mouths, their faces,
24:35
everything looks like it was originally
24:38
shot this way in English. Is
24:41
this common in the industry? Is it normal? Well,
24:44
it's the first time anyone's done it in the world, but hopefully it's going to become normal. In
24:46
fact, in the film industry... Months
24:49
after filming, the actors re-recorded
24:51
all their lines in English
24:53
as cameras taped their voices
24:55
and facial expressions. And
24:58
then through a process called neural rendering
25:01
technology, their faces were replaced.
25:04
So it looks like the film was just shot
25:06
twice in two different languages. Come
25:09
on, Jim. Come on. The
25:14
film industry is always pioneering new
25:16
tech to trick our eye, to make
25:19
someone or something look real.
25:22
But over the last few years, Mike has
25:24
been developing ways to use these techniques in
25:27
our real lives. Could
25:29
we take this tech and just sort of use it
25:31
outside the film industry? That fascinated me. Where I say,
25:33
well, hey, I don't want to get
25:35
shaven and put on a suit for my important
25:37
meeting today. So I'll just flip a switch and
25:39
get digital makeup and I'll look a whole lot
25:41
better and a whole lot smarter. And
25:44
I would be able to say, speak in Korean when I
25:47
absolutely can't speak in Korean. And
25:49
that, we hope, would facilitate much
25:52
more genuine communications across cultural
25:54
divides. Wow. You're
25:57
saying that maybe one day if... I
26:00
have relatives all over the world who speak all
26:02
different languages, but maybe one day we could
26:04
do FaceTime and it would sound
26:06
as though I was speaking fluent
26:09
Swedish and they were speaking back to
26:11
me, well they can speak English, but
26:14
that we would hear each other's native tongue
26:17
and wouldn't know the difference. It would look as though I
26:19
could speak fluent Swedish, but I would still look
26:21
like myself.
26:24
Yes, there is a lot of
26:27
modern technology that's very sophisticated that
26:29
would, we think, benefit from being able to have
26:32
an extra layer of communication
26:34
that you get from face-to-face
26:37
interaction. We're kind of this nexus
26:39
point where that's possible. Mike
26:41
Seymour picks up from the TED stage. We're
26:43
interested in being able to see if
26:45
we can put a face on technology
26:47
because how would you react when
26:49
a computer reacts to you with a smile?
26:52
Would a six-year-old learn
26:54
maths better if there was
26:57
a six-year-old teacher on the screen? What
26:59
about if it was a slightly older version
27:01
of herself? Would a grandparent
27:04
having a cup of tea be more likely to
27:06
check in with a computer system? They
27:09
didn't have to log in and type. They
27:11
could just talk to a virtual agent that
27:14
actually was somebody from their past. This is
27:16
what we're excited to explore with digital humans.
27:19
Our ability to produce digital humans up
27:21
until recently has been quite limited, but
27:23
we're now seeing interactive digital humans starting
27:25
to appear. The doors are
27:27
opening. We are in an inflection
27:30
point. We have this perfect storm
27:32
of faster GPU graphics cards, new
27:35
artificial intelligence deep learning algorithms and
27:37
great advances in game engines. It's
27:40
an incredible combination of things coming
27:42
together. This tremendous nexus
27:44
of points is just providing us
27:47
with an extraordinary opportunity of taking
27:49
the things that we can do. The important
27:51
thing about this technology is that we can
27:53
now use this to get these
27:55
faces to work with us in real time.
27:58
In other words, and this is a really key point, the
28:00
faces that we're talking about can
28:02
talk, interact and see us. Okay,
28:08
putting faces on our technology. Tell
28:10
me more about how you see
28:12
this working and the reasons
28:14
why we would want it. Yeah,
28:17
I mean there are a lot. Already
28:19
in New Zealand there is automatic sign
28:21
language, so if somebody's speaking a digital
28:23
human signs for the deaf community, you
28:26
might have an assistant sitting in on a
28:28
Zoom call that you can ask
28:30
to help book future things, take
28:32
notes, do stuff. In aged care you
28:34
could have an assistant that
28:37
logs in with somebody each day and makes
28:39
sure that they're okay and are lucid and
28:41
they've taken their pills, not to replace a
28:44
healthcare worker but just simply to make
28:46
sure that they're okay and facilitate them staying
28:48
in the home longer. And
28:50
so in a world where we're saying, hey, you
28:53
know, even to use the phone, there are no
28:55
buttons now, you have to, you know, swipe up,
28:57
swipe left, do all this stuff. People are like,
28:59
I have trouble with that. And so
29:02
we could bring a face from their past
29:05
that would be the one that they interact
29:07
with that technology. You don't think that would be odd
29:09
to someone that if you said, well, this is your
29:11
sister, she's not actually your sister, she just
29:13
kind of looks like your sister and she's
29:15
gonna help you use your phone.
29:17
That, I don't know, that might freak me out. Yeah,
29:20
you know, you just touched on a
29:22
really interesting point. People when asked traditionally
29:24
say, I wouldn't like that. So if
29:26
you project ahead, you say, hey, would
29:28
you have a digital human tell you
29:30
what to do? No, no, no, absolutely not. That would
29:32
be freaky. And yet every time we do a lab
29:35
test, people don't react that way at all.
29:39
I looked up one of those services that
29:41
might be available in the near future. Hello,
29:44
this is Sol, Dr. Beanie's
29:46
assistant. Hello, Sol, this is Tyler.
29:49
I wanted to ask you about my recent surgery. In
29:51
the demo video, a man is home after
29:53
knee surgery and consults on
29:56
his laptop with his AI nurse, who
30:00
seems to know her stuff. The discharge
30:02
summary states that you should take the
30:04
pain medicine about 20 minutes before you
30:06
put on your headset for your virtual
30:08
reality meditation therapy. Anyway, Mike,
30:12
I don't know. I have to be honest, I
30:14
was a little unnerved by Sol. Sure.
30:17
What I would say is it was
30:19
her lack of authenticity that probably bothered
30:21
you, not the digital representation of
30:23
the face. Once you get to
30:25
a certain level of quality, you kind of pass
30:27
what we've referred to as the uncanny
30:30
valley. So you've now got something that looks
30:32
pretty darn good. It doesn't matter
30:34
whether you can tell it's real or not. That's
30:37
not the deciding factor. It's the authenticity
30:39
of the emotional kind of response that
30:42
matters. And that's the driving factor. And
30:44
so for us to succeed in those
30:47
cases, we really need to make sure
30:49
that it's the sort of the back
30:51
end behind the face that's delivering what's
30:53
wanted, not so much the face itself.
30:58
Speaking of the uncanny valley, you
31:01
did a demonstration on stage where you
31:03
showed off a very realistic
31:05
digital version of your head,
31:07
your face on a
31:09
screen that you could control.
31:12
Hi, I'm Mike. Well, kind of virtual Mike,
31:14
really. This is our digital human project, which
31:16
is a collaboration of a whole bunch of
31:18
people coming together to produce, well, a
31:20
virtual human. And not only a virtual human, but... I mean,
31:23
people, like right now, we can make digital avatars
31:25
of ourselves, but not like this. So
31:27
how do we make it this realistic? How hard was
31:29
that to build? Yeah,
31:31
I mean, we sort of are close. I mean, that one took
31:34
a lot of people. So we
31:36
scanned my face and I got my
31:38
face done in one of the most
31:41
high resolution facial scanning systems in the world.
31:43
It produced this super realistic version of my
31:45
head. Then I could puppeteer that
31:47
in real time or have it driven. So how
31:49
do we do it? So first
31:51
we scanned my face. This
31:53
allowed us to produce a very complex
31:55
digital avatar of my head or
31:58
a digital puppet. Then with
32:00
a camera mounted on a head rig, the computer
32:02
can actually read my face. An
32:05
advanced AI engine then
32:07
basically interprets that into
32:09
expressions. Now the
32:11
computer can tell the digital puppet what to do.
32:14
In effect, what's happening is it's the
32:17
computer telling the muscles in the digital
32:19
Mike how to smile, talk or do
32:21
things.
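The loop he's describing, camera image in, expression weights out, weights applied to the puppet's muscles, looks roughly like this per frame. Both the model and the rig calls are stand-ins, not the actual system built for the talk.

```python
# Sketch of the real-time reenactment loop: an AI engine turns the
# head-rig camera image into blendshape weights, which then drive the
# corresponding "muscles" of the digital puppet every frame.
EXPRESSIONS = ["jaw_open", "smile_left", "smile_right", "brow_raise"]

class StubExpressionModel:              # placeholder for the trained AI engine
    def predict(self, camera_image):
        return [0.2, 0.8, 0.8, 0.1]     # one 0..1 weight per expression

expression_model = StubExpressionModel()

def animate_frame(camera_image, puppet):
    weights = expression_model.predict(camera_image)
    for name, weight in zip(EXPRESSIONS, weights):
        puppet.set_blendshape(name, weight)   # hypothetical rig API
    return puppet.render()                    # the game engine redraws the head
```

At 30-plus frames a second, that loop is what makes the digital Mike feel live rather than pre-rendered.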
32:24
It makes me wonder if we might get to
32:26
a point where kids think, well, I would much
32:29
rather deal with my extremely
32:32
realistic-looking tutor on
32:34
my laptop who responds to me
32:36
but who doesn't actually give
32:39
me a hard time and won't be offended if I
32:41
tell it to shut up. How
32:43
do we make
32:45
sure that people don't choose these
32:47
artificially intelligent agents over humans?
32:51
Brilliant question and I wish I had a definitive
32:53
answer. I can only give you my hope. Imagine
32:56
I'm a vet, I come back,
32:59
but I'm an 18-year-old guy. I've
33:01
experienced some horrendous experiences in conflict.
33:04
I'm now suffering from all sorts of sexual
33:07
dysfunction. I cry at night. I have
33:09
things I'm really embarrassed about and ashamed
33:12
of. I kind of want help
33:14
but I don't want to have to sit there and tell
33:16
a doctor that. I'd actually like my doctor to know all
33:18
that so that they can help me. If
33:20
there are ways where you can communicate that
33:23
to, effectively, a digital nurse, a
33:25
digital doctor substitute so that the system can
33:27
know it but you don't have to face
33:29
them and look them in the eye and
33:32
say, you know, I have sexual
33:34
dysfunction, but you can then
33:36
get treatment and help and the
33:38
system knows and can look after
33:40
you, that's a tremendous benefit. So
33:42
hopefully for that generation, there'll be
33:45
tools that appear in their everyday life
33:47
that just make it a bit easier
33:49
and reconnect them with people,
33:52
not take them away. And I'd like
33:54
to think that if I
33:56
had teenagers who were in
33:58
distress and... teenagers that were struggling,
34:01
if there were tools that helped them,
34:03
that that would do just that, it
34:05
would help them. It wouldn't replace human
34:08
contact. In
34:12
a minute, the ethical dilemmas with
34:14
giving our technology a face.
34:17
I'm Manoush Zomorodi, and you're listening to
34:19
the TED Radio Hour from NPR. Stick
34:21
with us. This
34:31
message comes from NPR sponsor
34:34
Viking. Committed to exploring the
34:36
world in comfort. Journey through
34:38
the heart of Europe on
34:40
an elegant Viking longship with
34:42
thoughtful service, destination-focused dining, and
34:44
cultural enrichment. On board and
34:46
on shore. And every
34:48
Viking voyage is all inclusive, with
34:50
no children and no casinos. Discover
34:53
more at viking.com. This
34:56
episode's sponsor is PwC, which offers
34:58
the following message. A robot may
35:00
not be coming for your job,
35:02
but competitors are coming for your
35:05
market share. PwC pairs the right
35:07
tech with the right solutions to
35:09
help you gain a competitive edge,
35:11
reimagine operations from the cloud, fuel
35:13
innovation with responsible AI, and detect
35:16
risks before they become headlines. Human-led
35:18
and tech-powered, it's all part of the
35:21
new equation from PwC. This
35:24
message comes from NPR sponsor CarMax. CarMax
35:26
is putting peace of mind back in car
35:28
shopping by putting you in the driver's
35:30
seat to find a ride that's right
35:32
for you. Because CarMax believes you shouldn't
35:34
just settle for a car, you should love
35:37
your car. That's why every car they
35:39
sell has CarMax certified quality, so you
35:41
can be sure with upfront pricing that's
35:43
the same for every customer. Don't settle.
35:45
Find love at First Drive. Start
35:47
shopping now at carmax.com. CarMax.
35:50
The way car buying should be. This
35:53
message comes from NPR sponsor Planet
35:55
Oat. While some podcast topics
35:57
can be complex and pretty heavy.
36:00
Planet Oat Oat Milk is an
36:02
uncomplicated no-brainer. It's rich, creamy,
36:04
and an excellent source of
36:06
calcium with vitamins A and
36:09
D. Also, Planet Oat's
36:11
unsweetened varieties have zero grams
36:13
of sugar. It's great in coffee,
36:15
cereal, smoothies, you name it. So, next
36:17
time you're at the grocery store, save
36:20
the overthinking for the podcast and
36:22
reach for the one that has
36:24
it all. Planet Oat Oat Milk
36:27
or visit planetoat.com for more. It's
36:31
the TED Radio Hour from NPR.
36:33
I'm Manoush Zomorodi. And on the
36:35
show today, What's in a Face?
36:38
We were just hearing from Mike
36:40
Seymour, a film industry veteran who
36:42
now wants our virtual helpers to
36:45
look and act more human.
36:47
Hi, I'm Mike. Well, kind of
36:49
virtual Mike, really. This is our
36:51
digital human project, which is a
36:53
virtual human, but one rendered in
36:55
real time. Puppeteered or driven in
36:57
real time, rendered in real time,
37:00
Mike says that technology with a
37:02
face can better interact with us.
37:04
Talk to patients about how they're
37:06
feeling. Ask students where
37:08
they're struggling in algebra. Coach
37:10
brain injury survivors to be
37:13
more self-sufficient. So
37:16
many different types of technology, so many different
37:18
use case scenarios, as you've mentioned. I expect
37:21
we're going to see this area
37:23
explode in the next few years.
37:27
But along the way, wow, do
37:29
we have a lot of ethical
37:32
dilemmas to sort out. I mean,
37:34
you're just reminding me, we've talked about this on
37:36
the show a lot, the deep fakes.
37:39
You know, there's the famous case
37:41
of seeing President Obama giving
37:44
a speech that he never gave. Where
37:47
do you see some of the pitfalls that we need to watch
37:49
out for? I completely agree
37:51
with you. Some of the applications of this
37:54
technology by what I would describe
37:56
as bad actors are appalling and just,
37:58
you know, absolutely indefensible. It's
38:00
a really interesting sort of fundamental ethical question.
38:03
Is the technology good or evil or is
38:05
it the use of it and the application
38:07
of it? And I
38:09
can say this, for me personally, you can
38:11
use steel to make ambulances or tanks. I'm
38:14
in the business of trying to see if we
38:16
can't use it to make lots of good ambulances.
38:19
I know some people are going to make tanks, but
38:21
that's something I don't have any control over. But
38:24
I do feel that it's going to happen. The
38:26
best line of defense we have against
38:29
the deception that can be done
38:31
by this technology is an informed
38:33
public. If you see
38:35
something that's highly improbable, you're
38:37
going to say, hang on a second,
38:39
that's probably been faked or not real.
38:41
Or you'll dig in to try and
38:43
discover its authenticity. There's
38:47
a lot of ways we can produce inaccurate
38:49
material, but an informed public
38:52
that is aware of what's
38:54
going on, that understands what
38:56
the sort of limits of
38:58
technology are and
39:00
where it's going, is vital to being
39:02
able to do this sort of stuff.
39:04
But are we just barreling towards a
39:06
future where your identity gets stolen, but
39:08
it's not just your social security number,
39:11
it's your face that
39:13
can make it look like
39:15
you're handing over your bank account number to someone
39:17
on a Zoom call? Or
39:20
is my imagination way ahead
39:22
of the technology? Gosh,
39:25
I mean, people will deceive people with
39:27
this technology. But yeah, when
39:29
I'm talking about these AIs, they work very well
39:31
when you've got a limited amount of stuff that
39:34
you're asking them to do. So
39:36
if I was having an agent that
39:38
was helping you as a maths tutor,
39:40
and it was discussing maths and explaining
39:42
mathematical concepts, that could be completely plausible
39:45
and look photorealistic and wonderful.
39:48
But if I asked my maths
39:50
assistant, what does it mean to
39:52
understand existential philosophy in France, it
39:54
would completely blank out. So
39:57
we're not talking about a general
39:59
intelligence. People quickly extrapolate
40:01
to that, but we are so far
40:03
away from that. General AI
40:06
intelligence is a long way off, but
40:08
as I say, these plausible,
40:11
realistic, domain-specific applications in
40:14
health, in aged care,
40:16
in all of these parts
40:18
of everyday life, completely
40:20
plausible and extremely likely to
40:22
happen because we just love
40:25
faces. We love
40:27
face-to-face communication. We
40:29
love seeing people face-to-face. Humanity
40:32
just likes faces. We're
40:34
talking about just putting a face
40:36
on technology so that it's a
40:39
bit more friendly, a bit more empathetic,
40:41
a bit more engaging, that has an
40:43
emotional response and therefore we find it
40:46
to be a better,
40:48
more pleasurable experience. That's
40:51
Mike Seymour. He's a researcher and
40:53
academic at the University of Sydney.
40:55
You can see his talk at ted.com.
40:59
On the show today, What's in
41:01
a Face?, how our faces are
41:03
captured, where that data ends up
41:05
and who has access to it.
41:08
Do you use Instagram? Do you
41:11
let Google Maps track you? Do
41:13
you, I don't know, let open
41:15
your iPhone with your face? Oh
41:18
God. I'm pretty privacy conscious,
41:20
as you might imagine. I don't
41:23
allow Google to store my location. I
41:26
don't have face unlock turned on on my phone.
41:29
And that is in part
41:31
because I'm just aware of how sensitive
41:33
the data is and I feel so
41:35
self-conscious about it. This is Alison Killing.
41:38
She's a journalist who, ironically,
41:40
uses all sorts of data
41:42
that's available online to track
41:44
the actions of authoritarian governments.
41:47
So all of the digital traces that we
41:49
leave behind on the internet, like how can
41:51
we use those to investigate? And I'm mostly
41:53
focused on human rights. In
41:55
2021, Alison won the Pulitzer Prize
41:57
for her investigations into China. A
42:00
place where people's faces and
42:02
movements are constantly being watched.
42:05
There's a real effort to cover
42:07
cities in a way that they
42:10
are able to obtain as much data
42:12
as possible. So placing cameras in high
42:14
traffic areas, so for example at the
42:17
entrance to a neighborhood, where they can
42:19
then say, okay, we know everybody who
42:21
is in this neighborhood now, and we know
42:23
whether they're in or when they've left. China
42:26
has the world's largest surveillance network,
42:29
and cameras watch over residential
42:31
complexes, office buildings, train stations,
42:34
shopping malls. So these
42:36
very high traffic places where they can
42:38
then say, okay, these are
42:40
the people who are in this area, so
42:42
that they can then control that area. They're
42:46
collecting a lot of data, and there's huge ambition
42:48
about the things that they would like to do
42:50
with it. A lot
42:52
of work has gone into the processing
42:54
tools at the back end of this
42:56
software to identify people by gender and
42:58
age, and then controversially also by ethnicity.
43:01
And as you may know, the Chinese government has
43:03
been tracking one large group of people in particular,
43:06
the Uyghurs, a Muslim ethnic
43:09
minority in a western region called
43:11
Xinjiang. Yeah, there's been
43:13
a lot of discrimination. There's been
43:16
intermittent crackdowns on the practice of
43:18
Islam, but then in 2009 there
43:20
were two Uyghur workers killed,
43:23
and that led to protests
43:26
which turned violent, and about 200
43:28
people were killed. And
43:31
this was going to be the start as
43:33
well of the Chinese authorities starting to
43:35
crack down on the region and seeing it
43:38
as a very violent place, seeing it as a
43:40
site of terrorism. The
43:43
incident ushered in an era of
43:45
Chinese control of the Uyghurs, using
43:48
all kinds of tactics. So I think
43:52
from 2013, 2014, we saw the start of
43:54
this real campaign of oppression in Xinjiang with
43:58
the installation of this incredibly... invasive
44:01
surveillance state. And
44:03
the New York Times has done a
44:05
lot of investigation on this topic where
44:07
they actually found documents
44:09
from tech companies which
44:12
were boasting that they could identify
44:14
Uyghurs using facial recognition software. So
44:17
one of the first things that we saw
44:19
was the creation of this network of
44:22
detention camps. You
44:24
know, Alison, we were just talking to
44:26
Parmi Olson, she's a tech reporter, about how
44:29
people view facial recognition in the
44:31
Western world. And it always, it
44:33
often feels like what if scenarios.
44:35
But here in China, we are
44:37
talking about the worst case scenario come
44:39
true, with proof
44:41
that a minority are being tracked
44:43
and rounded up because you could
44:45
see the camps on satellite imagery.
44:48
Yeah, in the satellite imagery, we saw
44:50
them starting to appear in late 2016. And
44:53
these stories started to emerge that hundreds
44:55
of thousands of people have been disappeared
44:57
into these camps. And nobody
45:00
knew where they were. In the
45:02
far west of China, evidence is building
45:04
that a monstrous crime has taken
45:06
place. Uyghurs
45:11
are now being rounded up by the hundreds
45:13
of thousands. There are many accounts of people
45:15
who have had their relatives disappear into the
45:17
camps. And we don't really know what's happening
45:19
to them. Alison
45:21
Killing picks up the story from the TED
45:23
stage. I got involved
45:26
in investigating Xinjiang in the summer of 2018,
45:29
when I met Megha Rajagopalan, an American
45:31
journalist who had been working in China
45:33
for several years. Over the
45:35
past few years, China has been carrying
45:37
out a campaign of forcible assimilation. And
45:40
several nations have described it as a
45:42
genocide. It's estimated that
45:44
over a million people have been disappeared
45:46
into detention camps. And while
45:49
the Chinese government claims that these
45:51
are part of a benign programme
45:53
of re-education, dozens of former detainees
45:55
describe being tortured and abused and
45:57
women being forcibly sterilised. And
46:00
yet, for a long time, we lacked
46:02
information about what was happening in Xinjiang
46:05
because the Chinese government controls the internet
46:07
tightly and restricts journalists' work in the
46:09
region. Journalists would
46:11
be followed or detained, and the
46:13
authorities occasionally even went so far
46:15
as to set up fake roadworks
46:17
or stage car crashes to prevent
46:19
access to certain roads. Local
46:22
people who did speak to journalists faced the risk
46:24
of being sent to a detention camp for doing
46:26
so. Megha had
46:28
been the first journalist to visit one of the
46:31
camps, but shortly after publishing
46:33
her article, the Chinese authorities declined to
46:35
renew her visa and she had to
46:37
leave. Other journalists had
46:39
managed to visit the handful of the camps,
46:42
but they still represented a fraction of what
46:44
we believed was out there, and no one
46:46
knew where the others were. But
46:48
Megha was keen to find the rest. She
46:51
just needed to find a way to work
46:53
effectively from outside China. And
46:56
so this is where you come into the story,
46:58
Alison, because you and Megha decided
47:00
to team up. Yes.
47:02
So I met Megha at this
47:05
workshop from the summer of 2018.
47:07
I've been doing a lot of
47:09
cartography work and satellite imagery. And
47:13
we got talking and we realised that we
47:16
could maybe combine these skills to be able
47:18
to find these camps. You know, the
47:20
way that Megha had found this first camp was through
47:22
satellite imagery. And so she had
47:25
the idea that that could be
47:27
a good way to find the rest. But
47:30
it's still... Like,
47:33
Xinjiang is really massive,
47:35
so you can't just like scour
47:37
all of the satellite imagery of the region. We
47:40
needed to work out where to look. There
47:44
was no street-level imagery, but as I zoomed
47:46
in on the satellite images, this weird thing
47:48
happened. A light grey square
47:51
suddenly appeared above the location of the camp,
47:53
and then disappeared just as quickly as I
47:55
zoomed in further. It
47:57
was a bit like the map wasn't loading
48:00
properly, but then I zoomed out and in again,
48:02
only for the same thing to happen. I
48:04
realized it couldn't be a problem with the map
48:06
loading because the tiles would have been in the
48:09
browser's cache. And when I found
48:11
the same thing happening at the other locations we knew
48:13
to be camps, I realized that we
48:15
had a technique we could use to find the rest
48:17
of the network. It's
48:21
quite rare for maps and satellite images
48:23
to have these blank spots because blank
48:25
areas tend to draw attention to themselves.
48:28
But here we got lucky. Obscuring
48:31
the camps had inadvertently revealed
48:33
all of their locations. We
48:44
worked with developer Christo Buschek, who
48:46
specializes in documenting human rights issues
48:48
and building tools for open source
48:50
researchers, to map the masked tile
48:53
locations. We had to
48:55
work quickly and secretly to map the
48:57
masked tiles before anyone found out what
48:59
we were doing and remove them because
49:01
our investigation relied on access to that
49:03
information. The idea was that
49:06
we could go and look at the masked
49:08
tile locations and then look at that same
49:10
image at that same location in other unaltered
49:12
satellite imagery and see what was there.
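The core of the technique can be sketched in a few lines: convert a candidate coordinate to standard Web Mercator tile indices, download that tile, and flag it if it is a single flat color, since real imagery is never a uniform light-gray square. The tile URL below is a placeholder, not the map service the investigation actually queried.

```python
# Sketch of the masked-tile check. TILE_URL is a placeholder endpoint.
import io
import math
import requests
from PIL import Image

TILE_URL = "https://tiles.example.com/{z}/{x}/{y}.png"   # placeholder

def tile_indices(lat, lon, zoom):
    """Standard Web Mercator (slippy map) lat/lon -> tile x, y."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1 - math.log(math.tan(lat_r) + 1 / math.cos(lat_r)) / math.pi) / 2 * n)
    return x, y

def looks_masked(lat, lon, zoom, max_colors=5):
    x, y = tile_indices(lat, lon, zoom)
    resp = requests.get(TILE_URL.format(z=zoom, x=x, y=y), timeout=10)
    tile = Image.open(io.BytesIO(resp.content)).convert("RGB")
    colors = tile.getcolors(maxcolors=tile.width * tile.height)
    return len(colors) <= max_colors   # a blanked tile is (nearly) one flat color
```

Run over a grid of candidate coordinates, a check like this turns the censorship itself into a map of what was being hidden.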
49:15
Zooming in on the satellite imagery, we can
49:17
see the barbed wire in the courtyards
49:19
that creates exercise pens for the detainees
49:21
adjacent to the buildings. In
49:24
other images, we can even see people
49:27
all wearing red uniforms lined up in
49:29
the courtyard. These features
49:31
could help us decide whether a location was a
49:33
camp or not. As
49:36
we investigated further, we realized that
49:38
the camps program had evolved away
49:40
from the early days of makeshift
49:42
camps in former schools and hospitals
49:44
and it became more permanent that
49:46
the camps were now larger, higher
49:48
security and purpose built. This
49:51
is the largest camp that we know of. It's in
49:53
Dabancheng. The complex is
49:55
two miles long and it would cover a
49:58
quarter of New York Central Park. In
50:01
the satellite images, we can see the thick
50:03
perimeter walls, the guard towers, and these blue-roofed
50:05
buildings which we believe to be factories. We
50:08
estimate that this complex can hold over 40,000
50:11
people without overcrowding. In
50:14
total, we found 348 locations
50:17
bearing the hallmarks of camps and prisons,
50:19
and we believe that this is close
50:21
to being the full network. We
50:23
estimate that these facilities have been built
50:26
to hold more than a million people.
50:29
That's enough space to detain
50:31
one in every 25 of Xinjiang's
50:33
residents. Wow,
50:36
your one little
50:39
lucky revelation finding that quirk
50:41
on the digital map led
50:43
to a horrifying and
50:45
huge discovery. And
50:48
how did China respond to the
50:50
allegations? So at the
50:52
beginning, when the rumors were
50:55
first emerging of all
50:57
of these people disappearing into camps, there was
50:59
denial on the part of the Chinese government that
51:01
this was happening. In
51:04
mid-2018, the UN had
51:06
made a statement about what was happening in
51:08
Xinjiang and raising concerns and saying
51:10
it was one of the most urgent human
51:12
rights crises in the world at that time.
51:15
And the Chinese government was then under pressure to respond to
51:17
that. And what they started
51:20
to say was, well, these places do
51:22
exist, but they're education and vocational schools.
51:24
People are there voluntarily. They're learning skills
51:26
which will allow them to get higher
51:28
paid factory jobs. That
51:30
wasn't true. People were taken there
51:32
forcibly. And in fact, the people
51:34
who were initially targeted to be sent to
51:37
the camps were the most highly educated people
51:39
in those communities. So,
51:41
you know, the Chinese government's claims about
51:43
these being vocational schools just were
51:45
incredible. So where
51:47
do things stand now in terms of what
51:50
you can do with this knowledge that you
51:52
have accumulated other than share
51:54
it with us? Yeah. And
51:56
one of the big things that has
51:59
been done: we've
52:01
seen sanctions on key individuals
52:04
within the Chinese Communist Party. We've
52:07
also seen sanctions on goods coming out of
52:09
Xinjiang. The Uyghur Forced Labor
52:11
Prevention Act came into force earlier this year,
52:14
and that means a ban on any products
52:16
coming out of Xinjiang, because it's
52:19
very, very likely that goods coming out of Xinjiang
52:21
have involved forced labor, and it's very difficult to
52:23
prove that they haven't. And
52:26
so that has also been a big impact
52:28
that we've seen. With
52:31
social media data and satellite imagery, we
52:33
can provide evidence of human rights abuses
52:36
in a way that wasn't possible before.
52:39
We can move beyond looking at individual
52:41
instances of human rights violations to show
52:43
the scale of what's happened. We
52:46
can corroborate the testimony of eyewitnesses
52:48
and provide further proof of their
52:51
stories. We can build
52:53
a more detailed picture of what's happening
52:55
to inform policymakers or to provide evidence
52:57
that can be presented in court. With
53:00
open source data, we can provide the
53:02
evidence needed for accountability. And
53:05
then, hopefully, action. Thank
53:07
you. That's Alison Killing. She's
53:14
an investigative journalist and an
53:16
architect. In 2021, she won
53:18
the Pulitzer Prize for her reporting. You
53:21
can see her full talk at
53:24
ted.com. Thank
53:30
you so much for listening to our show
53:32
this week. What's in a Face?
53:35
This episode was produced by Andrea
53:38
Gutierrez, James Delahoussaye, and
53:40
Katie Monteleone. It was edited by
53:42
Sanaz Meshkinpour, James Delahoussaye,
53:44
Rachel Faulkner-White, and me. Our
53:46
production staff at NPR also includes Matthew
53:49
Cloutier, Fiona Geiran, and Katherine Sypher.
53:52
Our theme music was written by Ramtin
53:54
Arablouei. Our audio engineer was Kwesi Lee.
53:58
Research support came from Cecile Davis-Davies Vasquez.
54:00
Our partners at TED are
54:03
Chris Anderson, Colin Helms, Anna
54:05
Phelan, Michelle Quint, Jimmy Gutierrez,
54:07
and Daniela Balarezo. I'm
54:10
Manoush Zomorodi, and you've been listening to the
54:12
TED Radio Hour from NPR. This
54:20
message comes from NPR sponsor
54:22
REI Co-op. REI has gear,
54:24
clothing, classes, and advice for
54:26
camping and glamping, biking and
54:29
hiking, axing and snaxing.
54:31
Visit your local REI Co-op or
54:33
rei.com for the million and one
54:36
ways to opt outside. This
54:39
message comes from NPR sponsor Capella
54:41
University. With Capella's FlexPath learning format,
54:43
you can earn your degree online
54:45
at your own pace and get
54:48
support from people who care about
54:50
your success. Imagine your future differently
54:52
at capella.edu. On
54:55
NPR's Throughline, it's difficult
54:57
to imagine an America without tipping
54:59
in restaurants or wherever else. When
55:02
tipping first came, it was the most un-American thing
55:04
to tip and now it's
55:06
the most un-American thing to take it
55:09
away. The long, complicated legacy of tipping
55:11
in America. Find NPR's
55:14
Throughline wherever you get your
55:16
podcasts.