Episode Transcript
0:00
Reveal is brought to you by Progressive. Progressive
0:03
helps you compare direct auto rates from a variety
0:05
of companies so you can find a great one, even
0:08
if it's not with them. Quote today at Progressive.com
0:11
to find a rate that works with your budget. Progressive
0:14
Casualty Insurance Company and Affiliates. Comparison
0:16
rates not available in all states or situations.
0:24
From the Center for Investigative Reporting and
0:26
PRX, this is Reveal.
0:29
I'm Al Letson. As Russia's
0:31
war in Ukraine moves into a
0:33
second year, we're hearing more
0:36
and more about attacks carried
0:38
out by drones, some traveling
0:40
hundreds of miles to reach their target.
0:43
Russia accused Ukraine of deadly drone
0:46
attacks on two air bases yesterday.
0:49
Both sides are using drones in the battlefield,
0:52
and typically they're controlled by soldiers on
0:54
the ground. But many of the drones
0:56
take some actions on their own. They're
0:59
semi-autonomous. And earlier
1:01
this year, a Ukrainian government official
1:04
told the Associated Press that
1:06
fully autonomous drones with artificial
1:08
intelligence are an inevitable
1:10
next step. Back
1:13
in 2021, we did a show about AI
1:16
and the future of warfare. And
1:18
given
1:18
all the stunning advances in AI
1:20
today, we're bringing it back. Our
1:23
story begins with an earlier milestone
1:26
in autonomous weapon systems. It
1:28
took place nearly 12 years ago,
1:30
in September 2011, during the war in Libya. NATO's
1:35
air war against Muammar Gaddafi is
1:37
in its sixth month. Rebels
1:41
are gaining the upper hand. Gaddafi
1:44
is on the run. His days are numbered,
1:46
but his forces are not folding.
1:48
The battle for Libya is not
1:51
over yet, with the heaviest combat
1:53
for days between anti-Gaddafi forces
1:56
and supporters of the fugitive colonel.
1:59
Jets pressing down. There's
2:01
word that troops loyal to Gaddafi are
2:04
bombing civilians. British pilots
2:06
are getting reports that about 400 miles
2:09
south of Tripoli, there's a humanitarian
2:11
crisis unfolding.
2:13
Zachary Fryer-Biggs was then a national
2:16
security reporter following this story
2:18
for the Center for Public Integrity. He's
2:20
now managing editor at Military.com.
2:23
A bunch of tanks and artillery are
2:26
outside of a small town, and they're lobbing
2:29
all kinds of bombs and munitions into
2:31
the town. These British
2:33
pilots hear about this, and
2:35
they see an opportunity.
2:39
They see an opportunity to
2:41
protect civilians under attack, and
2:44
to use a weapon in a completely
2:46
new way.
2:48
The pilots head south.
2:50
They're flying tornado jets equipped
2:52
with an armor-piercing missile called
2:55
the Brimstone. The British pilots
2:57
have permission to use this Brimstone
2:59
missile in a way it's never been used
3:01
in combat before. This is
3:03
the first time that
3:05
autonomous decision-making
3:08
is being used for missiles to
3:10
decide who to kill.
3:13
Autonomous decision-making. Up
3:15
until now, pilots have always manually
3:18
selected the missile's targets, but now
3:20
the Brimstone will pick its own prey.
3:24
Britain and NATO have kept quiet about the mission, so we
3:26
don't know why commanders chose to make this call, but
3:31
we know there's low risk to civilians. The
3:33
Libyan forces attacking them are positioned
3:36
miles away in the open desert. The
3:39
pilots flying overhead
3:41
pull the trigger, and 22 missiles separate, and
3:45
once they're launched, the missiles
3:47
start to make a lot of decisions.
3:49
Heading to the Earth at supersonic
3:51
speed, the missiles use radar
3:53
to scan an area preset by the pilots,
3:56
the kill box. They
3:58
look in the area, and they try to find
4:01
something that looks like tanks
4:03
or artillery or the other sorts of targets
4:05
they know about. And then once they figure out what
4:07
targets are there, the 22 missiles
4:10
decide who's going to strike what.
4:12
Grainy cockpit video shows the
4:14
Brimstones pulverizing half
4:16
a dozen Libyan tanks. This
4:19
strike doesn't end the combat
4:21
or the war in Libya. It
4:23
doesn't remove Gaddafi. It's
4:25
a couple vehicles being struck in a desert, but
4:28
it means an enormous amount
4:31
for what the human role in warfare
4:33
is going to be in the future.
4:36
The US and other countries
4:38
already have missile systems that operate
4:41
autonomously. They're designed to
4:43
make split-second decisions to
4:45
defend military bases and ships.
4:48
What hasn't been the case is letting
4:51
computers and machines go
4:53
on offense. That's what's crucial
4:56
about the Libya mission. The
4:58
missiles themselves chose
5:00
what to hit and by extension who
5:03
to kill. In this case, a
5:05
group of Libyan soldiers.
5:08
Today, the Pentagon is moving
5:10
deeper in this direction. In
5:12
recent years, it has invested billions
5:15
of dollars into research on
5:17
artificial intelligence, a key
5:19
ingredient in new autonomous weapon
5:22
systems.
5:23
Zach says, big picture, the US
5:25
doesn't want to give up its global dominance, especially
5:28
with Russia at war with Ukraine and
5:30
China threatening Taiwan.
5:32
US military planners are scared
5:34
that China and Russia are developing
5:38
artificially intelligent systems that
5:40
are going to be able to make decisions
5:42
so fast that if the US
5:44
is dependent on human beings making
5:46
decisions that we're
5:49
going to lose. And so they are
5:51
sinking billions into
5:54
some of these developing technologies that are
5:56
primarily coming out of Silicon Valley to
5:58
make their weapons smarter
5:59
and faster. Smarter,
6:02
faster. America's military
6:04
leaders call it algorithmic warfare.
6:07
I call it ridiculously scary.
6:09
Like
6:09
haven't we seen this movie before?
6:12
Yeah,
6:14
I love science fiction. So it's easy for
6:16
me to think about a distant world, one
6:18
created in Hollywood, where humans
6:21
hand over total control of their weapons
6:23
to machines. You know, machines
6:26
with no emotions that make correct
6:28
decisions every time. I mean, how
6:30
could anything go wrong? That's the reason everything's
6:32
falling apart. Skynet has become self-aware.
6:35
In one hour it will initiate a massive nuclear
6:38
attack on its enemy. What enemy?
6:41
Us.
6:41
Okay, so let's leave aside
6:43
Terminator. So what's the real picture today?
6:46
Piecing it together is hard since most of
6:48
these weapons programs are highly
6:50
classified.
6:52
Zach has spent three years investigating
6:54
how artificial intelligence is
6:57
already transforming warfare and
6:59
perhaps our own moral code. You
7:02
have to have confidence that the machines
7:04
are making good, one might say moral
7:06
decisions. And that's hard to have
7:08
confidence in a machine to do that. So
7:11
a lot of the concern from
7:13
the human rights community has focused
7:16
on this idea of if you take a person
7:18
out of this decision, can a machine
7:20
really make a moral decision about
7:22
ending a life, which is what we're talking about here.
7:25
Zach picks up the story with Reveal's Michael Montgomery.
7:28
They're on their way to America's oldest military
7:31
academy,
7:32
West Point, where a new generation
7:34
of military leaders is preparing
7:36
for a new type of warfare. Coming
7:40
into view, sort of towering
7:43
over us. Oh my goodness. Wow. Look
7:46
at that. It's a gray stone building.
7:49
Zach and I are going to Thayer Hall, the
7:51
main academic building at West Point. It
7:54
overlooks the Hudson River about 60 miles
7:56
north of New York City. But yeah, you've got,
7:59
you know, the
7:59
gray stone, you have the
8:02
sort of carvings on the side here that look
8:04
like gargoyles. They
8:06
really decked out these buildings
8:08
in proper sort of Gothic attire.
8:10
Hogwarts on the Hudson, maybe. More
8:13
than a century ago, this building housed
8:15
a huge equestrian hall where cavalry
8:17
troops trained for wars of the future.
8:20
Today, instead of horses, it's weapons
8:22
that can think for themselves.
8:26
We make our way
8:29
down to the basement to West
8:31
Point's Robotics Research Center. Sophomore
8:34
cadets dressed in camouflage are preparing
8:36
for a mock battle. They're
8:38
gathered around two small square pens
8:40
about two feet high. They call
8:43
them the arenas. Inside
8:45
each arena is a six-inch tall
8:47
robotic tank. It's got rubber treads, a
8:49
video camera that swivels. That's the
8:51
high-pitched sound you're hearing. And
8:55
a small processor. Mounted
8:57
on the front of the tank is a spear, like
8:59
an ice pick but sharper. And
9:01
scattered in the arenas are two dozen balloons,
9:03
red, blue, orange, and green, all
9:06
part of the simulation. All right, so we're
9:08
going to get started this morning. Major
9:10
Scott Parsons co-leads the class. He's
9:13
an ethics and philosophy professor. As
9:15
you get your robot, grab this from the empty,
9:18
grab your robot. All right, so one group,
9:20
one member of the group, come on down from each group. The
9:22
cadets step up to face the challenge.
9:25
Their robot tanks need to be programmed
9:27
to attack the red balloons. They're the
9:29
enemy. At the same time, the tanks have to
9:31
avoid popping the green, orange, and blue
9:33
balloons. They represent civilians,
9:35
fellow soldiers, and allies. These
9:38
cadets are learning how to code these machines.
9:40
But that's a fraction of what they're doing. The
9:43
big discussion here is what
9:45
it means to use an AI
9:47
system in war. Major
9:50
Parsons says this exercise forces
9:52
cadets to think about the ethics of
9:54
using autonomous weapons in the battlefield.
9:59
You don't want it to be too aggressive, because if you don't program
10:02
it correctly, the orange balloons look an awful
10:04
lot like red balloons, right, because there's a lot of times we're
10:06
in war and there's people that look like the
10:08
enemy, but they're not the enemy, and so we
10:10
shoot the wrong people. The
10:12
cadets release their tanks, and they
10:15
come alive. But things
10:17
don't quite go as planned. You
10:23
might say the fog of war descends
10:26
on the arenas. No longer under human
10:28
control, one tank does pirouettes
10:29
attacking invisible enemies.
10:32
The other tank is going after the green balloons, the
10:35
civilians. It's
10:37
the sci-fi scenario of computers
10:40
running amok. You're
10:42
being brought up on war crimes. I'm
10:44
taking you to The Hague. So
10:46
we had a couple of innocent civilians
10:48
on the battlefields. They just happened to resemble
10:51
the bad guys, and this
10:54
robot thought, ah, why not? And it took them
10:56
all out. Cadet Isabella Regine's
10:58
tank is just spinning
10:59
around and making random charges.
11:02
It's not that aggressive. Just puncture
11:04
it. Finally, it plunges
11:07
the spear into a blue balloon. Blues
11:11
are friendlies, so, yeah,
11:14
we have to deliberate. Despite
11:17
all the joking amid popping balloons, Major
11:19
Parsons says cadets understand that
11:21
the lesson is deadly serious.
11:24
Our job when we fight wars is to kill
11:26
other people. Are we doing it the right way?
11:29
Are we discriminating and killing the people we should
11:31
be and discriminating and not killing the people we shouldn't be?
11:34
And that's what we want the cadets to have a long, hard think about. The
11:37
beautiful thing about artificial intelligence, right, is you
11:39
can really refine and program it to
11:41
a very, very fine degree so
11:44
that you might actually be more
11:45
proportionate than a human being. When
11:57
the first round is over, Isabella and her team
11:59
retreat
11:59
to another classroom to find a way
12:02
to tame their tank. I just
12:04
want to see how it works under pressure. I'm a law
12:06
major, so this is something very out of
12:08
my element, I guess.
12:09
They're punching code into
12:12
a laptop that they've connected to the tank's
12:14
processor. This kind of coding is
12:16
new to Isabella and many of the other
12:18
cadets. But thinking through the legal
12:21
and tactical implications is not.
12:23
It's going to be interesting to see how it's going to impact
12:26
our leadership skills, you know. We might
12:28
not even be in charge of soldiers anymore.
12:29
And when weapons act
12:32
for themselves, it's not just who's in
12:34
charge, but also who's responsible
12:36
for the decisions they make.
12:38
We talked about that in this class as well,
12:40
so it's super interesting.
12:44
Robotics instructor Prathik Manjuna joins
12:47
Isabella's team at a large work table covered
12:49
with wires, batteries, and small computer
12:51
parts. We've given them an exhaustive code,
12:54
and they only have to change a few parameters for the robot's
12:56
behavior to change. And the parameters they're
12:58
trying to adjust are typical
13:00
for a lethal autonomous weapon
13:02
system. They're going to look at persistence,
13:05
they're going to look at deliberation, they're
13:07
going to look at aggression. So they're going to tune these three
13:09
variables to change the behavior of the
13:11
robot.
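West Point hasn't published the exercise code, so here is a minimal Python sketch of what a targeting loop tuned by those three variables might look like. The parameter meanings, the thresholds, and the stub vision function are illustrative assumptions, not the cadets' actual implementation.

```python
import random

# Illustrative knobs; the real exercise's semantics may differ.
AGGRESSION = 0.8     # minimum confidence required before attacking
DELIBERATION = 5     # frames to observe before committing to a decision
PERSISTENCE = 10     # in a fuller loop, ticks to keep chasing a lost target

def classify(frame):
    """Stand-in for the tank's vision model: returns (label, confidence)."""
    return random.choice([("red", 0.9), ("orange", 0.7), ("green", 0.3)])

def decide(frames):
    """Watch DELIBERATION frames, then attack only if the average
    confidence that the target is an enemy clears the aggression bar."""
    readings = [classify(f) for f in frames[:DELIBERATION]]
    enemy_conf = sum(c for label, c in readings if label == "red") / len(readings)
    return "ATTACK" if enemy_conf >= AGGRESSION else "HOLD"

# Lowering AGGRESSION or DELIBERATION makes the tank trigger-happy:
# orange balloons that merely resemble red ones start getting speared.
print(decide(list(range(20))))
```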
13:18
This is only the third class at West Point
13:20
to face this challenge. And driving
13:22
the simulation is a question that underscores
13:25
just about every conversation Zach
13:27
and I are having about AI and
13:29
lethal weapons. How far should
13:31
you go in removing humans
13:34
from the decision-making loop? If you have
13:36
a human in the loop, as
13:38
it's called, that means that a
13:40
human being has to actually approve
13:43
of the action. A human being has to say,
13:45
yes, it's okay, go ahead and fire your gun, or
13:47
yes,
13:48
that's the right target. By contrast,
13:50
when a human is out of the loop, the
13:52
system operates completely independently
13:55
without the possibility of intervention. Then
13:58
there's a third option, sort of...
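That third arrangement comes up again later in the hour as staying "on the loop": the machine acts by default, and a human supervises with the power to veto. Here is a toy Python sketch of the three control modes, with invented names and inputs, just to show where the human sits in each one.

```python
from enum import Enum

class Mode(Enum):
    IN_THE_LOOP = "in"       # a human must approve every engagement
    ON_THE_LOOP = "on"       # the machine acts unless a human vetoes in time
    OUT_OF_THE_LOOP = "out"  # the machine acts with no human involvement

def authorize(mode, human_input=None, veto_window_expired=False):
    """Toy decision gate. `human_input` stands in for an operator's
    real-time approval (True), veto (False), or silence (None)."""
    if mode is Mode.IN_THE_LOOP:
        return human_input is True          # no approval, no shot
    if mode is Mode.ON_THE_LOOP:
        if veto_window_expired:
            return True                     # silence counts as consent
        return human_input is not False     # fire unless actively vetoed
    return True                             # fully autonomous

# As decision timelines shrink, the veto window collapses and
# "on the loop" behaves more and more like "out of the loop".
print(authorize(Mode.IN_THE_LOOP, human_input=True))   # True
print(authorize(Mode.ON_THE_LOOP))                     # True: nobody vetoed
```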
15:59
they're going to be in that same position in three or four
16:02
or five years. And that's how you make them think
16:04
about it. And you relay that back, listen, this is a
16:06
robot in balloons, but this very well
16:08
could be your robot in battle
16:10
and you're killing people. This is going to be
16:13
you in real life.
16:15
The simulation is intended to demonstrate
16:17
what happens when autonomous weapons are
16:19
given too much control, but also
16:21
to show their advantages. Colonel
16:24
Christopher Korpella co-founded the Robotic
16:26
Center. He says the cadets learned
16:28
something else. That algorithmic
16:30
warfare isn't theory. The technology
16:33
is already here. The students participating
16:35
in this exercise are sophomores here
16:38
at West Point. And just in two years, they
16:40
will get commissioned, they'll be lieutenants and they'll be leading
16:42
the platoons. And so they may have 30
16:45
to 40 soldiers that they're in charge
16:47
of. And so the reality of what they're doing in here
16:50
is only a few years away. The
16:55
fact that they're investing the
16:57
time and energy to try to teach these
16:59
young cadets how to control
17:02
robot hordes says to me
17:04
that they're fully committed to AI being
17:07
in weapon systems.
17:12
The cadets in this class have now graduated
17:14
from West Point, moving forward into
17:17
a new world of warfare. The
17:19
professors who developed this class, all
17:21
the way to the military's top brass, tell
17:24
us they're confident humans will always
17:26
maintain some kind of control. But
17:28
as the weapons get smarter and faster, will
17:31
humans be able to keep up?
17:33
Will they understand what's going on well
17:35
enough and quickly enough to intervene? As
17:38
the speed of autonomous warfare accelerates,
17:41
is staying on the loop even
17:43
possible? Every iteration
17:47
of these weapons makes the
17:49
person seem like an
17:51
even slower hunk of meat on the other end
17:54
of a control stick. And what
17:56
that eventually will mean and where
17:59
we're headed is...
17:59
that a person is unlikely
18:02
to be able to fully grasp everything
18:04
that the computer's doing.
18:15
The US military can't build
18:17
autonomous weapons on its own. It
18:19
needs Silicon Valley and people
18:21
working in cutting edge technology. But
18:24
some tech workers are pushing back.
18:26
And of course, I didn't believe that
18:29
AI had any business taking a human
18:31
life. That's next on Reveal.
18:45
Since his death in 2009, the world
18:47
has struggled with how Michael Jackson should
18:50
be remembered, as the king of pop or
18:53
as a monster. I'm Leon
18:55
Neyfakh, the host of Fiasco and the co-creator
18:57
of Slow Burn. In my new podcast,
19:00
Think Twice, Michael Jackson, my co-host
19:02
Jay Smooth and I present a new account of
19:05
the MJ story. We explore his incredible
19:07
staying power in the face of disturbing allegations.
19:11
Follow Think Twice, Michael Jackson, on
19:13
Audible or the Amazon Music app.
19:19
I may sound biased here, but I
19:22
think our stories are pretty great.
19:25
And if you're listening to this, I have a feeling that, well,
19:27
you might agree. But have you
19:30
ever been left wanting even more? Reveal's
19:33
newsletter goes behind the scenes. Reporters
19:36
describe how they first found out about
19:38
these stories and the challenges they face
19:40
reporting them. Plus, recommended
19:42
reads and more. Subscribe now
19:45
at revealnews.org slash newsletter.
19:52
From the Center for Investigative Reporting and PRX,
19:55
this is Reveal. I'm Al Letson.
19:58
We're revisiting a show
19:59
about the rise of autonomous weapons, weapons
20:03
with minds of their own. And
20:06
I wanna play you this video that reporter
20:08
Zachary Fryer-Biggs showed me. Navy
20:11
autonomous swarm boats, mission
20:14
safe harbor. It's from the Office
20:17
of Naval Research or ONR,
20:19
but I think they're aiming for something a little
20:21
more Hollywood. ONR is
20:23
developing the capability of autonomous
20:25
swarms of inexpensive, expendable
20:28
unmanned boats to overwhelm
20:29
and confuse the enemy.
20:33
Four military pontoon boats glide
20:35
across the Chesapeake Bay in Virginia. No
20:38
one's on board; the boats are being piloted
20:41
by a network of machines loaded
20:43
with advanced software and sensors. They're
20:46
coordinating their movements and running in
20:48
formation. The swarm boats will intercept
20:51
and follow the intruder. The
20:54
Navy has been promoting the concept
20:56
of unmanned vessels to protect ports
20:59
and deliver supplies
20:59
across vast oceans, so-called
21:03
ghost fleets. But that's
21:05
not the whole story.
21:06
There's a secret side to these swarm boats,
21:09
secret as in classified. And
21:11
it's a part of a bigger push by the US military
21:14
into autonomous weapons. Zach
21:17
picks up the next part of the story with Reveal's
21:20
Michael Montgomery.
21:24
It's been said that no one in government has
21:26
ever gotten in trouble for classifying
21:29
information. And so even minor
21:31
details end up behind a thick veil
21:33
of secrecy. That's what Zach found
21:35
when he was investigating a military program
21:38
called Sea Mob. The technology
21:40
behind the program started as part of the Mars
21:42
Rover program and in research papers. And
21:45
then as it got closer to maybe
21:47
being useful for the Pentagon, all of
21:49
a sudden it ceases to be public
21:52
and more and more of it becomes classified.
21:54
Even stuff that had been public only
21:56
a couple of years before. He followed
21:59
a few breadcrumbs, and eventually discovered
22:01
the vision for Sea Mob. Unmanned
22:04
swarm boats, like the ones in that Navy
22:06
video, but armed with heavy machine
22:08
guns and ready to attack.
22:11
Zach also learned that the military
22:13
conducted one of the first tests of Sea Mob
22:16
in 2018 at Wallops Island
22:18
on the eastern shore of Virginia. So
22:21
we came here to get a sense of what went down.
22:24
We're pretty much in the middle of nowhere. It's beautiful
22:27
sort of bays and seashore. And
22:31
dug into that territory are a whole
22:33
bunch of government facilities. You've got NASA,
22:36
you've got a Naval Research Facility,
22:38
and they're out here with very
22:40
little else.
22:41
There's an enormous dolphin fin
22:44
right off the coast there. The
22:46
boats used in the experiment were small,
22:49
fast. The Navy's got a billion of them. They're
22:51
cheap, they're easy to repair, they're
22:54
tough as nails. And bristling with
22:56
tech. They were being monitored remotely,
22:58
but the boats were piloting themselves.
23:01
So, I mean, if you were able
23:03
to just peer out at this test,
23:06
what you'd see is these boats circling
23:08
each other and moving in and out of the shallows
23:11
and
23:11
swarming very much like
23:13
a group of insects. And if you looked really
23:15
closely, what you'd see is the throttle
23:18
levers moving up and down, the wheels spinning
23:20
around, and nobody on board.
23:22
And what you couldn't see happening was that the
23:24
boats were communicating with one another
23:27
at lightning speed about positioning
23:29
and how fast they were going. Zach's sources
23:32
told him the military wanted to see if these swarm
23:34
boats with a license to kill could
23:37
help Marines storm a beach. What
23:39
makes this whole program different is the
23:42
real guts of this are
23:44
based on video cameras. They're looking
23:47
at the world as we do, as
23:49
images.
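Sea Mob's actual perception models are classified, but the underlying idea, a camera frame in and a labeled guess out, can be sketched with an off-the-shelf classifier. In this sketch, ResNet-50 is just a widely available stand-in and "frame.jpg" is a hypothetical still from a boat's video feed.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Load a pretrained classifier and its matching preprocessing pipeline.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = Image.open("frame.jpg")           # hypothetical camera frame
batch = preprocess(frame).unsqueeze(0)    # to a batch of one tensor

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

best = probs.argmax().item()
print(weights.meta["categories"][best], float(probs[best]))
```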
23:51
The military did not want a lot of information
23:53
getting out about this. They wanted no information
23:55
getting out about this. Other
23:57
than the name of the program and that it gets money.
24:03
Zach learned there's something common to many
24:05
Pentagon programs like Sea Mob, getting
24:08
machines to see the world like humans.
24:11
This technology could serve
24:14
as the backbone of a whole
24:16
wave, a whole generation of new weapons
24:20
that the Pentagon is creating
24:23
to allow humans
24:25
to be removed from the front
24:27
lines of the battlefield.
24:32
We went to Wallops Island in February 2020,
24:35
just before the lockdown. Back
24:37
in DC, we arranged to see the official
24:39
in the middle of all this, General Jack
24:42
Shanahan. At the time, he was
24:44
running the Pentagon's Joint Artificial
24:46
Intelligence Center. I think the future
24:48
is about robotics. It's about autonomy.
24:51
It's about smaller, cheaper, disposable,
24:53
and swarming capabilities in every domain. Swarming
24:56
under sea, swarming on the surface, swarming in the
24:58
air.
24:59
We knew in advance that General Shanahan
25:02
wouldn't talk about Sea Mob or any
25:04
other specific weapons out of what
25:06
his office calls operational security.
25:09
Still, he was blunt about where
25:11
he sees warfare heading. We
25:13
envision a future which is algorithm against
25:16
algorithm. The speed of decision
25:18
making will be such that sometimes
25:21
you will have machine to machines and human machines
25:24
having to operate in timelines we're just not
25:26
used to because of the type of fight we've been in for
25:28
the last 20 years.
25:30
It's not just U.S. military leaders
25:32
who envision this future. It's also
25:34
potential adversaries like Russia
25:37
and China.
25:38
China's commitment is extremely
25:40
large. There is a national strategy
25:42
on AI. And if we slow
25:44
down, it will turn into a strategic competition.
25:47
We would have the prospects of being on the wrong side of that.
25:52
China has declared it will become the
25:54
global leader in artificial intelligence
25:56
by 2030 and is investing
25:59
heavily in
25:59
upgrading its military. Russia
26:02
has claimed it's integrating AI
26:04
into all aspects of its military,
26:07
from battlefield communications to weapons
26:09
systems it's using in Ukraine.
26:12
The prospect of America falling
26:14
behind Russia and China isn't
26:16
exactly news to the Pentagon.
26:19
Zak discovered the US military has
26:21
been coming up short in computer-simulated
26:23
war games for at least a decade. The
26:26
details of the war games are classified,
26:28
but what I've been told by sources is
26:31
that American troops were consistently
26:33
losing in these simulations or at the very
26:35
least fighting to a stalemate.
26:39
I think it's been clear that the US has been losing
26:41
its edge for a long time. Paul
26:43
Scharre served as an Army Ranger in
26:45
Iraq and Afghanistan and was also
26:48
an official at the Pentagon. He's currently vice
26:50
president at the bipartisan Center for
26:52
a New American Security.
26:54
The problem has been up until
26:57
recently, the answer that many
27:00
parts of the Defense Department had for responding
27:02
to that was just give us more
27:04
money and let us buy more things. And the answer
27:06
is like buying more F-22s
27:09
isn't going to fix this problem. And so what really
27:11
happened was this daunting
27:14
realization that we've got to do things differently.
27:20
The
27:20
Pentagon was looking for a major
27:22
reset, a strategic advantage. Scharre
27:25
says they drew inspiration from the newest
27:27
technologies being used in Afghanistan
27:29
and Iraq, remote piloted drones
27:32
and robots that could remove roadside bombs.
27:35
And the common theme among all these was
27:38
greater autonomy. We need more
27:40
autonomy. Just as the Pentagon
27:42
was beginning to think more strategically
27:45
about robotics and AI, Silicon
27:47
Valley was experiencing major breakthroughs
27:50
in image recognition and computer
27:52
vision, an issue Zach has been following
27:54
for years. If you
27:56
really want to have a human
27:59
and a machine work together,
27:59
the machine has to experience
28:02
the world in some ways like a human
28:04
does. Then
28:06
in 2015, for the first time, computers
28:09
were performing better than humans at
28:11
identifying a huge set of images taken
28:14
from internet sources like Twitter. All
28:16
of a sudden, computers became,
28:19
to certain planners, trustworthy.
28:22
If they're better than people, why aren't we trusting
28:24
them for various applications? If
28:27
they're better than people, why aren't we using them
28:29
in weapons systems?
28:31
To do that, the Pentagon needed to go outside
28:34
the cozy world of military contractors
28:37
and partner with Silicon Valley. By
28:39
that point, Google, Microsoft and other tech
28:41
companies were piling into the AI
28:44
space. So in 2017, the
28:46
Defense Department developed a plan to
28:48
work with private companies on integrating
28:50
computer vision into its battlefield
28:53
technology. They called it Project
28:55
Maven. The idea was that the Pentagon would
28:57
be able to take these mounds
29:00
of video footage
29:01
that they collect from drones, from satellites,
29:04
from airplanes. And
29:07
instead of having people try
29:09
to dig through the small
29:12
portion that they can, allow
29:14
computers to dig through all of it.
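Maven's models aren't public, but the triage workflow described here, software watching all the footage and surfacing only the frames worth an analyst's time, can be sketched with an open-source detector. The YOLO weights and the file name "drone.mp4" below are stand-in assumptions, not Maven's actual tooling.

```python
from ultralytics import YOLO

# A small pretrained detector as a stand-in for Maven's classified models.
model = YOLO("yolov8n.pt")
flagged = []

# stream=True yields results one frame at a time instead of loading all.
for i, result in enumerate(model.predict(source="drone.mp4", stream=True)):
    labels = {model.names[int(c)] for c in result.boxes.cls}
    if {"car", "truck", "boat"} & labels:   # classes worth a human look
        flagged.append(i)

print(f"{len(flagged)} frames flagged for analyst review")
```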
29:16
The key part of this is that the Pentagon didn't
29:18
have the technology to do it themselves. The
29:21
person tasked with running the project,
29:23
General Jack Shanahan. It became
29:26
almost a myth about what Maven was
29:28
and what it was not. There's no weapons involved. Like
29:31
we used it for Hurricane Florence to help people
29:33
understand where the damaged areas were. General
29:36
Shanahan says Maven was about intelligence,
29:38
surveillance and reconnaissance. And
29:41
it wasn't a complete secret. The project had
29:43
its own website, but it ignited
29:45
a firestorm. Nearly a dozen
29:47
Google workers reportedly resigned
29:50
in protests over the company's involvement
29:52
in an artificial intelligence drone program
29:54
for the Pentagon.
29:55
The protest included a petition signed
29:58
by more than 3,000 employees
29:59
that said Google should not
30:02
be in the business of war.
30:03
And that immediately
30:07
struck Pentagon planners and
30:09
officials as an existential
30:11
threat. Since the Pentagon doesn't create
30:13
this technology, if they can't get
30:15
Silicon Valley to work with them, they're
30:17
gonna fall behind other countries like
30:20
China, where the tech sector doesn't
30:22
have an option as to whether it works with the military.
30:26
The generals saw these rumblings as
30:28
a disaster in the making, but to Liz
30:30
O'Sullivan, the protests at Google
30:32
were inspiring.
30:34
To see other people who were
30:36
working on it so vocally opposed this
30:40
was sort of eye-opening.
30:41
Liz had joined a New York-based tech
30:43
company called Clarifai in 2016. She
30:46
says she signed up believing that AI
30:49
could make the world a better place.
30:51
I was incredibly excited about what AI
30:53
could do and bring modern medicine
30:56
to underdeveloped countries and detect
30:59
climate change at scale by using satellite
31:01
imagery. And this was
31:03
just the period of time
31:06
that we characterized as being
31:08
so optimistic about what technology
31:10
would bring to the world.
31:12
But Liz says the world started to see
31:14
the dangers of technology. Facebook
31:16
and Twitter became conveyor belts for disinformation,
31:19
racism, and extremism. China
31:22
was using AI to crack down on ethnic
31:24
minorities, and the algorithms had
31:26
their own biases. Researchers
31:28
were finding that facial recognition software
31:31
was often less accurate identifying
31:33
women and people with darker skin. Then
31:36
Liz says word started circulating around
31:38
the office that Clarifai had landed
31:41
a big government contract,
31:42
but her bosses kept a
31:44
lid on what it was all about. The
31:46
government required that they install
31:49
surveillance cameras in the ceiling of our office
31:51
and that they close off the windows. From the
31:54
few engineers that were working in the room,
31:56
some information started leaking out.
31:59
And it became clear
31:59
that it was not just a government contract,
32:02
but that it was a military contract. And
32:04
more details leaked out through the rumor mill, and
32:06
it was not just a military contract,
32:09
but a drone contract.
32:11
Liz says she took a closer look at all
32:13
the products Clarifai was developing.
32:15
That's when I first discovered the meaning of
32:17
the term dual use. Like our product
32:20
roadmap was full of the
32:22
components of technology that someone
32:24
could use to build an autonomous
32:26
killer robot, not that we were
32:28
necessarily building them, but that it
32:31
could be very easy for someone
32:33
to take the products that we offered and
32:35
to do that with our technology.
32:41
In June 2018, Google announced it
32:43
wasn't renewing the Maven contract. At
32:45
the same time, the company was still involved
32:48
in AI projects in China. General
32:51
Shanahan says Pentagon leaders were
32:53
irate. They believed Google's work
32:55
could be directly or indirectly benefiting
32:58
the Chinese military.
33:00
Do you understand by not working
33:02
with us, but potentially working with
33:04
China, the signal that sends
33:06
to everybody in the United States
33:09
military, that was a defining moment.
33:11
And I'll tell you, at the Chairman of the Joint Chiefs
33:13
of Staff level, General Dunford, I mean, there
33:15
were people visibly upset
33:17
in the department about this.
33:19
General Shanahan concedes that it was
33:21
a learning moment for the Pentagon, and
33:24
that the military needs to be more transparent
33:26
about its work with private tech companies. But
33:29
he's only willing to go so far. There
33:31
are some things we will talk about. There
33:33
are others where we will just, in general terms, say we're
33:36
interested in more autonomy across the Department
33:38
of Defense.
33:39
The growing controversy engulfing Project
33:41
Maven was something Zach was following closely.
33:44
What Maven did was track objects.
33:47
It's true that the technology that
33:49
Google was providing wasn't used
33:52
to tell a missile exactly
33:55
where to strike. But if you can track objects,
33:58
it can tell you what you might want to strike.
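The tracking step itself is a well-understood technique: associate each new detection with the nearest object seen in the previous frame, so identities persist over time. Here is a toy nearest-centroid version in Python, far simpler than anything Maven fielded, just to show the core idea.

```python
import math

def track(previous, detections, max_dist=50.0):
    """previous: {track_id: (x, y)}; detections: [(x, y), ...].
    Returns an updated {track_id: (x, y)} mapping."""
    updated = {}
    next_id = max(previous, default=0) + 1
    for point in detections:
        # Find the closest object we were already following.
        best = min(previous, default=None,
                   key=lambda t: math.dist(previous[t], point))
        if best is not None and math.dist(previous[best], point) <= max_dist:
            updated[best] = point             # same object, slightly moved
            previous = {t: p for t, p in previous.items() if t != best}
        else:
            updated[next_id] = point          # a new object entered the frame
            next_id += 1
    return updated

tracks = track({1: (10.0, 10.0)}, [(12.0, 11.0), (200.0, 40.0)])
print(tracks)   # object 1 moved a little; a second track was created
```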
33:59
And so the Google workers were
34:02
concerned that the technology they
34:04
had developed for truly
34:06
commercial purposes was
34:08
going to be used to help the Pentagon pick who to
34:10
kill.
34:16
When she realized what the technology could
34:18
be used for, Liz O'Sullivan was
34:20
horrified. She decided it was
34:22
time to take a stand.
34:24
I didn't believe that AI
34:26
had any business taking a human life. You
34:28
know, I had seen AI systems fail. And
34:31
it's not that they fail, it's how
34:33
they fail. And they fail wildly and in unexpected
34:36
ways.
34:39
Liz wrote a letter
34:41
to Clarifai CEO Matt Zeiler
34:44
asking that the company make a promise to never
34:46
work on any projects connected to autonomous
34:48
weapons. About a week later,
34:51
she says her boss called an all-staff meeting.
34:53
And during that
34:55
meeting, he made it very clear
34:57
that the company's position was that AI
35:00
was going to make the military safer and
35:02
better, and that even autonomous
35:04
weapons were good for mankind
35:07
and that would help save lives,
35:09
not the opposite. And that's
35:11
when I quit.
35:13
We reached out to Matt Zeiler and he declined
35:15
to talk to us.
35:19
The Pentagon thought Project Maven would prove
35:21
the military could work with Silicon Valley,
35:24
but it backfired. In the aftermath
35:26
of the controversy, Zach got his hands on
35:28
an internal Defense Department memo.
35:31
It warned that if
35:33
the Department of Defense didn't find
35:36
a way to convince tech workers to work with
35:38
the military that they were going to lose future
35:40
wars. They were in a battle
35:43
for hearts and minds. So
35:45
over the past few years, the military has
35:47
been stepping up its outreach to the tech
35:49
community in some unexpected
35:51
venues. I
35:54
traveled to Las Vegas for the gathering
35:56
of technologists, hackers, and digital free
35:59
spirits
35:59
that's called Def Con. It was August 2019.
36:04
30,000 people packing a cluster of hotel
36:06
casinos. It feels kind of super mainstream,
36:09
but Def Con has serious outlaw
36:11
roots. Zach's been here a couple times. This
36:14
was a hacking conference, and hacking was dangerous,
36:17
and it was illegal. And so you had
36:19
law
36:19
enforcement people, you had intelligence
36:21
people who'd show up just to keep an eye on what this
36:24
hacking community was doing. And so the
36:26
game they used to play was called Spot the Fed, which is
36:28
where you tried to notice who
36:30
was one of these law enforcement
36:33
or intelligence people keeping an eye on the hacking
36:35
community.
36:36
There's still a little bit of an anti-establishment
36:39
vibe. You're not supposed to take pictures
36:41
of people's faces, and ID badges
36:43
don't have real names on them. So
36:45
a lot of people use their Twitter handles.
36:48
Tell me your name. My
36:50
handle is Casper, right? CSP3R.
36:54
Casper's real name is Scott Lyons,
36:56
and he's wearing a red T-shirt that says Goon.
36:59
They're the volunteers who organize and run
37:01
the conference. He's got lots of tattoos
37:03
and distinctive hair. That's a thing
37:05
at Def
37:06
Con. At the same time, he tells me he's
37:08
done security work for big corporations,
37:10
the government, even the military. The
37:12
funniest looks that I get, especially rocking a blue Mohawk
37:15
in business meetings, you know, was walking into
37:17
the Pentagon and just being looked at like, oh, crap, there's a
37:19
hacker here. Come on, man, you're killing
37:21
me here. You're killing me.
37:24
Like, seriously. You know, hackers are
37:26
people, too. It's your next-door neighbor. It's
37:28
your kid, right? It's your coworker.
37:32
Everybody is a hacker. Everybody
37:34
finds ways around and
37:36
are able to circumvent traditional
37:39
conventions. There
37:42
are other signs of change. The feds
37:44
and the military are here, but they're
37:46
not undercover. I
37:49
meet Alex Romero. He's with the Pentagon's
37:52
Defense Digital Service. They're running
37:54
something called Hack the Air Force. It's
37:56
a competition that pays hackers a cash
37:59
bounty
37:59
for exposing security vulnerabilities. In
38:02
this case, the target is a key component
38:04
from a fighter jet. We really want to invite
38:07
the community to come either hack us through these programs
38:09
or to come join our team directly here. Any
38:12
results so far from the... Oh, yes.
38:14
I'm probably not going to talk about them because
38:17
we had to fix them.
38:18
At DEF CON, I catch up with Liz
38:20
O'Sullivan. She's joined the resistance.
38:24
Hi, everybody. Thanks so much for coming to our talk on
38:26
autonomous killer weapons. This is going
38:28
to be a very light conversation for a Saturday afternoon,
38:30
so I hope you guys are really excited about that.
38:33
Liz is speaking in a crowded meeting room
38:35
on behalf of the Campaign to Stop
38:37
Killer Robots. The group is pressing
38:39
for a global ban on fully autonomous
38:42
weapons.
38:42
Up until January of this year, I worked
38:44
for a company called Clarifai. Liz
38:47
talks about her decision to quit her job
38:49
at Clarifai over the company's contract
38:51
with the Pentagon. I'm not a technophobe.
38:54
I believe that AI is going to make its way into the
38:56
military, and we hope that it will be done in a
38:58
way that will reduce the loss of innocent
39:00
life. But the alarm that we're trying to raise
39:02
here is that these technologies are
39:05
so new, so risky, and so poorly
39:07
understood that to rush
39:09
forward into autonomy based off
39:11
of these kinds of detection systems is unacceptable.
39:14
The presentation lasts two hours,
39:17
and the audience stays engaged. Thank
39:19
you for doing this talk, by the way. I'm obviously a big supporter
39:21
of the Campaign to Stop Killer Robots. They come
39:24
from academia, tech companies, human
39:26
rights groups, and military contractors,
39:28
even the world of science fiction.
39:31
But there are some challenging questions.
39:34
What are we going to do to defend ourselves from
39:36
swarms of killer drones? We don't control
39:38
everybody in this planet. It's a very
39:41
altruistic thing that you guys are trying
39:43
to do, but not everybody
39:45
in the world is a good guy.
39:47
International humanitarian law has been successful
39:49
in banning weapons before. It is possible, and
39:52
we can do it again.
39:57
I think a lot of people worry that we're going to have
39:59
killer robots, about drones invading New York City. Liz
40:03
says she spends a lot of time educating
40:05
people about the difference between science
40:08
fact and science fiction.
40:09
I think the real concern is that this technology
40:12
will be a cheap and easily scalable way
40:15
for authoritarian regimes to tame their
40:17
own public or for the US
40:20
to go to proxy wars with less technologically
40:22
advanced nations.
40:26
We asked General Jack Shanahan about
40:29
all this. After all, when we spoke, he
40:31
was the Pentagon's point person on AI.
40:33
He told us it's far too early
40:35
to consider any kind of treaty that
40:38
would put limits on autonomous weapon systems.
40:41
I never question somebody's principles.
40:43
They have a reason. They are
40:45
worried that the Department of Defense will do this. Let
40:48
me say that the
40:50
scenario which they project is so
40:52
far advanced and so far out of my
40:55
time horizon that it, to
40:56
me, is not
40:58
the most pressing concern on my table. Some 40
41:01
countries have called for a ban on the
41:04
development of fully autonomous weapons. Among
41:06
the opponents are the countries leading the
41:08
way in developing AI for the battlefield,
41:11
Russia, China, Israel, and
41:14
the United States. General Shanahan
41:16
says there's a simple reason for
41:18
the US to keep ahead of the pack. I
41:20
don't think any American can challenge that
41:22
assertion that we don't want to lose. That,
41:25
to me, is what this is about. A ban is premature. We
41:28
don't want to unilaterally do it when others are proceeding.
41:31
Just to
41:31
put you on the spot, you do not
41:34
support the idea that the US,
41:37
the US military, should very explicitly
41:39
say that we will never develop
41:41
fully autonomous weapons.
41:43
You're correct. I do not say that we should ever
41:45
explicitly say that. Could there be over
41:48
time some agreements we make internationally
41:50
about some sort of limits on some aspect
41:52
of that? I think that's a different conversation
41:55
to have at a different time at a policy level. But
41:57
right now, explicitly, no.
42:07
That was Reveal's Michael Montgomery. Since
42:10
our story first aired, Liz O'Sullivan
42:12
was named CEO of Vera, a
42:15
tech company that analyzes AI code
42:17
for dangers in areas like discrimination
42:19
and privacy. And after more than 35
42:22
years of service, General Jack Shanahan
42:25
retired from the military. Meanwhile,
42:28
the Pentagon is expanding its AI
42:30
program and partnering with companies
42:32
like Microsoft, Amazon, and
42:34
Palantir.
42:35
All of this is changing the role
42:38
of humans in warfare. Commanders
42:40
are looking at a situation where they're just going to have
42:42
to trust these advanced systems without
42:45
being able to fully understand what's
42:47
happening. That's up next
42:49
on Reveal.
43:00
Hi, my
43:02
name is Michael Montgomery and I'm a producer
43:04
and reporter here at Reveal. Reveal
43:07
is a nonprofit news organization and we
43:09
depend on support from our listeners.
43:12
Donate today at revealnews.org
43:14
slash donate. Thank you so much.
43:19
From the Center for Investigative Reporting and PRX,
43:22
this is Reveal. I'm Al Letson.
43:25
You've been hearing about how future wars will
43:27
be fought with artificial intelligence to
43:30
enhance battlefield communications, speed
43:32
up intelligence gathering, and even allow
43:35
autonomous weapons to kill. It's
43:38
a future that is approaching fast and
43:40
I, for one, am not excited
43:42
about it. With me to talk
43:44
about this is reporter Zach Fryer-Biggs. Hey,
43:47
Zach. Hey, Al.
43:49
So, you know, we first aired these
43:51
stories a couple of years ago and so
43:54
much has happened since then. Big
43:56
advances in artificial intelligence.
43:59
Some experts
43:59
are comparing the moment we're in to
44:02
the start of the Industrial Revolution.
44:05
Yeah, we're definitely on the brink of this
44:07
huge shift that's going
44:09
to change a lot of things about our lives.
44:12
And when you look at technology like
44:14
AI, it's what they call dual use.
44:16
So it can be used as a weapon,
44:19
it can be used as a tool, it can
44:21
help with medicine, it can change weapon
44:23
systems. And so I think, you
44:25
know, when we're talking about how AI
44:27
might be used for autonomous weapons,
44:30
we have to keep in mind that the fundamental
44:32
technology here is going to be pretty much everywhere.
44:35
And it's sort of getting rolled out in Ukraine
44:37
right now. Yeah, I wanted to ask you about the
44:39
war in Ukraine. I mean, I know without
44:42
a doubt, lethal drones have been important
44:44
to both Russia and Ukraine. Yeah,
44:46
and they've been used to sort of steadily
44:49
escalate the situation. When
44:51
Russia launched its full invasion
44:54
of the country, one critical
44:56
component of Ukraine's defense was
44:58
this Turkish-made
44:59
drone. And it
45:02
provided an ability to take
45:04
out Russian radar systems as well as tank
45:06
columns, using laser guided
45:08
bombs. That weapon
45:11
became just a critical part of
45:14
repelling the invasion. At the
45:16
same time, as the war has gone
45:18
on, we've seen Russian troops deploy
45:20
all kinds of drones, we've seen, you know,
45:23
drones being funneled in from
45:25
Iran, other NATO countries,
45:27
Israel, we're seeing drones made
45:30
from all over the world being
45:32
deployed and in some ways proving
45:34
their worth for militaries on
45:37
this battlefield. I'm curious
45:39
about where the US stands on
45:41
this. General Jack Shanahan
45:44
was quite blunt that the US would
45:46
not consider any kind of ban on autonomous
45:48
weapons. Is that
45:49
still the US position? Basically,
45:52
yes. US officials continue
45:55
to refuse to put any real
45:57
limits on what the military would
45:59
be able to do.
48:00
But, at the same time, you have to consider
48:02
what's the moral decision that's going
48:05
on here to kill, to take a life.
48:07
And you're removing that from the front lines, from
48:09
someone who is on the ground
48:12
in country. And once
48:14
you start taking that human
48:16
decision away, once you start moving it
48:18
both geographically further from the location
48:21
of killing, and also further
48:23
from a human thought process, because
48:26
you got machines making some of these decisions, that
48:29
makes it a little easier for a
48:30
commander to sort of let something loose, to
48:32
have a commander say, okay, autonomous weapon, you
48:34
make the decision on whether to kill, because
48:37
I don't have to struggle with the moral consequences
48:39
of that choice.
48:40
That whole decision is so fraught, because you're
48:43
basically allowing
48:45
a machine to decide the value
48:48
of human life. You
48:51
know, I know this is the easy place
48:53
to go, but I am a science fiction nerd,
48:55
and I just can't help it that
48:57
that's how Skynet started,
49:00
which created the Terminator,
49:03
where the machines took over
49:05
the world. I mean, it
49:07
sounds a little far fetched, but
49:09
it feels like that's where
49:11
we're headed, the idea of intelligent
49:14
machines taking over all
49:16
of the decision making for humans.
49:19
That's a touchstone that I think we all come back to,
49:21
and I would say that it happens for me,
49:23
and we're both in good company, because the former
49:25
vice chairman of the Joint Chiefs of Staff used to routinely
49:28
talk about the Terminator conundrum, as he called
49:30
it. Now, I will tell you, his staff absolutely
49:33
hated it when he talked about it, because they don't
49:35
want to talk about the doomsday
49:37
scenario. But I think the fact that
49:39
you have someone in that position talking about it is
49:41
a reflection of the
49:44
concern that's very real. And
49:47
while it may not be, you know, global
49:49
annihilation,
49:49
the real concern here is
49:51
the machines will be making decisions. The
49:53
way those machines make decisions is
49:56
just different than the way humans do. They don't have
49:58
brains like we have brains.
49:59
If you ask the machine, why did you do X, it
50:02
can't explain it. It doesn't have a rational thought process
50:04
it can relay to you. And so as
50:06
a person trying to supervise that system,
50:10
I kind of have to just trust it.
50:11
And that's where you start to end up in some
50:13
really scary situations in which you're giving
50:16
a machine the authority
50:18
to choose life or death and I can't
50:20
understand why it's making the choices it is.
50:23
Zachary Fryer-Biggs
50:25
is the managing editor at Military.com.
50:28
Zach, thanks so much for talking to me. Really
50:30
enjoyed it.
50:37
Our lead producer for this week's show was Michael
50:40
Montgomery. Brett Myers edited the
50:42
show. Special thanks to the Center for
50:44
Public Integrity.
50:49
Before we go, we got some exciting
50:51
news. Our new documentary, Victim
50:54
Suspect, is now streaming on Netflix.
50:57
The doc follows reporter Rachel DeLeon's
50:59
investigation into a troubling trend.
51:02
Young women who report sexual assaults
51:04
to the police and then end up as
51:06
suspects. Victim Suspect,
51:09
stream it now on Netflix.
51:13
Nikki Frick is our fact checker. Victoria
51:15
Baranetsky is our general counsel. Our
51:18
production manager is Steven Rascón with
51:20
help from Zulema Cobb. Score and
51:22
sound design by the dynamic duo, Jay
51:25
Breezy, Mr. Jim Briggs, and Fernando,
51:27
my man, yo, Arruda. They had help
51:29
from Claire C-Note Mullen. Our CEO
51:32
is Robert Rosenthal. Our COO is
51:34
Maria Feldman. Our interim executive
51:36
producers are Taki Telonidis and Brett Myers.
51:38
Our theme music is by Camerado, Lightning.
51:41
Support for Reveal is provided by the Reva
51:43
and David Logan Foundation, the Ford
51:45
Foundation, the John D. and Catherine T. MacArthur
51:48
Foundation, the Jonathan Logan Family Foundation,
51:51
the Robert Wood Johnson Foundation, the
51:53
Park Foundation, and the Hellman Foundation.
51:55
Reveal is a co-production of the Center for Investigative
51:58
Reporting and PRX.
51:59
I'm Al Letson, and remember,
52:02
there is always more to the story.