David Gunkel on robot rights

Released Monday, 5th December 2022

Episode Transcript
0:11

Welcome to the Sentience Institute podcast,

0:13

and to our 20th episode. I'm

0:16

Michael Dello-Iacovo, strategy lead and

0:18

researcher at Sentience Institute. On

0:20

the Sentience Institute podcast, we interview

0:23

activists, entrepreneurs, and researchers

0:25

about the most effective strategies to expand

0:27

humanity's moral circle. Our guest

0:29

for today is David Gunkel. David

0:32

is an award-winning educator, scholar,

0:34

and author specializing in the philosophy

0:37

and ethics of emerging technology. He

0:39

is the author of over 90 scholarly

0:41

articles and book chapters, and has

0:44

published 12 internationally recognized books including

0:46

The Machine Question: Critical Perspectives

0:49

on AI, Robots, and Ethics; Of

0:51

Remixology: Ethics and Aesthetics

0:54

after Remix; and Robot

0:56

Rights. He currently holds the

0:58

position of distinguished teaching professor in

1:01

the Department of Communication at Northern Illinois

1:03

University in the USA. All

1:06

right. I'm joined by David Gunkel. David, thank you so much

1:08

for joining us on the Sentience Institute podcast.

1:10

Yeah, thank you for having me. Good to be here.

1:12

Great. Great to have you. So , uh,

1:14

I'd like to start with a question , uh, about terminology.

1:17

And I should preface this by saying I

1:19

actually know your answer, I think because I heard

1:21

it in your interview with Ben Byford , um,

1:24

with Ben Byford on the Machine Ethics Podcast

1:26

in 2020. Uh , but I think it's a

1:28

good place for our listeners to start. So

1:30

why do you call it robot

1:32

rights and not artificial intelligence rights,

1:35

sentience, or something else like that?

1:37

Yeah, so it's a really good question, and I

1:39

think I can answer it with three sort

1:42

of ways of, of directing your

1:45

, uh, you know, way of thinking about

1:47

this. First of all, it's

1:49

just, it's an alliterative statement, right?

1:51

Robot rights. So just the alliteration

1:53

sort of makes it easy to say. Artificial

1:56

intelligence rights seems a little clumsy, and

1:58

as a result, robot rights is pretty easy

2:00

for a person to sort of rattle

2:02

off, and it has an alliterative

2:05

, sort of poetic feel to it because

2:07

of the way it sort of rolls out of your

2:09

mouth and , uh, sounds when you hear

2:11

it. The second reason is that

2:13

this terminology is not mine. Um

2:16

, this is terminology that came to me from other

2:18

people who preceded me in this work.

2:21

Either , uh, legal scholars or

2:23

people in philosophy or in

2:25

the area of artificial intelligence and ethics, but

2:28

they were the ones that began using this terminology.

2:31

And so my engagement

2:33

with it was to sort of pick up the thread and

2:36

carry it further. And since that was

2:38

the terminology that they had already employed, I

2:40

sort of inherited that terminology. Lastly,

2:43

and I think most importantly, the

2:45

difference between the word robot and

2:48

artificial intelligence is something

2:50

that we oftentimes, you know, struggle with and

2:52

try to figure out where to draw the line. Is

2:54

a cloud-based application a robot? Is

2:56

it just AI? You know, how do we,

2:58

we sort of sort these things. The

3:01

important thing for me is that AI

3:03

is the result of an academic

3:07

workshop held in the mid-1950s at

3:10

Dartmouth College. Robot, on the

3:12

other hand, is the result of science

3:14

fiction. It comes to us from the 1920

3:17

stage play R.U.R., and

3:19

it was formulated and

3:22

really was a reuse of a

3:24

Czech term robota, meaning worker

3:26

or slave laborer , uh, by

3:28

Karel Čapek. And already in that

3:30

play, which not only gave us the idea of

3:32

the robot, but the word robot, the

3:35

robots are already subjugated to human

3:37

masters, and there's this uprising.

3:39

So not only does that set the sort of template

3:41

for future science fiction, but

3:44

it also gives us this notion of the robot

3:46

as an enslaved or

3:49

a servant type , you

3:51

know, figure or individual. And

3:53

so the robot rights , uh,

3:56

idea sort of fits in that pattern,

3:58

beginning with Čapek's play, and

4:00

the way in which that has developed not only in

4:03

subsequent science fiction, but also in subsequent

4:05

writings on the legal and

4:07

the moral aspects of these

4:09

technologies and the way they connect with us.

4:11

Yeah, that , that makes sense. I

4:14

, I think for the rest of the interview, I'll probably use

4:17

robots and maybe have that

4:19

, um, apply to AI as well , uh,

4:21

just as , as shorthand, but in

4:24

a sense, they're not really interchangeable.

4:26

I , I feel, is that, does that make sense to

4:28

you? Because AI sort of , um, brings

4:31

, uh, an image of , uh,

4:33

an intelligence that's not , that's not necessarily tied

4:35

to a physical body, whereas robot seems to

4:37

imply it's tied to a physical body. Does that

4:39

sound about right?

4:41

It sounds about right, but I , I will say

4:43

that this idea that the

4:45

intelligence is disembodied, it's almost a

4:47

kind of transcendentalist way of thinking that's

4:50

almost religious. Our AIs are

4:52

embodied, right? Even the cloud has a body, it's

4:55

a server connected to, to wires and

4:57

to fiber network cables and

4:59

things like this. So there is an embodiment

5:01

even for the so-called disembodied

5:04

AI. It's just that that body doesn't

5:06

look like our bodies. And so I

5:08

find this to be a useful distinction, the embodiment

5:11

distinction, but I also find it a bit troubling because

5:13

it leads us to think that the AI has

5:15

this kind of transcendental feature to it, and

5:18

it really doesn't, when we talk about the, you

5:20

know, resources of the earth about power,

5:23

about , uh, environmental impact and

5:25

all these other things that AI certainly

5:27

contributes to carbon and to climate

5:29

change. And that has to do because

5:31

of the way it is embodied and where it is

5:33

embodied.

5:34

Yeah, yeah, that makes sense. Thanks for mentioning that. So

5:37

in 2018, you wrote a paper called

5:39

The Other Question: Can Robots — sorry,

5:42

Can and Should Robots Have Rights? And

5:44

the other question here being in reference to

5:46

the main question most people focus on when

5:48

they think about AI and robots, which is about

5:50

how they affect us, for example

5:52

, um, AI safety being a hot topic at

5:55

the moment, that's, that's mostly about how an

5:57

AI affects humans. And the

5:59

, the other question, I guess is more focused on the

6:02

interests of the robots themselves. So

6:05

you wrote that there's an important difference between

6:07

the can and the should here. So can you

6:09

start by talking about that?

6:10

Right. So I'll make two distinctions, which I

6:12

think will help sort this out. Uh , in moral

6:14

philosophy, we distinguish the moral

6:17

agent from the moral patient. A

6:19

moral agent is someone who can act in

6:21

such a way that is either good, bad, or

6:24

is morally culpable or

6:26

praiseworthy, whatever the case is, a moral patient

6:28

is the recipient of that action. Now,

6:30

in our world, we are both moral agents

6:33

and moral patients. We can do good and

6:35

bad, we can suffer good and bad,

6:37

but there are some things that are moral patients

6:39

and not moral agents — animals, for

6:41

example, we don't hold the dog responsible

6:44

for barking at the postman, but we do hold

6:47

the postman responsible for kicking and

6:49

injuring the dog, right? And the dog therefore

6:51

can be the recipient of an action that is either

6:53

good or bad. And as you said, I saw

6:55

a lot of people putting a

6:58

lot of the research they were doing on

7:00

the side of agency. And the question was,

7:02

how can we ensure that these devices,

7:05

these tools, these instruments, these

7:07

artifacts are employed in our

7:09

world in a way that has the right outcomes,

7:12

that doesn't disadvantage people, that doesn't create bad

7:14

outcomes, whatever the case is — that's an agency question.

7:17

My question, the other question was, well, okay,

7:19

that's great, but what is

7:21

the status of these things? What is , what is

7:23

the social moral, legal position of

7:26

these devices that are increasingly intelligent and

7:28

socially interactive? And

7:30

how do we grapple with the moral patiency

7:33

question of these machines? And then when

7:35

we go to the next stage, it is the

7:37

can/should distinction. And this

7:40

is really derived from David Hume. David Hume

7:42

says, You know, you cannot derive ought from is

7:44

, it's a very famous item in

7:46

, in David Hume's thinking, but a lot of people have picked it up

7:48

and developed it, you

7:50

know, since that time, the can and

7:53

should question is really this: Can

7:55

robots have rights? Yeah. All you have

7:57

to do is make a law that says robots have rights.

8:00

Now, should you do that is another

8:02

question. So the ability to

8:04

do so maybe is entirely

8:06

feasible and very easy to

8:09

answer that question. The moral question,

8:11

the should question is a little more complicated. And

8:13

I think that's where we get into the weeds on , um,

8:16

how we want to shape the world that we live

8:18

in and how we integrate these things alongside

8:21

us in our social reality.

8:23

Mm-hmm . So I'll

8:26

jump the gun a little bit, and just because you mentioned it and

8:28

ask , uh, what are some reasons

8:30

why robots shouldn't have rights? Uh

8:32

, what are what are some arguments one might use?

8:34

So one of the strongest arguments is that they're

8:37

machines, right? They're not people,

8:39

they're not human beings, they're not animals. They

8:42

are, you know, just artifacts and therefore they

8:44

are things. We may have this

8:46

distinction that comes to us from the Romans , uh, from

8:49

Gaius in particular, that, you

8:51

know, actions that are moral or

8:53

legal in nature have two kinds

8:56

of objects or two kinds of , uh, entities.

8:58

There is either persons or things. And

9:01

so in the category of persons, we put you

9:03

and I and we put corporations

9:06

and we put maybe some animals, but

9:08

we don't generally put technological objects. Those

9:10

are things that's a strong argument based

9:12

on this ontology that we've

9:14

inherited from the Romans and the way in which

9:16

our legal structures especially have

9:19

operationalized this way of dividing things into

9:21

persons or property. Another argument

9:23

is that if we give

9:26

robots some kind of moral or

9:28

legal standing, we have complicated

9:31

our legal system in ways that

9:33

go beyond what maybe we

9:35

would like to handle, and

9:37

that it maybe doesn't lend anything

9:40

very useful to the way that

9:42

we decide these relationships. Those

9:45

are usually the two big ones.

9:47

Mm . When you make those arguments

9:49

or when , when you put those forward, it makes me think

9:51

of , uh, these are some of the arguments I've heard in

9:53

relation to why people say non-human

9:56

animals shouldn't have rights, it , it

9:58

complicates the legal system. Um, it

10:00

, uh, it's, yeah. So that , that , that

10:02

sounds familiar, but what are , what

10:05

are the , um, best arguments for why robots should

10:07

have rights?

10:08

So there's a number of arguments, and I

10:10

don't wanna try to be exhaustive here, but let

10:12

me cover some of the important ones

10:14

that have been circulating. Um, the

10:17

literature in this field has absolutely exploded

10:20

, um, in the last decade. And

10:22

when I started working on this back

10:24

in the early, well,

10:27

mid , mid-2000s — you know, 2006 I started. The first

10:29

book comes out in 2012. So in those early

10:31

years, it was really easy to keep track of

10:34

who was arguing what, because the number

10:36

of different arguments in circulation was pretty manageable.

10:39

By the time I get to robot rights in 2018,

10:41

this thing is spinning out of control

10:43

because a lot of people find

10:46

reason to engage the question and

10:48

to deliver their own sort of response

10:51

to it. So let me just hit a few

10:53

reasons why we might want to

10:55

do this. One is directly derived

10:58

from what we've learned in the animal

11:00

rights experience. So in

11:02

animal rights, we know Jeremy , Jeremy Bentham

11:04

really , uh, is the pivot, right? And he said it's

11:06

not, can they think, can they reason, but can

11:08

they suffer? Are they sentient? And that

11:11

opened up the moral circle to include

11:13

things that had been previously excluded.

11:15

It had been, up until that point, a very

11:17

human-centric kind of moral universe.

11:20

And when we start to engage

11:22

in the animal question, we widen

11:24

that circle to include other

11:26

creatures that are non-human. And the reason why

11:29

we included animals in the moral circle,

11:31

whether you're following Peter Singer or

11:33

Tom Regan , or one of the other innovators in

11:36

animal rights, is because of

11:38

sentience, because of the experience the

11:41

animal has of pain

11:43

or pleasure. And you can see just recently

11:45

with Blake Lemoine, he was

11:47

talking to LaMDA, and LaMDA, he

11:50

said, is sentient , which led him to believe

11:52

that LaMDA needed to have rights protected

11:54

for it because it was another kind

11:56

of sentient creature. So we use sentience

11:59

as a benchmark, and the question is, how

12:01

can you tell whether an AI or a robot is

12:04

sentient? Well, that takes you back to the Turing test

12:06

because you can't necessarily look

12:08

inside and know exactly what it's doing. You

12:10

can know some of what's going on, but

12:12

really what Blake Lemoine did is learn

12:14

from the behavioral experience that

12:16

was exhibited to him in the conversational interactions

12:19

that he had with LaMDA. So that's

12:21

one reason why people argue for giving robots

12:24

rights, this notion that they'll be at

12:26

some point either sentient or conscious

12:29

or some of these other benchmarks that make something

12:31

available to having standing

12:34

status and need for protection. Another

12:37

argument, and this comes from Kate Darling, who I

12:39

think was really innovative in this area by using Kant's

12:41

indirect , uh, duties argument , uh,

12:44

Kant , unlike Bentham, was no animal rights advocate. Kant

12:46

thought animals were just mechanisms like Descartes

12:48

did. But he argued you should not hurt animals, because

12:51

when you do so, you debase

12:53

yourself, you are corrupting

12:56

your own moral character. You're corrupting

12:58

your own moral education and

13:00

providing a bad example for other people. And

13:03

so indirectly, you're harming somebody

13:06

else if you harm an animal. And

13:08

Kate Darling says, you know, this is one way

13:10

of thinking about why we don't want to harm

13:12

the robot because of the example

13:15

it sets, because of the , uh,

13:17

way in which it could corrupt our own

13:19

moral characters. And she

13:21

uses the Kantian indirect duties argument to make a

13:24

case for the rights of robots as

13:26

a way of protecting our social mechanisms

13:28

, uh, the way that we relate to each other, either morally

13:31

or legally. A third argument

13:33

for doing this, and this is more

13:35

in line with where I take it , um, in

13:37

my own research, is that the

13:40

properties of sentience and consciousness have

13:42

traditionally been really good benchmarks for

13:44

things like animal rights and , and items like that.

13:46

But I come out of environmental ethics and

13:49

in environmental ethics, you don't

13:51

harm a mountain — dirt

13:54

does not feel pain. A waterway

13:56

does not experience pleasure. Nevertheless,

13:59

these are part of our

14:01

integral experience on this

14:04

planet, and we have responsibilities

14:06

to the other entities that occupy

14:09

this fragile planet with us. And I think climate

14:11

change is a really good example of

14:13

how we can screw that up if we assume

14:16

wrongly, that these things are just raw materials

14:18

that we can utilize to our benefit and

14:21

artifacts may also play a role

14:23

in our social world in a way that we

14:25

need to think more creatively about

14:27

how we craft our moral

14:30

responsibilities and our legal responsibilities

14:32

to others. And that is

14:34

what I'm calling a relational approach

14:36

to moral status. That it's

14:38

not what the thing is, but how

14:40

it stands in relationship to us and

14:43

how we interact with it on

14:45

a scale that treats

14:47

these other entities as

14:49

fellow travelers, as kin . And

14:52

how we can come up with ways

14:54

of integrating not just

14:56

the natural objects of the environment, but

14:58

also the artifacts that we create in

15:01

our moral universe, in our legal

15:03

universe in a way that makes sense for

15:06

us, but also for our future

15:08

generations and for the kind

15:10

of environment and the kind of world

15:12

we want to occupy.

15:14

That was great. Thanks David. There's , there's

15:16

a lot. Uh , I wanted to, to , um,

15:18

mention from that first , uh, that you

15:20

mentioned Blake Lemoine and LaMDA. Uh , we actually

15:22

spoke to , uh, Thomas Metzinger in our last

15:25

podcast episode that came out last week, and we

15:27

, we talked about that topic as well. Um,

15:29

you, so the second,

15:31

the second argument you made about if

15:35

we treat robots in

15:37

, in a bad way, in the same way that if we treat animals

15:39

in a bad way, that might have repercussions

15:42

for , um, for how

15:44

it affects us and how it affects humans in general. Uh

15:46

, now it sounds like even

15:49

if there is, even

15:51

if robots are not currently sentient , even if

15:53

robots maybe even can never be sentient, that

15:56

would still remain an argument in favor

15:58

of giving robots rights. Uh , does , does

16:00

that sound, does that sound right? Um , yes.

16:02

Yeah. Great. Yeah, sure.

16:04

And , uh, the , the last point you mentioned

16:07

it again, it seems to be coming back

16:09

to , uh, how , um,

16:12

how the way we interact with robots

16:15

affects , affects humans. Uh , I

16:17

know you've spoken about the case of the Whanganui

16:19

River in New Zealand being granted legal personhood

16:21

rights as an analogy. And I think you said

16:23

something like, it's not that people are arguing that

16:25

the river is sentient , it's just

16:27

that that's a tool in our current legal system , uh,

16:30

to get protection. And then it's for instrumental

16:32

purposes or for , um, orsic

16:36

purposes. It's how , uh, that

16:38

, uh, affects humans. Uh

16:40

, so giving robots, right, giving

16:42

robot rights, sorry, giving

16:44

robots rights now might therefore

16:47

be an instrumental tool in the same way. Yeah,

16:49

I , I think a really good example is — recently,

16:51

12 states in the US made

16:55

some legislative , uh, decisions

16:58

and , and put into effect , uh, some laws that

17:01

try to deal with these personal

17:03

delivery robots that are being deployed on the city streets.

17:06

And they decided that for

17:09

the purposes of figuring out

17:11

right of way and who can go

17:13

in the crosswalk and who can't go in the crosswalk

17:15

and things like this, it made sense to

17:17

extend to the robot the rights of a pedestrian.

17:20

Now, that's not making a decision about robotic

17:22

personhood. That's not making a , you

17:24

know, a distinction that would grant personhood

17:27

to the robot. It's just saying rights

17:30

are the way that we figure out how to integrate

17:32

things in situations

17:34

where you have competing claims, powers, privileges,

17:36

or immunities coming from different actors

17:39

in the social environment. And

17:41

one way in which we negotiate this is

17:43

by granting rights, by giving something a

17:45

right, a privilege, a power, a claim, or

17:47

an immunity. It has to be at least , uh,

17:50

respected by some, by someone else in

17:52

that social environment. Yeah . So this

17:54

is just a tool we have to try to

17:56

develop ways of responding

17:59

to these challenges that allow

18:01

us to occupy space and

18:04

work with and alongside these

18:06

various things,

18:07

I , I think a pretty clear example of

18:09

that is , um, how corporations have

18:11

rights and they , they're clearly not , um, sentient , they're

18:13

clearly not , um, persons, but in , in the eyes of the

18:16

law, often they are treated like they are , uh,

18:18

persons in , in a lot of

18:19

ways. Can I say — yeah, the real trouble here,

18:21

I think the , the real point of, of debate

18:24

and contention is the fact that we're

18:26

trying to work with two legal categories that

18:28

are mutually exclusive person or thing.

18:31

And as I said before, this comes

18:33

from 2000 years ago when Gaius developed

18:36

this distinction in his own , uh,

18:38

legal thinking. And our western legal

18:40

systems have worked with this for a long time, and

18:43

it's worked pretty well, but that's why the corporation got

18:45

moved from a thing to a person because we wanted

18:47

to be able to sue it. We wanted it to

18:49

be able to stand before

18:51

the court as a subject and

18:53

not just an object. And so this

18:56

whole debate is about how do we negotiate this

18:58

distinction person, On the one hand that

19:01

is a subject before the

19:03

law, an object or thing on the

19:05

other hand, and it may be the case that we just

19:07

need a better moral ontology, we just

19:09

might need something that gives our

19:12

legal system a little more latitude with

19:14

regards to the kinds of entities that

19:16

can occupy these

19:19

kinds of , uh, positions in our world.

19:20

Yeah. When , when it comes to say,

19:23

non-human animal rights, I do like Peter Singer's

19:25

take on what that might look like. It's, it's

19:27

not that we're asking for non-humans to have the

19:29

right to, to drive or to vote, for example,

19:31

We're just asking that , um, their interests are

19:34

considered — their similar interests. Um, for

19:37

example, there , uh, the right

19:39

to not be harmed. Uh

19:41

, so yeah. With that in mind, do you have any

19:43

thoughts about what robot rights might look

19:45

like? Um, or what, what

19:48

perhaps one of your most ideal scenarios

19:50

might be for how that might look

19:52

in practice? Uh , you've sort of talked about that a little

19:55

bit, I guess. Um, but also just given that

19:57

, uh, robots may have very different interests

19:59

to us, let's say in the case where they do

20:01

become sentient. And I'd just like to nudge you as well to

20:03

maybe mention , uh, Kamil , uh,

20:06

Mamak's paper, Humans, Neanderthals, Robots

20:08

, uh, and rights. I think that

20:10

that seems relevant here as well. That's talking

20:12

about moral patiency versus agency. So , um,

20:15

yeah, please, please talk about that a little bit.

20:17

Yeah. So, you know, this is

20:19

where I think the analogy with animal

20:21

rights starts to, if not

20:23

break down , at least reach a

20:25

limit. Mm-hmm . <affirmative> animals, I

20:27

think we can say have interests and we

20:29

can guess pretty well what

20:32

they are, even though it's guesswork , we can

20:34

pretty much sort of figure out pretty

20:37

well for ourselves, you know, what the dog

20:39

has as an interest, food, a walk, whatever

20:42

the case is, right? Our current technologies

20:44

don't have interests, and if we're

20:46

looking, looking for interests, we might be

20:49

barking up the wrong tree at some point in

20:51

the future, that's possible. But

20:53

I don't want to hold out this

20:55

expectation that we should wait for that to happen before

20:58

we engage these questions. Um

21:00

, if that does happen, then we may find

21:02

ourselves having to deal with this

21:04

, uh, analogy to the animal much more

21:07

directly. But I think for now, the

21:09

real issue is we have interests and

21:11

these objects are in our world, and

21:14

we need to figure out how to integrate

21:16

them in our moral and legal systems

21:18

in a way that makes sense for us. And

21:21

just responding to these

21:23

devices as if they were tools or instruments

21:26

doesn't seem to work.

21:28

In other words, the robot is sort of in between

21:30

the thing and the person, and it resists

21:33

both reification and personification.

21:36

And that's the problem, right? That's, that's

21:38

the challenge and the opportunity before

21:40

us. How do we scale existing

21:43

moral and legal systems that work

21:46

with these two categories for something

21:48

that seems to resist one or

21:50

the other? And I think what

21:52

Camille has done in his work on

21:55

, uh, the subject of moral patency is

21:57

really instructive because he's saying,

21:59

We're not looking for similitude. We're

22:01

not looking for robots to be like human

22:03

beings and therefore have those same rights

22:06

as a human being would have. Robot

22:08

rights are not the same thing as the , as a

22:10

set of human rights. Human rights are very specific to

22:12

a singular species, the human being.

22:15

Robots may have some overlapping powers,

22:17

claims, privileges, or immunities that

22:19

would need to be recognized by human beings,

22:22

but their grouping or

22:24

sets of rights will be perhaps very different.

22:26

And I think Camille's point is

22:28

that difference actually matters here. And

22:31

that if we're looking for similitude, we

22:33

are going to , uh, actually paint

22:36

ourselves into a corner on the subject

22:38

where , uh, we'll be really missing

22:40

what's important. And I think the

22:43

focus on difference and how to

22:45

integrate what is different into

22:47

, uh, our current way of

22:49

thinking about our relationships to other

22:52

kinds of things will actually help.

22:54

And again, I think environmental ethics is a really

22:56

good guide here, because

22:58

we don't want to think about

23:00

the water like us. It's

23:03

not like us, right? What is the

23:05

water to us and why is

23:07

it different? And how is that difference important to

23:10

our way of engaging it

23:13

and living alongside it?

23:13

Yeah. So to , to go back to that paper,

23:16

and there's one point I found , uh, kind of

23:18

interesting. Uh , so they used

23:20

, uh, uh, Neanderthals as

23:23

, um, as an analogy, I guess for, for

23:25

robots in that there seems

23:27

to be some evidence — I might , I'm not really familiar,

23:29

I'm just , um, going off that paper — that

23:32

, uh, Neanderthals might not have

23:34

, um, uh, moral agency

23:36

, uh, per se. They might have moral patiency

23:38

, uh, correct me if I'm,

23:40

I'm mistaken here, but they , they're arguing that , uh,

23:44

it , it might , um, we

23:46

might want to treat Neanderthals if they were in our

23:48

current society, in the

23:51

legal system, treat them more as say

23:53

we would a human child in that , um,

23:55

what we do to them , uh, matters

23:58

and what, but we might not necessarily hold them

24:00

, um, accountable or to blame for

24:02

the actions they do , uh, to us. Does

24:04

that , um, sound about right and is , if so,

24:06

is that a reasonable analogy for how we

24:09

might treat robots in a legal system?

24:10

Right. So lemme say , uh, two things.

24:12

One, a disclaimer. I know very little about Neanderthals

24:15

<laugh> , so I'm not , I'm not gonna speak directly to that.

24:17

Yeah , we're on the same page there . Um , so I can't

24:19

really enlighten you in any appreciable

24:22

way in that regard. Um

24:24

, but I will say, and this again, I I'm

24:26

gonna go back to environmental ethics. Um, Thomas

24:29

Birch, who was an environmental ethicist , um,

24:31

said that, you know, what we're talking about

24:33

is power. When you talk about widening

24:36

the moral circle or widening

24:38

the inclusion of what is on

24:40

the inside and what's on the outside , um,

24:43

someone in

24:45

the inside decides whether or not to expand

24:47

the circle, right? And that is

24:49

a power relationship where those in

24:51

power extend to those without

24:53

power inclusion. You

24:56

can see this already in previous rights

24:58

, uh, expansion. Mary

25:00

Wollstonecraft, who wrote A Vindication of the Rights of Woman,

25:03

had to pitch her argument to the men

25:05

who were in power and had the rights

25:07

themselves, because they were the ones who

25:09

could expand the circle to include women in

25:12

moral and legal consideration. The

25:15

same with animals, right? In order to

25:17

include animals, someone had to pitch

25:19

an argument on behalf of the animals, but

25:22

they were inside the circle to begin with, otherwise

25:24

they would not have been able to make the argument

25:26

for expanding that circle. And

25:29

I think this is the same thing we see playing

25:32

out with, you know, rights expansion beyond

25:34

, um, even animals. That

25:37

this is a dynamic that is

25:39

very much related to power and politics,

25:42

and how this plays out is really

25:44

something that is in our hands because

25:46

we're the insiders. We're the ones

25:48

who make these decisions. So how

25:51

robots get integrated into our moral and

25:53

legal systems is entirely ours

25:55

to decide, and therefore, we

25:57

need to engage this responsibly

25:59

in ways that really adhere to

26:02

our moral standards and protect our

26:04

futures.

26:05

You've, you've said that to change the topic a

26:07

little bit , uh, you've said that AI ethics , um,

26:10

is often dominated by , um, western thinking,

26:12

and that expanding the dialogue to include other ways

26:15

of thinking — like, for example, indigenous thought or

26:17

animism — could be useful. In

26:19

some of our research at Sentience Institute, we,

26:22

we found that people who reported more

26:24

belief in , uh, that artificial

26:26

beings like AIs and robots, if

26:29

they have reported belief that those

26:32

entities can have spirits, they also tended

26:34

to extend more moral consideration

26:36

to them, which , um, doesn't sound that

26:38

surprising, I guess, but it's

26:41

an example of how maybe some other ways of

26:43

thinking might actually be , um, beneficial

26:45

to bring into this discussion. So do

26:47

you have any other examples of how, say

26:50

bringing indigenous animism or

26:52

other ways of thinking into the AI

26:54

ethics conversation might be useful?

26:56

Yeah, so lemme just say that a lot of the

26:58

AI ethics , uh, discourse has

27:01

been distinctly western, right? We've used

27:03

consequentialism, we've used deontology,

27:05

we've used virtue ethics, we've

27:07

used traditions that are very much grounded

27:10

in a Western European Christian sort

27:12

of tradition. And there's nothing wrong with

27:14

that except that we've gotta recognize that that's

27:16

not a universal position, right?

27:18

That's very particular. And for people

27:21

who live in the global north and have

27:23

grown up with these philosophical and religious traditions,

27:25

it may make sense, but the

27:28

rest of the world looks at things from

27:30

different perspectives and does

27:32

things , um, that do not necessarily

27:35

track with what comes out

27:37

of a Western experience. And

27:39

so, I think you're exactly right. There's ways in which we can

27:41

look beyond , um, our own

27:43

way of thinking about these matters and do

27:46

so to help inform this in a more

27:48

global perspective and draw on

27:50

a wider range of human wisdom as

27:52

a way of developing responses to this. Now,

27:55

I'll caution, we gotta be careful here because

27:57

this could turn into Orientalism, right? This

27:59

is one of the premier sort of

28:01

colonialist kinds of gestures. You go

28:03

out to the other and you take from them

28:05

what you think is gonna help you , um,

28:08

in your own endeavors. And we've

28:10

gotta protect against that kind of gesture.

28:12

It's not about going

28:15

and colonizing these other ways

28:17

of thinking , um, in order to mine from

28:19

them some sort of insight that we lack

28:22

in our way of doing things. So

28:24

it's about learning, it's about engagement

28:26

in order to be , um,

28:29

students of other ways of thinking

28:31

and to learn from these other

28:33

traditions how to see and engage

28:35

the world in ways that will be different

28:37

from what we may have grown up with and different from

28:40

the standard practices that we have

28:42

from our own traditions. So

28:44

I'll mention , uh, just a couple things that I think

28:46

are useful here. One

28:48

is, I think African philosophies like Ubuntu

28:50

. Obviously Ubuntu is not one

28:52

philosophy, it's a, a collection

28:55

or a constellation of different

28:57

philosophies, but it is a much more

28:59

holistically oriented and less individualistic.

29:02

Whereas Descartes said, I think, therefore

29:04

I am, the philosophers

29:07

arguing and working in the Ubuntu tradition

29:09

say, you know, I am because we

29:11

are, And it comes much more out of a

29:13

communal kind of relationship to

29:16

a wider perspective on the world.

29:18

And I think that can help us, because

29:21

I think a lot of the work that is done in

29:23

AI ethics and in

29:25

even the robot rights literature

29:27

tends to be very much focused on a Cartesian

29:30

subject that is sentient , that is conscious, and

29:32

that becomes the unit of analysis. If you

29:34

look at things from a more holistic, communal perspective,

29:37

we're looking at it then in a more relational approach

29:39

that I had described earlier. Another

29:42

tradition I think can be really useful

29:44

is by looking to indigenous

29:47

, uh, epistemologies and cosmologies.

29:49

And again, there is no one indigenous

29:52

epistemology. There are a plurality, a

29:54

multiplicity, because they're very different across

29:56

the world. But there are ways in which

29:59

our very idea of rights is

30:01

already a western concept, right? This idea

30:03

of God given rights to the individual. And

30:06

that's very Christian, it's very European, it's very

30:08

modern. And the

30:10

pre-modern sort of indigenous ways of thinking

30:12

about these things look not at rights.

30:14

They don't have that concept yet. They

30:16

talk about kinship relationships and

30:19

how do we build kin with our machines?

30:21

How do we exist alongside

30:24

these entities that are

30:27

our tools, our servants,

30:29

our instruments that doesn't turn

30:31

them into a slave, that doesn't turn them into

30:34

something that is beholden

30:36

to us. And I think kinship relationships as

30:39

developed in a lot of indigenous traditions can

30:41

be a nice way to sort of complicate

30:43

the rights literature that we often bring

30:46

to bear on these questions. And

30:48

then the third thing I will say, and this

30:50

comes outta Confucianism and some research that

30:52

some Confucian scholars have done recently. Instead

30:55

of talking about robot rights, R-I-

30:57

G-H-T, they talk about robot

31:00

rites, R-I-T-E-S,

31:02

that is, the idea of a ritual, of

31:04

a performance, and that the

31:06

robot is engaged with us alongside

31:09

us in performative activity. And

31:11

as a result, they are engaging us

31:13

in rights of social interaction

31:16

and that we should put the focus not on

31:19

rights as R-I-G-H-

31:21

T, but rites, R-I-T-

31:23

E-S, as a different way of sort

31:26

of shifting the focus from this

31:28

individual possession to a communal

31:30

performance.

31:31

Yeah, that's, that's interesting. Do you have any examples

31:33

of how that might look in practice? What

31:36

, what would that entail doing?

31:37

So this is what I've tried to develop, especially

31:39

with this relational turn , um, concept

31:42

that I, along with Mark Coeckelbergh, have

31:44

really been formulating and researching

31:47

for the last decade or more. This

31:49

idea is not ours alone. It comes out

31:51

of environmental ethics. It comes out of the sts

31:53

feminist , uh, ethics , uh, like Karen Barad

31:56

and Rosi Braidotti. Um

31:58

, but it's this idea that we need to begin

32:00

to think about our moral relationships

32:03

as maybe taking precedence

32:05

over the individual moral , uh,

32:08

entity, and that we are all

32:10

born alongside others, and that

32:12

we are already in that relationship

32:15

prior to this extraction

32:18

of our sort of identity of ourselves.

32:21

Um , so it sort of works counter to the

32:23

Cartesian way of thinking about being in the world

32:25

where Descartes is sort of isolated

32:28

from others and then has to go out to

32:30

others and figure out his , uh,

32:32

responsibilities to others. This way of thinking

32:35

is always already responsible

32:37

to others, and that the, the individual is

32:40

a product of that sort

32:42

of interaction. But , uh,

32:44

yeah, that's , uh, that's a life's work right there.

32:46

Sure, sure. Thanks. Do

32:49

you have any thoughts about what human-robot interaction

32:51

might look like in the future? We've talked a bit about

32:54

the , the legal context , um, but there's, there's,

32:56

there would be a lot of aspects of, of

32:58

interaction. And , um, I guess this

33:01

, you could answer this in the long term where,

33:04

as you say, when at some point in the future,

33:06

robots , um, likely become sentient

33:08

, uh, but there's also the short term answer.

33:10

I mean, we have human robot interaction now that's

33:12

not necessarily , um, related to robots

33:15

being sentient. So what, what, is

33:17

there anything that we can expect in the

33:19

future? How much can we, One thing I

33:21

wanna talk about as well is how much can we learn from science fiction? Um

33:24

, how much of that is lessons

33:26

about what we might see and how much of that is just

33:28

merely fantasy?

33:29

Yeah. No , this is a really important question because

33:31

I think sometimes we think that the robot invasion

33:33

is something from the future, right? We're

33:35

waiting for it. The robots are gonna rise up, or they're

33:37

gonna descend from the heavens with guns and,

33:40

you know , bombs and they're gonna attack us. And that's

33:42

, that's a science fiction scenario. I

33:44

think the robot invasion is way less

33:47

exciting, way less dramatic, even

33:50

mundane. It's like the fall of Rome.

33:52

We invite these things into our world, and

33:55

over a couple hundred years, we wonder where the robots

33:57

came from , um, because they have infiltrated

33:59

us in very slow movements

34:02

of, you know, our decisions to use

34:05

a device here to use of a device there.

34:08

So I think we need to look not necessarily

34:11

at the big picture, long

34:13

term kinds of questions, but I wanna look more

34:15

immediately, where are we at

34:17

right now? Like, what is happening

34:20

in our relationships to these devices

34:22

that is maybe of interest to us in

34:25

changing our social relationships

34:27

with each other in the process? So

34:30

one thing we've seen recently , um, as being

34:32

reason for both concern, but also of

34:34

interest is children saying, Thank you to

34:36

Alexa. Now, that's weird. We don't

34:38

say thank you to objects, right? We, we,

34:41

you know, don't say thank you to

34:43

our automobile for getting us around town. We

34:45

say thank you to persons . Some people do, Yeah , mostly

34:48

not. But you know, when we say

34:50

thank you to persons, right, and yet

34:52

the abilities of these very

34:55

simple digital assistance to

34:57

use language brings us into

34:59

a social relationship where

35:01

we think that we need to be polite to

35:04

the object. And there's nothing necessarily

35:06

wrong with that. There's reason

35:08

to think that that is part of what makes

35:11

us social creatures and

35:13

that we need to be really concerned

35:15

with not only , um, what

35:17

that artifact is, but how we

35:19

engage it, how we respond to

35:21

it. I think sometimes people try to write this

35:23

off as anthropomorphism. They say, you know,

35:25

this is anthropomorphism, and anthropomorphism

35:27

is a dirty word because we shouldn't

35:30

be doing that. Um, I think anthropomorphism

35:32

is not a bug. It's a feature . It's a feature of human

35:35

sociology. We do it to each other,

35:37

we do it to our animals, and we do

35:39

it to our objects. So it's not a matter of

35:41

yes, no, with anthropomorphism, it's

35:43

not a binary. It's a matter of

35:45

careful and informed

35:48

management. How do we want to manage

35:50

the anthropomorphism that

35:52

we are developing and designing in

35:54

the process of creating these things? And

35:57

I don't know that we have answers to those questions,

35:59

but I do know we have lots

36:01

of ways of engaging in this question.

36:04

Cause we not only have the example of talking to Alexa

36:06

and saying, thank you. We have robot abuse studies

36:08

in which people find it very , uh,

36:11

disconcerting and problematic to , um,

36:13

harm something that they're

36:15

told is just like a toaster. Nevertheless,

36:18

its social interactivity makes it

36:20

very difficult to do these things. We

36:22

can already see in very

36:25

rudimentary robotic and AI

36:27

systems, ways in which we are accommodating

36:29

ourselves to these objects and

36:32

bringing them into our social relationships in

36:35

ways that maybe don't exactly

36:38

fit our human to human relationships, but

36:40

are creating new relationships. I'm

36:43

part of a new field in , uh, communication called

36:45

human machine communication. And that's

36:47

because we recognize the machines are no longer the

36:49

medium through which we send messages to each other.

36:51

They are the thing we talk to, they

36:54

are the thing we interact with. And this, I think, raises

36:56

some interesting , uh, immediate

36:59

questions that we don't have to wait until , uh,

37:01

you know, two, three decades from now when

37:03

we get sentience or AGI or whatever the heck it

37:05

is .

37:06

Yeah, yeah. Um, we

37:08

, we talked about this a little bit with , uh, Thomas

37:10

Metzinger as well. It's, I guess, kind of a social

37:12

hallucination where we, we might just

37:14

all accept that, whether Alexa

37:17

or something else , um, we just

37:19

kind of accept and act like it's , uh, sentient

37:21

even if it's not. Um, one

37:24

, one thing I wanna maybe push back a little bit on

37:26

is , uh, I mean there are some examples,

37:28

I guess other examples of where people kind

37:30

of act like something is sentient

37:32

when it's not like children with

37:35

stuffed toys, for example. Or maybe

37:38

in like a very realistic video game where

37:41

you kind of, or you

37:43

are , maybe not intentionally, but you're sort of forgetting

37:46

maybe that it , it what you're interacting with

37:48

is an npc, like an

37:50

AI character, not , um,

37:52

not another , a real person. So I , I

37:55

have to ask, I guess , um, is that necessarily a

37:57

bad thing or is, I mean, you mentioned before

37:59

, uh, it

38:01

, it , the way we treat , um,

38:03

robots, even if they're not sentient, might actually be

38:05

important because it , it , uh, influences how

38:08

we affect how we interact with other humans

38:10

as well. So is that , is that a good thing, a

38:12

bad thing? Not , not quite a clear answer.

38:14

So I don't think it's a good or bad thing, but

38:17

it's a thing, It's a thing we have

38:19

to really take seriously. Um, we

38:21

talk about suspension of disbelief. When you go

38:23

to the theater or you watch a movie, the characters

38:25

on screen are not real. And yet we

38:28

feel for them, we engage with

38:30

their emotions, and we have an experience as

38:32

a result of that. Um, and, you

38:34

know, know in the early years of cinema, that

38:37

was something that people were worried about. Um,

38:39

would people, you know, lose themselves

38:42

in, in the story and, you

38:44

know, exit reality and spend

38:46

more time in the , in the movies than in

38:48

real world? Well, that didn't happen. We figured

38:50

it out. But that's why I say I think it's a management

38:52

problem. It's a way of managing

38:55

these relationships and managing these responses

38:57

that we make to these devices, because

38:59

that is where I think the real challenge

39:02

is . I think saying yes no

39:04

is way too simplistic. It's, you know,

39:06

we we're not going to fix this by saying, don't

39:08

do that. I don't think you fix

39:10

a social problem by saying to people, stop

39:13

doing something. Prohibition never

39:16

really fixes the problem. You've gotta figure

39:18

out how to engage them in

39:20

a reasonable and emotionally

39:22

informed response that we are

39:24

able to effectively manage and that works for

39:26

them.

39:27

Yeah. I actually find that a little bit amusing

39:29

how you mentioned people would think

39:31

that , um, cinema

39:34

is going to make people lose, lose

39:36

themselves in, in all these fictional

39:38

worlds. I guess , uh, the example I'm familiar

39:40

with most recently is , um, virtual reality and

39:43

I guess video games in general. People had

39:45

that worry , Uh, and I didn't realize there was

39:47

that worry about cinema. And then I also thought, well,

39:49

what, I mean, you could go back further

39:52

with this . It's not like , um, cinema was the

39:55

first , uh, iteration of fiction. There were plays

39:57

there, there were books. So

40:00

unless something is particularly different about

40:02

this new medium, maybe it's, you know , the newer

40:04

mediums are more engaging. It

40:06

is kind of interesting

40:08

and funny to think about for me. So one

40:12

example from science fiction that I wanted to

40:14

get your thoughts on is in science fiction , uh,

40:16

artificial entities are often seen as being quite discrete.

40:20

Uh , so for example, often you have a

40:22

robot and that robot is sentient , and that's

40:25

in , like, their mind is encased in that

40:27

, in that physical robot. But in

40:29

reality, it might be a little bit more complex.

40:32

You might have, say,

40:34

a single , uh, sentient entity , artificial

40:36

entity that controls

40:39

multiple different robots. Uh , and you

40:41

mentioned already that , uh, it's, it's a

40:43

mistake to think about artificial intelligence as being

40:45

disembodied because it is embodied somewhere. It

40:47

might just be more , um, diffuse more,

40:49

more spread out in say, different servers. So

40:53

for example, maybe for , um,

40:55

maybe for an AI

40:58

that controls multiple different robots,

41:00

losing an individual robot might be more like,

41:02

say, losing a limb than say, a human

41:04

dying. So in

41:06

, in cases like this where it's much

41:09

more diffuse and hard to tell really where an

41:11

AI begins and ends, or robot begins and ends,

41:13

how might this affect the case for robot

41:15

rights? Or how might this affect robot rights in practice?

41:18

So I think here, corporate law provides

41:21

a pretty good template because

41:23

corporations are also diffuse, right? Corporations

41:26

exist in such a way that there is no

41:28

one place you can go and say, that's the corporation,

41:30

right? It's all over the place and

41:33

it , it has different manifestations. And

41:35

I think if we really want

41:37

to learn from that experience,

41:40

I think we'll have a good template, at

41:42

least for beginning to address a lot of these questions.

41:44

Because I think the relationship

41:47

is much more direct between

41:49

AI and the, and the corporation because

41:51

both are artifacts, both are humanly

41:53

created and both have a

41:56

kind of status. Um, in

41:58

the case of the corporation, they have personhood. In

42:00

the case of ai, we're now arguing whether

42:02

AI should also have personhood. Um,

42:05

and again, I think oftentimes we're looking to the

42:07

animal question as the template

42:09

for how we decide a lot of questions

42:12

regarding the moral legal status of AI

42:14

and robots. But I think the corporation may be a

42:16

better , uh, analogy to follow.

42:19

Um , as we try to think our way through these things.

42:21

Are there things you think that we can learn from

42:23

science fiction that maybe some,

42:26

some depictions where the useful thought experiments

42:28

or where you might think, Oh, they've , they've

42:30

got it right, That looks like it's a plausible , um,

42:33

plausible scenario?

42:34

Yeah, I think there's a lot we can learn from

42:36

science fiction, and I appreciate the question because

42:38

I think sometimes , um, the

42:41

response to science fiction by roboticists

42:44

is this kind of, yes, but

42:46

no, you know , I'm interested in it,

42:48

but don't go there because science fact is way

42:50

more complicated and it's not as

42:53

simplistic as what you get in fiction. And we have

42:55

to bracket off that fictional stuff so we can talk

42:57

about the real stuff. I think science

42:59

fiction does a lot of heavy lifting for us in

43:01

this field. It already gave us the

43:03

word robot. We wouldn't have the word robot

43:06

if it wasn't for science fiction to begin with. Secondly,

43:08

I don't think science fiction is about the future. Many

43:11

science fiction writers and filmmakers will tell

43:14

you this. Cory Doctorow is one of them. He

43:16

says, Science fiction is not about predicting the future, it's

43:18

about diagnosing the present. And

43:20

so what we see in science fiction are present

43:22

anxieties, present worries, present

43:25

concerns projected on the screen of

43:27

a possible future. And so we

43:29

can see science fiction as a way of self-diagnosing

43:32

our own worries and

43:34

concerns, almost like a psychoanalysis of

43:36

us as a species right

43:38

here, right now, and what really troubles

43:41

us. And so if we look at science fiction, not

43:43

as predictive, but as diagnostic, I

43:45

think we can learn a lot from our

43:47

science fiction. I also think science

43:49

fiction gets a lot of things

43:52

right way before we get into

43:54

those matters in the scientific

43:57

research. So for example, already in Blade

43:59

Runner, you have this analogy between

44:01

real animals and robotic animals. And

44:04

this whole notion of the electric sheep

44:07

that is the title from Philip K. Dick's

44:09

original novella, is this idea that

44:12

we are developing devices that

44:14

are indistinguishable from real entities,

44:17

and that we could have artificial animals

44:19

and we could have natural animals. And

44:21

so this idea, I think, helps us grapple

44:24

with the way in which we build these analogies to

44:26

other kinds of entities, the way we

44:29

analogize the robot by comparing it

44:31

to the animal and learning from

44:33

our relationship to animals, how we

44:35

relate to the robot or the AI.

44:37

I also think you see in science

44:40

fiction, a lot of deep

44:42

thinking about human robot interaction.

44:44

I mean, we already today are

44:46

talking about the , uh, effect and

44:49

possible social , uh, consequences of

44:51

sex robots. We've already grappled

44:54

with a lot of those questions in science fiction. Now,

44:57

maybe we haven't got the right answers, and maybe we

44:59

haven't , uh, developed even the right

45:01

inquiry, but we've already seen prototyped

45:03

for us the kinds of things that we should

45:06

be concerned with, the kinds of directions

45:08

that we should take our inquiries so that

45:10

when we do engage these questions in social

45:12

reality, we are prepared to do

45:15

so. Finally, I think science

45:17

fiction does a lot of good work making

45:19

public a lot of things about

45:21

robots and AI that average

45:24

people would not have access to. A

45:26

lot of the research is done in proprietary

45:28

labs behind closed doors, and

45:31

we only hear about it once it's released to the public.

45:33

And then there's either excitement or outrage

45:35

as the case may be , right? I

45:38

think a lot of people, if you ask them

45:40

what they know about robots, they inevitably

45:42

are gonna talk about Mr. Data . They're

45:44

gonna talk about Westworld, they're gonna talk about

45:46

WALL-E, they're gonna talk about R2-D

45:48

2. They know the robots of science fiction

45:51

way before they get into the

45:53

science fact. This is what's called science fiction prototyping.

45:56

And I don't think that science fiction prototyping

45:58

is necessarily something that is bad.

46:00

I think there's a lot of education that

46:02

takes place in popular media, and

46:04

if we are careful about how we create

46:07

our stories, if we're careful about how we

46:09

cultivate critical , uh, viewers

46:11

in our young people, I

46:14

think we can use this resource

46:16

as a really good way of popularizing

46:20

our thinking about these new challenges.

46:22

Yeah, I I really like what you said about , uh,

46:24

science fiction being almost like a , a thought experiment,

46:27

which that's one of the reasons why I love

46:29

sci science, reading science fiction, watching science fiction so

46:31

much. And I just wanna shout out as

46:33

well, one of my favorite science fiction series, which depicts

46:36

, um, AI in a lot of different

46:38

forms is , um, the Culture, Iain M. Banks's , uh,

46:41

series. Um, so I would

46:43

recommend , uh, people check that out. Uh

46:46

, one , one thing that's related this from

46:48

, uh, that we found from some research at Sentience Institute,

46:50

we , um, uh, found

46:52

that , uh, people with a

46:55

science fiction , uh, fan identity

46:57

who , um, self-identified as being , um,

46:59

science fiction fans, that trait

47:01

was correlated with people perceiving

47:04

more mind in currently existing robots and AI

47:06

perceiving more mind in

47:08

robots that might exist in the future , Uh,

47:11

stronger beliefs that AI and robots

47:13

would have , uh, feelings of similar value

47:15

to human feelings, less moral exclusion

47:18

of robots, and I could go on. Um

47:20

, but it , it does seem like , uh, that

47:23

, um, science fiction fan identity or being

47:25

interested in science fiction has some positive effects,

47:28

I guess hard to say, whether that's , um,

47:30

whether that's causal or maybe that's, if

47:33

someone has one, they're likely to have the other.

47:35

But that gets me thinking

47:37

about , uh, what, what

47:39

kinds of things can we do to , uh,

47:42

actually, I guess , um, almost

47:44

like an intervention, if we were interested in moral circle

47:46

expansion in the AI robot context

47:48

, um, what can we do? I don't

47:50

, I don't mean like , uh, making people watch science

47:53

fiction or something, but , um, is there anything

47:55

that you think we could do to , uh,

47:58

encourage people to think about robots

48:00

and AI in a more positive light? Or should we,

48:02

and should we be doing anything?

48:03

Yeah, no, it's, it's , again, it's a really good

48:05

question and it's important because moral circle

48:08

expansion is something that is part

48:11

of our evolution in

48:13

both moral philosophy and in law, right? I

48:16

mean, we , we've opened the circle to include previously

48:18

excluded individuals and groups, and

48:21

that's been a good thing. And so engaging

48:23

people in these kinds of , uh, exercises,

48:25

if you wanna call 'em that, I think is , uh, not

48:28

only instructive for them, but

48:30

it also contributes to our

48:33

evolution in our thinking on these

48:35

matters. I think, as

48:37

we just have discussed, I think science fiction is

48:39

one place that you can engage people in these questions.

48:41

I know when I work with my students, one

48:44

of the things that I find them to be

48:46

most engaged with and most

48:48

excited about is when

48:50

you can take something in their

48:52

popular media experience and

48:55

open it up in a way that allows for

48:57

them to really see a lot

48:59

of these things at play , um, and gives

49:02

them some access to it. Because

49:04

I think a lot of times these technological subjects

49:06

and these matters seem rather

49:09

inaccessible. And if you can make

49:12

'em accessible by fiction, by

49:14

whatever means, I think that's a , a really good

49:16

, um, opener to the

49:18

conversation. It doesn't mean you end

49:21

the conversation there, but that's where you begin to

49:23

cultivate this way of thinking. I

49:26

think another way to do this, and I

49:28

again, have found this to be a direct

49:30

, uh, instance in my own

49:32

classroom, is by giving people

49:35

access to devices, by

49:38

letting them just engage with robots.

49:40

You know, we have this idea of the robot petting

49:42

zoo that some people put together at conferences and

49:45

stuff, but I think this is important. I

49:47

think kids are curious, especially

49:49

younger, younger kids, you know, high

49:51

school and below, they

49:54

want to engage these things. They want to

49:56

take their curiosity and see, you

49:58

know , what happens. And

50:01

giving them access, I think is

50:03

crucial, because otherwise, it's

50:05

something that happens at Google. It's something that

50:07

happens at Microsoft, and therefore

50:09

it's not really a part of what they

50:12

are. It's not really in their world in

50:14

a way that they can make sense of it. And I

50:16

think access is absolutely necessary in

50:18

, in that area. I also think education

50:21

is very key to a lot of

50:23

this stuff. Again, I think we've

50:25

limited access to a lot of these materials

50:28

to specialists in computer science, artificial

50:30

intelligence, engineering, and elsewhere. We

50:32

, I think we've gotta open the curriculum and make

50:35

this stuff broadly available. Um, you can see already

50:38

with the release of DALL-E and the

50:40

way people are using it to generate images, we

50:42

need artists to be engaged with this

50:44

technology, and we need artists to

50:46

help us make sense of what this creativity with

50:49

AI is all about. And if we don't

50:51

let artists into the conversation, we're

50:53

not going to learn what we can possibly

50:56

gather from the history of human

50:58

artistic expression and experience. The

51:01

same with music, the same with

51:03

journalism, the same with any field.

51:05

I think this technology is

51:08

no longer able to be limited to

51:10

one specific field, and we've gotta teach

51:13

it across the curriculum in a way that begins

51:15

early and that gets the curiosity

51:18

of our young learners engaged

51:20

from the very early stages of

51:22

their career.

51:23

Great. Thanks for that. Uh , so just

51:25

a couple of questions to sort of wrap this

51:27

all up. Um , I've noticed that you, you

51:29

had an interest in programming from

51:32

a young age, and you've actually developed internet

51:34

applications. You, you , um, uh,

51:37

are an established developer, but

51:39

instead of pursuing computer science , uh, more generally,

51:41

you followed a career in the philosophy of technology.

51:44

Um, why do you think that is? What interests you about, about

51:46

the philosophy of technology more so than coding

51:48

itself?

51:49

Yeah, this is interesting because I , I

51:51

used web development as

51:53

the means to pay for the bad habit of going to

51:55

grad school, <laugh> . But

51:58

it's funny because those two things tracked

52:00

really nicely because one was very

52:02

hands on , very practical, and

52:05

the other was very heady, very theoretical, and

52:07

so they sort of balanced each other out. Uh

52:10

, but one , one thing I noticed as I was

52:12

doing these things simultaneously is

52:15

that the one could really speak to the other, if

52:17

somebody would build the bridge, that

52:19

what we were doing in programming

52:22

and in developing applications could

52:24

actually gain from the traditions

52:26

of human learning, from epistemology,

52:28

from metaphysics, from ethics, you name

52:30

it. If we would only build the bridge

52:32

to those philosophical traditions, we'd be able to open

52:35

that conversation. And I think we've been rather successful with

52:37

that. If you see how AI ethics has really

52:39

exploded in the last five years,

52:42

but it also goes the other direction. I

52:44

think the computer and digital

52:46

media and ai and robots can

52:49

actually provide philosophy with

52:51

some really interesting thought experiments

52:53

on the one hand, but also some interesting challenges,

52:56

to human exceptionalism and the way we think

52:58

about ourselves as being the center of the universe, and

53:01

therefore the only sentient creature to,

53:03

you know, exist on planet earth, which obviously

53:05

isn't true. So what I

53:07

saw was this ability to use the

53:09

one to inform the other, and the reason

53:11

I went in one direction as opposed

53:13

to the other direction, it just had to do with

53:15

the fact that it turned out I'm a better teacher than

53:17

I am a programmer <laugh> . And so

53:20

I pursued the one that was gonna take me in

53:22

that direction.

53:23

Yeah. Do you , do you think , uh,

53:25

your work , um, in

53:27

developing has given you some credibility? Because

53:30

I imagine there , there

53:32

might be some , uh, people in the

53:34

philosophy of technology who maybe

53:37

aren't taken so seriously by people who

53:39

actually work on artificial intelligence, machine learning,

53:41

what have you. Uh , and

53:44

I , I can think of , um, some,

53:46

there are some people who don't, like , for example, don't

53:48

take , um, AI safety very seriously,

53:50

who work in , uh, who work

53:52

in the actual development of AI. They might think

53:55

these people, you know, they have these ideas,

53:57

but they don't really know anything about technology. They're kind of naive

54:00

is , is what they might say. So do you think because

54:02

you've kind of , um, you've , you've

54:05

done both, do you think that gives you some credibility in,

54:07

in that , uh, the tech space?

54:09

I hope it does. What , what I will say

54:11

is that it feels very

54:13

dishonest to me to

54:15

talk about machine learning or

54:17

any other technology and not know how

54:20

it works. Mm-hmm . <affirmative> , I'll

54:23

just give you some examples from my own sort of trajectory.

54:25

So I wrote a book on remix in

54:28

the, you know, 2016,

54:31

I think it came out. And it

54:33

took me a while to write the book, not because

54:35

I couldn't write it, but because I wanted to learn

54:38

to be a DJ before I wrote the book. And

54:40

I spent all this time developing the practice because

54:43

I didn't think I had the credibility to

54:45

speak about this artistic practice

54:47

and the technology behind it without

54:49

knowing how it works, knowing the tools

54:51

and having hands on experience with it. The

54:54

same, when I started to engage with

54:56

AI and robotics, I knew that

54:59

there was no way I could speak with any

55:01

credibility about machine

55:03

learning, about big data, about neural

55:05

networks and all these things. If I hadn't

55:07

built one, if I hadn't done the

55:10

task at hand in actually

55:13

constructing a neural network, training it on

55:15

data and see what it does. And I

55:17

think for my own writing, this is what

55:19

allows me to speak with some conviction,

55:22

with some real good

55:24

grounding in the technology. And

55:27

that hopefully is communicated

55:30

in the resulting text to the rest of

55:32

the world that I'm not just

55:34

making this up. I, I come to this from

55:36

the perspective of really knowing what goes on

55:38

behind the scenes and have brought my

55:40

philosophical knowledge to bear on something I

55:43

have direct hands-on experience with.
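
(A minimal, illustrative sketch for readers who want to see what "constructing a neural network, training it on data" can look like at the smallest scale. This is not David's actual code; the dataset, network sizes, and learning rate below are assumptions chosen purely for illustration: a tiny two-layer network written in plain NumPy and trained on the XOR problem.)

import numpy as np

# Toy dataset: the XOR truth table (inputs and target outputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# Randomly initialised weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate, an arbitrary illustrative choice
for step in range(5000):
    # Forward pass: hidden layer, then output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the mean squared error, derived by hand.
    grad_out = (out - y) * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0, keepdims=True)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

# After training, the outputs should be close to [[0], [1], [1], [0]].
print(np.round(out, 2))

Running it and checking the printed predictions is the hands-on "see what it does" step described above.
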

55:45

I see. So I, I've got a similar experience

55:48

in that I tried to , um, I , I actually did

55:50

a PhD in space science , uh,

55:52

and , um, I have a

55:54

geo geoscience background and I wanted to , uh,

55:56

work a little bit on say, some long-termism

55:59

, uh, like ethical

56:02

questions and apply that as, as it applies

56:04

to space science. Uh , and I thought

56:06

that doing that might , could give me more credibility

56:08

when I talk about these ethical problems. Um,

56:11

but in my experience, and it's something

56:13

that's too soon to say perhaps, but it doesn't feel

56:15

like it. Uh , I think there's

56:18

just , I've been met with a lot of skepticism from the space

56:20

science community on some of those ethical ideas.

56:23

Um, but it , it sounds like that's worked out better for you, so

56:26

that's that's good to hear.

56:27

It doesn't mean that you don't get pushback. It

56:29

doesn't mean that you don't get criticism. Yeah , yeah . Um

56:31

, I think, you know, it , it's

56:33

always a push pull . You're always kind of putting

56:36

yourself out there and then trying to justify what it

56:38

is you're doing to others who may be skeptical of

56:40

it. Um, especially when your

56:42

ideas might be less than popular, which

56:45

often is the case in , in academia. But

56:48

I think the dialogue is crucial.

56:50

And I think meeting people where

56:52

they're at is part

56:54

of building that transaction , um,

56:57

and making it work. Uh , I did have a

56:59

guy at one point on Twitter say to me , Um,

57:01

you should shut up because you don't work

57:03

in this field and you don't know what you're talking about. And

57:05

so I sent him the , uh,

57:08

neural network that I built. I

57:10

just sent him the code, and I just said, Here,

57:14

<laugh> , that that must

57:16

be satisfying in a , in a kind of <laugh> , vindictive

57:18

way. It was very satisfying.

57:21

I can imagine. Um, well

57:23

, yeah, just , uh, to , to kind of bring this

57:25

together. Do you , um, over your career

57:28

so far, do you think you've noticed any shifts

57:31

in thinking , um, about how we think

57:33

about robot rights? Um, one, one just

57:35

to , to kind of prime you is , um, maybe

57:37

a shift in people thinking about robots

57:40

, um, as moral agents

57:42

to shifting to thinking about them more as moral

57:44

patients. Do you, what have you seen

57:46

over your career so far?

57:48

So I , yeah, I , as I said, I think earlier is

57:50

, you know, when this started for me it

57:52

was a really small group, you know, I , I

57:54

could really count on, you know, my

57:56

fingers, how many people were working on this subject, right?

57:58

And that was it. And it's really exploded.

58:00

I think the work that you've done at Sentience Institute documenting

58:03

this in your review of the literature that

58:05

you've done , uh, really shows this exponential

58:08

increase in interest, but also

58:10

, uh, scholarship in this field. And

58:13

that, on the one hand is very gratifying. On

58:15

the other hand, it's hard to keep up <laugh> cause there's

58:17

so much happening and there's a lot to read and

58:19

a lot to stay on top of. Um,

58:22

but I will say that a couple of trends

58:24

have emerged , uh, in this process. I

58:27

think there has been an increasing move

58:30

from this being a speculative

58:32

moral subject to this being a

58:34

much more pragmatic and

58:36

practical legal subject. My

58:38

own thinking has evolved in that way. My first book,

58:41

the Machine Question was very philosophical,

58:43

very morally situated

58:45

in those traditions. My most recent

58:47

book, which is gonna be this thing called Person, Thing, Robot

58:50

from MIT Press , coming out next year

58:52

, um, is much more engaged

58:54

with the legal philosophy and

58:56

with legal practice. And that

58:58

just, I think, is a reflection

59:00

of the fact that that's how the trend has

59:02

gone in the research over the

59:05

last decade. Another thing

59:07

I've noticed is a

59:09

development in

59:12

bringing into the conversation these

59:15

non-western ways of thinking about these questions.

59:17

I think when these questions began over

59:20

a decade ago, the way

59:22

in which I and others were very much

59:24

engaging these things were by leveraging

59:26

sort of the western moral and legal traditions

59:29

to try to speak to the people

59:31

who are building these things and developing these technologies.

59:34

Over this past decade, we've

59:37

seen, I think, a greater engagement

59:39

with, and a greater desire to

59:42

engage with other ways of thinking

59:44

and other ways of seeing not

59:46

as a way of , uh, doing something better

59:48

or worse or whatever the case is , but it's just

59:50

tapping into that difference

59:53

that we can see in human thought

59:55

processes that allow for us to really

59:57

cultivate a relationship to

1:00:00

the wide range of human wisdom as

1:00:02

it's developed over time, but also over space.

1:00:05

And I would say the last

1:00:07

thing I've seen, and this is very gratifying, I

1:00:09

think when I started this and

1:00:12

began to talk about robot

1:00:14

rights as a subject matter for

1:00:16

investigation , uh, there

1:00:18

was I think a lot of very

1:00:21

abrasive and very , uh,

1:00:23

sort of triggered reactions. How can you say

1:00:25

that this is, this is just horrendous. I

1:00:27

mean, who would talk this way? Um,

1:00:29

and I had this very famous picture I put

1:00:32

on Twitter with me holding a sign that said,

1:00:34

Robot Rights Now, say

1:00:36

that . Yeah . And it really sparked an amazing,

1:00:39

huge controversy , um,

1:00:41

about a decade ago , well, about

1:00:43

five years ago. And I

1:00:46

learned a lot in that little exchange that

1:00:48

took place, but it was an explosion of

1:00:50

interest, but also of , of pushback . I

1:00:53

think we've seen it evolve to the point now where people

1:00:55

are saying, Yeah, we need to talk

1:00:57

. This has gotta be talked about. This has gotta be grappled

1:00:59

with. We can't just put

1:01:01

, put our fingers in our ears and go, blah, blah, blah,

1:01:03

blah, blah . This doesn't exist. It does exist. Laws

1:01:07

are being made, hearings are

1:01:09

happening. AI personhood is

1:01:11

not something for the future. It's something

1:01:13

that legislatures are looking at right now. And

1:01:16

as a result of all this, taking these questions

1:01:18

seriously , um, and engaging

1:01:20

with them in a way that is informed by

1:01:23

good moral and legal thinking processes,

1:01:25

I think is absolutely crucial. And I've seen in

1:01:28

the last five years that mature in

1:01:30

a way that I think really , uh, speaks

1:01:32

to the fact that a lot of people have

1:01:34

found this to be not only of interest, but

1:01:37

also something that is crucial for us

1:01:39

to engage with as researchers.

1:01:42

Hmm , <affirmative> . Great. Well thanks. Thanks for that , David. Um

1:01:45

, just to finish up, where can listeners

1:01:47

, uh, best follow you in your work? And is there anything

1:01:49

in particular you'd wanna suggest they look at, whether

1:01:51

it's a book or any, any other , um, piece of

1:01:53

work you've worked on? If they're , especially if

1:01:55

they're interested in this topic and wanna learn more.

1:01:57

So you can follow me on Twitter. It's , uh,

1:01:59

David_Gunkel. Uh , that's

1:02:01

my handle. Um , you can find it very

1:02:04

easily. My website is gunkelweb.com

1:02:06

and you can go there for access

1:02:09

to texts and books and things

1:02:11

like that. I would say right

1:02:13

now, if this is of interest

1:02:15

to you and you really want to jump in

1:02:18

feet first and sort of see what it's all about , uh,

1:02:20

the two books that began all this was the

1:02:22

Machine Question, Critical Perspectives

1:02:25

on ai, Robots and Ethics from 2012.

1:02:28

And Robot Rights from 2018,

1:02:30

both published by MIT Press, and

1:02:32

, uh, you should be able to get both of 'em used for very cheap

1:02:35

these days. Um, or

1:02:37

go to the library, they have 'em too. Uh

1:02:39

, but that's a pretty good way to get,

1:02:41

I think, into this material.

1:02:44

Um , and because of the kind of research I

1:02:46

do, I try to be

1:02:48

as exhaustive as possible in documenting

1:02:51

what people have said, where it's

1:02:53

going, where it's come from, and hopefully make

1:02:55

sense of it. So it hopefully will

1:02:57

provide people with a good guide to

1:03:00

, uh, finding their way through this stuff

1:03:02

and figuring out where they stand.

1:03:03

That's great. Thanks. We'll have , uh, links to

1:03:05

all of that in the show notes and everything else that we've

1:03:08

referred to in the, in the show. So thank

1:03:10

you again, David, Really appreciate your time and thanks for joining us.

1:03:12

Yeah, it's been really great to talk to you and , uh, I

1:03:14

appreciate the questions. Uh, as

1:03:17

, as you said early on, there's a sort of reason

1:03:19

that the Sentience Institute is

1:03:21

interested in these questions and there's a reason I'm interested

1:03:23

in these questions and I think they dovetail very

1:03:26

nicely. And it was great, you know, to talk with

1:03:28

you about these matters.

1:03:30

Thanks for listening. I hope you enjoyed the episode.

1:03:33

You can subscribe to The Sentience Institute

1:03:35

podcast on iTunes, Stitcher,

1:03:37

or any podcast app.
