We Are Worried About The Wrong AI (feat. Shamim Mokles & Anupam Guha)


Released Wednesday, 15th February 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:02

Hello and welcome to The Big Story Two

0:04

Point O! Hello, this

0:07

is the first episode of the second season

0:09

and we are your hosts. I am

0:11

Anjali and I am Prateek.

0:14

So Anjali, what's

0:16

new in the season? What is so two point

0:18

about it? So, you know how our news

0:20

culture is quite noisy and

0:23

there are so many headlines that come every

0:25

day, this is happening, that is happening. So

0:28

exactly what is happening, that

0:30

is where The Big Story comes in. So

0:33

just like season one, we will

0:35

get experts on the podcast to answer

0:37

all of our burning questions. But

0:39

in this season since we wanted to spend

0:41

more time on longer discussions,

0:44

we are moving to a fortnightly

0:46

series instead of a daily show and the

0:48

episodes are longer as well. We

0:50

have very freewheeling,

0:53

you know, long discussions with

0:55

our experts so that we get a more

0:58

nuanced and well-rounded picture. That

1:00

is all about the show. But let's talk about

1:03

this episode before we

1:05

go further. I just want to make an appeal

1:07

to our amazing listeners. Please

1:10

listen to and check out all the future

1:12

episodes of big story, all the previous

1:14

episodes of the big story on your

1:16

favorite audio streaming platforms like Apple

1:18

Podcasts, Spotify, Gaana, everywhere.

1:21

We are on The Quint's website, we're on The Quint's

1:23

YouTube channel. We have taken over the world

1:26

and also check out our other podcasts.

1:28

Yes, we have a movie

1:30

and TV review show,

1:34

and we have a show on indian politics called

1:36

Siyasat. So do check them

1:38

out as well. Right? So

1:41

for our first episode we sort of looked around

1:44

and we tried to find

1:46

out what is the one thing that is creating

1:48

headlines and

1:50

you might have guessed it. It is,

1:52

it's artificial intelligence,

1:55

machine learning. So

1:58

there is a lot of curiosity about how

2:00

it works, what it can do, is it

2:02

good? Is it bad? There is a group

2:04

of people who believe that it is going to change the

2:06

world for the better. And

2:08

then there is another group of people saying

2:10

that it is going to take

2:12

our jobs and replace

2:15

humans, robots will kill us, the machines

2:18

will come back. So there's a lot

2:20

of talk and so we wanted

2:22

to cut through a lot of noise, like

2:24

genuine, genuine noise probably

2:27

generated. But

2:30

yeah, so we thought, let us

2:32

get two experts from

2:34

different walks of life but

2:36

connected to AI, and

2:38

ask them some questions that

2:41

I would say we have and

2:43

a lot of our listeners would have as well, like we start

2:45

from the very basics and then we want

2:47

to go into some, you know, deep questions

2:50

about how it will impact the society at large.

2:52

So Anjali, what were

2:54

some of the questions that you have for our

2:56

guests? So I think I would

2:58

like to start by understanding: how does AI work?

3:01

Like in very simple language, how

3:04

does it work? What is the

3:06

thing behind it? There's also this emergence

3:09

of these new tools like

3:11

ChatGPT, DALL-E

3:13

and Lensa in, you

3:15

know, very common use

3:18

cases like art and essay

3:21

writing and all of that. So it's actually

3:23

a really interesting, you know,

3:25

thing, I think, to deep

3:27

dive into: the AI

3:30

and art correlation,

3:32

right? And for

3:35

that, like we have

3:37

our first guest, who is

3:39

Shamim Mokles. He is a YouTuber

3:42

and he's also a graphic designer, a

3:44

working, real, human graphic designer,

3:46

not an AI graphic designer.

3:48

And

3:50

we want to talk to him about his

3:52

journey with art in general

3:55

and how he sees this

3:57

encroachment of AI in

3:59

the space of art. Interestingly

4:02

enough, he actually made a Youtube video about

4:04

this recently, where he sent

4:07

an AI generated design to a client.

4:09

So that

4:12

would be interesting to explore

4:14

what was that like? What was the whole

4:17

vibe of the client? Did they know,

4:19

did they find out? So that would be really interesting.

4:21

So we just want to get

4:23

a peek into how

4:25

does the creative world look at AI,

4:28

what are the kind of problems that

4:30

AI is solving for them? Do

4:32

they like it? Do they not like it? And

4:34

at the same time we really want to zoom out

4:36

and talk about a more macro view

4:38

at AI for which we have

4:40

Dr. Anupam Guha, who is

4:42

a professor at IIT Bombay and

4:45

who works extensively in

4:47

AI policy. He has

4:49

spoken about this in a lot of places.

4:51

So, I think he'd be the perfect fit for

4:53

us to understand what

4:55

AI policy here should look like, and what

4:57

AI is really doing to the world.

5:00

And should we be

5:02

worried about it? It's going to be so interesting to have an artist

5:04

and a techie talk about it. All

5:07

right. So let's jump right into the conversation. Let's

5:09

jump right into it. Welcome, Shamim. Welcome,

5:12

Anupam. And hello! I want to

5:14

start today's discussion, first of all

5:16

by talking about AI.

5:22

AI, artificial intelligence,

5:25

DALL-E and all of that.

5:28

Let us you know, hold back a

5:30

bit and discuss what

5:32

artificial intelligence is and how

5:35

it works in a sort of simple

5:37

language. Right? But I was thinking instead

5:39

of Anupam telling us

5:41

what it is because he has been working

5:43

closely with AI and he knows the internal,

5:46

you know, workings of it. How

5:48

about all of us tell

5:50

him what we think AI

5:52

is and how it works. And then he

5:55

takes our class and we're like, okay,

5:58

we will do our version of

6:00

it. And he'll give us marks who

6:03

wants to start? I am at the bottom

6:05

of this pyramid. So I

6:07

but no, no, we'll do, we'll do, we'll do

6:10

it alphabetically. So, Anjali. Like

6:12

I said, I'm at the bottom of this pyramid. So my

6:15

definition is going to be the most

6:17

basic. So how I think artificial

6:21

intelligence works is, like I

6:23

told you a while back, it is just a

6:23

very sophisticated coding

6:25

where you feed into

6:27

a program, all the information

6:30

that is on the internet. And

6:32

you ask it to identify

6:35

patterns. And then

6:37

when given a prompt with

6:39

the identification of those patterns

6:41

create a new result. Now,

6:43

this can be done for all kinds of things. Yeah,
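Anjali's description, feed a program examples, have it find patterns, then answer a prompt using those patterns, can be sketched in a few lines. This is only an illustrative toy with invented words and labels, not how any commercial AI actually works:

```python
# Toy "learn patterns, then answer prompts" sketch: count which words
# co-occur with which label in tiny training data, then label a new prompt.
from collections import Counter

training_data = [                      # invented examples for illustration
    ("sunny warm beach", "holiday"),
    ("deadline meeting office", "work"),
    ("mountain hut trek", "holiday"),
    ("email report boss", "work"),
]

# "Identify patterns": count word-label co-occurrences.
pattern = {}
for text, label in training_data:
    for word in text.split():
        pattern.setdefault(word, Counter())[label] += 1

def answer(prompt):
    """Given a prompt, vote using the learned word-label patterns."""
    votes = Counter()
    for word in prompt.split():
        votes.update(pattern.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else None

print(answer("trek to a mountain"))   # words seen with "holiday" win
```

The "intelligence" here is nothing but counting: words that appeared alongside a label in the training data vote for that label at prompt time.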

6:51

so, hopping off of what she is

6:54

saying, I think artificial

6:56

intelligence, how I understand it,

6:58

is a way to, as the name

7:00

suggests, artificially create

7:02

the pattern of thinking of a human

7:05

brain, I would say. Because whatever we observe

7:09

becomes sort of our data,

7:12

and then

7:14

we recognize some patterns,

7:20

process information,

7:24

and generate intelligence.

7:27

Over the years, it gets more sophisticated,

7:30

and that's how

7:32

I understand it in the most basic

7:34

sense. Shamim, what do you think? Yeah, obviously

7:37

you guys went a little too technical,

7:39

I think. I feel Anupam will

7:41

correct me if I'm wrong, but AI is

7:43

basically pattern recognition.

7:45

Like, it's very good at recognizing patterns.

7:48

That is what Ai is for me in my

7:50

profession because I'll only talk about my

7:52

profession here but more

7:54

or less, when we talk about jobs and everything,

7:57

and if you look at it from

7:59

a bird's eye view, then I think Ai

8:01

is a tool that will make us

8:04

as a species more productive in the future

8:06

and will make our lives easier. So

8:09

that's what Ai is according

8:11

to me. Okay, so

8:14

so first of all, first

8:17

of all, thank you for having

8:19

me here and

8:21

uh thank you for coming.

8:23

So it's very interesting to talk to you.

8:26

Anjali, your

8:31

definition was

8:36

actually

8:37

quite close. Because

8:46

let me try to explain. First

8:49

of all, instead of explaining

8:51

what AI is, I want to start

8:54

with what AI is not,

8:56

because there is a lot of cultural

8:58

opinions on what it is and I think

9:01

we need to cut through a little

9:03

bit of that. So the first

9:05

thing you have to understand is that artificial

9:08

intelligence is a misnomer, or rather

9:10

a marketing term. It

9:13

has pretty much nothing to do with intelligence

9:16

which is where protic, your

9:18

definition flies out of the window.

9:20

There is no neuroscientist

9:24

on earth right now

9:26

who knows why

9:28

we are conscious, or whether we are really

9:30

intelligent, and

9:32

hence there is no technology which can

9:34

artificially replicate intelligence.

9:43

What exists are outward patterns.

9:48

Broadly, any

9:53

machine which can autonomously

9:55

make decisions is called an AI. And

9:57

generally the technology which is being

9:59

used in 95, 96%

10:02

of AI is machine learning. Machine

10:05

learning, how does it work? I would say Anjali's

10:07

definition was more or less correct. Machine

10:09

learning needs data.

10:12

You give the machine data, and the machine makes

10:24

decisions. Take classification:

10:30

show the machine oranges and apples,

10:39

and it learns to classify them. That is classification machine learning. And

10:50

then there is machine learning which can draw a fruit.

10:52

That is generative.

10:54

DALL-E and

10:57

ChatGPT are generative machine

10:59

learning types. But fundamentally

11:02

all machine learning algorithms,

11:05

they eat a lot of data

11:07

which is called training data.

11:09

So, sorry, first:

11:13

are they new, things like DALL-E and

11:15

ChatGPT?

11:16

No, they are not, they're not.

11:20

I think second, third versions

11:22

of, like, DALL-E,

11:24

actually

11:25

more, more than second,

11:27

third versions of that technology. Those are just particular

11:30

brand names. But

11:38

so it is basically what is

11:40

called in the technical field a

11:43

language model. A

11:45

language model learns a language, like English,

11:49

as patterns,

11:53

and those patterns are what

11:56

a language model is. Language

11:59

models are a basic technology.
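Since the discussion frames language models as pattern-matchers over text, here is a minimal sketch of that intuition, a bigram model that records which word follows which in a tiny corpus and then "parrots" a continuation. Real systems like ChatGPT are vastly larger neural networks; the corpus and function names here are invented for the example:

```python
# Minimal bigram "language model": record which word follows which in a
# tiny corpus, then continue a prompt by following the recorded patterns.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1            # count observed word pairs

def continue_text(word, steps=3):
    """Extend a one-word prompt with the most frequently observed next words."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break                       # no pattern recorded for this word
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))            # follows the counted patterns
```

The model has no idea what a cat is; it only knows that in its training data "cat" tended to follow "the", which is exactly the parrot-like behaviour described later in the conversation.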

12:02

And it is not a new technology. In the early 2000s

12:05

the major technologies already existed, but

12:08

the hardware and software were not there.

12:12

Once GPUs came into the market, the

12:14

hardware and software caught up.

12:17

Earlier, the research

12:20

and the AI

12:23

methodologies in our laboratories, which

12:25

was something we were working on

12:27

in our fields, they

12:29

suddenly stopped

12:32

remaining as scientific projects

12:34

and became market commodities

12:36

which started to be converted into products.

12:39

I would say that was the moment.

12:41

I mean we were trying to solve some

12:43

specific problems but

12:45

as AI became commercialized,

12:48

people also started to realize

12:50

instead of trying to solve specific

12:52

but hard problems, it is much

12:55

easier to throw lots

12:57

of data at the models; quantity

12:59

has a quality of its own. And

13:03

it stops being a research

13:06

question, what

13:15

problem are they trying to solve? I

13:28

want to I want to pause you for a second because you

13:31

bring up an interesting point now

13:35

this and since we were talking about things

13:37

like dolly and mid journey and

13:39

Stable Diffusion. Shamim, you're

13:42

a working professional, right? You work

13:44

with graphics on a day to day basis. So

13:46

how do you think have

13:48

these tools sort

13:51

of infiltrated your

13:53

industry? And, as Anupam

13:55

discussed, the problem

13:57

exists. But what problems do these tools

14:00

solve in your industry? Specifically

14:02

in my industry at this point

14:04

it's making us more productive.

14:07

Like I still remember in

14:09

my initial days, 2007, 2008.

14:11

At that time it was Photoshop 7

14:13

that I learned, like the very,

14:15

like, vintage version. There, if you

14:17

have to remove people from a background,

14:20

it was a very big task like it was like a full

14:22

day task. Now it's just one click.

14:25

If I have to make a brochure

14:27

then if I'm using five

14:29

images as such, then

14:31

I can create a brochure in one hour, two, or

14:34

maximum three hours. At that time,

14:36

those three hours would have been three days.

14:38

So as a designer

14:41

I'm being more productive, my team

14:43

is being more productive and if you

14:45

talk about all these designing software, they are becoming

14:47

like intelligent. Intelligence

14:50

is not what they are, but for us they

14:52

are becoming intelligent in a way that

14:54

they give us solutions easier

14:56

and faster. Do you think they seem intelligent?

14:59

Looking

15:01

at it as a layman, they seem quite

15:04

intelligent. Like, they might not be intelligent

15:06

in the way we are intelligent, but

15:09

they seem pretty intelligent. Take

15:14

background removal. I think also

15:17

because for so long people have

15:19

been removing backgrounds, that is

15:21

something that the software has also been learning

15:23

that when somebody wants to remove a background

15:25

this is the distinction that they are seeing

15:27

between foreground and background. And

15:29

after, as we just discussed,

15:32

after all of this data has been collected

15:34

now they've developed a thing where

15:36

one click and the software knows

15:39

what it thinks is the foreground and the background.

15:41

if you see a new update will come

15:43

and that software will add a new feature on top of

15:46

background removal. Background extension

15:48

is just one option. Suppose it's

15:50

very difficult to explain, but you

15:52

get multiple options in a very short

15:54

amount of time. But I don't,

15:56

I don't feel it's

15:58

a threat. Because, see, jobs

16:01

will change the perception

16:03

of jobs will change. I cannot

16:05

be rigid that no, no, I will

16:07

do the best work, AI cannot do it

16:10

the way I do, and I'll

16:12

only do it myself then I will

16:14

be kicked out, because another designer will come, he'll use

16:16

AI and he'll do

16:19

multiple things in the same amount of time.

16:21

Is something like that happening?

16:23

No,

16:26

no, I haven't seen it, because it's

16:28

all about productivity, right? Who wants to work

16:30

till four in the morning? So is

16:33

AI being used very prominently?

16:35

Yeah I'll tell you

16:37

one live example. I'll tell you suppose

16:39

you have a holiday

16:42

client. Okay suppose you are working with a client

16:44

who sells holidays. It's not possible for

16:46

us to go and click a picture of a hut in the mountains

16:48

all the time. So what we'll do is

16:50

create an image like that in AI, suppose,

16:53

and then if necessary we'll try to

16:55

regenerate it. Because we

16:57

don't have inspiration beforehand, right? But the

16:59

AI has. As Anupam said, the

17:01

Ai has been fed so many

17:04

images of a hut in the mountains.

17:07

It'll give us options, because

17:09

Ai work is still very patchy. Like I

17:11

I don't see it very like

17:13

fine fine. Okay so we'll

17:15

look at this image and we'll get

17:17

inspiration for the lighting and setup and how

17:19

the mountains are and all of that. But

17:21

we'll create it manually so

17:24

it will look better, it will look better,

17:26

and it will be in our control.

17:29

So that's what AI

17:31

is being used for now, so

17:33

we can create a safer image. Like, it

17:35

won't get us in copyright trouble, because we are not stealing the image,

17:38

we are constructing it. So you're saying you are

17:40

currently using AI as a starting point,

17:42

a bouncing board for the

17:44

first idea. And then we

17:47

can take it forward. Because you

17:50

also had made a video about this where

17:52

you said that you gave an

17:54

Ai generated design to an actual client.

17:57

So talk to us about that.

18:00

What kind of brief did you get,

18:02

and how was that whole experience

18:05

of you know actually going

18:07

through it and and

18:10

what was that like? See how

18:13

it works. And first of all, why did you do it? Did you just

18:15

do it as an experiment? Yes.

18:18

Yes, yes. It was an experiment because we

18:20

know, okay, when we are pitching something,

18:22

no, we are throwing

18:24

things, like, in the dark. We don't know

18:26

what will stick. So you show

18:28

what you can do for them And they'll

18:31

say yeah we can do this also. So you

18:33

do things to get rejected

18:35

sometimes if you're lucky then something from

18:37

that only will get selected okay.

18:39

But like 99% of time

18:41

though it's like okay,

18:44

okay these are very good. But we had

18:46

this in our mind and this is the hook point

18:48

in the business, right? If they

18:50

if you get to that point then you have the job

18:53

and those two logo options.

18:55

No, that was my way of

18:57

like, reducing the, what do

18:59

you call it, the beating around. Because if, at the initial stage,

19:02

exactly, if I did

19:04

it all myself, you know, it's quite

19:07

possible, likely, they'd reject it. But

19:09

at least the client actually

19:12

yes, exactly. Because the

19:14

best thing about AI in our business is, like,

19:17

you're not stealing anything. It's not

19:19

copyrighted yet. Because,

19:21

as I will talk about today,

19:23

What is the idea behind stealing

19:25

it? Because with the AI-generated

19:27

options also, you know, I did some tweaking,

19:29

there also it was not all proper. I

19:32

we showed it to them and

19:35

if they like it: okay, this is the good

19:37

one and we need 10 poses

19:39

of this one. At least we have one

19:41

basic pose. So we'll do more changes. And

19:43

you're saying Ai helps in you know, cracking

19:46

that first step. Yes. There will

19:48

be one graphic designer who's better at AI,

19:50

and there will be one graphic designer who wants

19:52

to do things the old way.

19:55

Okay, okay. It's like the digital

19:57

camera and the film camera. The transition was

19:59

very, very rocky. Okay? The

20:02

old people who were, like, reel camera

20:04

users, they were very, they didn't want

20:06

it to go to digital camera,

20:08

but who did, who did go to the digital

20:10

camera? They survived. Right.

20:12

So this is what it is. The technologies

20:15

will be evolving and you need to

20:17

ride the wave. Basically, you need to be updated.

20:19

That's, that's what I feel. I

20:25

was listening. I have

20:27

one question for Shamim in your

20:29

video. Also you mentioned that

20:31

sometimes what an Ai lacks

20:34

is context. So

20:36

and I think this is very cultural also

20:39

that when you have an indian client

20:41

you have the entire cultural

20:44

background that this client is coming from.

20:46

And so you know you

20:48

might not get it in one go what

20:50

the client wants but you have certain

20:53

context which a lot of times

20:55

because the AI doesn't know who the person

20:57

feeding in the prompt is

20:59

might not be able to give

21:02

a more accurate

21:05

image. So how does that

21:07

work? If you can explain more about context

21:09

in graphic communication,

21:14

see, your question, no, I

21:16

want to give it to Anupam, because your question's

21:18

answer is hidden in this: is

21:21

Jarvis possible? And if

21:23

Jarvis is possible, then is Skynet

21:25

possible? Are Jarvis and Skynet

21:27

possible? Before he answers this, I just

21:29

want to interrupt and explain

21:32

to our listeners what Skynet actually is.

21:34

So, in the Terminator movies,

21:36

you know, Skynet is like

21:38

the villain of that film, and it is like

21:41

an AI, I would say, which

21:43

gains self awareness and which becomes

21:45

human and at

21:47

the end it decides that

21:50

humans are the biggest

21:52

enemy in the world, and its mission

21:55

is to you know kill humans.

21:58

And essentially Skynet is the example

22:00

that is used to illustrate what happens

22:04

if an AI turns on humans. This

22:09

is, like, the first time

22:12

in popular culture that we saw it at that

22:14

scale. So all of this, obviously iRobot,

22:16

or in India,

22:18

Shankar's Robot with Rajinikanth, that

22:22

movie's premise is AI versus

22:25

humans. So, um,

22:27

yes, are Jarvis and

22:30

Skynet possible? No. Yeah,

22:33

then, then you have your

22:34

answer. Again, I used the,

22:37

I, I used the word when

22:39

I started to describe what AI

22:41

is and isn't, right? I used the word "parrot",

22:43

and I used it very specifically. I

22:46

said think of ai as a parrot,

22:49

Jarvis is

22:51

not a parrot right? A parrot doesn't

22:53

know what it is saying, A parrot just

22:55

says things it has heard,

22:57

patches of things it has heard.

23:00

but I want to answer this

23:02

in greater detail, you know. Like,

23:04

because you guys are having this conversation and I was

23:06

listening very closely to

23:08

try to sort of respond. So there

23:11

are again things which

23:13

we have to be very clear about

23:15

Shamim, you used the word

23:17

or rather the phrase that

23:20

Ai is coming right

23:22

and I

23:26

would like to make an analogy. Technology

23:29

never comes. People choose

23:31

to use technology and the people

23:33

who have power and money get

23:35

to dictate which technology is

23:37

used, and in what way. No

23:39

technology just "comes" on its own.

23:41

Another phrase we keep hearing is the genie is

23:44

out of the bottle. Yeah, it's

23:46

not a genie, it can't be out of the

23:48

bottle. There

23:51

is a technology, it is

23:53

used in particular ways

23:55

which ways will become popular

23:58

and which ways will not become popular

24:00

depends a lot on who controls the

24:02

AI industry: what

24:05

use cases they fund, What use cases

24:07

they research, what use cases they

24:09

develop. At an

24:12

individual level, sure, if you are a

24:14

professional and you acquire the skills, you'll

24:18

survive. But at

24:20

a macro level, and

24:28

as a citizen, facial

24:34

recognition tech can identify you. It is

24:37

a privacy-violating technology.

24:43

So remember, at the end of the

24:45

day, every technology has a wider

24:48

context: policy,

24:51

political economics,

24:54

politics and economics. So the question is,

24:56

who controls the technology? And

24:59

the second thing you guys were talking about

25:02

the background removal. And Anjali said something

25:04

which was actually correct about segmentation:

25:07

that background-removal job. In computer

25:10

vision, that problem is called segmentation.

25:17

Segmentation is possible because humans have done segmentation

25:21

millions of times. That is the important

25:28

point. Whatever

25:34

value any AI generates,

25:36

that value is

25:38

coming out of training

25:41

data which was at some point

25:43

made by human beings.
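The segmentation problem named above, the background removal Shamim described, can be illustrated with a toy, assuming nothing beyond plain Python: label each pixel of a tiny grayscale "image" as foreground or background with a fixed brightness threshold. Production tools learn their rules from millions of human-made examples rather than using a hand-picked threshold like this:

```python
# Toy image segmentation: split a tiny grayscale "image" (0-255 values)
# into foreground and background with a fixed brightness threshold.
# Real background-removal tools learn the rule from labelled data instead.
image = [
    [ 10,  12, 200, 210],
    [  9, 220, 230,  11],
    [  8,  10,  12,  10],
]

def segment(img, threshold=128):
    """Return a mask: 1 = foreground (bright pixel), 0 = background (dark)."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

mask = segment(image)
for row in mask:
    print(row)                 # the bright blob is marked as foreground
```

The one-click feel of modern tools hides exactly this kind of per-pixel decision, just with a learned decision rule in place of the hard-coded threshold.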

25:44

So doesn't this then bring the question

25:46

of ownership

25:48

exactly here

25:50

the question comes: no AI can

25:52

exist without

25:54

data. A lot of data

25:57

and it is

25:59

a shame that right

26:01

now that data is kind

26:04

of being stolen off the

26:07

internet. I'll give you one

26:09

example. There is a very

26:11

famous facial recognition

26:13

tech company in America, with a valuation in the

26:16

billions: Clearview. For Clearview's

26:19

dataset, they

26:21

just went on the internet, on

26:23

American law enforcement websites with

26:27

undertrials' photos, on the internet,

26:31

social media, Facebook photos,

26:34

and scraped them without asking. Right?

26:38

The company just took it all.

26:46

So yeah, the

26:50

conversations we need to have about jobs

26:52

are conversations for society as a whole. These aren't

26:56

easy. You can't leave

26:58

them to people

27:00

who are going to make money from it. You

27:03

have to have that discussion at

27:05

a societal or at a policy level

27:10

Take data protection: Europe

27:15

has a law, it is called G.D.

27:17

P.R. You might have heard of it. It's a very

27:20

strong law. Under that law,

27:22

in Europe, your data

27:24

cannot be taken. Even if it is on the internet, some

27:27

company can't come and randomly scrape it.

27:28

So I'm just going

27:30

to pause you here. So in some way

27:32

would you think that what

27:35

Shamim did with his experiment

27:38

that using somebody else's like using

27:41

AI-generated images, or even I

27:43

have my profile picture as

27:45

a Lensa-generated one.

27:47

I use a lot of AI art. So

27:49

would you consider that unethical

27:52

at some levels.

27:54

As a policy academic.

27:57

I try to stay

27:59

away from questions of

28:01

individual ethics, ethics

28:03

are not something an individual can decide.

28:06

See, Shamim, when

28:08

he did that, the blame

28:11

is not on him. He is only saying, I am using an

28:13

AI. The AI was trained on millions of images,

28:16

millions of images taken

28:25

in the past.

28:29

So as

28:34

an individual he is fine, but the

28:37

company that took those images is another matter.

29:06

I actually had another question

29:08

to ask. The Ai

29:10

tools are using the databases that

29:12

are available on the internet virtually

29:15

for free of cost. Right. So

29:17

then is

29:19

there a problem with the entire

29:22

money-making strategy of these companies?

29:28

They have collected the data from the

29:30

internet without giving any money to

29:32

the people who created it, and the

29:34

profits from models trained on that data

29:38

are a big problem.

30:00

See, in our culture,

30:03

and when I say our culture, I don't just mean

30:05

Indian culture, I also mean American culture,

30:07

there is a false narrative, a

30:09

false narrative that regulation

30:12

kills innovation. And the

30:15

problem here, in America

30:18

or India, in our countries, is

30:20

We don't have any

30:23

regulation on data or

30:26

even data protection. The

30:29

data protection bill was withdrawn.

30:31

But even an

30:33

imperfect law would have been progress

30:36

But as of right now,

30:38

we don't have any law to protect

30:40

your or my

30:41

data. And what will happen if

30:43

a law like this comes,

30:46

to our daily life?

30:48

It depends, it depends

30:50

on what is written in the law. Like

30:52

right now, the last

30:54

draft data protection bill drew

30:58

criticism that the law gave

31:01

citizens all the responsibilities

31:04

but gave the state no responsibility,

31:07

which

31:08

again, as Shamim

31:10

was asking,

31:12

like, Professor,

31:22

science or humanities aside, where would

31:35

you be right now if you hadn't pivoted

31:37

to policy. Would you be working in

31:39

one of these companies now? Yeah

31:41

I would have been working in Silicon Valley in

31:45

2019.

31:54

I wanted to go

31:56

back to that thing like because after

31:59

listening to Anupam, I'm completely

32:02

with him, and we are on the same page where

32:04

we talk about laws and policies because whenever

32:06

I meet somebody who wants to be

32:09

a graphic designer, who sees me, who meets me,

32:11

somebody I know, I'll tell them: our

32:13

profession, our profession is not just a job, it's

32:15

a superpower because all the fake

32:17

news, all the hysteria and all everything that

32:19

is done by people like us.

32:22

Okay and now there's

32:24

all this like

32:26

what you call Deepfakes and now

32:28

the fake news is exactly

32:30

deepfakes and everything. This is done

32:32

by AI. You don't put

32:34

a law on breathing, right? You put

32:36

a law on cutting a tree, because that

32:39

is dangerous, breathing is not dangerous. I

32:41

don't have problem with Ai. Okay, I have a problem

32:43

with the hysteria that is being created

32:45

against AI: it

32:48

will eat your job, it is dangerous, this

32:50

and that. So I'm going back to the same question:

32:52

how dangerous is it? Okay,

32:54

let's talk about the danger. In

32:56

its worst form, what is the

32:58

harm it can do? And since we

33:00

are recording it on a friday the 13th

33:02

let us go,

33:03

Okay, let's

33:06

let's actually talk about the danger of AI.

33:10

The answer is a very

33:12

contradictory answer, so have patience:

33:17

yes Ai is extremely

33:20

dangerous. No. Skynet is not possible

33:22

It is not dangerous in the way people

33:24

think of its danger, the way a common

33:26

person thinks AI is dangerous.

33:49

You

33:50

make AI sound stupid also.

33:52

In fact, you've taken all the

33:54

power away from

33:55

it. Yes, because, but

33:58

I would also say this very stupid

34:00

thing is extremely dangerous in very

34:02

stupid ways. The problem here

34:04

is that it is stupid. Let me give basic,

34:09

basic, basic

34:11

examples of how dangerous it is.

34:13

In America, there is a

34:21

program used in courts that

34:26

technically predicts

34:30

what is the chance a person will commit a crime again.

34:37

Judges or juries used it over common

34:41

sense, and cases showed it was biased against the Black

34:53

community.

35:08

But the problem

35:18

remains.

35:27

When was this, what year was this? This

35:29

has been going on for decades.

35:31

This is a huge problem in America

35:34

Another example: Amazon

35:37

warehouses.

35:39

Amazon has moved over from a

35:41

purely online presence and gone into the

35:43

brick-and-mortar industry of warehouses. For hiring

35:46

in those warehouses,

35:51

amazon, they tried to

35:53

replicate their whole

35:56

hiring process by converting it

35:58

to machine learning. The machine learning

36:03

had a problem: bias. It learned from the

36:10

people it had already hired, and the machine

36:15

learning discriminated against women and Black applicants.

36:45

Okay. Similarly,

36:59

similarly, in

37:10

India, army and police

37:13

departments, the Delhi, Punjab,

37:16

Assam, Hyderabad, Chennai

37:19

police departments, have

37:22

started to use facial recognition tech.

37:25

Machine learning, machine learning is

37:30

probabilistic, so

37:36

when inaccuracy comes out of the AI,

37:39

it is not a technical

37:41

fault; it is inherently

37:44

inaccurate. And an inaccurate

37:48

system making decisions about people, that is

37:53

the danger. Okay. Recently,

38:02

ex-Chief Justice Sharad

38:04

Bobde announced that

38:06

he had decided, and some Supreme Court

38:09

judges think, that Indian courts should use AI to manage

38:13

resources and procedure.

38:27

Simplifying procedure is one thing, but should justice

38:29

be content, should justice be

38:31

a product? Justice is not

38:34

a product. Now,

38:41

machine learning and jobs. Sorry,

38:48

on jobs, remember

38:51

this: it is not only intelligent,

38:55

creative jobs. Economists say

39:01

50, 60% of jobs are manual

39:04

labour, especially, especially in countries

39:11

like India. These jobs

39:13

cannot be replaced because the

39:15

worth of a human being in India is very

39:18

low. Let's be very clear

39:20

about jobs:

39:24

jobs never

39:31

disappear outright, correct, but

39:37

companies, companies

39:39

are turning

39:43

jobs, jobs, jobs

39:46

from employee work into gig work. You

39:49

have an entire gig economy

39:52

in this country, where

39:54

increasing automation and increasing

39:56

machine learning have enabled platforms —

39:59

the platforms run on machine learning — and

40:02

because of the popularity

40:04

and profitability of these platforms as

40:06

profitable dot-com companies, regular jobs

40:10

will be lost because of that.

40:12

And because of no policy

40:14

on gig work whatsoever.

40:16

Our entire workforce

40:19

is slowly changing into gig work and

40:21

this is extremely dangerous,

40:23

In the short term and the long term,

40:37

it affects overall jobs.

40:44

But the biggest problem also

40:47

I would want to get Shamim into this —

40:49

is that even

40:51

us, when we were talking about it,

40:54

we were only considering

40:56

all of these higher

40:58

level jobs like creatives and all of that.

41:01

But the actual jobs, we

41:03

aren't talking about those kind of jobs,

41:05

right? And we also don't associate

41:07

them with AI — that AI

41:11

could replace those jobs. Especially

41:17

after listening to all of this,

41:20

my whole point of

41:22

view has changed; this problem

41:24

had never come to my mind.

41:26

Exactly yeah. The question coming

41:28

to my mind is: what

41:30

should we do? Because, as was said, individual

41:33

contribution is very less, so

41:35

that's why we need laws and policies.

41:38

But how can we

41:39

help, like, at an individual

41:41

level? Do what is best

41:44

for you at an individual level. And I would

41:46

encourage Shamim, because he has a platform:

41:48

you should encourage people to learn these skills,

41:50

so that helps at an individual level.

42:01

But as

42:06

a, as a policy professor, as

42:08

somebody who looks at the entire society,

42:11

at masses of individuals — not every individual

42:15

has our English or an advanced

42:18

degree.

42:33

Exactly. Government

42:35

policies and laws,

42:38

at the end of the day, they

42:40

also come from the popular mass.

42:56

What becomes a popular topic locally?

43:03

Insurance, cancelled jobs,

43:06

which jobs are going —

43:09

they talk of call-centre jobs. But

43:14

for our online content creators,

43:24

the question is whether we make

43:30

low-quality content or high-

43:32

quality content. So

43:34

the point I'm trying to make especially with

43:36

Shamim, because Shamim has a large audience

43:39

on YouTube, right? Encourage people to have

43:41

debates — not

43:45

just scholarly debates. Within

43:47

academia there are major debates, among policymakers too,

43:53

at the conference on fairness, accountability and

43:55

transparency — the FAccT conference — about AI

43:59

and policies and the actual

44:08

dangers these technologies

44:11

carry. The

44:13

actual danger is boring, not sensational —

44:22

gradually. Exactly

44:29

I was just saying, because

44:31

you may be

44:43

in the same position:

44:45

in the same way, in the medical

44:47

field, be

44:51

careful, because in medicine

44:54

it is said AI will be the most

44:56

useful

44:58

right now. Yes,

45:00

you should be. For a simple

45:02

reason: India publishes

45:05

a lot of data for free on the open

45:07

government data website, whose

45:09

downloads, I think, are 52%

45:12

health data — health

45:15

data, which is the kind of

45:17

data where applying machine learning

45:19

is the hottest field commercially.

45:24

There are big companies

45:28

with telemedicine wings. Telemedicine:

45:32

remotely, by chatting

45:34

with a doctor. It

45:36

spread during Covid. Basically,

45:38

doctors' training, as far as technology

45:44

and science

45:49

methodology go. Techno-solutionism —

45:51

um, techno-solutionism

45:53

is that bad idea that

45:56

any social problem can be solved with

45:58

technology. The shortage of doctors is a

46:09

problem; a chat-

46:14

bot in the media, a stable

46:20

internet connection, a smartphone,

46:22

English — and the problem is

46:28

solved? No. The problem of doctors

46:32

is solved by medical colleges,

46:38

by education. A chatbot is not a doctor; the medical

46:46

problem stays as it already is.

46:56

In America there is research:

46:59

American medical data,

47:01

from hospitals, went into machine-learning

47:04

use cases on patients.

47:10

When patients

47:15

happened to be

47:25

malnourished, poor or Black,

47:27

the machine turned that local data into

47:31

decisions — into medical decisions.

47:33

A lot of the time they were medically

47:35

incorrect decisions. Then a

47:39

devil's-advocate point comes,

47:41

where people say that

47:43

exclusive

47:53

technology

48:01

is the main difference between having healthcare

48:03

and having no healthcare. So

48:11

then

48:12

would you advocate

48:14

for removal of things like

48:16

AI in matters of health

48:18

care and education, or more?

48:20

I would advocate for —

48:22

no,

48:25

I would advocate for rational decision-

48:27

making on who is owning that

48:29

AI.

48:31

Will the use of AI be only for

48:33

profit-making? Will individuals

48:36

and private companies decide, or will

48:38

we as a society all decide together?

48:41

Correct. Use examples

48:44

of where popular

49:06

election useful

49:11

now but

49:16

Suppose this: you

49:31

can't, as a customer, ask for

49:33

the company to change its policy but

49:35

as a citizen you can ask the government

49:38

to change its policy services

49:55

and citizenship

50:01

So, to sort of conclude: as

50:04

a policy, where government

50:06

comes in on AI is to

50:08

regulate for what, and

50:10

for what not, AI should

50:12

be used. That

50:14

is where the government should come in, and

50:16

it should clearly dictate which uses

50:19

of AI are okay. So that

50:21

would also cover all of our creative

50:23

fields. Exactly.

50:27

What, in your opinion, would be an okay use

50:29

for AI?

50:30

More than okay uses, I would say that,

50:33

first of all, this process

50:35

to decide what the government should do —

50:37

you should not just hand it to the government

50:39

like that. It

50:45

has to be a democratic process right

50:50

down to

50:59

the public is

51:01

response dolly is

51:05

document I think

51:10

With this chat also, that was

51:13

our objective. Yes,

51:15

we wanted to talk about DALL-E and Chat-

51:17

GPT, and all of these are

51:19

you know certainly nice

51:21

popular issues but as

51:24

I got to hear your views

51:28

and gathered, even from your

51:30

understanding of it, that these are not

51:32

the biggest issues that we should

51:34

be worried about. I think that's why the episode is

51:37

titled "We Are Worried About the Wrong AI."

51:39

Oh yeah, I

51:42

would like to add one thing is that yes,

51:44

as I mentioned in my video also.

51:46

Okay. Somehow I feel

51:48

that we are being targeted — like,

51:50

the creative people are being targeted. Everybody

51:52

saying your job will go, your job will

51:54

go today. I understood what AI

51:57

is and how AI is dangerous.

51:59

Exactly. So I want to thank you guys

52:01

thank you for doing this with

52:03

me because it was such a privilege because

52:05

um you just opened my mind

52:08

truly like I'm

52:10

feeling good. Also that the way I use

52:12

AI is not the most dangerous one.

52:14

Don't you think this is also

52:16

like a self-romanticization?

52:19

A lot of even creative people are thinking

52:22

that we are the victims, but

52:27

I don't think that is the case. We

52:29

are the most privileged

52:31

people. Um, it's like a toy

52:33

we use; we have the option

52:36

to choose to use it or not use it.

52:38

This is a playground for us right

52:40

now. Exactly. But does

52:43

a taxi driver have these options? Because

52:48

Uber has out-competed the taxis from

52:50

the market. So that is very important.

52:52

This point about our

52:55

ability to confuse

52:57

the actual reality with our own reality:

52:59

we are a very small

53:01

group of elite people who consume this.

53:07

But I'm very happy, Prateek

53:10

and Anjali, that you invited me. My

53:12

own community criticizes this always:

53:16

we academics have a problem in that

53:22

very few of us talk to the common

53:24

masses. We write papers upon

53:26

papers upon papers, but there

53:29

are very few platforms that take these

53:33

issues seriously. But okay, so first of

53:35

all, I would really like to thank the Quint for

53:38

inviting me, to give me a small

53:40

chance to translate

53:42

some of these issues to the common

53:44

language. I hope I was a little

53:47

successful at that.

53:48

Thank you. So

53:51

in closing I would just like to ask

53:54

um, what is the first thing — because obviously

53:56

the recommendations would be endless.

53:58

What is the first thing that people

54:00

who have just heard this chat should

54:02

go and read up

54:04

about. So there is

54:06

a paper I would

54:08

like people to read. The

54:10

paper is called "On

54:12

the Dangers of Stochastic Parrots" —

54:15

"stochastic" simply means probabilistic.

54:17

This paper was written by a bunch

54:20

of researchers in the

54:22

U. S. It is a very nice introductory

54:25

paper to understand the

54:27

dangers of language models. A language

54:29

model: ChatGPT, anything that

54:31

generates language. Also,

54:33

the paper is written I think in a very

54:35

easy-to-understand English

54:37

which anybody can read. You don't need to be

54:39

a machine learning professional. This would be an

54:42

interesting paper to start off with,

54:44

to really realize that these

54:46

artifacts are not operating

54:48

in a vacuum, that they have a social

54:51

impact, and what those social impacts

54:53

can be. So this could be an interesting paper

54:55

to begin. I wouldn't say this is you

54:58

know the end of the conversation. But

55:00

the beginning of the conversation

55:02

for somebody, I am

55:04

not the first person who calls AI a parrot.

55:08

From this conversation, the

55:12

one thing that I will carry for

55:15

the rest of my life is "AI

55:17

tota" — AI the parrot. The starting question

55:19

we started with was AI and art

55:21

and creative jobs, where

55:24

people were saying that the

55:27

creative jobs will go, and

55:30

the point where we ended was

55:35

actually the justice

55:38

system and the labour market.

55:49

So I would like to close

55:51

with you as well and going back to our

55:54

starting questions. What

55:58

do you think, after you've heard

56:01

all of this — will your art

56:03

continue to exist? What

56:06

I personally feel after this long

56:08

conversation, and after learning so much from them,

56:10

is that we are being targeted

56:13

because we can be targeted because

56:15

some people want a i in everything

56:17

without any regulation and

56:20

we are the easiest people to target, because

56:22

they'll get people like me who will say AI is

56:24

making life easier.

56:30

Even this actually is so

56:33

interesting that you say, because I consider

56:35

myself a very, you know, tech-savvy

56:37

person; every time some

56:39

Google assistant comes out,

56:41

I really do get excited.

56:44

That's why, as soon as AI art, DALL-E,

56:46

all of this came, I was very excited, and

56:48

obviously I am not alarmist about

56:50

it. I

56:54

had a very favorable view of

56:56

it. So I

57:00

think: when advertisers

57:08

or any newspaper put out alarmist

57:11

article after

57:15

article, was it because they were talking about some

57:17

actual issues? No —

57:21

they had merely reproduced

57:23

the results which they had read

57:25

in some McKinsey report somewhere.

57:30

And McKinsey is not an academic organization; it's a

57:32

company, just a company. Yes,

57:40

sensationalism sells, but the truth is

57:45

boring by comparison.

57:53

I wouldn't say

57:59

sensationalism is what we are

58:16

trying to do with this podcast.

58:19

The actual issue is

58:22

like, much deeper and, honestly speaking,

58:24

boring, but people find the sensational stuff

58:27

interesting. Which

58:29

is sad. But I think we

58:32

tried our level best to make it an interesting

58:34

conversation and thanks a lot

58:37

for that. Very

58:38

happy to talk to you guys. Also very happy

58:40

to meet you. I looked at the channel;

58:42

I find it very nice. So

58:44

I'm glad you are doing what you're doing.

58:46

Thank you. Thank you so much. Thank you. Okay,

58:49

so, Anjali, thoughts? First

58:51

thoughts? More than, more than exciting.

58:53

Yeah. What

58:57

what were your expectations before we

59:00

started? I really was expecting he would

59:03

just, you know, tell us about how he's not

59:05

threatened by AI, and

59:07

I'm talking about "you should be, it might —

59:09

it might get so powerful that it will take over

59:11

your job." But I think it went in a whole

59:14

other direction. I was

59:16

expecting, like, a glorification

59:18

of AI, because usually when people

59:20

talk about AI in popular culture,

59:26

But now my

59:28

opinion on AI has changed

59:31

so much after this conversation. Obviously

59:36

"AI tota" is the catchphrase of this episode,

59:40

and every time, you know, I feel like

59:42

oh it's going to take over the world, it's going to do this and

59:44

that — like, the image of a cute

59:46

little parrot comes to mind. At

59:51

the same time, I feel that we've discovered it's

59:53

not all that good

59:55

and that it solely depends

59:57

on what it is used for. And I think we've drawn

1:00:00

a very clear distinction between

1:00:02

using AI for art and

1:00:04

using AI to replace human

1:00:09

consciousness, I would say. It

1:00:11

was interesting, the case study that

1:00:13

he talked about, the 100 trials in

1:00:16

the U.S. — that

1:00:18

is scary, but it's way

1:00:20

scarier than all your I, Robot

1:00:22

type stuff. But

1:00:29

okay, this can have real world implications.

1:00:32

It's definitely something that these

1:00:34

giant corporations are looking at

1:00:37

and we as citizens have to be

1:00:39

aware, and we have to be worried about the right

1:00:41

AI — and listen

1:00:43

to more academicians. Yes.

1:00:46

And that just goes beyond I

1:00:48

think everything that is popularly

1:00:51

discussed. Like,

1:00:53

we should not be listening to influencers talk

1:00:55

about it, but listen to academics —

1:00:57

or just listen to The Big Story. Yes,

1:01:00

that's because we will

1:01:02

get you access to these

1:01:05

crazy, crazy intelligent academicians.

1:01:07

So I think that's it for this episode.

1:01:10

It was a great chat.

1:01:13

Let us know in the comments what you

1:01:15

thought about it, what you think about AI —

1:01:17

do you think it is a tota,

1:01:19

or what animal do you think it is —

1:01:22

and share

1:01:24

this with your friends, share this with your

1:01:26

parents, share this with anyone —

1:01:28

anyone, anyone who talks about AI.

1:01:35

What are we going to talk about in the

1:01:37

next episode of the big story. So

1:01:40

after the grand opening

1:01:42

of Ai and art and

1:01:45

Policy, let's talk

1:01:47

about media and we want to talk about

1:01:49

a very interesting aspect of media

1:01:51

which is media trials. It's

1:01:53

like this unique combination of

1:01:56

media and the judiciary, what

1:01:58

role they play in each other's functioning

1:02:01

and we're going to break down

1:02:03

the entire concept of popular

1:02:05

media trials that you've seen in the past decade.

1:02:12

Okay, so, tune into the next

1:02:14

episode. We have two amazing

1:02:16

guests with us. So this was

1:02:18

Prateek, this was Anjali, and

1:02:21

this was The Big Story. Thank you guys,

1:02:23

Bye bye. The

1:02:26

Big Story is a Quint original podcast

1:02:29

executive produced by Shelly Walia. And this

1:02:32

episode was hosted, produced and edited

1:02:34

by Prateek and Anjali, and

1:02:36

it uses theme music from BMG Production

1:02:38

Music. A special thanks to our guests,

1:02:41

Dr Anupam Guha and Shamim Mokles.

1:02:44

Yeah. You

1:02:47

were listening to the Quint's podcast.
