Inside the Race to Protect Artists from Artificial Intelligence

Released Monday, 1st April 2024

Episode Transcript


0:00

This episode is brought to you by Shopify.

0:02

This episode is brought to you

0:04

by Shopify. Forget the frustration of

0:07

picking commerce platforms when you switch

0:09

your business to Shopify. The global

0:12

commerce platform that supercharges your

0:14

selling wherever you sell. With

0:16

Shopify, you'll harness the same intuitive

0:19

features, trusted apps and powerful analytics

0:21

used by the world's leading brands.

0:23

Sign up today for your one

0:26

dollar per month trial period at

0:28

shopify.com/tech, all lowercase. That's shopify.com/tech.

0:31

Generative artificial intelligence tools can now

0:33

instantly produce images from text prompts.

0:36

It's neat tech, but could mean trouble for

0:39

professional artists. Yeah, because those AI tools make

0:41

it really easy to just

0:43

instantly rip off someone's style. That's

0:45

right. Generative AI, which is trained on real people's

0:47

work, can end up really hurting the artists that

0:50

enable its existence. But some have started fighting back

0:52

with nifty technical tools of their own. It

0:56

turns out that the pixel is mightier than

0:58

the sword. I'm Rachel Feltman, a

1:00

new member of the Science Quickly team. And

1:02

I'm Lauren Leffer, contributing writer at Scientific

1:04

American. And you're listening to

1:06

Scientific American's Science Quickly podcast. So

1:15

I have zero talent as a visual

1:17

artist myself, but it seems like folks

1:19

in that field have really been feeling

1:21

the pressure from generative AI. Absolutely.

1:23

Yeah, I've heard from friends who've

1:25

had a harder time securing paid commissions than ever

1:27

before. You know, people figure they can

1:29

just whip up an AI-generated image instead

1:31

of paying an actual human to do the work. Some

1:34

even use AI to overtly dupe specific

1:36

artists. But there's at

1:38

least one little tiny spot of hope.

1:41

It's this small way for artists to take

1:43

back a scrap of control over their work

1:45

and digital presence. It's like a form of

1:47

self-defense. Right, let's call it self-defense, but it's

1:49

also a little bit of offense. It's

1:52

this pair of free-to-use computer programs called

1:54

Glaze and Nightshade, developed by a team

1:57

of University of Chicago computer scientists in

1:59

collaboration with... artists. Both tools

2:01

add algorithmic cloaks over the tops of

2:03

digital images that change how AI models

2:05

interpret the picture but keep it looking

2:08

basically unchanged to a human eye. So

2:10

once you slap one of these filters

2:12

on your artwork, does that make it

2:14

effectively off-limits to an AI training model?

2:17

Yeah, basically. It can't be

2:19

used to train generative image models in the

2:21

same way once it's been glazed or shaded,

2:24

which is what they call an image passed

2:26

through Nightshade. And with Nightshade

2:28

specifically, it actually might mess up a model's

2:30

other training. It throws a wrench in the

2:32

whole process. Cool. Yeah, that

2:34

sounds like karma to me. Mm-hmm. I'd love

2:36

to hear more about how that works, but

2:39

before we dig into the technical stuff, I

2:41

have to ask, you know, shouldn't artists

2:43

already be protected by copyright laws? Like,

2:45

why do we need these technical tools

2:47

in the first place? Yeah, great question.

2:50

So right now, whether or not copyright law

2:52

defends against creative work being used to train

2:54

AI, it's this really

2:57

big unresolved legal gray area, kind of

2:59

a floating question mark. There are multiple

3:01

pending lawsuits on the subject, including ones

3:03

brought by artists against AI image generators

3:05

and even the New York Times against

3:07

OpenAI, because the tech company

3:09

used the newspaper's articles to train large

3:11

language models. So far, AI

3:13

companies have claimed that pulling digital content

3:15

into training databases falls under this protection

3:18

clause of fair use. And I guess

3:20

as long as those cases are still

3:22

playing out, in the meantime, artists just

3:24

can't really avoid feeding that AI monster

3:26

if they want to promote their work

3:29

online, which obviously they have to do.

3:31

Right, exactly. Glaze and Nightshade and similar tools,

3:33

there are other ones out there like Mist.

3:36

They aren't permanent solutions, but they're offering artists

3:38

a little bit of peace of mind in

3:40

the interim. Great names all around. How did

3:42

these tools come to be? Let's

3:44

start with a little bit of

3:46

background. Before we had generative AI,

3:48

there was facial recognition AI. That

3:51

laid the technical groundwork for adversarial

3:53

filters, which adjust photos to prevent

3:55

them from being recognized by software.

3:57

The developers of Glaze and

3:59

Nightshade... They'd previously released one of

4:01

these tools called Fawkes, after the

4:03

V for Vendetta guy, Guy Fawkes.

4:06

Another great name. Yeah, it's very

4:08

into the tech dystopia world. Totally.

4:11

Fawkes cloaked faces. And in 2023, the

4:13

research team started hearing

4:15

from artists asking if Fawkes would work

4:17

to help hide their artistic work from

4:19

AI too. Initially, the answer

4:21

was no. But it did prompt the computer

4:24

scientists to begin developing programs that could help

4:26

artists cloak their work. So what do these tools

4:28

actually do? Glaze and Nightshade,

4:30

they do slightly different things. But let's start

4:32

with the similarities. Both programs apply

4:35

filters. They alter the pixels in

4:37

digital pictures in subtle ways that

4:39

are confusing to machine learning models,

4:42

but unobtrusive, mostly (in parentheticals), to

4:44

humans. Very cool. How

4:47

does it work? OK, so you know how

4:49

with optical illusions, a tiny tweak can suddenly

4:51

make you see a totally different thing. Oh,

4:53

totally. Like that infamous dress that was definitely

4:55

blue and black. I'm right there with you.

4:57

Not white and gold at all. Yeah, definitely

4:59

blue and black. Yeah, so

5:01

optical illusions happen because human perception

5:03

is imperfect. We have these quirks

5:06

inherent to how our brains interpret

5:08

what we see. For instance, people

5:10

have a tendency to see human

5:12

faces in inanimate objects. So true.

5:14

Every US power outlet is just a scared

5:16

little guy. Absolutely, yeah.

5:19

Power outlets, cars, mailboxes, all of them

5:22

have their own little faces and personalities.

5:25

So computers don't see the world the same

5:27

way that humans do. But they do have

5:29

their own perceptual vulnerabilities. And

5:31

the developers of Glaze and Nightshade, they built

5:33

an algorithm that basically figures out those quirks

5:35

and the best way to exploit them, and

5:37

then modifies an image accordingly. It's

5:40

a delicate balancing act. You want to stump

5:42

the AI model, but you want to also

5:44

keep things stable enough that a human viewer

5:46

doesn't notice much of a change. In

5:48

fact, the developers kind of got to that balanced

5:51

point through trial and error. Yeah,

5:53

that makes sense. It's really

5:55

hard to mask and distort

5:58

an image without drawing your attention to

6:00

the distortion. So they're able

6:02

to do this in a way that we can't

6:04

perceive. But what does that look like from the

6:06

AI's perspective? Another great

6:08

question. To train an image

6:10

generating AI model to pump out pictures,

6:12

you give it lots of images along

6:15

with descriptive text. The model learns to

6:17

associate certain words with visual features, like

6:19

think shapes or colors, but really it's

6:21

something else that we can't necessarily perceive

6:23

because it's a computer. And under the

6:26

hood, all of these associations are stored

6:28

within basically multidimensional maps. So

6:30

similar concepts and types of features are

6:32

clustered near one another. With

6:34

the algorithm that underlies Glaze and Nightshade,

6:36

computer scientists strategically force associations between unrelated

6:38

concepts. So they move points on that

6:40

multidimensional map closer and closer together.
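To make that "moving points on the map" idea concrete, here is a minimal sketch of the general recipe in Python. To be clear, this is not the published Glaze or Nightshade code, which uses its own target selection and perceptual constraints; the feature_extractor, target_embedding, and epsilon budget below are assumed stand-ins for whatever image encoder, unrelated target concept, and visibility threshold you pick.

import torch
import torch.nn.functional as F

def cloak(image, target_embedding, feature_extractor,
          epsilon=8 / 255, steps=200, lr=0.01):
    # image: float tensor in [0, 1], shape (3, H, W). We optimize a small
    # perturbation `delta` so the encoder maps the cloaked image near an
    # unrelated concept's embedding.
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = (image + delta).clamp(0.0, 1.0)
        embedding = feature_extractor(cloaked.unsqueeze(0)).squeeze(0)
        # Pull the image's point on the "multidimensional map" toward
        # the unrelated concept's point.
        loss = F.mse_loss(embedding, target_embedding)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # The balancing act: no pixel may move more than epsilon,
            # so a human viewer shouldn't notice much of a change.
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0.0, 1.0).detach()

The clamp at the end of each step is the balancing act the hosts describe: the optimizer drags the image toward the wrong concept while the pixel budget keeps the change close to invisible.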

6:42

Yeah, I think I can wrap my head

6:44

around how that would confuse an AI

6:46

model. Yeah, it's all still a little

6:48

hand-wavy because what it really comes down

6:50

to is some complex math. Ben

6:53

Zhao, the lead researcher at University of Chicago,

6:55

behind these cloaking programs, said that developing

6:57

the algorithms was akin to solving two sets

6:59

of linear equations. Not my strong

7:01

suit. So I will take his word for it.

7:03

Me neither. That's why we're doing a podcast instead.

7:06

Exactly. So why two tools? How are they different?

7:08

So Glaze came up first. It was

7:10

kind of the entry, the foray into

7:13

this world. It's very focused on cloaking

7:15

an artist's style. So this thing that

7:17

kept happening to prominent digital artists was

7:19

someone would take an open source generative

7:21

AI model and train it on just

7:23

that artist's work. So that

7:25

gave them a tool for producing style mimics.

7:27

Obviously, this can mean fewer paid opportunities for

7:30

the artist in question, but it also opens

7:32

up creators to reputational threats. You could use

7:34

one of these style mimics to make it

7:36

seem like an artist had created a really

7:39

offensive image or something else that they

7:41

would never make. That sounds like such

7:43

a nightmare. Yeah, absolutely, in the same

7:45

nightmare zone as deepfakes and everything else

7:47

happening with generative AI right now. So

7:49

because of that, Zhao and his colleagues

7:51

put out Glaze, which tricks AI models

7:53

into perceiving the wrong style. So let's

7:55

say your aesthetic is very cutesy and

7:57

bubbly and cartoony. If you glaze your work, it's not

7:59

the right style anymore; an AI model might instead

8:01

see Picasso-style cubism. It makes

8:03

it way harder to train style

8:05

mimics. Very cool. And you

8:08

mentioned that these tools can also play

8:10

a little bit of offense against AI

8:12

art generators. Is that where Nightshade comes

8:14

in? Ding, ding, ding. Totally right. An

8:16

image cloaked in Nightshade will teach an

8:19

AI to incorrectly associate not just styles,

8:21

but also fundamental ideas and images. As

8:23

a hypothetical example, it would only take a

8:26

few hundred Nightshade-treated images to retrain a

8:28

model to think cats are dogs.
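To see why a few hundred images could matter, here is a toy numbers game in Python, emphatically not Nightshade's actual algorithm. A generative model sees relatively few captioned examples of any single concept, and Nightshade optimizes its perturbations so each poisoned image pulls far harder than an ordinary mislabeled one; the exaggerated feature values below stand in for that outsized pull, and every count and number here is invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
DIM = 16
CAT_CENTER, DOG_CENTER = 0.0, 3.0

# 1,000 clean training pairs: cat-like features captioned "cat".
clean_cat = rng.normal(CAT_CENTER, 0.5, size=(1000, DIM))
# 300 poisoned pairs: still captioned "cat", but their features have been
# pushed hard toward (and past) dog territory, mimicking optimized poison.
poison = rng.normal(10.0, 0.5, size=(300, DIM))

def cat_prototype(poisoned: bool) -> np.ndarray:
    # Stand-in "model": the learned concept is just the mean feature
    # vector of everything captioned "cat".
    seen = np.concatenate([clean_cat, poison]) if poisoned else clean_cat
    return seen.mean(axis=0)

for poisoned in (False, True):
    proto = cat_prototype(poisoned)
    d_cat = np.linalg.norm(proto - CAT_CENTER)
    d_dog = np.linalg.norm(proto - DOG_CENTER)
    nearest = "dog" if d_dog < d_cat else "cat"
    print(f"poisoned={poisoned}: learned 'cat' sits nearest {nearest}-space "
          f"(d_cat={d_cat:.1f}, d_dog={d_dog:.1f})")

With the poisoned pairs included, the learned "cat" concept lands closer to dog-space than cat-space, which is the wrench in the works described above.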

8:30

Well, yeah. Zhao says that hundreds of thousands of

8:32

people have already downloaded and begun deploying Nightshade.

8:34

And so his hope and his co-researchers'

8:36

hope and artists' hope is that with

8:39

all of these images out there, it will

8:41

become costly enough and annoying enough for

8:43

AI companies to weed through masked pictures that

8:45

they'll be more incentivized to pay artists

8:47

willing to license their work for training instead

8:50

of just trawling the entire web. And

8:52

if nothing else, it's just very

8:54

satisfying. Yeah, it's catharsis

8:56

at some baseline level. So

8:59

it sounds like the idea is to

9:01

kind of even out the power differential

9:03

between AI developers and artists. Is that

9:05

right? Yeah, these tools, they definitely tip

9:08

the balance a little bit, but they're

9:10

certainly not a complete solution. They're more

9:12

like a stopgap. For one,

9:14

artists can't retroactively protect any art that's

9:16

already been hoovered up into AI training

9:19

datasets. They can only apply these tools

9:21

to newer work. Plus, AI technology, it's

9:23

advancing super, super fast. I spoke with

9:25

some AI experts who were quick to

9:28

point out that neither Glaze nor Nightshade

9:30

is future-proof. They could be compromised

9:32

moving forward; AI models could change

9:34

to have different structures and

9:36

architectures. Already, one group of

9:38

machine learning academics has partially succeeded at getting

9:40

around the Glaze cloak. Whoa, that was fast.

9:42

That was like a few months after it

9:44

came out, right? Yeah, it's quick. Though that's

9:47

kind of the nature of digital security. You

9:49

know, as Zhao told me in his own

9:51

words, quote, it's always a cat and mouse

9:53

game. And I guess even

9:55

if Glaze and Nightshade continue to work

9:57

perfectly, it's still kind of unfair for

9:59

artists to have to take those extra

10:01

steps. Yes, absolutely great point. I spoke

10:03

with a professional illustrator, Mignon Zakuga, who's

10:06

really been enthusiastic about Glaze and Nightshade.

10:08

She was involved in beta testing and

10:10

still uses both cloaks regularly when she

10:12

uploads her work. But even she said

10:14

that passing images through the filter, it's

10:16

not the greatest or easiest process. It

10:18

can take a couple of hours. And

10:21

even though they're not supposed to be noticeable,

10:23

often the visual changes are, at

10:25

least to her, and especially to her as

10:28

the artist who made the image. So Zakuga

10:30

told me it's a compromise she's willing to

10:32

deal with for now, but clearly artists deserve

10:34

better, more robust protections. Yeah, like, I

10:37

know this is wild, but what about

10:39

actual policy or legislation? Yeah, 100%. It'd

10:43

be great to get to a point where

10:45

all of that is clarified, especially in policy

10:47

and law. But for now, no one really

10:49

knows exactly what that should look like.

10:52

Will copyright end up being enforced against AI?

10:54

Do we need some whole new

10:56

suite of protective laws? But at the very

10:58

least, programs like Glaze and Nightshade, they offer

11:01

us a little bit more time to figure

11:03

all that out. Science

11:06

Quickly is produced by Jeff DelViscio, Tulika

11:08

Bose, Rachel Feltman, Kelso Harper, and Carin

11:11

Leong. Our show is edited by Elah

11:13

Feder and Alexa Lim. Our theme music

11:15

was composed by Dominic Smith. Don't forget

11:17

to subscribe to Science Quickly wherever you

11:19

get your podcasts. For more in-depth science

11:21

news and features, go to scientificamerican.com. And

11:24

if you like the show, give us

11:26

a rating or review. For Scientific American

11:28

Science Quickly, I'm Lauren Leffer. I'm Rachel

11:30

Feltman. See you next time.
