Marija Manojlovic Isn’t Kidding Around About Child Safety Online

Released Tuesday, 28th November 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:00

When you listen to Nobody Listens to

0:02

Paula Poundstone, the comedy podcast, you learn

0:04

stuff. I've been learning to throw

0:06

a boomerang, because this is the kind of

0:08

thing that really gets the listeners engaged, you know.

0:11

Interviews with people who will make

0:13

you smarter. Does the amount that

0:15

you learn protect you from cognitive

0:18

decline? Paula, don't touch

0:20

that! Can't people just listen to

0:22

the show? Can't they just enjoy a delightful

0:24

treehouse full of information and co- I think

0:26

I'm bleeding. Join us and be

0:29

a nobody. Just

1:01

as a warning, today we're talking about online safety for

1:03

kids, and with that comes discussion of the types of

1:05

abuse that can happen to children. Discretion

1:07

is advised, especially for individuals who may

1:10

find such topics distressing or triggering. So

1:15

how can we best protect kids online without changing

1:17

the entire internet? You absolutely

1:19

cannot. I think you can,

1:21

you just have to nuke the entire internet

1:23

and build it again, new. Well,

1:26

we just asked if you could do it

1:28

without changing the entire internet. Well,

1:31

you can't. Nuking it is a pretty big change.

1:34

So with our guest today, we're going to try to figure that out.

1:37

Welcome to What the Hack, a show about hackers,

1:39

scammers, and the people they go after. I'm

1:42

Adam Levin. I'm Beau Friedlander.

1:45

And I'm Travis Taylor.

1:58

Our guest today is from the

2:00

Safe Online initiative of the Global Partnership

2:03

to End Violence Against Children. Marija

2:05

Manojlovic, welcome to What the Hack.

2:09

Where are you coming to us from today? So

2:11

I'm dialing in or

2:13

streaming in from Geneva, from Switzerland,

2:15

but I'm originally from Montenegro. Switzerland.

2:18

I wish I were there right now. Many people do. There are

2:21

worse places to be. Can you tell

2:23

us a little bit about your background and what

2:25

led you to child online safety? I

2:28

believe that so much of the trauma and the

2:30

damage of the world today really

2:32

stems from the violence of the past. As

2:35

we are seeing children affected at new scales

2:37

and speeds and through new technologies in the

2:39

online world, I think that

2:41

we risk amplifying and recreating cycles of

2:43

that abuse and violence. So

2:45

when I started working on children's rights almost

2:48

two decades ago, I quickly

2:50

realized two things. One is

2:52

that dealing with violence against children

2:54

and solving those issues is one

2:56

of the single most impactful things that

2:58

we can do because preventing

3:00

violence against children will unlock all

3:03

the other areas of their development

3:05

in life. Education, health, socializing, skills,

3:08

work, everything else. If kids are victims of

3:10

violence, they can't do any of those things.

3:13

Number two is that violence

3:16

against children is becoming increasingly

3:18

facilitated or amplified by digital

3:20

technologies. So those are

3:22

the two things that really made me compelled to

3:24

work in this space because I really believe there

3:27

is no single more important kind of a job

3:29

that I can do in the world. And

3:31

before you became an advocate for children's rights

3:34

in the digital space, what inspired you to

3:36

become an advocate for children in general? Well,

3:39

I come from the Balkans, and the Balkans are a really,

3:41

really complex geography

3:44

and space, and growing up I witnessed

3:46

a lot of inequity, poverty,

3:49

civil unrest and wars. And

3:51

again, children suffer the most in

3:53

these situations, being able to really work

3:56

on giving a voice to the voiceless, and

3:58

children are usually voiceless, was one

4:00

of the things that I always wanted to

4:02

do. So it's a very personal mission that

4:05

I have. All right. So you're from Montenegro

4:07

and you're no stranger to the

4:09

kinds of dangers that children see in

4:11

real life, not online. How

4:15

do you distinguish between online

4:17

and in real life safety

4:20

for children? I don't like

4:22

to define child online safety. I'd like

4:24

to define child safety as a principal human

4:26

right and child right. And

4:29

look at the lens of how that safety

4:31

is ensured in different environments. When

4:34

you think about the online space, the same

4:36

rights that kids have in the offline world,

4:38

apply in the online world, but the settings

4:40

are somewhat different. What

4:48

are some of the biggest threats that children face online

4:50

right now? And what are some of the most common

4:52

threats they face online? Currently, I

4:54

feel there are two major

4:58

harms and risks for kids online.

5:01

One of the ones that is mostly spoken

5:03

about is cyberbullying, because I think it resonates

5:05

with a lot of people and it affects

5:08

a majority of kids online.

5:10

But in addition to cyberbullying, and this

5:12

is what I'm particularly focusing on, we

5:14

are seeing that grooming of children for

5:17

sexual purposes, as well as extortion,

5:19

are probably the most prevalent threats for

5:21

kids online, in addition to

5:23

being the most serious ones. If

5:26

you look at the reports of the National Center for

5:28

Missing & Exploited Children in the US, you will

5:30

see that in 2022, 32

5:32

million suspected reports of child

5:34

sexual abuse were made by

5:36

industry to the center. And

5:38

this is the highest number of reports in one year.

5:41

If you look at the figures in

5:43

1998, there were around 3,000 of those reports, not

5:46

32 million. In 2022,

5:48

these 32 million reports contained

5:50

87 million photos and

5:53

videos of children being sexually abused.

5:56

And oftentimes people say, well, you know, 32

5:58

million is not really such a big

6:00

number because this contains images of kids

6:02

who were abused maybe a decade ago.

6:04

It's just the same image being recirculated.

6:07

But the fact that out of these 32

6:09

million reports, 50,000

6:12

were marked as urgent to act because a

6:14

child was in imminent danger. That

6:16

gives you a bigger scale, a bigger picture of

6:19

how things actually look. And

6:22

we've been doing a lot of research on this because again,

6:25

it is not often that people ask kids

6:27

for their experiences and opinions. So we

6:29

have funded a big piece of research at 25-country

6:32

scale called Disrupting

6:34

Harm. And the research has

6:36

been completed now in 13 countries in Southeast

6:38

Asia and Southern and Eastern Africa. And

6:41

what we have discovered is really

6:43

astonishing because in many of these

6:45

countries, up to 20% of kids

6:48

have experienced some form of online child sexual

6:50

exploitation and abuse in one year alone. When

6:53

you scale that to population, this

6:55

is 5 million plus kids in

6:58

2021 alone. So

7:00

then I looked for something of a comparison

7:02

for US context. And this would mean that

7:04

5 million kids is akin to every middle

7:06

and high school student in California. So

7:09

imagine every middle and high

7:11

school student in California being sexually abused

7:13

online. So there's

7:15

a huge, huge, huge pandemic of

7:18

online abuse and we are not tackling it in the

7:20

right ways. What is the

7:22

difference regionally?

7:24

Is it different from

7:27

region to region? Do you see

7:29

something different in developing countries versus

7:31

somewhere like the United States or

7:33

Germany? What are we looking at? Online

7:35

CSEA, what we call online CSEA,

7:38

child sexual exploitation and abuse, is a

7:40

very complex phenomenon. But

7:42

in many ways, some of its manifestations

7:44

are really the same in the developing

7:46

world and in the developed world because

7:49

kids' lives online are somehow a

7:51

little bit different from their local

7:54

realities. And we've seen

7:56

this with research in Ghana,

7:58

in Tanzania, in the

8:01

Philippines, kids who have different

8:03

local, cultural, religious realities in

8:05

the offline world tend to behave differently in

8:07

the online because they have their global community

8:10

online. So their behaviors become much

8:12

more risk-taking, much different from how they

8:14

would behave in situations with their parents

8:16

around, at their schools, or there in

8:18

the communities, or wherever they live. So

8:21

manifestations of these particular issues for kids who

8:23

are using the internet are similar. So you will

8:25

have a lot of kids who have

8:27

normalized the sharing of nudes, kids' nudes,

8:30

or so-called explicit material of kids, where they

8:32

take a photo of themselves and share it with

8:34

their peers, or with whoever they think they're sharing

8:36

with. And this has become normalized. For

8:38

example, research in the U.S. shows us that around

8:42

one in five kids aged nine to 12

8:44

think it's normal to share nudes with each

8:46

other. One

8:48

in five of these kids have shared

8:50

their own nudes already, and this is

8:53

nine to 12-year-olds. 15% of nine

8:55

to 10-year-olds share their own nudes with somebody.

8:57

In 50% of

8:59

the cases, they've shared it with somebody they don't even know. So

9:02

these things we are seeing repeatedly being

9:04

similar in many contexts. But

9:07

when you look at the abuse of

9:09

younger kids, that's different because

9:11

in places where poverty and deprivation are

9:13

higher, it is easier to access

9:16

kids in more vulnerable situations and then

9:18

abuse them and then record the abuse

9:20

and put it online. You mentioned CSA, which

9:22

stands for child sexual abuse. How

9:25

worried should we be about the increasing usage of

9:27

AI to generate fake images that

9:30

still depict the same kind of abuse? Very

9:33

worried. I would

9:35

say particularly because there are two things.

9:37

One is that anybody

9:39

can now create child abuse

9:41

material. And especially the open-source models

9:43

are being used and tweaked in ways that

9:45

can accommodate this very easily. And we

9:47

are already seeing on the dark web reports of

9:50

predators who are exchanging

9:52

images there, exchanging all the

9:54

prompts and manuals on how you can

9:56

actually create this child abuse material, images where

9:59

there's

10:01

not an actual person behind them. But

10:03

they're also recreating images of

10:05

known victims and creating new images of known

10:07

victims. So the faces of known victims are

10:09

being used to create new situations, whether in

10:12

videos or images. There

10:14

are two things that are here critical to know.

10:16

One is that once you have

10:18

your image being

10:20

shared, whether it's

10:23

a fake or real image of you

10:25

in these situations, this really re-victimizes the

10:27

victims. The perpetual victimization continues

10:29

and you can't ever get out of

10:31

it. And that's why it's so important

10:33

to remove child abuse material overall from

10:35

the internet, from online world. One

10:38

of the other problems with generative AI too is

10:40

that it requires existing data. So even if

10:42

you're creating an image

10:44

of a fictitious person, it needs

10:46

to be a composite from previous

10:48

footage, unfortunately. Correct, and a lot of

10:50

these people who have been very

10:53

passionate about collecting

10:55

child abuse material actually share millions

10:58

and millions of pieces of imagery that they can

11:00

use to do that. We've

11:03

noticed also that when

11:05

people go to the platforms and they try

11:07

to bring it to their attention, that

11:10

this isn't them. This is something that

11:12

was AI generated. Then there's the

11:14

whole issue of the platform saying, are you

11:16

sure you don't have any photographs of you

11:18

out there? Since you started

11:21

your advocacy work, how

11:23

have the threats changed? And

11:27

do you see a trend in any specific

11:29

direction? Yeah,

11:32

things are, this is I think one of the most dynamic,

11:35

unfortunately, I say dynamic, it sounds exciting, but unfortunately,

11:37

one of the most dynamic fields because really the

11:39

threats to kids are changing from one year to

11:41

another. So for example, three years

11:43

ago, we did not see a lot of issues

11:45

around self-generated material, kids taking photos of themselves. Now

11:48

there has been a huge, huge explosion of that:

11:50

we've seen increases of 200 to 600%

11:52

from year to year because

11:56

it's becoming so normalized. And then

11:58

some of these images are consensually shared, kids

12:00

sharing, not knowing what they're doing, sharing

12:03

nudes of themselves through sexual

12:05

exploration, or with their friends, or whatever it

12:07

is, not knowing that it's actually an offense to do

12:09

that. But the second thing is that

12:11

kids are getting much more easily groomed and

12:13

coerced and tricked into doing those

12:16

things. Lauren.

12:26

Mike. We host a podcast

12:28

for Wired called Gadget Lab. We do. We

12:31

do. Yes, that is correct. Tell

12:33

the good people some more about it. Well, I think

12:35

the good people should definitely tune in every week because

12:37

they get to hear me roasting you. I know. All

12:40

right, no, really what Gadget Lab is,

12:43

is Mike and I tackling the biggest

12:45

questions in the world of technology. I

12:47

like to think of it as the

12:49

best of Wired's journalism, but in audio

12:52

form. We cover the big news of

12:54

the week in Techland, but we also

12:56

offer our expert analyses and opinions on

12:58

all things consumer tech, whether that's mobile

13:01

apps, hardware, startups, cryptocurrency. Mike,

13:03

what's been a recent highlight episode for

13:05

you? We did a deep dive on the

13:07

group behind the massive Okta hack. We also had

13:09

a great conversation about Web 3 and the

13:11

Metaverse. What stands out for you? Never

13:14

met a Metaverse you didn't like. I

13:16

really enjoyed our recent podcast about Peloton.

13:19

And recently, the legendary tech journalist, Kara Swisher,

13:21

joined us to talk all about Elon Musk

13:24

and the future of Twitter. So

13:26

I guess we should tell people how they can listen to our pod.

13:28

We release a new episode of Gadget Lab

13:30

every week, and you can listen and follow

13:32

us on Apple Podcasts, Spotify, or wherever you

13:35

pod. This

13:39

episode is brought to you by Shopify. That's

13:42

the sound of switching your business to

13:44

Shopify, the global commerce platform that

13:47

supercharges your selling. Harness the best

13:49

converting checkout and same intuitive features,

13:51

trusted apps, and powerful analytics used

13:53

by the world's leading brands. Stop

13:55

leaving sales on the table. Discover

13:57

why millions trust Shopify to build,

13:59

grow, and run their business. Sign

14:01

up today for your $1 per

14:04

month trial period at shopify.com/tech23.

14:14

Okay, so everyone is familiar

14:17

with this idea that I can't let go of

14:19

which is that when you're online, you're in a

14:21

horrible neighborhood. When you are in

14:23

the real world, sometimes you can find yourself

14:25

in a horrible neighborhood and there's some basic

14:28

rules for staying safe: don't make eye contact,

14:30

you know, mind your own business, keep

14:32

to yourself. And the

14:35

corollary online would be you know, mind

14:37

your privacy, be careful

14:39

about what you click and don't talk to strangers

14:43

But are there specific things that people

14:45

can do, specifically children can do, specifically

14:47

things we can teach children to do

14:49

online that will keep them safer than

14:52

other things? I always give the example,

14:54

because I have a toddler, of when I think

14:56

about how we did

14:58

baby-proofing the apartment, right? You

15:00

assess what risks exist in your apartment.

15:03

You're looking at your glass vases, you're

15:05

looking at the electricity outlets, and you're like,

15:07

okay, so now I'm not going to be

15:09

waiting for my child to put a metal

15:11

bar in the outlet; I'm going to put a

15:13

plug there. Will you fully

15:15

minimize risk, and will

15:18

the child be super safe? No,

15:21

like she'll fall every day. But

15:23

will we minimize the potential for really

15:25

serious harm? Yes, we will. The

15:28

same goes for platforms: which features you apply

15:30

on your platform and who you allow

15:32

to enter on your platform will determine

15:34

the risk profile that

15:36

you have. So for example, if your

15:39

platform is matching strangers to strangers, you likely

15:41

should not have kids on there, right? You

15:43

should have some age verification mechanisms or age

15:45

assurance mechanisms that can keep kids out of there, because

15:48

kids being served to strangers is not a good thing.

15:51

So thinking about having a vulnerability

15:53

lens as you develop your platform

15:55

whether you're a startup or an existing big platform,

15:57

applying a vulnerability lens and seeing how

16:01

the choices that you're making as a

16:03

platform can impact users on it is

16:06

critical. So it's all about safety by

16:08

design and really investing from upfront into

16:11

considerations like this. Unfortunately, the

16:13

culture, especially in Silicon Valley and around,

16:15

is not like that. It's more

16:17

that trust and safety people are being seen

16:19

as the internal devil, right? Oftentimes,

16:21

child online safety people, I love what

16:24

we are being called, we are the ones

16:26

who are trying to end privacy online, like stuff

16:28

like that. And it's like really

16:30

not the case, because,

16:33

I'm a huge believer in privacy, I always say, but in

16:35

some ways it can create much more risk than

16:37

other approaches. How do you realistically

16:39

monitor age restrictions? I mean, it's one thing

16:42

to say we should, but it's

16:44

another thing to say we can and will

16:46

make it happen. How do you do that?

16:48

I mean, by investing in innovation in that space.

16:50

We can't just

16:53

say the current state is like this and we

16:55

can't really do anything. There are a

16:57

lot of good age assurance and verification tools that can

16:59

be used. They're less accurate around

17:01

the fuzzy ages, like the late

17:03

teens, so it's difficult to

17:05

actually assess that particular point. But for

17:07

younger than 18, it's actually really accurate.

17:10

So those tools already exist. And

17:12

there are some privacy-preserving ways

17:14

that it can be done, like

17:17

tokenizing, you know, third-party assurance

17:19

mechanisms where you can tokenize each person: you get

17:22

a token of your age, and then you can go

17:24

and use that token across platforms. Approaches

17:26

like that are being tested right now.
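(To make the tokenized, third-party age-assurance idea above concrete, here is a minimal sketch of one way such a flow could work. It is purely illustrative: the verifier, the field names, and the use of a shared-secret HMAC, where a real deployment would use public-key signatures and much stronger privacy protections, are assumptions, not a description of any actual product.)

import hmac, hashlib, json, time

# Hypothetical third-party age verifier: it checks a user's age once,
# then issues a signed token carrying only a coarse age band.
VERIFIER_SECRET = b"demo-secret-held-only-by-the-verifier"  # illustrative only

def issue_age_token(age_band: str) -> dict:
    # The token holds an age band such as "under_13", "13_17" or "18_plus",
    # with no name, birth date or identity document attached.
    claim = {"age_band": age_band, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def platform_accepts(token: dict, allowed_bands: set) -> bool:
    # A platform checks that the token really came from the verifier and
    # reads only the age band, learning nothing else about the user.
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # tampered with, or not issued by the verifier
    return token["claim"]["age_band"] in allowed_bands

token = issue_age_token("13_17")
print(platform_accepts(token, {"18_plus"}))            # False: a minor is kept out
print(platform_accepts(token, {"13_17", "18_plus"}))   # True

The division of labor is the point of the sketch: the verifier learns the user's age once, while each platform sees only a coarse age band plus proof that the token was genuinely issued, which is roughly the privacy-preserving property described here.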

17:29

But we know that platforms have so much

17:31

data that accurately assesses the age

17:33

of users regardless of the information you

17:35

provide to them, because they know your

17:37

behavior patterns; they already

17:39

have access to that data. They're

17:41

using that for micro-targeting, they're using that for

17:43

very specific ad provision; everything

17:46

is about personalization nowadays, right?

17:48

So we know that that is possible. But

17:50

we also know that when you

17:53

look at profit

17:55

interests, people will always choose profits

17:57

over

18:00

doing the right thing, which is unfortunate.

18:03

So, I think we

18:05

need to be much

18:07

more firm and bold in our

18:10

requests for what platforms need to do, and

18:12

also potentially much more bold in how we legislate

18:14

some of these things. You

18:16

can invest in innovation, I understand

18:18

that. But

18:21

it's a pretty blanket phrase, like, I want

18:24

to invest in becoming

18:26

richer. I mean, okay, so how am

18:28

I going to do that? That

18:30

is a real nitty gritty question, it's

18:32

not an abstract question. And

18:34

I'll give you an example. A friend of mine

18:37

is a gun collector and he

18:39

has his guns in a safe. And

18:43

his kid knows

18:45

the combination to the safe. How

18:48

safe are those guns? They're not safe

18:50

at all because the child knows how to get into the

18:52

safe. Now, there are

18:55

biometric safes that

18:57

he could be using. There

18:59

are biometric tools that we use for

19:02

cybersecurity specifically for

19:04

authentication. That

19:07

said, you're opening up a whole new

19:09

can of worms when you're saying my

19:11

minor child needs

19:13

to provide a big

19:15

company or a small company, any

19:18

company at all, their biometric information.

19:20

Sorry, that's a bigger problem. So

19:22

how do you realistically, just doubling

19:24

down on this question, make

19:27

sure children aren't there, when children can be

19:29

sneaky? Children, they don't

19:31

have jobs; their whole job is being sneaky.

19:34

Listen, kids are sneaky, kids are smart.

19:37

Kids are much better at tech than we will

19:39

ever be. So they will find ways to

19:41

go around them. Our job is

19:43

to make that experience full of friction,

19:46

right? The more friction points you

19:48

can create for them entering some unsafe

19:50

space, that's the best you can do.

19:53

You will never fully safety-proof

19:55

any space, whether physical or digital, but your

19:57

job and our job is to make it a

19:59

better place: make that as

20:01

hard for them as possible so

20:04

that they can't actually expose themselves

20:06

to the most harmful experiences. Second

20:09

job you have is if your platform

20:11

is, and there are good examples of

20:13

this, if your platform bears a risk, and

20:16

every platform does, making sure

20:18

that you can actually enforce your terms of use,

20:20

if you have them, if that's what people subscribe

20:22

to, you should be able to guarantee that that's

20:24

what you will deliver to them. One

20:26

example is a European platform for live streaming

20:28

called Yubo, and Yubo was very

20:31

popular during COVID, continues to be, and it's basically

20:33

just live streaming and everything. People actually love to

20:36

see each other sleep, a big thing on Yubo.

20:38

Literally, that was their biggest trend of the year:

20:41

people watching each other sleep. So Yubo

20:43

realized that unless they want to get sued

20:46

and closed down over child abuse, they

20:48

have to ensure there is no nudity. So

20:51

they said, we can't age-assure it, because

20:53

there's always going to be children on our

20:55

platform, but the way for us to be able to

20:57

stop that is we're going to actually ban

20:59

nudity for adults and kids. So

21:01

now Yubo is still super used, probably not as much as

21:03

it would be if there was nudity on it, but

21:06

they now have filters for nudity that are

21:08

applied across the platform. And they said, our

21:10

business model depends on us being able to

21:12

actually guarantee there is no child abuse. So

21:15

they've decided something that's going to cut their

21:17

profits, but still keep them

21:19

profitable and used to some extent, while being

21:21

much safer than other platforms of a

21:24

similar live streaming nature. So

21:26

there was a little thing called

21:28

Chatroulette online about 10 years ago,

21:30

and it really was just people

21:32

live streaming and getting

21:34

sent, you know, Russian roulette style

21:36

to whomever was next. And

21:39

a big feature of Chatroulette was

21:41

you just be sitting there minding

21:43

your own business and a very

21:45

odd person would appear

21:47

in front of you doing something

21:51

that kids would describe as dirty. And

21:55

I have no idea what happened to Chatroulette, but

21:58

it was a daily

22:02

way for people to sexually

22:04

abuse children and other people because they

22:06

were able to be nude, they were

22:09

able to do whatever they wanted nude,

22:11

and you had no control over who

22:13

appeared before you. And it still

22:15

exists. Now that's

22:17

a really extreme example of

22:19

the Wild West years of the

22:22

internet. Weirdly, though, a

22:24

decade later it still feels like

22:26

the Wild West. Now

22:29

is that just profit motive? Because to

22:31

me that's too easy. Is

22:34

it profit motive? I mean if that's the

22:37

case then shame on everybody. I would

22:39

say it's a mix of things like

22:41

definitely the crazier

22:44

and more risk-taking

22:46

or exciting or enraging: the

22:49

platform that can create

22:51

space for people to be enraged, or

22:54

excited, or get an adrenaline rush,

22:56

will have more users. It's driven by

22:59

increasing the number of users, right? An increased

23:03

number of users is going to get you better ad

23:05

business or better you know data. You can collect

23:07

more data you can do all of these things.

23:09

So it's driven by the need for amassing assets

23:12

whether those assets are profits or data

23:14

that you can then again monetize, or

23:16

it's getting people more engaged so you can then

23:18

make more space for whichever other type of

23:20

engagement it is. So like I do

23:23

think there is a big

23:26

mentality of scale and magnitude

23:29

of engagement that is

23:31

driving a lot of the design choices. And

23:33

the more you can engage, the more

23:35

you want the same thing to keep

23:37

happening, right? And I think

23:40

that we

23:42

don't allow that in the offline space. Is it

23:44

good to have weird strangers

23:46

in coats dropping in front of you in parks,

23:48

like I used to see, you know, growing up,

23:50

but that was in the war years, right?

23:53

People don't do that to you offline. I've

23:56

also started

23:58

thinking recently about the decay of social media. And

24:00

what I think is going to happen soon is,

24:02

we're already seeing some of the platforms

24:04

decaying and being abandoned, like Facebook for example.

24:07

And the more that decay happens, the less

24:09

investment continues in trust and safety, the less

24:11

you protect your infrastructure. I

24:14

started thinking about these spaces as

24:16

abandoned buildings. Like,

24:18

you know, imagine any old

24:20

abandoned amusement park:

24:23

it's creepy, it's dark,

24:26

an abandoned building. Bad things happen there

24:28

because unknown people are there, and there are a lot of

24:30

things, you know, things to fill your head with.

24:32

And I think about these online

24:34

spaces the same way: disregarded,

24:38

abandoned, neglected spaces soon become

24:40

so much more dangerous, and then you will

24:42

have this cross-platform movement, because

24:45

abuse online does not happen all on

24:47

the same platform. You're going to get a kid

24:49

on a gaming platform, you're going to move them to

24:51

Snapchat, you're going to move them from Snapchat to

24:53

WhatsApp, you're going to move from WhatsApp to somewhere else,

24:55

because the darker the space and the less oversight

24:57

there is, the more you can get out of

24:59

the child, right? So there's

25:01

a lot more to discuss

25:03

if we think about the overall infrastructure of the

25:06

internet and how things happen across platforms,

25:08

as well as the

25:10

choices we make if

25:13

we are to think about safety

25:15

more comprehensively. But I agree with you: I

25:17

don't think that currently many

25:19

of the tools that we speak about are really as

25:22

advanced as you want them to be. But there

25:24

needs to be consideration and investment in making

25:26

safety tools more accessible, more privacy-preserving,

25:28

more focused on the safety of users,

25:31

and more about enabling the participation of

25:33

people, because in the end it is about democracy.

25:36

Kids will be losing the opportunity to

25:38

participate, to advance, to enjoy the good

25:40

side of digital technology, just as women are currently

25:42

being expelled from online spaces

25:44

because of the harassment and abuse they're facing.

25:47

So for me this is

25:49

the issue of participation and the issue of

25:52

democratic choice as much as it

25:54

is safety and protection. You know the

25:56

first thing is that part

25:58

of the structure of the internet was that

26:00

it was built without identity verification in

26:02

mind. It was supposed to be, you

26:04

know, this wide open, free-spirited

26:07

communication platform for

26:09

everybody. What do you think

26:11

about that? This

26:13

is a great example, Adam, of

26:15

a Utopia, Utopian worldview that

26:19

is waiting for the pop. You know what the

26:21

pop sound is? It's the sound of your head

26:23

coming out of your... Anyway, the... Utopia...

26:27

Utopian people

26:29

have... always have

26:31

that blind spot. What about that?

26:34

You know, like, I think it kind of was built,

26:36

and I'm so sorry to say this again, but it

26:38

was built by a bunch of, like, white men

26:40

who were sitting in, like, you know,

26:43

some places which were, like, echo chambers of,

26:45

like, like-minded people who do not have the same

26:47

world experiences as, like, the rest of the

26:49

world. And you bring

26:52

that in and you say, like, listen, you

26:54

had a really nice idea, and in your world

26:56

this fully applies, but, like, your world is, like,

26:59

tiny, and the actual world is,

27:01

like, so much bigger and so much fuller

27:04

of other things and other choices

27:07

and thoughts and values and cultures and

27:09

religions and approaches to life. And none

27:12

of those things are necessarily bad by themselves, but

27:14

you just open... You open the

27:17

can of worms for, like, so much

27:19

bad stuff. And we are seeing

27:21

that, like, and then add on top of

27:23

that the whole boom around use

27:25

and misuse of, you know, free services,

27:27

the, you know, it's free, but it's actually just

27:29

your data that we want, and then, like, monetizing the

27:31

data. But it becomes... And

27:35

I'm also conflicted

27:37

in many ways as an individual as

27:39

well as a professional. Like, I

27:41

like the anonymity bit of it. Like,

27:43

I think anonymity is really critical for

27:45

democratic processes. Like, being able to anonymously

27:47

share your thoughts and opinions is really critical. But

27:50

then, like, full anonymity

27:52

in any way, in every way, on

27:54

every possible platform does create a

27:57

lot of risks if you can't verify who your users

27:59

are. In specific circumstances

28:01

or for specific services it becomes neglect,

28:04

because know-your-customer is something you

28:06

should be doing, right? It's

28:09

also a petri dish for

28:12

disinformation, misinformation. Correct. Can

28:14

I tell you something about that? I was

28:16

just recently in Kyoto for Internet Governance Forum and

28:18

I met my personal

28:20

peer, Maria Ressa, who is the Nobel

28:23

Peace Prize winner. She is working a

28:25

lot on misinformation, disinformation. We

28:28

were talking about how we end

28:31

up in these fights between privacy

28:33

and safety, because we are being pitted

28:35

against each other because we all deal

28:38

with these downstream issues. How

28:40

do you do content moderation? And

28:42

then she says content moderation is

28:44

such a misleading way of approaching

28:47

things because she says

28:49

content moderation is, almost, imagine a river. There's

28:51

a huge river and then you take

28:54

a small cup of water out of

28:56

the river and then you clean it and you clean

28:58

a small cup and then you put

29:00

that cup back into the river and

29:03

then the river is still super dirty and

29:05

it's full of these things and you just keep

29:07

on cleaning small bits and pieces of it but

29:09

you're not actually dealing with the very upstream issues.

29:12

If anybody tells you that privacy and safety are

29:14

not both possible, it's because they don't want to

29:16

be dealing with those issues at the same time.

29:19

So that's a great metaphor, obviously. Well, it

29:22

makes sense given that it came from a

29:24

Nobel Laureate. You

29:26

had said earlier that the death

29:30

of privacy means that these big

29:33

platforms, they know who's on the

29:35

computer. They can tell

29:37

by the things that you're searching, by the things that

29:40

you're doing. So

29:42

they have a fairly good idea. Even

29:45

if it's the kid who knows the combination

29:47

to their father's safe, they

29:50

know that that isn't

29:53

the dad looking at risqué content

29:55

online. That's the kid because the

29:57

kid looks for slightly different things.

30:00

And with the death of

30:03

privacy, all of these things are

30:05

totally possible to gate and say

30:07

this is a kid, this kid can't see

30:09

this, this is an adult, this person can

30:11

see it. So instead of trying to clean

30:13

the one cup of water, you start to

30:15

think about what is the behavior of the

30:17

organisms in the river and then

30:20

identify them by their behavior and stop

30:23

them from doing certain things. That seems

30:25

to me to be the answer, is

30:28

to program something that can help navigate

30:30

the river. The

30:36

river is messy and connected and you can't

30:38

create parts of the river that are child safe and

30:40

parts that are not. So what if you then

30:42

identify a child in this river, would you

30:44

then just take this tiny bit of the

30:46

river and keep them there? How

30:49

do you secure it? I think

30:51

there's something around really

30:55

thoroughly thinking about

30:58

the design choices

31:00

and then every

31:02

river and every platform should have

31:04

a mechanism to report abuse, which is

31:07

transparent, which is very clear what it is.

31:09

You should have a clear sense of: if you report

31:11

abuse, what will happen next? You should have a

31:13

sense of what detection technologies they are using to

31:15

detect abuse. Those

31:17

things are not worked through, they're

31:20

not actually figured out at this

31:22

point in time because many, if

31:24

you think about, at

31:26

the beginning of the podcast I mentioned 32 million reports

31:29

of child sexual abuse happening just last year

31:31

to NCMEC, the National Center for Missing

31:33

and Exploited Children. Out of those 32 million

31:36

reports, I think around 29 million

31:38

came from Meta, from

31:41

Facebook, Instagram and WhatsApp because

31:43

they detect the most. Do

31:46

you know how many reports came from Apple last year?

31:49

I think it was 173. One seventy-three? Not

31:54

thousands or millions, 173 because

31:57

Apple not only does not detect anything

32:00

on their cloud services anywhere,

32:02

anywhere, anywhere. They also

32:04

don't have a transparent mechanism for their

32:07

users to actually report if they encounter

32:09

abuse or child abuse material. And

32:11

that's a very, very dangerous thing to do. So

32:14

one thing is like privacy at all

32:16

costs. Great. But then at least enable users who

32:18

are using your services to be able to

32:20

report. That's a very transparent, so to say, way of

32:22

doing it, right? It's not privacy invading at all.

32:25

But we also have, there's just

32:27

no unified and

32:30

really consolidated way of thinking about

32:32

prevention of abuse, currently on any

32:34

platform. What does legislation look

32:36

like that moves us in the right

32:38

direction? I

32:41

think we should not be, first of

32:43

all, scared of legislation because you can

32:45

iterate on it and you can do

32:47

various things with it. I think legislation

32:49

should set standards and guiding principles towards

32:51

what we should all be aiming

32:53

for. I always advise

32:55

for tech neutral regulation and legislation because

32:57

you don't want to be regulating every

32:59

every five months or like every three

33:01

years because tech is changing so fast.

33:04

But regulate what is principled:

33:07

age-appropriate experiences, safety

33:09

measures, design choices, impact assessments,

33:12

looking at risk profiles. Things

33:15

that can be mandated by law can

33:17

look like: if you run an

33:19

online platform, you have to do this

33:21

type of threat assessment at the beginning of your

33:24

cycle of development. Do

33:26

vulnerability lens analysis of how this can impact,

33:29

you know, engage with the users, engage with

33:31

people who can be affected. Bring

33:33

on board vulnerable communities and people

33:35

like that. You can mandate processes that

33:38

can result in certain technological choices

33:40

and decisions down the pathway of

33:42

development. I am much more

33:44

against legislating specific technological solutions, because

33:47

you will end up in a rabbit

33:49

hole of how does this work actually in

33:51

practice or like how can you apply that technology or

33:53

those tools to a specific platform. But

33:56

we are seeing good developments, as

33:58

I mentioned before. In the UK, the Online

34:01

Safety Bill was adopted. It has, of course, a lot

34:03

of shortcomings, but it has a lot of good stuff

34:05

as well. EU Digital

34:07

Services Act has a really good way

34:09

of looking at platforms of different

34:11

sizes and different development stages

34:13

to mandate different kinds of

34:15

activities across them. We've

34:18

seen President Biden recently do the

34:20

AI executive order because

34:23

he has not seen any hope of

34:25

Congress legislating on that. We

34:27

just need to stop treating the online world as something

34:29

of a unicorn. It's so different and so precious

34:31

that we can't really ever touch it, because if

34:33

we touch anything, it will crumble and innovation will die.

34:36

But that's not the case. There are good

34:38

examples of industries being regulated and safety standards

34:40

being introduced for the betterment of the actual industry

34:43

and we can see that across the car

34:45

industry, you know, hospitality, wherever you

34:47

go, whenever you have standards of safety, airspace.

34:51

Sure, the car industry was

34:53

legislated and became

34:55

safer. OSHA

34:58

legislated lots of different workplace

35:01

things to

35:03

make workers safer. These are

35:05

physical dangers. When cars were legislated, they were

35:07

legislated to not have things sticking out on

35:09

the dashboard so that when someone got in

35:11

an accident, it didn't kill them. When

35:14

certain saws were legislated to be safer,

35:16

they had guards put on them. And

35:18

these are physical dangers. And the digital ones are just as clear: don't

35:21

put kids in touch

35:23

on Chatroulette with somebody who's, right,

35:26

don't let a Chatroulette creep creep

35:28

on your kid. Finding ways to

35:30

reduce these types of clear risks that

35:32

have been documented. I mean, the Surgeon

35:34

General of the US has issued an advisory

35:37

on the impact of social media on

35:39

kids, right? And then in a lot of the

35:41

cases, again, people are constantly doubting, well,

35:44

we can't really say social media is

35:46

causing the mental health crisis in our

35:48

adolescents because, like, that's only correlation, that's

35:50

not causation. Like

35:53

really? I also believe that when

35:56

you talk about legislation and

35:58

OSHA and... Yes, there

36:00

are a lot of those dangers that

36:02

have over the years

36:06

become more regulated,

36:08

and people are better protected.

36:11

But that legislation also involved

36:14

individual units, whereas

36:16

what you're talking about now,

36:18

you're talking about something

36:20

that has worldwide impact

36:22

simultaneously. This is

36:25

even more dangerous than anything that we can think

36:27

of to date. Well, that's

36:29

exactly right. And I think that that's

36:31

why I'm trying to get at not

36:35

an abstract notion of what the problem

36:38

is and how to fix it, but specifically, what

36:40

are you going to do? There's

36:44

the view in which we speak

36:46

about current configuration of the Internet and online

36:48

platforms as something that is a given. And

36:50

then we have to adjust our regulation, our

36:53

lives to it, not the other

36:55

way around: how to actually change the configurations

36:57

of the online platforms to meet the needs

36:59

of humans. Why are we

37:01

OK to accept this, that, basically,

37:04

surveillance-based data collection models are the way

37:06

that we need to live. And now we

37:08

have to find a way to make them

37:10

more safe, but we can't actually change them.

37:13

That's my question. Like, why can't we

37:15

think bigger? Why can't we think about

37:17

changing the actual why? The

37:20

reason why platforms are not kicking kids

37:22

off the platform is because they drive engagement.

37:25

Kids spend the most time

37:27

online, and one in three users

37:30

of the Internet are children. One in three in

37:32

many places is one in two because there is

37:34

a larger population of kids in many, many places

37:36

in the world. So when you think

37:38

about that, kicking them off the platform is

37:41

not very profitable. Using them to monetize your

37:43

product is, so creating, like, specific products

37:45

for kids, engaging them. We remember also the attempt to

37:47

create Instagram Kids at some point in

37:49

time, which was killed because there was a huge outrage

37:51

over that. We have

37:53

to stop thinking about the online world as

37:56

a beautiful set

37:58

of platforms that we need to then build

38:01

safety measures around instead of changing some of

38:03

the core functionalities of these platforms to change

38:06

those models. And those can be changed as

38:08

well. Because why are we allowing the serving of

38:10

commercial content to kids? That should

38:12

not be allowed. You should not be able to commercialize kids'

38:14

data. That should not be allowed; that should be illegal. These

38:17

type of things, we need to just

38:20

get some boldness and start regulating because

38:22

soon it's going to be too late. We are

38:24

losing a generation of kids in these completely like

38:26

wild west circumstances in

38:28

the online world. Well let's look at it this

38:30

way, and that's a solution that's disguised as a

38:32

question. If you change the

38:34

nature of surveillance, maybe you change

38:37

the online world. First,

38:47

the bad news. SAP Business

38:49

AI won't help you generate cubist versions

38:51

of your family's holiday photos. But

38:54

it will help you understand which supplier is best

38:56

to help you roll out your plant-based packaging in

38:58

Southeast Asia or identify the training your

39:00

junior project manager needs to rise up the ranks

39:03

and automate repetitive tasks while you

39:05

focus on big innovations so

39:07

you can be ready for the next opportunity. Revolutionary

39:10

technology, real-world results. That's

39:13

SAP Business AI. This

39:18

episode is brought to you by the Weather

39:20

Channel app. Did you know the app can

39:22

help you forecast more than just the weather

39:24

with allergy tracking and fluor snapping? So you

39:27

know when to stay inside and load up

39:29

on podcasts as well as air quality and

39:31

UV indexing. So you know when to get

39:34

outside load up on sunscreen and podcasts. Forecast

39:36

more of what you love with the Weather

39:38

Channel app. If

39:47

you could walk into the United... and I know

39:50

you've worked with the United Nations,

39:52

but if you could walk into the United Nations

39:54

and you're up on the podium in front of

39:56

the gathering of all the countries and

39:58

you could give them one piece

40:01

of wisdom that you

40:03

think would help make the world

40:05

more child safe. What

40:07

would you say? I would, I would say,

40:09

and I'm talking to governments,

40:11

right? I would say to governments, make

40:15

sure that, make sure

40:18

that businesses think

40:20

about safety from the get-go. Regulate

40:23

that one bit. Make risk assessment an

40:25

obligatory part of any business model,

40:28

whether it's looking at children

40:30

or any other type of vulnerable

40:32

population, you need to think about the

40:35

people and have safety by design embedded

40:37

in the processes. And

40:39

that can look different for different models, different

40:41

platforms. The most important thing that

40:43

you can do in terms of online safety is

40:46

safety by design. What

40:48

if we just said end

40:50

surveillance economies? What

40:55

if that was the ask? I would love that. But

40:57

wouldn't that solve the problem? Why

41:01

do we accept surveillance economies? There

41:03

shouldn't be surveillance economies. I

41:05

do think that surveillance economy or

41:07

whatever you want to call it, that the

41:11

internet currently is

41:14

not designed for the benefit

41:16

of humans. And I think designing it

41:18

for the benefit of humans and not businesses is

41:20

a critical thing. Don't collect

41:22

data that should not be collected, and don't monetize

41:24

people's data and don't do any of those things

41:27

that are putting people in danger. But

41:29

again, dangerous practices drive engagement. And that's

41:31

why our kids are being put in

41:33

dangerous situations because dangerous and exciting adrenaline

41:35

building and enraging activities are the

41:38

ones seen as cool. They generate the

41:40

most content. But again,

41:42

that's just me. Thank you

41:44

for that. I like that. I would actually say that at the UN. I

41:47

think you should say it. And I think that we're just

41:49

going to say, look, how about this? Let's

41:52

change everything. Say

41:54

you're just walking out of addressing the UN and now

41:56

you're actually just going to talk to a parent. What

41:58

advice do you have for parents to keep their children

42:00

safe online. That's

42:02

a very good one. So the first most

42:04

important thing is safety by design, or ending

42:06

surveillance economies, right? The second one is:

42:10

start having these conversations with your child as

42:12

early as possible. And don't assume the child

42:14

would not share their nudes, because they will.

42:17

So just assume, you can just tell your child, once you

42:19

decide to share your nudes, can you please talk to me

42:21

so we can decide about the safe way to do it,

42:23

because they will do it. Start normalizing

42:26

these conversations as early as possible.

42:29

It's just regular parenting advice.

42:32

The more you're open with them, and

42:34

the less stigma exists around potential

42:37

harms that can happen to them

42:39

online, the more likely they will be

42:41

to tell you about bad

42:43

experiences. And this is something that

42:45

we have seen really be critical for prevention of

42:48

suicide at the end of the tunnel,

42:50

kids not reporting, living with trauma, dropping

42:52

out of school, doing all sorts of bad

42:54

things, because they feel so

42:56

much shame and such an unbearable

42:58

sense of failure that they can't

43:01

actually cope with it. So the more you can be

43:03

open. And so my advice is

43:05

never surveil your child. My advice is never

43:07

be controlling. My advice is always just be

43:09

open to whatever the child is doing. But

43:12

be open in a way that they can come to you, because

43:14

when people think about reporting of these crimes,

43:17

there are many ways of disclosing. Reporting is

43:19

not only picking a phone and reporting. Reporting

43:21

is talking to a friend. So really build

43:23

a nurturing environment around your child so they

43:25

can be free to tell you about

43:27

their experiences. Which brings us full

43:30

circle to the whole purpose of this

43:32

program, which is to

43:34

create a shame-free zone where

43:36

people feel it's OK to

43:39

talk about things that have happened to them.

43:41

There's nothing that's wrong. It's

43:43

all about sharing experiences,

43:45

sharing threats, sharing solutions, but

43:48

creating an environment where people

43:50

feel safe. And that's

43:52

what you're talking about in a family. Creating

43:55

an environment where kids feel safe, that

43:57

they can talk to you about whatever

43:59

they're thinking whatever they're doing and not feel

44:01

like they're going to get a 22-hour lecture or

44:03

you're going to send

44:05

them to their room and starve them for the next

44:07

three months. Yeah, deprive them of the ether.

44:10

And Adam, that's a really perfect,

44:12

perfect way of saying it. And I

44:14

do want us to say one more thing because

44:16

there's a huge misconception around who is the danger

44:18

for kids. There's a whole

44:20

narrative from ages ago, stranger danger,

44:23

right? That is so

44:25

false, because even in the online world, perpetrators

44:27

are usually people that kids know. I

44:31

really learned something today. Thank you so much. Maria,

44:35

you were totally awesome. And

44:37

if people want to find out more about you and the work

44:40

you do, where would they go? Just

44:42

go check us out on safeonline.global.

44:45

Safeonline.global? Dot global. Right.

44:48

Great. Now

44:59

it is time for the tinfoil swan.

45:02

Our paranoid takeaway to keep you safe on

45:04

and offline. Well, the holiday

45:06

season has officially been here since, I don't know,

45:08

Labor Day? As far as some retailers I know

45:10

are concerned, I think it started in late August.

45:14

Yeah, not even Labor Day. Well,

45:16

listen, there's a bunch of scams that

45:18

are a lot harder to spot this

45:20

time of year. They're

45:22

not new scams, but like I

45:24

said, harder to spot. Yeah, this time of

45:27

year you get a lot more unsolicited communication,

45:29

especially about online orders and deliveries. Let's

45:31

do a lightning round. What

45:33

are the, well, this

45:36

is your term, Adam, pantheon of ishings. Go

45:38

ahead. A classic is phishing, and that's where

45:41

you get an email from someone, and it's

45:43

claiming an update for an order that you

45:45

made. But what you don't realize is you

45:47

click on the link, and it's

45:49

either a malicious attachment or it's a phony

45:52

website. As with any email,

45:54

as we continue to tell everyone, check

45:56

the sender and the URL, make sure

45:59

that it's not a phishing attempt. There's

48:00

going to be squishing which is where someone says

48:02

you fuck you or come anyway

48:05

thank you thank

48:07

you to death and then you're dead just

48:09

said I do yeah yeah most of these

48:11

will follow similar beats to one another but

48:14

the lessons are all the same: you've

48:16

got to treat any incoming communication

48:19

with suspicion; don't respond in a

48:21

panic, especially if it's

48:23

marked urgent; and if

48:25

you order something over a retail website, go

48:28

to the source if you don't want remorse. Go directly

48:30

to that site rather than clicking

48:32

on links or downloading attachments, please.

48:35

Okay, so if you don't want

48:37

squishing, act with suspicion and keep

48:40

an eye on your accounts. If you're

48:42

buying gifts for people or making travel

48:45

plans, there's going to be a lot of

48:47

account activity out there, especially with money

48:49

coming out of your accounts. Track

48:52

what you spend, see if there's

48:54

anything that looks suspicious, and just

48:56

go about your life. And

48:59

that's our tinfoil swan. You

49:01

guys have to love squishing. And with that... What

49:17

the Hack with Adam Levin is a production of

49:19

Loud Tree Media. You can find us online at

49:21

AdamLevin.com and on Instagram, Twitter and Facebook

49:23

at Adam K Levin. Loud

49:34

Tree.
