Our 2024 Predictions + Jenny Slate Answers Your Hard Questions!

Released Friday, 22nd December 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

This podcast is supported by Michelob

0:02

Ultra. They say consistency is

0:04

the key to success. So how about grabbing

0:06

a beer that's consistently refreshing and light? Michelob

0:09

Ultra, only 2.6 carbs and 95 calories. It's

0:12

only worth it if you enjoy it. Enjoy

0:15

responsibly. Anheuser-Busch, Michelob Ultra Light Beer, St.

0:17

Louis, Missouri. Oh, my God,

0:19

this sweater is like made out of the worst. So I

0:21

got like wish.com so hard

0:23

on this. What is that?

0:25

OK, first of all, you're using wish.com as a

0:28

verb. What does that even mean? OK, so you

0:30

know on wish.com when you order something that looks

0:32

like a nice sequined dress or something and it

0:34

arrives and it's like... I've never actually ordered a

0:36

sequined dress off wish.com, Kevin. How did yours actually

0:38

look when it arrived? I'm just saying this is

0:40

like a thing that people report. They order from

0:43

wish.com. There's something that looks very nice and then

0:45

it shows up and it's like a piece of

0:47

garbage. So this, when I when I ordered this

0:49

on Amazon, it looks like a sweater. It looked

0:51

woven. It looked very luxurious. I thought it's going to

0:53

be very comfortable, even though it's hideous. It

0:55

shows up. It's like it's like one of

0:57

those sweatsuits that like wrestlers wear to make

1:00

weight like the day before the meet. Like

1:02

it's just polyester. It's like zero breathability. Like

1:04

I'm burning alive in this. It looks like

1:06

it costs about eight dollars. Is that right?

1:09

It was like 13. But you're close. Yeah, for

1:11

13, I think you probably were

1:13

wrong to expect it would be handwoven by

1:15

artisans. I would say

1:17

that's a $13 sweatshirt if I've ever

1:20

seen one. I'm

1:26

Kevin Roose, a tech columnist at The New York

1:28

Times. I'm Casey Newton from Platformer. And this is

1:30

a special holiday edition of Hard Fork. This week

1:32

on the show, we look back at our predictions

1:34

for this year and tell you our predictions for

1:36

next. And then comedian,

1:39

actor and writer Jenny Slate joins

1:41

us to answer your hardest questions about

1:43

technology. Casey,

2:04

happy holidays. Happy holidays to you, Kevin.

2:06

I can't believe we're here at the

2:08

end of another year of hard fork.

2:10

What a year. So much to celebrate,

2:12

so much to recover from. It's been

2:14

a very long, very good year for

2:16

our podcast. We've made so

2:18

many, so many episodes and this is our last

2:20

one of the year. After today, we are taking

2:22

off for a holiday break. Now Kevin, I have

2:25

to say that while I'm excited about every episode

2:27

of hard fork that we do, this one in

2:29

particular, I think is really going to be a

2:31

treat for folks. Because we're wearing hats and sweaters.

2:34

Well, not only that, but we have got

2:36

a guest who is somebody who I've been

2:38

wanting to talk to forever. And we've got

2:40

some interesting predictions about the year ahead. Yeah.

2:42

So I think we should just talk through, for

2:45

those not watching us on YouTube right now,

2:47

we are both wearing ugly holiday sweaters. Yours

2:49

has like a Santa riding a

2:51

uni... The speaker itself. Mine is beautiful. Yours

2:53

has like Santa in space riding on a

2:55

unicorn. That's right. The official horse of the

2:58

gay community. And you are wearing an even

3:01

louder sweater, sweatshirt

3:03

that features a velociraptor wearing a

3:05

Santa hat that is dragging Santa's

3:07

sleigh and Santa behind the velociraptor.

3:10

And what was sort of the artistic vision

3:12

behind your fit today, Kevin? It

3:15

was just sort of like what would happen if you

3:17

put ugly Christmas monstrosity into

3:20

DALL-E and like had it spin

3:22

out a bunch of examples. It

3:24

just seemed like the most futuristic of

3:27

the options available to me that were

3:29

prime eligible on Amazon. Yeah. No

3:31

Casey, I love the holiday

3:34

season. You know, every year,

3:37

I start playing the holiday music right

3:40

after Thanksgiving. It's been on constant rotation

3:42

in my house. And it's

3:44

just, it's my favorite time of year. It is. And

3:46

the way that I know this deeply

3:48

is because the hats that we're wearing right

3:50

now were your idea. So that's really

3:53

the firmest evidence that I have that you really are

3:55

a holiday person, but you know what? I'm a holiday person

3:57

too. It's nice to come together at the end of

3:59

the year to celebrate with friends, to be

4:01

together with friends and family in person, to

4:04

give it maybe a gift here or there.

4:06

These are the good times. Yuletide spirit is

4:08

very important to me. That's right. All right.

4:10

So today for our second annual holiday extravaganza,

4:12

I thought that we should revisit those predictions

4:14

and make some predictions for 2024. Punditry accountability,

4:16

it's a crisis in this country. Think about

4:19

how many predictions are being made all the

4:21

time. But when do the pundits actually go

4:23

back and point out what they got right,

4:25

what they got wrong? Here on Hard Fork,

4:27

the Hard Fork promise is we're going to

4:29

tell you how we did. We each made three predictions

4:31

last year. I don't want to go through all of them.

4:34

But if you could just say like, what is the thing

4:36

that you predicted last year that you were the most right

4:38

about and the most wrong about? So here's

4:40

the thing that I was most right about. I

4:42

said last year, the media's divorce from Twitter will

4:44

begin in earnest, and that to the extent that

4:46

Twitter and the media are inextricably linked, that will

4:48

be much less true at the end of 2023.

4:51

And I am here to tell

4:53

you, I was right about that. X,

4:56

as it is now known, has

4:58

seen an exodus of users, advertisers

5:01

and members of the press. We've

5:03

seen some news organizations, most notably NPR,

5:05

stop posting on X entirely. My little

5:08

publication platformer is also no longer posting

5:10

on X. Part of that is because

5:12

I think a lot of folks just

5:15

can't stomach what X is today. And

5:17

another big part of it is that

5:19

we actually now have alternatives, whether it

5:22

is Threads or Mastodon or Bluesky.

5:24

Really interesting things are happening in the

5:26

race to replace what Twitter used to

5:28

be. In fact, as we're recording

5:31

this, Mark Zuckerberg just announced that they're going

5:33

to begin testing actually federating threads the way

5:35

that they said that they would. So Threads

5:37

and mastodon are going to be sort of

5:39

linking up in a way that I think

5:42

portends really interesting things for 2024. You

5:44

could say that, threaderating it. You could say

5:46

that. I wouldn't. You could say that. So

5:50

yeah, that was one that I got right. And

5:52

one that I got right mostly was I predicted

5:54

that this would be the year of the Mini

5:56

Musk when a lot of Silicon Valley CEOs would

5:58

see what Elon Musk did at Twitter, with all

6:00

his cost cutting and his mass layoffs and his

6:04

focus on extremely hardcore

6:06

engineering and efficiency. And

6:08

I think that has generally come true. We

6:11

saw layoffs this year at a bunch

6:13

of big tech companies, a lot of

6:15

focus on efficiency and stripping out layers

6:17

of management. Not everyone

6:19

has sort of emulated Elon Musk's every move because

6:21

obviously that has not gone well at Twitter, but

6:23

I would say this cost cutting in particular is

6:25

something that we've seen across a lot of the

6:27

tech industry. So unfortunately I was pretty

6:30

right about that. What was the thing you

6:32

were most wrong about? Well I said that

6:34

I predict that the Supreme Court will uphold

6:36

the Texas and Florida social media laws and

6:38

make content moderation illegal there. And

6:40

this has not happened yet. Both

6:43

of these cases are still pending and it

6:46

is still possible that it might happen. But

6:48

so far, it has not come true. And the thing I

6:50

was most wrong about, I predicted that TikTok would be

6:52

banned in the United States in the year 2023. That

6:56

did not happen. And in fact I would say it's

6:58

probably less likely that it

7:00

will be banned now than it looked a year

7:03

ago when lawmakers were furious about

7:05

it and accused it of being a Chinese spying app.

7:07

There's still a lot of concern about TikTok, but I

7:09

don't think it's going to get banned anytime soon. Well

7:12

and I guess we should say that one

7:14

way that you were right was that Montana

7:16

did pass a ban on TikTok, but it

7:18

got blocked in court. And I imagine that

7:20

similar efforts around the country will meet the

7:22

same fate. Yeah. All right. Let's

7:25

get to our predictions for next year, for 2024. We

7:28

started a tradition that I think we should keep this year,

7:30

which is to separate our predictions into

7:32

confidence intervals. So we have high confidence

7:34

predictions. These are predictions that were like

7:37

80 percent sure will happen in 2024.

7:40

Medium confidence predictions, which are like maybe

7:42

50 percent coin flip. And

7:45

then low confidence predictions, which are things that maybe

7:47

are only 20 percent likely

7:49

to happen, but they would be kind of funny if they did.

7:51

I like that. You go first. What's

7:53

your high confidence prediction? Okay. My

7:56

high confidence prediction is that

7:58

threads. Well,

8:01

you know, it's so funny. I wrote threads

8:03

overtakes X in daily users and

8:05

launches in the fediverse. I

8:08

wrote this on December 12th. On

8:11

December 13th, threads announced

8:13

that they were coming to the fediverse. So

8:15

Mark Zuckerberg has preempted me, the rascal. So

8:18

I guess I would amend my

8:21

high confidence prediction to just threads

8:23

overtakes X in daily users. Okay.

8:25

This is essentially just an extension of my

8:27

prediction from last year. So maybe you can

8:30

say that I'm not trying hard enough here,

8:32

but I still see skeptics. In fact, the

8:34

economist Tyler Cowen wrote a blog post this

8:36

month where he said that essentially things are

8:38

going much better on X than the media

8:40

would have you believe. And this is still kind

8:42

of where the action is. And I'm sorry, but

8:44

I just don't think that is the case. I

8:47

see more interesting stuff happening on threads every day.

8:49

And I believe that by the end of 2024,

8:51

it is going to be

8:53

the text based social platform of choice

8:55

for the most daily users. Yeah,

8:58

I can see this, but I'm also like

9:00

I kind of see Tyler Cowen's point too.

9:02

Like it has been surprising to me how

9:04

much longevity Twitter has had, even as I

9:07

think you and I would both agree the

9:09

quality of the discourse over on X

9:11

has been just worsening every day.

9:15

You know, people keep sort of loudly announcing

9:17

their departures. And yet, you know, when big

9:19

news happens, when the open AI stuff was

9:21

happening, when stuff happens in Gaza

9:23

that people really want to know about, a lot

9:26

of them are still going to X to figure

9:28

out what's going on. So as much as I

9:30

would like to see that platform sort

9:32

of decline as a source of

9:34

news for people who care

9:37

about seeing good information, it

9:40

does not seem to be losing users as fast as I would

9:42

have assumed. All right. Well, I guess

9:44

we'll just see who turns out to be right on

9:47

that one. What is your

9:49

high confidence prediction for next year?

9:51

My high confidence prediction is that

9:53

a lawless LLM chatbot will get

9:55

10 million daily active users. What

9:58

do I mean by that? A lawless LLM

10:00

chatbot. I mean, basically a version of

10:02

chat GPT with no rules or very

10:04

few rules. And

10:06

one of the things that makes me confident about this

10:08

is I just think we are starting to see a

10:11

kind of backlash to what people

10:13

feel are the overly restrictive policies

10:15

that some of these chatbot makers

10:18

have, as you've talked about, like if

10:20

you ask anything remotely sexual or spicy

10:23

of one of these chatbots, it shuts

10:25

you down. Like you can't discuss, you

10:27

know, controversial political issues. And

10:29

you also can't do a lot of

10:31

stuff that, you know, people seem to want

10:34

to do, which is like have AI girlfriends

10:36

and boyfriends or have erotic conversations. So up

10:39

until now, there have been sort of

10:41

open source chatbots or chatbots that have

10:43

been tailor made for some of these

10:45

sort of more, you know,

10:48

controversial use cases, but none of them

10:50

have grown very big. And the big

10:52

companies like open AI have really stayed

10:54

away from those markets because they don't

10:56

want to be known as like, you know, spicy

10:58

chatbot companies. But in 2024, I think what we're

11:00

going to see is that these open source models

11:03

are getting good enough that someone is just going

11:05

to take one of them, take all

11:07

of the guardrails off, put a chatbot interface and

11:09

put it online. And I think that will be a

11:11

very popular product. I got to say, Kevin, I

11:13

think this is a good one. And the main

11:15

reason is that there is so much money to

11:17

be made here. It is sitting on the table. Someone's

11:20

eventually going to pick it up, and 2024 is

11:22

as good a year as any. Yeah. All

11:24

right. What's your medium confidence prediction? All right.

11:26

Medium confidence. Google mostly catches up to

11:28

OpenAI in LLM quality and begins

11:30

to neutralize the lead that chat GPT

11:32

has today. Interesting. So you think Gemini

11:34

ultra when it comes out early next

11:36

year is going to settle the debate

11:38

or do you think that they're going

11:40

to keep coming out with stuff that

11:42

is outperforming chat GPT? I think

11:45

it is very possible that the end of next

11:47

year, OpenAI's best product still is a

11:49

little bit better than Google's best product. I just

11:51

think it is going to matter less because I

11:53

think that Google is going to get better and

11:55

better at figuring out how to distribute the

11:57

stuff that it has, getting its large language

12:00

models in front of you helping you figure

12:02

out new ways to use it in the

12:04

places where you're already using Google, whether it

12:06

is Gmail or Chrome or Docs or any

12:08

of the other places where so many people

12:10

are already using Google. So I think next

12:12

year you start to see the differences in

12:15

quality between the LLMs matter less and distribution

12:17

matters more, and Google is very good at distribution.

12:20

I like that prediction. All right. My

12:22

medium confidence prediction is that white collar workers

12:24

will start unionizing to fight AI related job

12:26

loss. So this is something

12:28

that I've been waiting to see ever

12:31

since the Hollywood actors and writers

12:33

strikes is whether sort of workers

12:35

in other industries were going to get worried

12:37

about the use of AI to

12:40

replace their jobs or to

12:42

hurt their economic position

12:44

in some way and would start to

12:46

form unions in industries that have

12:49

not historically had large union presences.

12:51

Think law, think finance, think some

12:53

pockets of media, even tech industry

12:56

unions, I think are possible because

12:58

now programmers, some of them are starting

13:00

to worry that their jobs are going to be replaced.

13:03

And actually the AFL-CIO, which is

13:06

one of the biggest unions in the

13:08

country announced that it was forming a

13:10

partnership with Microsoft to sort

13:12

of study and discuss the ways that AI

13:15

should be incorporated into workplaces.

13:18

So this is already a conversation that is

13:21

happening in a lot of white collar industries

13:23

among workers who are starting to get nervous about

13:26

this stuff. And I think that 2024, maybe the

13:28

year that we start to see workers take real

13:30

action to stand up and say, this technology

13:32

is happening. We're worried about what it means

13:34

for the future of our work. And we're

13:37

going to unionize in order to be able

13:39

to bargain collectively about it. Yeah, I

13:41

agree with you. I think we're going to see a

13:43

lot more workers asking that question as they should be.

13:45

Yep. Totally. All

13:48

right. What's your low confidence prediction? All

13:50

right. And I do want to

13:52

stress this is a low confidence prediction, but

13:54

I, what I've written here is Apple's Vision

13:56

Pro, its mixed reality headset, is successful enough

13:58

to revive interest in mixed reality and the

14:00

Metaverse. Wow, you think the Metaverse is back. I

14:02

think it could be making it come back. You

14:04

know, for so long, I waved away stories about

14:06

AI because there was nothing in my world that

14:09

I was using. And I just kind of couldn't

14:11

really see it. I accepted that maybe it would

14:13

be something someday, but there was nothing that was

14:15

on my computer right now that I could use.

14:18

I wonder if 2024 is going to

14:20

be like that for the Metaverse for

14:22

some people. Now, the Vision Pro is

14:25

incredibly expensive, costs more than $3,000. Most

14:27

people are not going to have one.

14:29

But Apple is the best in the

14:31

world when it comes to creating technological status

14:33

symbols. And when this thing comes out, it

14:35

is going to be an

14:37

object of fascination. I think rich people are going

14:39

to clamor to get them. They'll be bragging to

14:41

all their friends about what they're doing in them.

14:44

And I do think that that could offer a

14:46

little groundswell of support for what Apple is not

14:48

calling the Metaverse. But of course, Meta is. But

14:50

I do think that'll be able to capitalize on

14:52

that. They're making their own improvements to their own

14:54

headsets. And so by the end of 2024, does

14:57

it feel like maybe VR mixed

14:59

reality is a little bit bigger than it was in 2023? I

15:02

do have low confidence that that is true. Yeah,

15:05

I also have low confidence that is true. But I think

15:07

it's possible, which is why I like that you included that

15:09

as your low confidence prediction. All right.

15:12

Mine is Elon Musk will get

15:14

his own Hunter Biden laptop scandal. Oh, tell me more

15:16

about this. So I don't know if you've heard, but

15:18

in 2024, in the United States, there

15:21

will be an election. Yes, of course.

15:23

And notoriously, in presidential election years, people

15:25

get up to all the craziest

15:28

shenanigans on the Internet. In fact, I believe Donald

15:30

Trump is calling it the final election we'll ever

15:32

have. Yes.

15:35

So notoriously, during

15:38

the 2020 election cycle, there was this

15:40

whole drama where the New

15:42

York Post reported a story about Hunter

15:44

Biden's laptop. And it was right before

15:47

the election. And people were speculating

15:49

it was Russian disinformation, that

15:52

this material was hacked. Twitter, which

15:55

at the time was run by Jack Dorsey, decided

15:57

to sort of throttle and cut off access

16:00

to this article while it tried to

16:02

figure out what's going on. This became

16:04

a huge conflagration. People got super upset,

16:07

accused them of censorship. Jack

16:09

Dorsey ended up sort of saying that was

16:12

a mistake, and this became sort of

16:14

one of the things that I would argue

16:16

sort of caused Elon Musk to want to

16:18

buy Twitter and to radically reorient its views

16:20

on content moderation. He was very upset that

16:22

this thing had happened He

16:24

wanted to like get to the bottom of it And

16:26

so he bought the company and changed the rules and

16:29

I think that in 2024

16:31

we may see a similar October

16:34

surprise from the other

16:36

side and I think that when that

16:38

happens when there's something about Donald Trump

16:40

or whoever the Republican nominee is that

16:42

Elon Musk doesn't like or thinks might

16:44

be disinformation or is just

16:46

skeptical of, he is going to

16:49

decide to throttle access to it or

16:51

cut it off, thereby replicating the exact

16:54

mirror image of the situation that made

16:56

him so pissed off that he bought

16:58

Twitter in the first place. Well,

17:00

I like this because Elon Musk himself

17:03

has said that the funniest outcome is

17:05

often the most likely and this would

17:07

be perhaps the funniest outcome for Musk

17:09

acquiring the site. It's true. And

17:11

I think that when that happens, he should hire Yoel

17:13

Roth to fix the problem and restore trust and

17:15

safety at X. I think Yoel might

17:18

not be available for that job. All

17:21

right, so that's my low-confidence prediction. I like these

17:23

predictions. Yeah, I think we covered a lot of

17:25

ground. Next year is gonna be big. I'm sure

17:27

that there are some stories that are not in

17:29

these predictions that will wind up dominating a good

17:31

deal of discussion next year. But I don't know,

17:33

from what we know here in December, this

17:35

seems like as good a series of guesses as

17:37

any. And in fact, I

17:39

think I like these predictions so much that I think we

17:41

should do something that we've never done before on

17:44

this show which is to open prediction markets for

17:46

our predictions. We've

17:48

talked on the show about prediction markets. I

17:51

spend some time on this site called Manifold,

17:53

which allows you to sort of bet on various

17:55

outcomes. And so we will

17:57

go on and create prediction markets for

17:59

all of our 2024 predictions

18:01

so that our listeners and people who agree

18:04

with us or disagree with us can go

18:06

actually wager fake money on whether or not

18:08

these things will happen. Yeah, I would say,

18:10

spend a lot of money on this. Spend

18:13

a lot of... It's not money, it's mana. It's

18:15

their fake currency. And if

18:17

you want to bet on our

18:19

2024 prediction markets, you can find

18:21

them on manifold.markets. And

18:24

I also think we should check in on

18:26

the predictions that have been made about us

18:28

on manifold markets. Yeah. And there are plural

18:30

predictions that have been made about us. There

18:32

sure are. Yeah. So one

18:34

of them that I think we should get your

18:37

take on is, will

18:40

the Hard Fork podcast have one episode before

18:42

the end of 2023 that does

18:44

not talk about AI? I

18:46

mean, first of all, why would you want to have an episode

18:48

of Hard Fork that doesn't talk about AI? Right. I

18:51

love this though, because it's like, it's an active market. There

18:53

have been 25 trades on this market. It's currently sitting at

18:55

10%. So people do not think

18:57

it is likely that we will stop talking about AI.

19:00

And I will say, I love this as a

19:02

form of reader feedback. In the old days, people

19:04

would have written an angry letter to the editor

19:06

saying, like, because I was going to stop talking

19:08

about AI, I'm really getting sick of it. And now you

19:10

can just open a prediction market. We talk

19:12

all the time about talking about things that are not AI.

19:14

In fact, we often do have segments that are not about

19:17

AI. And they're some of my favorites. But it's like, one

19:19

of the things we're committed to is telling the most

19:21

important stories of the moment. And AI just really

19:24

is kind of at the center of all of them. The

19:26

second prediction market that I wanted to check

19:28

in with you about that already exists on

19:30

manifold markets is has the following title. Will

19:34

Casey Newton begin dating an AI before June

19:36

of 2024? Now,

19:39

I promise I did not write this.

19:41

Really? This was made by the user we

19:43

and its current probability is only

19:46

15%. So

19:50

most people do not believe it is likely that you

19:52

will begin dating an AI before June of 2024. But

19:56

what do you think? I mean, I also don't

19:58

think I'll be dating an AI before June

20:00

of 2024. I'd be curious

20:02

to sort of know what is meant here by

20:05

date you know is it meant that like once

20:07

or twice a week like I go to dinner

20:09

by myself and just like type onto

20:12

my phone to do some sort of AI

20:14

boyfriend because that does

20:16

it seem very likely yeah

20:19

I don't know I don't see it now

20:21

do I think that by June many more

20:24

people will have AI boyfriends and girlfriends and

20:26

non-binary friends than they do today like yes

20:28

I do believe that yeah there actually are

20:30

some discussions happening in the comments of this

20:32

particular market about what the resolution criteria are

20:35

one user asked how will this market resolve

20:37

if after a few too many drinks Casey

20:39

Newton has a spicy one night stand with

20:41

an AI but in the morning Casey says

20:43

it was a mistake and something that will

20:46

never happen again but somehow the AI gets

20:48

Casey's phone number and starts calling it all

20:50

hours which Casey ignores one day until one

20:52

day he gets a call during his god

20:54

daughter's quinceañera which causes him to grab the

20:56

phone and shout angrily into the receiver that

20:58

he will never love you before realizing

21:00

that it was actually Kevin calling and he tries

21:02

to explain but Kevin's not having it and by

21:04

the end of the month they've decided to put

21:07

hard fork on an indefinite hiatus so that they

21:09

can both pursue new opportunities well I'm

21:11

glad someone has figured out the most likely outcome

21:13

of all of this and just put it in

21:15

writing so that I didn't have to that's

21:18

a really inspired piece of writing it really is

21:20

whoever wrote that yeah if you are the person

21:22

who left that comment on the market

21:24

for this question, please get in touch. We

21:27

have some creative writing opportunities for you. All

21:32

right, that's enough predictions. When

21:34

we come back we have hard

21:37

questions from our listeners you

22:01

AI isn't coming, it's here now.

22:03

How can leaders stay ahead of

22:05

the curve and make sure they're

22:07

using AI to its fullest potential?

22:09

By listening to the Work Lab

22:11

podcast from Microsoft, hosted by veteran

22:13

technology journalist Molly Wood. Join her

22:15

as she explores how AI innovation

22:17

is transforming creativity, productivity, and

22:19

learning. Follow Work Lab

22:22

on Apple Podcasts, Spotify, or wherever

22:24

you listen. Hi,

22:26

I'm Genevieve Ko from New York Times Cooking.

22:28

There's nothing quite like a fresh batch of

22:30

homemade cookies, and I love to share them

22:32

with friends. Me a friend,

22:34

me guest cookie critic, Cookie

22:36

Monster. Hi, cookie. We baked

22:38

some of our most delicious cookie recipes

22:41

for you to review. Let's start with

22:43

our peppermint brownie cookies. Cookies. Oh,

22:45

that's so sweet. Nom, nom, nom,

22:47

nom, nom, nom. Ah, okay,

22:50

me have some notes. Uh, okay. Yeah,

22:52

Genevieve, me think you missed an important

22:54

step in the hibiscus cookie recipe. Yeah,

22:57

me gonna write it down, excuse me.

22:59

But you didn't try those yet. Exactly. Give,

23:02

cookie, monster, hibiscus cookies. Oh,

23:04

I see. Actually, you know

23:06

what? Me think almond cookies

23:08

need same step too. Excuse

23:10

me. Give, cookie. Head to

23:12

nytcooking.com, where you can find so many

23:14

cookie recipes, sure to be loved by

23:16

your friends, family. And monsters.

23:19

Me still hungry. Me just gonna take

23:21

this. Excuse me. Wait, that's the microphone.

23:23

Nom, nom, nom, nom, nom. Casey,

23:27

we've got a special holiday treat today.

23:29

So periodically on the show, we do

23:31

our segment, Hard Questions, where we solicit

23:33

our listeners' biggest ethical and moral dilemmas

23:35

about technology or some tech thing that

23:37

is going on in their lives, and

23:39

we try to answer them. And

23:42

today, to help us with that, we have

23:44

a very special guest. That's right, because as

23:46

much fun as Kevin and I have giving

23:48

you advice, we know it could be better

23:50

if we bring in one of the funniest

23:52

people I know. And right now, we have

23:54

a chance to do just that, because Jenny

23:56

Slate is coming on Hard Fork. This

23:58

truly is a... holiday treat,

24:00

Kevin, because, you know, like, we like to have

24:02

fun on the show. We like to laugh. We

24:04

like to joke around but we are not professional

24:06

comedians. Like, making people

24:09

laugh is at best like a

24:11

side project. Yes. Jenny

24:13

Slate is a pro. Yeah, she is

24:15

one of the funniest people around. I

24:17

first saw her on Saturday Night Live

24:20

Then she played Mona Lisa Saperstein on Parks

24:22

and Rec one of my favorite roles of

24:24

hers. Since then she's done voices in a

24:26

bunch of great animated films and TV shows,

24:28

including Zootopia, Bob's Burgers, and Big Mouth.

24:30

And she is also the co-creator and voice

24:33

of beloved character Marcel the Shell with Shoes

24:35

On. And incredibly, she's agreed to

24:37

come on hard fork and help us give out

24:39

advice to listeners for the holidays. Have you ever

24:41

been so excited for a segment? No, I'm very

24:44

excited. And I think we should also explain, like,

24:46

how this came about. Yeah, it was a little

24:48

roundabout. So we got this DM, and usually, like,

24:50

I don't check my DMS on Instagram all that

24:52

much because like it's mostly just Cryptocurrency scams and

24:55

stuff, but we did see a DM from someone

24:57

purporting to be Jenny Slate, and she had that

24:59

blue checkmark. Yes. And I thought, well, this is

25:01

exciting. And she said she listens to the

25:03

show and she said I'll just read from her DM

25:06

here If you ever want to talk to a 41

25:08

year old stand-up comedian who's afraid of tech but wants

25:10

to learn about it but is very turned off by

25:12

it, you can just email me. We

25:15

both sort of silently screamed to ourselves and freaked out

25:17

and immediately emailed her. I mean, it was amazing. And

25:20

yet I think Jenny really stands in for a lot

25:22

of the audience here because I think so many listeners

25:24

want to learn about tech and are also turned off

25:26

by it at times. I think you can probably describe

25:28

both of us that way often. And so Jenny,

25:31

I think, is very much simpatico with the

25:33

vibe of Hard Fork. And she's

25:36

here right now. Yeah, bring her in. Amazing. Can

25:48

you say welcome to Hard Fork? Hello, thank you for

25:50

having me. Hey, Jenny, I just was listening and

25:53

I was like, oh no, they want Shania Twain.

25:55

This is like such a letdown and I invited

25:57

myself. The whole thing is weird. I

26:00

haven't even told my other friends that I did this and

26:02

then I didn't tell my husband that I did it and Well,

26:15

so I have to respond to the last thing

26:17

you said Jenny because for a while We have

26:19

thought like it would be really cool if like

26:21

a comedian or actor like wanted to come on

26:23

the show But what are we supposed to do?

26:25

Just like end the show by saying like hey

26:27

if you're like a celebrity DM us and then

26:30

literally, you just DMed us. So it

26:32

was really kind of like the secret moment where

26:34

we manifested it, and I just think that, you

26:36

know, this was meant to happen. I

26:39

just love that so much and in so

26:41

many ways and I listened to

26:43

the podcast I love it so much.

26:45

I really just love the whole thing I

26:47

love your personalities and how much you guys

26:49

make each other laugh it just

26:51

makes me smile so much and I love all the information

26:54

and I Hardly

26:56

know how to work my computer and

26:59

so I was just gonna ask like

27:01

what is your relationship to technology these

27:03

days? Yeah, it's the same

27:05

that it's been since like around Napster

27:09

So like for me for example the other

27:12

day I was parking

27:14

my car and My

27:16

husband was like trying to find the car and he was

27:18

like drop a pin and my response was I don't know

27:20

how to do that It's

27:23

like I'm I know it's not

27:26

hard to learn But there's

27:28

something about technology that makes me feel

27:30

really sometimes I get really petulant about

27:32

it. And I'm just, like,

27:34

the modern equivalent of the old person

27:37

who says like well We had to

27:39

walk six hours You know six miles or hours

27:41

or whatever a long time or like very hard

27:44

Way in the snow to go to school or something

27:47

like I'm just I have a

27:49

computer. I Don't really

27:51

use it for anything Except

27:53

for... I use Microsoft Word.

27:55

a good point classic. Yeah, I love that

27:57

one Recently

28:00

somehow recently my computer started to get

28:02

my text messages and I don't know

28:04

why, but I do. But

28:08

see, I imagine that means your iMessages

28:10

are popping up in a window on your computer? This

28:13

is how all the cool kids are texting these days.

28:15

Is that true? Yeah, it's true Yeah, so that's like

28:17

that's an advanced level move, but it's very helpful

28:19

I find. Yeah. I mean, I

28:21

love it. But then, like, there's a limit

28:24

for example My

28:26

computer doesn't know who a lot of those

28:28

people are and so I don't know why

28:30

some contacts go. But, like, I'm like

28:32

a grandmother and I'm comfortable with it. But

28:36

then I started to be uncomfortable

28:38

with not understanding what was like going

28:40

on in the world and

28:44

So that is why I listen

28:46

to your podcast every week and

28:48

enjoy it and taking the information

28:50

and I feel Less

28:53

alienated from the world at

28:55

large, I guess. So amazing.

28:57

Well, that's wonderful to hear. Thank you for listening

28:59

and for coming on I want to ask

29:01

you about one more tech thing just being

29:04

based on our interactions Which was when we

29:06

were we you know, I think it's okay

29:08

to say we were messaging on Instagram one

29:10

of the social platforms Yeah, and

29:12

you suggested that you you have

29:14

had a maybe complicated relationship with Instagram Which

29:16

I think Kevin and I have had as

29:18

well But like can you speak to kind

29:21

of maybe the ways it was driving you

29:23

crazy a little bit? Yeah, for sure. It

29:26

would sort of make me feel bad about

29:28

myself Even if someone that

29:30

I like was doing something that they enjoyed and I

29:32

was happy for them There would be this like, you

29:34

know, strange feeling of inadequacy that I

29:37

honestly didn't even identify with and

29:39

I Did

29:42

not find a use a greater use for that feeling

29:44

in my life so I was like oof I would

29:47

like to not feel like this and

29:49

so I stopped. It didn't

29:51

make me feel good and I wasn't sure about

29:53

it for me as a way to Do

29:57

the only thing I'm sure it works for

29:59

is for like when I want to tell people that

30:01

I'm doing a stand-up show, or

30:03

that I have a new book, or some sort of work. I

30:07

like it as a bulletin board for that. And

30:09

I don't do really like any DMs, so

30:13

I have one DM with the people who

30:16

know all about tech and computers, and I

30:18

feel like that's fair. Yeah,

30:20

well, we're here to be your personal tech support any

30:22

time your printer breaks. Just be kept. And

30:25

I just want to say, because I'm sort

30:28

of hearing in your voice, maybe

30:30

some insecurities about your relationship with tech. But I

30:32

just want to say, there is such wisdom in

30:34

what you have done. I think one of the

30:37

hardest things for people to do with the personal

30:39

technology in their lives is to just have an

30:41

intentional relationship with it. And if there is something

30:43

in your life that is causing you pain and

30:45

misery, and you got rid of it, congratulations. That

30:48

is actually how you win at tech, is

30:50

doing that. Yeah. We

30:52

have some listener questions to answer with your

30:54

help. Yeah, all right. This is our segment,

30:56

Hard Questions, where we solicit questions or ethical

30:58

dilemmas from our listeners and try our best

31:00

to answer them. So, Jenny, you have gamely

31:02

agreed to help us answer some listener questions.

31:05

First up, we've got a story from listener

31:07

Mike Ford about a new pattern he's seeing

31:09

at weddings. Hey,

31:11

Hard Fork, my name is Mike Ford.

31:13

I live in Wisconsin. I've been a

31:15

wedding photographer and videographer for 15 years,

31:18

and I have noticed

31:20

something that I can't prove, but

31:23

every single groom is just

31:26

using AI to write their

31:28

love letters, and then also

31:30

for the vows, obviously, that

31:32

too. And it's

31:35

something we've discussed a lot at my

31:37

company, just among us, like,

31:39

is it cringe? Is it not? Is

31:42

it inspirational? And I'm telling you, these

31:44

grooms are not using it for inspiration.

31:46

They are copying it verbatim. Anyways,

31:50

I actually saw a groom reading

31:53

it out of the chat GPT

31:55

browser. The groom is bribed

31:57

to be sitting on a bed before their wedding. Wow.

32:01

So Mike doesn't have a specific

32:03

question here, but I think there's a

32:05

lot to unpack. Jenny, what do you

32:07

make of the wedding chat GPT industrial

32:09

complex? Oh my gosh, that really

32:12

shocked me. First of

32:14

all, I love, now I'm gonna just

32:16

misquote him, but I've noticed something that

32:18

I can't prove is, wow.

32:22

I mean, I think I've

32:24

said that before to past partners: I'm

32:26

noticing something that I can't prove. OK,

32:29

I was listening. I was like, well,

32:32

if the people they're getting married to

32:34

don't mind if they're just like, hey,

32:36

Justin, don't just show

32:38

up and have nothing. Just have something to say

32:40

in front of my dad and my stepmom or

32:42

whatever. It's like, OK, fine. If

32:44

that's what's fine with everyone, whatever. It's your

32:47

wedding. That's fine. But oh,

32:50

how absolutely terrible if

32:52

the bride thinks. Because

32:55

I don't know. When

32:58

you're getting married, and I'm not an

33:00

expert, but I've done it two times.

33:02

And one thing that I like about

33:05

getting married and that

33:07

my husband did very well is that I

33:09

was like, this will be the moment when

33:11

he'll show his heart to

33:14

our community and to me. And

33:17

that is a special alchemy and that

33:19

magic of honesty and love and romance

33:21

and ceremony. And so if

33:23

you think that that's coming from a

33:26

human, but in fact, it's coming

33:28

from an AI being. Yeah,

33:34

whatever is right. Casey, do you think

33:36

that A, this is unethical for grooms

33:38

to be doing this, and B, that

33:40

Mike should do something about it? Oh,

33:43

wow. Here's my

33:45

perspective on this. I think it is

33:47

fine to use chat GPT to write

33:49

the first draft of your vows. Most

33:52

people that are getting married for the first time, they

33:55

don't even know what it is supposed to be. Maybe

33:57

they've been to a few weddings and they have some

33:59

vague sense of humor. of what it's like, but they

34:01

want to get some ideas of like, what are kind

34:03

of the main points that I want to hit? I

34:05

don't have any problems with that. You know, most people

34:07

are terrified of public speaking. It's a very scary moment

34:09

to stand up and all your friends and family do

34:12

that. So I'm sympathetic to the grooms. But what I

34:14

will say is, you should write a second draft. You

34:17

should go through and you should say,

34:19

do I love anything specific about my

34:21

fiancé? And like, if so, say that.

34:23

I think that kind of squares the

34:25

circle. You'll have your good vows. You

34:27

got through it with some assistance, which

34:29

is fine. And Mike can stand down

34:31

and he doesn't have to, you know, ruin the

34:33

wedding. I think my red flag here is less

34:35

about the use of chat GPT. I'll confess that

34:37

the first time I heard that people were doing

34:39

this for their wedding vows, my reaction was like,

34:41

oh, that's horrible. And I think these marriages are

34:43

doomed. But

34:45

the more I'm thinking about Mike's question, the thing

34:47

I actually object to is not the use of

34:49

chat GPT. It's the lying about it, right? If

34:52

you're nervous about writing your vows and you want

34:54

to enlist some help from an AI to

34:56

sort of write your first draft, go with God

34:59

fine, as long as your partner is cool with it.

35:02

But if you're going to pass that off as your own thing, I just think that

35:04

sets a precedent of dishonesty that does not portend well

35:06

for the future of the relationship. I will agree. And

35:08

the maybe last thing I would say is, you know,

35:10

when I have seen my friends give their own vows,

35:13

watching them do that in a heartfelt way

35:15

are some of my favorite moments that I've

35:17

ever seen my friends, you know, so try

35:19

not to deny yourself that of like actually

35:21

saying something that you truly believe. Yeah,

35:24

yeah. But I've also heard some truly terrible speeches

35:26

at weddings, which maybe chat GPT would have done

35:28

a better job of. Good point.

35:31

You know, in Forrest Gump, when

35:34

he says I'm not a smart man, but I know

35:36

what love is, that is enough.

35:39

Like not I'm not saying that the people using

35:41

the chat GPT are not smart. I'm just saying

35:43

you can quote something and have it might be

35:45

better. Like it's okay if it's not your own

35:48

words. It's just the lying. I think it's yes.

35:50

Boy, oh boy. Well, how devastating. How strange.

35:54

Sorry. I wonder like how

35:56

many divorce filings that's going to be

35:58

cited in, like, irreconcilable differences, because I

36:00

discovered that my wedding vows were written by

36:02

Chat GPT. Probably more than one. Okay, this

36:05

next question comes from listener Ben Segal, and

36:07

it comes in response to a segment we

36:09

did a few weeks ago about cultivated meat,

36:11

essentially meat that is grown from cells in

36:13

a lab. Casey and I

36:15

tried some cultivated beef in the form

36:17

of meatballs, and we asked this question,

36:19

is lab-grown meat ever going to be

36:21

a viable alternative to our current way

36:23

of getting meat, which is killing animals?

36:26

And so in response to this episode,

36:28

Ben sent in this question. Hey guys,

36:30

this is Ben from Minneapolis. A

36:33

while back, you talked about lab-grown meat,

36:36

and it made me realize that someone

36:39

eventually, down the line, will

36:41

probably create lab-grown human flesh,

36:45

and I'm wondering if you guys

36:47

think it will be ethical to

36:49

eat said lab-grown human flesh. I

36:53

do just wanna point out though, that I

36:55

have no desire to eat human flesh, and

36:58

I recognize that that's exactly what

37:00

somebody hungry for human flesh would say, but

37:03

honestly, it just made me

37:05

think, would it be ethical for these

37:07

people to grow lab-grown human

37:10

flesh, and then eat that

37:13

human flesh? Happy holidays.

37:15

Ha ha ha ha ha ha ha ha

37:17

ha ha ha ha ha ha ha ha

37:19

ha ha ha. What a question. My goodness.

37:21

Jenny, where do you come down on lab-grown

37:24

human flesh? He's definitely said

37:26

hungry for human flesh, and

37:28

kept saying, talking about eating it a

37:30

lot, and it does feel like

37:32

one of those asking for a

37:35

friend kind of questions. Ha ha ha ha ha ha ha ha

37:37

ha ha ha ha ha ha ha ha ha

37:39

ha ha ha. Wow, that just actually completely

37:41

emptied my mind, and I almost forgot the

37:43

question. I'm just, you know, you're just like,

37:45

you start to just have to

37:47

tell your legs, don't run away,

37:49

don't run away. You have to stay

37:51

in your seat for the rest of the question. It's

37:54

hard for me to think that, I

37:57

feel that we should eat that? I don't think

37:59

that we should. But I don't know.

38:01

I don't want to hurt anyone's feelings. I

38:03

just don't. It's not right

38:06

for me. It doesn't feel right.

38:08

Is it ethical? That's kind of

38:10

a confusing way to set that

38:12

up. It sort of feels like it's not a

38:14

real question, right? It's unsettling. It

38:16

is unsettling. Yeah. I think,

38:18

look, what

38:21

we learned during our episode that we did about

38:23

lab grown meat is that it's very expensive to

38:25

make it. And that if you're

38:27

making like human grown flesh, like my hope

38:29

would be that it would be for some sort of

38:31

medical use, you know, to like save a

38:33

life, you know, so the idea

38:35

that there's just sort of like extras on the

38:37

counter for snacking, I think just seems very unlikely

38:39

to me. So any

38:41

situation where I can imagine lab grown meat

38:43

laying around, I actually do think it would

38:45

be unethical to eat it because I think

38:48

it hopefully is there to serve some higher

38:50

purpose. Yeah. So search engine, one of

38:52

our favorite podcasts, hosted by PJ

38:55

Vogt, recently did an episode about

38:57

cannibalism and addressed this question. And

39:00

I've been thinking about it ever since then. And I,

39:02

you know, I think I've come down on the permissive

39:04

side. I would eat the lab grown human flesh. Wait,

39:07

why? Yeah, I don't know something about it. Well,

39:10

so I bite my fingernails. So already

39:12

I am eating some element of my

39:14

body. So why isn't any

39:16

different? You know, and no humans are harmed in

39:18

the production of this meat. And,

39:20

you know, I think it could allow

39:22

people who maybe have a taste for human flesh,

39:25

which is again, not me. I'm. Wait,

39:28

did you submit this question under the

39:30

name? No, I just think I would

39:32

try it. You know, I'm curious. All

39:34

right. Well, I worry that, you know,

39:36

we have the special guests and we've

39:39

already taken the show in such a

39:41

disturbing direction. Oh, it's fine. It's fine

39:43

with me, honestly. Yeah, I'm

39:46

not shocked at all that I was on here for

39:48

10 minutes. And then

39:50

as I go. I get my steak baby

39:53

finished next work. I

40:00

feel bad for Shania Twain, you know, if

40:02

she ends up coming on. Yeah. And

40:05

Shania, if you're listening, please DM us. Yeah. Love

40:08

you. All right. Let's

40:10

say that that was probably the worst question that we

40:12

got. Let's see if the next one is any better.

40:15

Okay. This next one comes to us all the

40:17

way from Copenhagen. And for

40:19

this one, Jenny, we're going to ask

40:22

you to channel your best parenting advice.

40:25

Here's our listener. Hello,

40:27

Hard Fork. Ida Ebenskor

40:30

from Copenhagen here. I

40:32

have a question for you, actually, too. Now,

40:35

it's almost Christmas, and my son

40:38

Uwe, who's nine years old, gave

40:40

me his wish list. And

40:43

there was something on the wish list

40:45

that I couldn't figure out. It said,

40:48

gaming equipment. I

40:51

looked at him and said, Uwe, what do you mean? What

40:54

kind of gaming equipment? And

40:56

he looked at me back, and then he made this,

40:58

like, just sidled towards a

41:01

nearby computer and typed into

41:03

Google, gaming equipment.

41:07

And I asked him, Uwe, did

41:10

ChatGPT write your wish list

41:12

for Christmas? And he

41:14

said, with a little

41:16

smile, not answering the same

41:19

way as when your teacher says,

41:21

did you cheat, Uwe? And he

41:23

never asked me. Yes, it

41:25

did. It did write his wish list for

41:27

Christmas. So my

41:30

questions would be, number

41:32

one, is there

41:34

an age limit for kids

41:36

using ChatGPT? And

41:40

secondly, what kind of

41:42

gaming equipment should I give him?

41:45

Any ideas for these two

41:47

pressing questions? Okay,

41:50

first of all, has there ever been a

41:52

greater tonal shift in the history of podcasting

41:54

than maybe from lab-grown human flesh consumption to

41:56

Goofus Christmas lists? Wow. We're

41:59

really hitting all the high points

42:01

here. Alright, so Jenny, how do you

42:03

feel about kids using chat GPT? It's

42:06

so sweet in a way, like using it

42:08

to be like, what should I want? You

42:11

know, like it's so sweet. It's like wanting

42:13

to belong. Like, you know, it's

42:16

not just that I want something for Christmas.

42:18

I want to want what other kids want.

42:21

And that's very sweet, but

42:23

it also does hurt my heart a little

42:25

bit. I

42:28

want Ida to talk to Uwe about: is

42:30

there any way that he might just like on his own know

42:32

what he wants. That

42:34

is really where I am. That's

42:36

like where I go on this. But generally, like,

42:38

I don't know, kids can

42:40

use the internet for homework and

42:42

stuff. It's just such a different

42:44

thing to use chat GPT. I

42:47

would be very limited, but I'm kind of like a

42:49

strict mom on that kind of stuff. My

42:52

daughter's very little. She's like only allowed

42:54

to watch a screen like once a week and

42:56

she watches Bluey and that's just because I want

42:58

to watch it. I love Bluey.

43:00

I think it is the best of

43:02

the toddler shows. Oh, yeah, yeah,

43:04

yeah, for sure. I just I love it. But

43:07

the other thing is, yeah, I have no idea

43:09

what to tell her about what type of gaming

43:12

equipment to get because I've basically played

43:14

a video like game like four times in my life. And

43:16

the last one I played was Tekken.

43:19

And it was in I want to say 2002.

43:24

Who was your character on Tekken? It was

43:26

like a big panda. Could

43:28

that be right? Great character. Great

43:30

character. Great character. We love her.

43:33

That was very sensible advice to me. You

43:35

know, my thought is like, yes, kids can use

43:38

chat GPT. But like with anything else on the

43:40

internet, you just want to do it with supervision. Right.

43:42

So technically, you're supposed to be 13 or older

43:44

to use chat GPT. They're

43:47

sort of like terms of

43:49

use limit. And

43:51

if you're under 18, you need your parent

43:53

or guardian's permission to make an account. But

43:55

obviously, we know that people are

43:58

using this stuff much younger than that,

44:01

including sometimes with their parents' permission. So

44:04

I think it's fine to have Chat GPT

44:06

write your Christmas list. I think there's nothing

44:08

like, you know, particularly, and I agree that

44:10

it's kind of sweet to want to,

44:13

you know, get a sense

44:15

from sort of the collective hive mind of like

44:17

what a person my age should want. I

44:20

will say on this specific issue of whether

44:22

or not to get Uwe a video game

44:24

system, I have some personal history with

44:26

this because I think from about the age I was

44:28

like seven or eight until like 14 or

44:30

15. Every

44:33

Christmas, I asked my parents for a

44:35

Sega Genesis, and I never got a

44:38

Sega Genesis. And

44:40

you know, I still had many wonderful gifts,

44:42

but they, I think, correctly intuited that if

44:44

I had a Sega Genesis, I would never

44:46

leave the house again. I would

44:48

never make friends, and I would never

44:50

play sports, and I would never

44:52

do any other activities, and that would become my

44:54

life. And I think that was

44:56

probably a wise decision on their part.

44:58

So you know, only Ida

45:01

knows Uwe well enough to know whether he

45:03

is in danger of becoming that kind of

45:05

a shut-in through video games. But I would

45:07

just say, you know, tread carefully because I'm

45:09

glad that my parents restricted my video game

45:11

playing during my formative years. Did you have

45:13

any kind of restrictions on your video game?

45:15

Yeah, you know, limited in terms of maybe

45:18

how many hours a day we could play.

45:20

But you know, we did play video games,

45:22

we did love them. And so that leads

45:24

me to conclude that, like, Uwe should get

45:26

a gaming console for Christmas. I think as

45:28

a nine-year-old, the Nintendo Switch is probably gonna have

45:30

the most stuff on it that he's going to enjoy.

45:32

So I would look there first, but you can also

45:34

just get him a little tablet. There's so many

45:37

cheap little tablets you can get now from Amazon,

45:39

or you can get maybe like a refurbished iPad

45:41

or something. A lot of games on there, and

45:43

then it is also useful for other stuff. So

45:45

you know, you can show them educational videos and

45:47

you know, whatever else you want to do to

45:49

raise your child. So that would be my recommendation

45:52

for gaming equipment for young Uwe. And of course

45:54

we wish you a very Merry Christmas. All

45:57

right. Next question comes to

45:59

us from... Alia DeLand, who

46:02

was very persistent, she actually reached out to us multiple

46:04

times, we see you Alia. And

46:06

Alia has a problem that she wanted some

46:08

advice about. She did something on Amazon that

46:11

she is now feeling guilty about. And we

46:13

don't have a voice memo for this one,

46:15

but I'll just read her message. Here's what

46:17

she said, lightly edited

46:20

for brevity. She said,

46:22

quote, I recently bought off-brand ink

46:24

for my printer. The Amazon seller

46:26

I bought the ink from, which

46:28

was new prime eligible, good reviews,

46:30

et cetera, thanked me for the purchase

46:33

and promised $60 worth of

46:35

Amazon gift cards in exchange for a

46:37

five-star review. I usually

46:40

recycle these postcards, but this one

46:42

promised 60 Amazon bucks. Turns

46:44

out that is the price of my conscience

46:46

because I logged on, left a five-star review

46:48

and received an Amazon gift code in exchange.

46:51

So she asks, on the

46:53

scale of moral repugnance, where does the

46:55

crime of a fake Amazon review fall?

46:57

Am I deceiving my fellow shoppers, aiding

47:00

and abetting some weird internet crime? And

47:02

where did that money even come from?

47:05

So Jenny, we'll start with you. What is

47:07

your take on whether it is unethical to

47:09

accept an Amazon gift card in

47:11

exchange for a disingenuous review? Oh,

47:14

this person feels so bad if they're

47:16

even thinking of the word repugnance, you

47:18

know? I mean, that feels really strong. And

47:22

I don't, there's a, I am at

47:25

this place right now where I'm just like, I just wanna

47:27

try to be

47:29

as forgiving as possible. Like, just

47:32

generally, it seems

47:35

like, yeah, you obviously don't wanna do

47:37

that. So don't do it again, because

47:39

now you found out that you really

47:41

don't like it. So that's good information.

47:43

I don't think it's repugnance. It doesn't

47:45

seem like the most ethical,

47:48

but I also think that it was

47:50

printer ink and not like,

47:53

I don't know, a medical device or like

47:55

a... vaccine.

48:00

Yeah, you know, like

48:03

a baby pacifier that breaks

48:05

in half or something like it, no

48:08

one's really getting hurt. But if

48:10

you're just going to speak about it

48:12

as an ethical issue like, yeah, it's

48:14

not ethical, if you're really going

48:16

to do the hard math on that one,

48:18

but also, oh, hon, come on, give yourself

48:20

a break. You're definitely not going to... I have

48:22

a question for you both, which is do you trust Amazon

48:25

reviews in the year of our Lord 2023? Like

48:28

if you see a product that has tons of great

48:30

reviews, do you think to yourself that must be a

48:32

good product? Or do you think that must have been

48:34

gamed in some way? Because I am now so cynical

48:36

that I think all Amazon reviews are fake, or at

48:39

least a substantial portion of them. We

48:41

know that there is a thriving ecosystem of these

48:43

fake reviews. And so I you know, I love

48:45

that Jenny is bringing a holiday spirit of forgiveness

48:47

to this listener. And I think I think we

48:49

should extend that, you know, at

48:52

the same time, you can also ask

48:54

yourself like, well, what, what world am I

48:56

creating when I do that? The reason that

48:58

this exists is because Amazon charges all sorts

49:00

of fees to these sellers who want to

49:02

be on the platform, and it

49:04

penalizes them heavily for not having

49:06

five star reviews. And so

49:08

they've incentivized this sort of exact behavior, you

49:11

know, so our listener wants to know where

49:13

this money comes from. Well, most people don't,

49:15

you know, most people do throw the postcard

49:17

away, but enough of them actually go through

49:19

with it, that it is essentially worth it

49:21

to them to spend the 60 bucks, get the

49:23

five star review, and now we'll sell more printer

49:26

ink. And the net effect of all that is

49:28

just that prices go up for all of us

49:30

because we're essentially paying all these hidden fees and

49:32

taxes just to like use amazon.com. So I wish

49:34

Amazon would do a better job of ferreting out

49:36

these reviews. And they would tell you that

49:38

they're removing millions of fake reviews every year. Clearly,

49:41

there's more work to be done. So we'll give

49:43

you a pass this time, Alia. But you know,

49:45

maybe maybe reconsider in the new year. I

49:49

never read the reviews. I

49:52

I just asked my friends about

49:54

stuff like in real life. And

49:56

yeah, I guess I

49:58

just I'm also not that much of a like

50:00

a smart shopper, you know, I'm

50:02

just clicking away on junk. And do

50:04

you ever, do you ever

50:07

leave reviews, Jenny? I

50:09

have not ever left a review. No,

50:12

but on a restaurant, on

50:14

a hotel, on anything. Uh,

50:16

I don't believe so. But when

50:19

I'm like, you know, on the

50:21

phone with Verizon, um, you

50:24

know, and talking to the person about like, is my

50:26

phone going to work when I go over here? And

50:31

then at the end, they're like, or, or, you know, or

50:33

the airline, like, you're like, why are you calling these places?

50:35

I'm sure you could do it, you know, online, but I

50:37

always call the people. Um, and they're

50:39

like, would you stay on for a review?

50:42

I always do that.

50:46

Wow. That is super nice. Well,

50:48

they asked me to, and

50:51

they're a person. Yeah. My thing

50:53

is like, you know, you remember like when, when

50:55

you learned that like Uber drivers get kicked off

50:57

the platform, if they get like anything less than

51:00

like a 4.0 rating or

51:02

something. And so from that point on, you like

51:04

only rate people five stars, no matter how horrible

51:06

they are as a driver. Cause like, you don't

51:08

want to like mess up their livelihood, right? It's

51:10

like, maybe you like took a couple wrong turns

51:12

or something, but I don't want to like punish

51:14

you by like getting you booted off the platform.

51:16

So I'm just going to give you five stars.

51:19

I feel like that is happening with a lot

51:21

of other categories of thing. Like if

51:23

I, if I have like a horrible experience at a

51:25

restaurant, like I'm not leaving them a one star review

51:27

because I don't want to tank the whole restaurant. All

51:29

I wanted for them to do is to like, you

51:31

know, fix it up a little bit. I totally agree

51:33

with this. I'm so tired of being asked to review

51:35

things. You know, it's like on DoorDash, I'll order like

51:37

the same four items every week. And then every week

51:39

I get a push notification. It's like, how'd you like

51:41

it? Like leave a review. I was like, my review

51:43

is that I've ordered it 36 times this year. That

51:46

is my review. Do with that information what you

51:48

will. But that's all I have to say about

51:50

it. Right. Yeah. We'll

51:54

be right back. In

52:03

the right hands, AI can help create a

52:06

safer, more equitable future. To

52:08

empower those who will shape our world, Intel

52:10

launched AI for Youth, equipping students

52:12

worldwide with the mindsets and skill

52:14

sets to create responsible AI solutions.

52:17

The program has already inspired one student to

52:20

develop an AI model that can help predict

52:22

depression and other mental health issues. AI

52:25

for Good starts with AI everywhere.

52:27

It starts with Intel. Learn

52:29

more at intel.com/stories. All

52:33

right. All

52:36

right. Next question comes to us

52:38

from a professor at a

52:41

university in Texas. So this

52:43

is from our listener. I

52:45

recently exposed 10 of my students

52:47

for cheating on the midterm exam using chat

52:49

GPT. Their answers

52:51

have the exact same bullet point format

52:54

as chat GPT answers, and they contain

52:56

words that I've never used in class.

52:59

So the process was handled centrally at

53:01

my university, and the final outcome is

53:03

that seven out of 10 students

53:07

stood their ground. Only three

53:09

out of 10 confessed. So

53:11

I'm finding myself in the unfortunate situation

53:13

of having to give a score of

53:16

zero to the three students who admitted

53:18

responsibility and apologized to me, but

53:21

I cannot penalize the other seven,

53:23

who I'm 99% sure have cheated.

53:28

Any thoughts on this? That's a tough one.

53:30

So I have some thoughts on this. Okay. And

53:32

then I want to hear what you guys think of it. All

53:35

right. I think this is a terrible situation, and I hate to call

53:37

out a listener to this show who

53:40

has taken the time to send us a voice

53:42

memo explaining their problem. But I think in this

53:45

case, we need to stop trying

53:47

to accuse people of using chat GPT when

53:50

we don't know for sure that they have.

53:52

I think this is a big deal in

53:54

high schools and colleges all over the country

53:56

right now. There are all these schools

53:58

that have used these chat GPT detector programs

54:01

to try to like catch students in

54:03

the act. We know that these programs

54:05

do not work. They have tons and

54:07

tons of false positives. And

54:10

imagine being a college student and you

54:12

have worked really hard on your essay

54:14

or your paper or your midterm and

54:16

you turn it in and what comes

54:18

back is an accusation that you have

54:20

plagiarized. You know, if that's

54:22

true, it's true, whatever. But if it's false

54:24

and you are falsely accusing people, and by

54:27

the way, these programs, these detector programs falsely

54:29

accuse people all the time, especially it turns

54:31

out people for whom English is a second

54:33

language, you are doing that

54:35

person a deep, deep disservice. And I would

54:38

say actually inflicting like what could be a

54:40

trauma on them because being accused of cheating,

54:42

if you have not cheated and having that

54:44

show up on your transcript or result in

54:47

some disciplinary action from your school is just a

54:49

really bad thing to go through. But

54:52

also in this situation, what happens as this

54:54

listener noted is that you end up punishing

54:57

the honest students who actually cop to

54:59

having done this. Meanwhile, the people

55:01

who are lying about it get away with it. Well, they

55:03

weren't that honest. They did cheat on the test. She

55:06

doesn't know that. There's no way that you can

55:08

tell whether or not. Well, she said three of them

55:10

came forward and said they cheated. Three of them

55:12

came forward and said that they cheated. But I

55:14

just think this is like this is a terrible

55:16

status quo at schools is to like have teachers

55:18

trying to like flag which of

55:20

their students have used chat GPT. It's just

55:22

not gonna work. All right. Let's

55:24

get Jenny's thoughts on this in part because soon your

55:26

child will be in school and maybe will want to

55:28

use chat GPT. And I wonder if you have feelings

55:31

about that. Oh my gosh. Well, the first

55:33

thing that I'm thinking of is like, what a

55:35

bummer if as a learner, your options

55:38

are kind of like either you're so

55:40

you're stressed or you're something

55:43

is not happening for you that like you

55:45

decide to use chat GPT or that you

55:47

don't care. So you're just like, I'm just

55:49

phoning in and I'm using it either way.

55:51

Like that's a bummer because then you won't

55:53

you won't really get what you're supposed to get

55:55

from your education. But in the end, it's like,

55:58

it's hard to understand for me. Just

56:01

listening to Kevin, I'm like, yeah, like how much of

56:03

a job is it like a professor, you know, like

56:05

a professor at a university, is it their job to

56:07

be a disciplinarian and like kind

56:10

of like this like weird new instant magistrate

56:12

about like a technology that's freaking everybody out

56:14

in terms of like, wait, are we actually

56:16

going to learn is information even going to

56:18

get in anymore? Or like, are we going

56:20

to create a society of students that just

56:22

like are trying to kick stuff

56:25

off a list and actually don't have

56:27

much information, they just have like experience

56:29

of trying to get through and in

56:31

the end, like it is up to her

56:34

to be an educator and that's hard enough.

56:36

Yeah, but I guess you can't like with

56:38

a college student. It's

56:40

hard to you know, they skip class like

56:42

they're not, I mean, high schoolers skip class. You're

56:44

talking to like a big dork here. In

56:47

high school, obviously, everybody's there

56:49

for every minute. You're not smoking

56:52

behind the gym. Yeah, not in

56:54

high school. I was, yeah, nose in

56:56

the book. But yeah, in college,

56:58

more hitting the bong. But

57:01

I still like my nana Connie is

57:03

paying for me to go to college.

57:05

If I don't actually like use

57:08

this, it's, you know, kind of a shame for all

57:10

of us, isn't it? I guess it's up to the

57:12

students in the end. But they do have this, you

57:15

know, tantalizing new shortcut that they could

57:17

use. It is tantalizing tantalizing.

57:19

I have a couple of thoughts. You know,

57:21

one, if you feel bad that you punish

57:23

these students who were dishonest, but then they

57:25

were honest, could you offer them some makeup

57:27

work? Could you say like, Look, you were

57:29

you were straight with me in the end,

57:31

here's a makeup. So I'm gonna let you

57:34

earn some of these points back as a

57:36

way of saying, Hey, thank you for showing

57:38

some integrity. So I would I would suggest

57:40

that as a first step. The second step

57:42

is, unfortunately, I do think this listener is

57:44

just going to have to rethink

57:46

their curriculum going forward. And I

57:48

realize what a tedious and exhausting and

57:50

upsetting thing this is to say. But we

57:52

have talked about on the show all year,

57:54

education is going to have to evolve to

57:56

adapt to a world in which chat GPT

57:58

exists and where students can get

58:00

these programs to spit out very credible

58:03

essays, right? We've even talked about some

58:05

potential solutions. You can have students write

58:07

essays in class. You can design curricula

58:09

that asks students to use these programs

58:11

and talk about how they use them

58:13

as kind of assistant, research assistants and

58:15

partners. So I think that will better

58:18

prepare them for the world that they're

58:20

going to live in than a world

58:22

that says absolutely no chat GPT ever.

58:24

And so I would just suggest to this

58:27

professor that this might be the time over

58:29

the winter break to start thinking, okay, how

58:31

can I evolve this curriculum? Great response.

58:33

Thank you. All

58:35

right. Thank you. The next

58:37

question comes to us from a software

58:39

engineer in Massachusetts named David. And

58:42

David has a question. Casey, do you want to read any of

58:44

these, by the way? Oh, I think you're doing a great job.

58:46

I would, but I think you have a very sort of brisk

58:48

manner about you when you do this or that I enjoy. Okay.

58:51

Yeah. The next question comes to

58:53

us from a software engineer in Massachusetts named David. And

58:55

David has a question about a tool his company is

58:57

using that makes him a little uncomfortable. And

59:00

he worries that if customers knew that

59:02

this was being used, this tool, that

59:04

it would make them feel uncomfortable too.

59:07

This tool is called session replay.

59:10

And David says it basically allows

59:12

him to reconstruct and monitor every

59:15

single thing a customer does when

59:17

they go to his company's website.

59:19

Here's David. You can see

59:22

their mouse movements in real time. You can see

59:24

their keyboard presses. They can

59:26

see where they scroll and how long they take

59:28

to do all these things. You

59:30

know, I've seen people type out gift messages

59:33

for things that they're purchasing to loved ones. And

59:36

I've seen them rephrase what they're typing and change

59:39

the words to craft exactly the right

59:42

intimate message for their family member. It

59:44

all just feels a little too much. At

59:48

the same time, these tools help us

59:50

solve problems with our

59:52

website that we would never have been able to solve any other

59:55

way. So I guess my

59:57

hard problem is I really

59:59

want to continue using this tool, but it makes

1:00:01

me very uncomfortable. And I'd love to

1:00:03

get your thoughts. Wow. What

1:00:05

do you think? A gift message? I mean,

1:00:07

I have not, maybe to my

1:00:10

sisters, you are

1:00:12

such a magnificent woman, keep

1:00:14

shining your power out, whatever. There's a happy,

1:00:16

nice, that's a crazy, I would never write

1:00:19

that. I don't even know why I said

1:00:21

that. That's not what I write. But for

1:00:23

most of the presents that I send, unless

1:00:25

it's a baby gift, to

1:00:27

my close friends, and

1:00:30

there are drafts of messages, in

1:00:32

those little gift messages squares on the, and

1:00:35

where you can fill it in that are

1:00:37

like, hey, Turd, here's the delivery from the

1:00:39

dildo farm. Like wrapped

1:00:41

in a fart so that you can eat my shit. Anyhow,

1:00:46

your mom here, she says she likes

1:00:48

me more than you. It's

1:00:54

just, it's nothing, it's

1:00:56

garbage. The idea that somebody

1:00:58

would even see that, it's sort

1:01:00

of funny, but also terrifying. And generally,

1:01:03

yeah, I'm super freaked out by that.

1:01:07

I'm very, very freaked out by that. Why do they

1:01:09

need that? I guess you could tell

1:01:11

me why they need that. That's what, that's why

1:01:13

I like your podcast a lot. They're

1:01:16

saying that they needed to improve the website.

1:01:19

And look, when you're designing software, a thing

1:01:21

that happens a lot is you think you've

1:01:23

made something very easy and then you show

1:01:25

it to a person. They

1:01:27

can't make heads or tails of it. And so being

1:01:29

able to track that every movement on a website might

1:01:31

let you say in a sort of automated way, aha,

1:01:34

they're not doing this thing because they

1:01:36

actually can't find this item on

1:01:39

the menu bar or something. So I'm like

1:01:41

moderately sympathetic to that idea, but

1:01:44

I'm just sort of a big believer in the idea that

1:01:47

if you wouldn't just tell your customers upfront

1:01:49

that you were doing this, you probably shouldn't be doing it.

1:01:51

Yeah, this is my thing. I

1:01:55

understand that this tool is creating some

1:01:57

value for this company, right? At least

1:01:59

ostensibly that. That's why they installed it. They

1:02:01

don't just have a surveillance kink. They

1:02:04

do derive some business value out

1:02:07

of being able to snoop on their customers like this.

1:02:09

I would just say, is the value that you're getting from

1:02:12

that tool greater than the amount of value

1:02:14

you would lose if this came to light?

1:02:17

If your customers knew that you were doing

1:02:19

this, would it destroy

1:02:21

trust and value in your company to

1:02:23

a degree that it actually would dwarf

1:02:25

the gains that you would get from

1:02:27

using this tool? It sounds like from the way this

1:02:29

tool is being described by our listener, it's

1:02:32

actually not worth it. If

1:02:34

you are feeling freaked out about this as

1:02:37

an engineer working at this company, it's a

1:02:39

safe bet that your customers would feel freaked

1:02:41

out about it too. And so I guess

1:02:43

my question is, what should David

1:02:46

do? Well, I mean,

1:02:48

look, it's very hard as a

1:02:50

single employee to be the rabble-rouser

1:02:52

and to go to your bosses

1:02:54

and say this doesn't feel right.

1:02:57

But if he feels comfortable enough, I

1:02:59

do think it is worth raising an

1:03:01

alarm here and just saying, this doesn't

1:03:03

really seem like it is consistent with

1:03:05

our values. And it

1:03:07

may be the case that his bosses come back to him

1:03:10

and say, tough beans, this is just the way it's gonna

1:03:12

be around here and then I think David's gonna have to

1:03:14

make a choice about whether he wants to work at that

1:03:16

company. Unfortunately, I suspect that most tech

1:03:18

companies are collecting massive amounts of

1:03:20

data, often in not a

1:03:23

very straightforward way. And so it might be

1:03:25

hard to find another job that was more

1:03:27

aligned with his values, but at the very

1:03:29

least, I would consider speaking up about this

1:03:31

internally. Yeah, there might also be some sort

1:03:33

of technical solution. There might be a less

1:03:35

invasive way to get similar kinds of information,

1:03:37

maybe not what people are writing in their

1:03:40

gift messages, but tracking where

1:03:43

people's mouses are moving, there might be a way

1:03:45

to get some of the same information in

1:03:47

a way that was less creepy. Yeah, ask chat GPT. You

1:03:50

know how the game like floor, when

1:03:52

the floor is lava? The floor is lava, yeah.

1:03:54

You just have to go from one location to the

1:03:57

other. Now when I'm on any website, I'm gonna be

1:03:59

so. So just

1:04:01

like click that, click that, and you're out. You

1:04:03

know, like I'm gonna just be so careful. The

1:04:06

internet is lava. Yeah, the internet is

1:04:08

lava. Wow. Yeah. At

1:04:10

the end of the day. All right,

1:04:13

this next question comes to us from

1:04:15

a listener who works as a mail

1:04:17

carrier for the postal service named Chard.

1:04:21

Chard has some concerns about how

1:04:23

technology that tracks movement might be

1:04:25

used against them or against mail

1:04:28

carriers someday. Chard writes,

1:04:30

quote, it's been on my mind that

1:04:32

the Android-based scanners we are required to

1:04:34

use as part of our daily

1:04:36

work routines are capable of many

1:04:38

things, including the GPS monitoring we

1:04:41

carriers know of, but potentially they

1:04:43

might one day monitor my stride,

1:04:45

coordination, and other biometric data and

1:04:47

report these out to management. It

1:04:49

strikes me as an ethical quandary, but also

1:04:51

as a hard reality of the kind of

1:04:53

work I do. Are there

1:04:56

any restrictions you're aware of on companies

1:04:58

monitoring or collecting this kind of data?

1:05:00

And if not, should I be asking

1:05:02

my union and asking my legislators to

1:05:05

make banning its collection a priority? What

1:05:08

do you both think? What if I knew that

1:05:10

answer about like what? Yeah, yeah, Chard, I

1:05:12

know. I

1:05:15

know exactly like, you know, the

1:05:18

answer to the first part of your question

1:05:20

about what they're allowed to do. Sorry, but

1:05:22

no, I was just making a joke. I

1:05:24

feel like someone should answer that before I

1:05:26

do because it really has some serious questions

1:05:29

and it's fairly scary, you know, like

1:05:31

that's one of these questions that's like

1:05:33

in the future, you know, it makes

1:05:35

you think that it's like pre, what

1:05:37

is the Minority Report? You know, in

1:05:39

Minority Report, yeah, it's the precogs. It's the

1:05:42

precogs, yeah. Yep, yeah, anyway, the

1:05:44

precogs in the milk baths, I have no idea what

1:05:46

I'm talking about. You guys should go, it's your podcast.

1:05:49

I think you're raising a great point, which is this

1:05:51

question points us toward a future of precogs in milk

1:05:53

baths and so we need to take it seriously, you

1:05:55

know? I would just

1:05:57

also add like, this is not a hypothetical future,

1:05:59

right? Amazon several years ago started

1:06:01

installing AI powered cameras in a lot

1:06:04

of their delivery vans, which tracks not

1:06:06

only like how fast the vans were

1:06:08

going and whether they were breaking any

1:06:11

traffic laws, but also like are drivers

1:06:13

fiddling with the radio? Are they distracted?

1:06:15

Are they drinking coffee? And

1:06:17

it would actually give them scores based on

1:06:20

their ratings, which were used in determining like

1:06:22

who got bonuses and who didn't. So

1:06:24

a lot of Amazon drivers hated this. Some

1:06:27

of them actually quit or threatened

1:06:29

to quit over it. And

1:06:31

I think this is absolutely a fair

1:06:33

thing to take up with the union.

1:06:35

Yes, I totally agree. You know, we

1:06:37

in an interesting way, like this question

1:06:39

and the last question are linked because

1:06:41

one of the unfortunate aspects of the

1:06:43

progress in technology over the past 10

1:06:45

years is that surveillance of every kind

1:06:47

has just gotten way easier. And

1:06:50

we see over and over again that when people want

1:06:52

to surveil something, they just start doing it and they

1:06:54

trust that nobody is going to rise up and say

1:06:56

anything about it. So that's just kind of

1:06:58

like a trend that I'm worried about in general. What I would say when it comes to

1:07:00

this kind of workplace surveillance, I think

1:07:02

the rule the rule here should

1:07:04

be that if the CEO of the company

1:07:07

will not agree to this kind of surveillance

1:07:09

for themself, then they should not subject the

1:07:11

workers to it. Right. If the

1:07:13

CEO of Amazon wants to have an AI powered camera

1:07:15

on him at all times while he is just sort

1:07:17

of going through his workday and it issues a little

1:07:19

report that gets like reviewed by the board, then he

1:07:21

can start talking about putting AI cameras on all of

1:07:23

the delivery vans. Until then, I don't want to hear about

1:07:25

it. I love that. I

1:07:27

like that a lot. I really do. That

1:07:30

feels really good to me. Yeah. Yeah.

1:07:32

Yeah. And

1:07:35

of course, because what would happen if we did that?

1:07:38

We just wouldn't have that much surveillance because rich and

1:07:40

powerful people don't want to be surveilled and they're taking

1:07:42

for granted that they'll be able to get away with

1:07:44

surveilling the less powerful and we just don't want that

1:07:46

dynamic. I really like I do

1:07:48

hate this area of AI. I

1:07:51

sometimes call this bossware. There's like this whole

1:07:53

suite of surveillance technology that

1:07:55

is just used to make sure people

1:07:58

aren't leaving their desks for, like, long

1:08:00

lunch breaks and stuff. And like, I

1:08:02

just, it just boils my blood.

1:08:04

Like, yeah. Like, why are you using this stuff on

1:08:07

your own workers? If you don't trust them, then don't

1:08:09

hire them. Knock it off. Yeah,

1:08:11

it doesn't feel good. And you know, what's

1:08:13

the weird through line between like whatever

1:08:16

whether it's like lab-made meat or

1:08:18

this bossware as you say it's

1:08:20

like there's just this new era of like uncomfortable

1:08:23

areas that we didn't think we would

1:08:25

be in because humans like either it

1:08:27

just like Break social norms,

1:08:30

you know, so we don't do it like we

1:08:32

don't go into these areas We don't like ask

1:08:34

people how many times did you change the radio

1:08:36

while you were driving your mail truck today? because

1:08:39

it's just Rude and like

1:08:41

if their production is like if

1:08:43

they're doing their jobs in a way that is

1:08:45

is working it's just not

1:08:47

appropriate and it's it's

1:08:50

so demeaning and terrible and like there are all these sort

1:08:52

of like relational norms that

1:08:54

we used to have that now

1:08:57

we have these like new shadow areas that we can

1:08:59

go in. Like, on Instagram, you can, like, look at

1:09:01

a picture of your like, you

1:09:03

know, friend's cousin and just, like, stare at

1:09:06

like, you know, your friend's cousin in their

1:09:08

exercise clothes. But like in

1:09:10

life it's like my husband

1:09:12

came into the room and I was holding

1:09:15

a physical picture You

1:09:23

know, it is actually it's a to

1:09:25

be so honest, I

1:09:29

think about this all the time because I just

1:09:31

want to know, like, what are the

1:09:33

norms of how we're all behaving? You know,

1:09:35

like what are the general rules and I

1:09:38

thought it would make a really good stand-up

1:09:40

joke and I did it And

1:09:42

you know, it made just a lot of people

1:09:44

angry They don't want to hear about you know,

1:09:46

whether or not it's okay to do whatever they're

1:09:48

doing And I don't actually mean to judge anyone.

1:09:50

I just mean to be like, you know, we're

1:09:52

all acting kind of differently, right? You

1:09:55

know, like isn't that noticeable and like can

1:09:57

we discuss it? But actually it's it's

1:10:00

very sensitive. It's like really, I think it's actually

1:10:02

even sensitive for me to say like, yeah, I

1:10:04

don't really use my, I don't go on my

1:10:06

social media anymore, except to post for work like

1:10:08

even that will make people mad. It's

1:10:11

a strange drug we're all touching here. It

1:10:14

is well, it's I mean, it's almost sort of like

1:10:16

telling someone that you don't drink, right? Like people immediately

1:10:18

sort of get very defensive, right? Because it feels like

1:10:20

a commentary on them. And of course, it isn't, but

1:10:23

people receive it that way. Okay,

1:10:26

this next one is a little out there.

1:10:28

So maybe like, this one is a little

1:10:30

out there. I like the cannibalism material from

1:10:33

earlier. Yeah, let's end this new mess. Let's

1:10:35

get gross. Come on, you

1:10:38

guys. Fair point. This one, this

1:10:40

one, I would say is just a

1:10:42

little less grounded in reality. So maybe

1:10:44

like take a little micro dose or

1:10:46

something and we'll dive into this. I'm

1:10:48

chewing my mushroom capsule right now. I'm

1:10:51

already there. This one is

1:10:53

from a listener who says they synthesized

1:10:55

their own identity. And they now worry

1:10:57

they might be losing their sense of self.

1:10:59

Oh, boy. Oh, boy. Dear

1:11:01

hard fork team. I'm Fabian, I come from

1:11:03

Italy. And I have synthesized

1:11:06

my own identity half a year ago. And

1:11:09

I gave them the name of Synthiola. To

1:11:12

give you some context, I am an animator

1:11:14

who works with generative AI and I'm making

1:11:16

an animated film about and with

1:11:18

generative AI. The story will

1:11:20

be in a world where everyone has an AI

1:11:23

clone. So I thought, why not trying it on

1:11:25

my own skin to better understand the problems that

1:11:27

arise from that. Basically,

1:11:29

I just trained a Stable Diffusion model

1:11:31

on myself with 100 photos. As

1:11:35

this month's past, I somehow have

1:11:37

turned into different genders, ethnicities, ages,

1:11:39

inserted in all sorts of media

1:11:42

like sitcoms, music videos, and

1:11:44

so on. My

1:11:46

usual identity has become totally fluid

1:11:48

into this media landscape. And my

1:11:51

identity has been boiled down into an icon

1:11:53

and a symbol. Now

1:11:56

when I talk with friends, they sometimes refer to me

1:11:58

as Synthiola. It feels kind

1:12:00

of as if Synthiola were a celebrity of

1:12:02

sorts, since they appear in so much

1:12:05

different media. It even

1:12:07

occurs to me sometimes that when I look

1:12:09

into the mirror, I see Synthiola before I

1:12:11

see Fabian. Synthiola has become

1:12:13

this alter ego of me that is sort of 120%

1:12:15

of me. So

1:12:19

the hard question I would like to ask you

1:12:21

is, am I dooming

1:12:23

my own identity by creating and sharing

1:12:25

so much content of my synthetic self

1:12:28

online? Do

1:12:30

you believe that I'm planting the seeds for myself

1:12:32

to be lost in a sea of replicas that

1:12:34

pretend to be me, losing my

1:12:36

own self? And

1:12:39

am I becoming Synthiola? And

1:12:42

if so, is that a bad thing?

1:12:46

Thank you so much for listening

1:12:48

to this. Okay, I love this question,

1:12:50

actually. I love this question. It

1:12:53

was very beautiful. And I would

1:12:55

just say that even though

1:12:57

he is using a new technology to

1:13:00

explore his identity this way, this sort

1:13:02

of thing is not new. Rockstars

1:13:04

have been doing this for a long

1:13:07

time, right? The other

1:13:09

person he made me think of is Cindy Sherman, you

1:13:11

know, the famous photographer. And

1:13:13

Cindy's whole thing is just taking

1:13:15

all kinds of self-portraits of her,

1:13:17

but just in every incarnation of

1:13:19

herself imaginable, looking like every type

1:13:21

of profession, every type of person.

1:13:23

And she's just been

1:13:25

playing with that for many years and creating

1:13:27

this amazing art of it. I've

1:13:30

never interviewed her, so I don't know what

1:13:32

that sort of play has done with her

1:13:34

own relationship. I'm sure she'd have

1:13:36

some really interesting things to say about it. But in

1:13:38

general, can

1:13:41

or should you use technology to

1:13:43

explore your own identity? Absolutely, yes.

1:13:46

I know. Jenny, what do you think? I

1:13:49

absolutely loved the

1:13:51

way that it's Fabian, right? Fabian, yeah. Yeah,

1:13:54

was just describing their

1:13:57

perspective. But

1:14:00

yeah, I don't think there's anything wrong

1:14:02

with it, and I think it's really

1:14:04

cool to play as much as you

1:14:06

can, and it can feel

1:14:08

dangerous too, because you're like, well, what if there's

1:14:10

a version of this that honestly

1:14:12

just seems superior to me? Maybe this,

1:14:14

what was the name? Synthiola.

1:14:19

Synthiola. Synthiola, which is a

1:14:21

beautiful name. So maybe, yeah, maybe Synthiola

1:14:24

just seems cooler. Yeah. But

1:14:26

it's also a lovely way to

1:14:28

return to your

1:14:31

original self and see what's

1:14:33

there. I don't know, I'm kind of into the whole

1:14:35

thing. I can see why it's scary too, but that

1:14:37

also doesn't make me not into

1:14:39

it, and clearly there's a lot of

1:14:41

introspection. And honestly, if I

1:14:44

could do anything with this message that we just heard,

1:14:46

I would put it at the

1:14:48

start of the, use

1:14:50

it as a creative prompt. Yeah.

1:14:52

For like, it's just like an entire film,

1:14:54

and make that be the voiceover over

1:14:56

the opening credits, like over

1:14:58

black, you just hear this, I don't know,

1:15:01

I'll see you guys, you know, Toronto or

1:15:03

Telluride, in 20 years if

1:15:05

I ever actually, you know, I'm able to do this,

1:15:07

I loved it. Well, and it also, like

1:15:09

you are a great person to answer

1:15:12

this question, I think, because you are

1:15:14

a performer, and are used to like

1:15:16

assuming different identities in different spaces, and

1:15:18

some people, I'm sure, you know, know

1:15:20

you better for one role than another,

1:15:22

and maybe you even get people coming

1:15:24

up to you saying like, oh, it's

1:15:26

Marcel the Shell, or something like that.

1:15:28

So like, do you have

1:15:30

any perspective on that kind of identity

1:15:32

play with respect to AI? Like, are

1:15:34

you tempted at all to make versions

1:15:37

of yourself or your characters in AI

1:15:39

that maybe then you would sort of be able to

1:15:41

experiment with? I definitely already get

1:15:43

enough of like that strange juxtaposition

1:15:45

from just what I do, and

1:15:48

no, I mean, this is not gonna shock you,

1:15:51

but no, I'm not tempted to make AI versions

1:15:55

of myself or my work, but I

1:15:57

do experience a lot of times like

1:15:59

there's... some characters I've played like Mona

1:16:01

Lisa Saperstein, who's like, they

1:16:04

define her as the worst person

1:16:06

on earth. And a lot of,

1:16:08

I think, like, a

1:16:10

lot of people come up to me and say

1:16:12

they like that character. And in those moments, I

1:16:14

feel compelled to subtly prove that I'm not like

1:16:16

her at all. And

1:16:19

then, you know, with Marcel the show,

1:16:21

it has been confusing to me over

1:16:23

the years sometimes because he

1:16:26

is the kind of like closest portrait, I

1:16:28

feel like I could make of my own

1:16:31

psyche and my own personality when it

1:16:33

works the best. It just came out

1:16:35

as like a male shell with no

1:16:38

age. And, but I do think like

1:16:40

Marcel is a lot more confident than me, for

1:16:42

example. And so there are sometimes when people are

1:16:44

like, you're Marcel the shell, there's like a tiny,

1:16:47

just a teeny tiny thing, like a pang of

1:16:49

sadness where I'm like, not yet, like I'm

1:16:51

not, you know, I'm not, I

1:16:54

am using aspirational personality. But yeah, I like to

1:16:56

just keep it in the zones that it's in

1:16:58

right now. Have

1:17:00

you known other actors that will speak

1:17:02

about losing themselves in roles and so

1:17:05

you know, worrying that they've gone too

1:17:07

far, they can't come back to a

1:17:09

place that they went when they stepped

1:17:11

into the shoes of a challenging character?

1:17:13

Yeah, I certainly have. I think maybe I

1:17:15

just haven't had that role

1:17:18

yet. But I definitely have

1:17:20

known actors like especially if they're playing characters

1:17:23

that in one way or another are connected to

1:17:25

their own like family

1:17:27

or cultural trauma or personal trauma,

1:17:29

that can obviously be really, really

1:17:31

hard. But you know, for me,

1:17:35

my work is so various. And I do

1:17:37

play a lot of cartoon animals too. So

1:17:39

that's a real, you know, a real buffer

1:17:41

for me, the real comfort zone in

1:17:44

terms of like that just absorbs stress

1:17:46

and it's not stressful. But yeah.

1:17:49

Well, and that's like kind of the note that

1:17:51

I wanted to end this question on is like,

1:17:53

I think you better than almost anyone could just

1:17:56

speak to the joy of exploring your identity by

1:17:58

playing different characters. Like, my sense is you really enjoy

1:18:00

this. I love it and

1:18:02

I loved being Mona Lisa

1:18:04

because she's so horrible and

1:18:06

she has only like sort

1:18:08

of you know a hundred miles per hour

1:18:10

that's all she can do and she has no

1:18:12

remorse and I actually

1:18:15

can overthink things and I'm

1:18:17

a big like consider this

1:18:19

consider that person and it

1:18:21

really really ruins me

1:18:23

to feel like I might have hurt someone's

1:18:25

feelings so it's a wonderful sort

1:18:28

of like valve to

1:18:30

open up to just be totally horrible and

1:18:32

it does do something for me you know

1:18:34

it's like it feels good

1:18:36

to be able to be that person I

1:18:39

was actually weirdly just discussing this yesterday. Yeah,

1:18:41

yeah. All right. So next

1:18:43

question comes from a listener who

1:18:45

wants to remain anonymous. This

1:18:48

listener did not send a voice memo but

1:18:50

wrote in. This is another one of these

1:18:52

asking-for-a-friend questions, but they literally

1:18:54

said asking for a friend, and then their

1:18:56

question is: is it ethical

1:18:58

to watch AI-generated porn? What's

1:19:01

that? Well, there's just

1:19:03

sort of some things you would want to

1:19:06

know as, like, a follow-up question, right? It's

1:19:08

like, what? How is

1:19:10

this AI-generated porn created? Was

1:19:13

this language model trained

1:19:15

on people who consented to

1:19:18

be in the videos that

1:19:20

they were in? Did

1:19:22

they consent to having their videos used

1:19:25

as part of this model? My guess

1:19:27

is that for anything that exists today, the

1:19:29

answer to those questions is gonna be

1:19:31

no in enough cases that I would

1:19:33

be real, real careful about which AI-generated

1:19:35

porn I was consuming at the moment. Yeah,

1:19:38

I mean, there's also just a

1:19:40

lot of like porn from before

1:19:43

you know, if you need it. Yeah, you know, like

1:19:47

luckily, yeah, you can just find it, like, with

1:19:49

the, you know, the performers from before. They're

1:19:51

still, they're doing it, they're doing it right now, you

1:19:53

know. So it's, like, literally doing it right

1:19:55

Yeah, I mean, they are, they're doing it

1:19:57

right now. And so it's, yeah, at

1:20:00

least there's not going to be

1:20:02

a shortage. But yeah, Casey seems

1:20:05

really, I agree. He

1:20:07

seems right to me. Yeah. Now, here

1:20:09

are some things I would say sort

1:20:11

of in the future, you know, there

1:20:14

are some forms of like kink play

1:20:16

that are quite intense. And

1:20:18

or, you know, maybe even involve violence and

1:20:20

like where the performers might be at some

1:20:22

risk, whether it's like a physical risk or

1:20:25

an emotional risk. If we could offload that

1:20:27

risk to software so that like no one

1:20:29

was harmed in the making of these scenes,

1:20:31

that might be a good thing. So I

1:20:33

don't want to foreclose forever the prospect that

1:20:36

this could like have some societal value,

1:20:38

but I think it's gonna be a ways before we get there. Yeah,

1:20:41

I think the question around consent is the right

1:20:43

one to ask as of now, like a

1:20:46

lot of the stuff you

1:20:48

hear about is like

1:20:50

people being worried that their images are

1:20:53

going to be used without consent turned

1:20:55

into deep fakes put on the internet.

1:20:57

This is something that I know regulators

1:21:00

and lawmakers are very worried about. And

1:21:02

it's a real issue. Or I also

1:21:04

think there's going to be a lot of

1:21:06

celebrities who end up, you know, maybe they've

1:21:09

never done a nude scene in

1:21:11

a movie, but all of a sudden, they

1:21:13

have all these videos of them, you know,

1:21:15

appearing to be naked online that were generated

1:21:17

by AI. And so, you know, that kind

1:21:19

of thing, I think the answer

1:21:22

to is it ethical to watch AI

1:21:24

generated porn, I would say, you

1:21:26

know, I don't see

1:21:28

any reason it's inherently unethical, but I

1:21:30

also think it touches on a lot

1:21:32

of very sensitive issues in areas where you

1:21:35

could behave unethically. One thing about porn always

1:21:37

touching on sensitive issues. That's right. A lot of

1:21:39

times that's part of it. Sorry. I'm sorry. I'm

1:21:41

so sorry to ask to come

1:21:46

on your podcast. And they said that I

1:21:49

know I need this studio, the idea that

1:21:51

I could come on this podcast that's beloved

1:21:53

to me and I've just been so

1:21:55

nervous the entire time. It's so crazy. Somehow

1:21:58

it would take a turn into me. saying something

1:22:00

gross when I listen to it every week and

1:22:02

I think this podcast makes me so happy. I

1:22:04

feel so welcome. I love listening to

1:22:06

these friends. They make each other laugh so hard. There's great

1:22:09

information and then we're here and then I say something and

1:22:11

it's you know it's gross and

1:22:13

I'm so sorry. It was literally

1:22:15

perfect. No, it was perfect. You're

1:22:17

wonderful. No change. Okay, last question.

1:22:20

This one comes to us from

1:22:22

a listener named Ren Kulp who

1:22:24

essentially asks, hey can I

1:22:26

be a member of society in the future

1:22:28

if I don't always want to be on

1:22:31

the internet? That's where I'm at. I'm sorry.

1:22:33

Hey guys, my name is

1:22:39

Ren. I'm from Los Angeles. My

1:22:41

question is I in

1:22:43

the future would love to be on the internet

1:22:45

less, like less social media

1:22:47

presence, less availability. You know

1:22:50

I don't need other people knowing every inch of

1:22:52

my life but I worry sometimes

1:22:55

that if I scale back, that if I leave

1:22:57

the internet, that if I kind of don't check

1:22:59

on it anymore, that the culture might pass me

1:23:01

by, that I might all of a sudden be

1:23:03

the old guy in the room, that I might

1:23:07

not be as up to date on

1:23:10

the happenings of the world if

1:23:12

I were not on the internet

1:23:15

all the time. And so I just

1:23:17

wondered do you think it's possible to

1:23:19

live and be informed and engaged in

1:23:23

aspects of life in

1:23:25

the future without the internet? Thanks.

1:23:28

I feel that. Yeah

1:23:30

speak to that Jenny. I

1:23:33

think there are always levels of like are you on

1:23:35

the cutting edge of what the new language is? It's

1:23:37

like do you know all the new songs? Do you

1:23:40

know like where fashion is? You know like there's always

1:23:42

gonna be people who

1:23:44

want to be at the front of the line sort of

1:23:46

and like have the best view of what's changing and what's

1:23:48

the newest. I think there's a middle ground. I also think

1:23:50

it's like a really cool personal

1:23:53

like personal work to figure out like how

1:23:56

important it is to you. It's not

1:23:58

to be included but to like, know the newest

1:24:00

thing, like, why do you need that? And how important is

1:24:02

it to you? And I don't mean it in a judgy

1:24:04

way, like, why do you need that? But it's

1:24:07

a good question I ask myself that. I

1:24:10

was watching the movie Dumb Money last night. I don't

1:24:12

know if you guys have seen it. I loved it.

1:24:15

I loved it so much. And one of

1:24:17

the things that really got me was that

1:24:19

I was like, I don't

1:24:21

understand how the people are communicating with

1:24:23

each other. Like, I was just completely

1:24:25

floored by how funny everyone was and

1:24:27

how sassy. Like, everyone seemed so cool.

1:24:29

And I was like, wow, that really

1:24:32

has passed me by. I'm not there

1:24:34

anymore. And I think there will

1:24:36

be a layer

1:24:39

of, like, communication and culture and

1:24:42

that kind of style, like, communication

1:24:44

style and, like, visual style.

1:24:46

Like, graphics. Like, I was, like, looking at the graphics and

1:24:48

the way people were, like, making little videos. And I just

1:24:50

was like, I am not on the internet. I've never seen

1:24:52

any of this. Wow.

1:24:54

But I also think that the rest of

1:24:57

the world is still there. And there won't

1:24:59

be this, like, terrible loneliness or shutout if

1:25:01

that is what happened. And I do kind

1:25:03

of live that. I really do. I

1:25:07

also, like, use my social

1:25:09

media so that people can know when I

1:25:11

make my work. Because even though sometimes I'm

1:25:13

in, like, larger projects, I'm actually kind of

1:25:15

like a rather,

1:25:17

like, I feel like my performances are rather

1:25:19

niche. And I, like, need to let people

1:25:21

know when they're happening. And I think there's

1:25:23

a middle ground. And I think it's really

1:25:25

worth it to step off if that feels

1:25:27

right for you. It doesn't have to be

1:25:29

a judgmental thing. It could just be a

1:25:32

happy sort of, like, loosening.

1:25:35

Yeah. I love it. I mean, again, this

1:25:37

goes back to feel comfortable managing your relationship

1:25:39

with technology. You want to take a step

1:25:41

back from something, take a step back from

1:25:43

something. You know, most people do not perform

1:25:46

on the internet for money. It so happens

1:25:48

that the three people making this podcast do.

1:25:50

But, like, most people are not like that. And that's

1:25:52

OK. I think the one thing I

1:25:54

would say, though, is don't

1:25:57

disengage completely. Like, we

1:25:59

do need engaged citizens in this

1:26:01

moment. That's going to mean getting the news

1:26:03

and getting the news in 2023 and 2024

1:26:05

is going to mean going online to get

1:26:07

it. You know, I read

1:26:09

a couple stories after Trump won where people

1:26:11

would just sort of like move to a

1:26:13

secluded area and were just basically like, wake

1:26:15

me up in four years and would like

1:26:17

go to extraordinary lengths to never hear about

1:26:19

anything that was happening. That is not a

1:26:21

recipe for the survival of democracy. Okay, so

1:26:23

if you need to take a step back

1:26:25

because you're upset by what's on the news,

1:26:28

certainly we've all felt that I can totally

1:26:30

respect that. But, you know, keep

1:26:32

at least a little bit engaged.

1:26:34

Yeah, I think the this is something that I

1:26:37

think about a lot because I've always been a

1:26:39

person who likes to know about things like the

1:26:41

minute they happen. Right. We're journalists, we like to

1:26:43

be up to speed. But also like I was

1:26:45

always the person like explaining the new meme at

1:26:47

the dinner party or like, you know, telling people

1:26:50

what this, you know, piece of slang that the

1:26:52

teens were saying on TikTok is, and like,

1:26:54

as I've gotten older, that has become less possible

1:26:56

for me because I just don't like have as

1:26:58

much time to spend scouring the deepest recesses of

1:27:01

the internet anymore. And so I just don't know

1:27:03

stuff. And there's been a surprising amount of like

1:27:05

joy and freedom in that like, I don't have

1:27:07

to know about things the minute they happen. And

1:27:09

if it's important, somebody will tell me or I'll

1:27:12

see it a day later. And it's not the

1:27:14

end of the world. So I would just say

1:27:16

like, like be open Ren to the possibility that

1:27:18

you might actually be happier

1:27:20

and actually more informed if you sort

1:27:23

of allow a little time

1:27:25

to pass between when something happens and when

1:27:27

you hear about it or see

1:27:29

it. Oh, well said. All right.

1:27:31

That's it for our hard questions. Jenny, do you

1:27:33

have any questions for us? Oh,

1:27:36

my gosh, I guess. And I don't

1:27:38

know if it's like, I don't

1:27:41

know if you'll want to answer this. And you

1:27:43

talk about it a little bit or sometimes you

1:27:45

guys like joke, but like, are

1:27:48

there ever days when

1:27:50

you're like me and you

1:27:52

are really, really scared that

1:27:55

it seems that the people who are

1:27:57

making the AI don't, but

1:27:59

they don't know, like, how it works. And I know

1:28:01

you've discussed that a bit, but, um,

1:28:03

and maybe I have missed it and you

1:28:06

have given that, but yeah, is there, um,

1:28:09

are there ever days where you, where

1:28:11

you feel a bit bleak about it? Kevin.

1:28:16

Yeah. I mean, I, I, um, you

1:28:19

know, we've, we've talked about this on the show,

1:28:22

but I sort of went through a period this

1:28:24

year where I was feeling very bleak, not just

1:28:26

sort of in the existential, like we're all going

1:28:28

to die scenario, but like, you

1:28:30

know, I'm a creative person.

1:28:33

I write words for a

1:28:35

living and I was sort of having,

1:28:37

I don't know, you could call

1:28:39

it like a mini existential crisis after chat GPT

1:28:42

came out and it was like, Oh, wait a

1:28:44

minute. It's this thing that I am doing

1:28:46

that I've been doing my whole career.

1:28:49

Like, am I obsolete? Um,

1:28:51

essentially. And you know, I've

1:28:54

come to a better place over the course

1:28:56

of the last year or so on, on

1:28:58

that. I now like, I don't feel like

1:29:01

I, or we, uh, are obsolete. I

1:29:04

don't necessarily think we're all going

1:29:06

to die. Um, and

1:29:08

so I, and I, I've

1:29:10

found that my own, the thing

1:29:12

that I can do during the

1:29:14

periods when I am feeling bleak, that helps me is

1:29:17

just to, um, to

1:29:19

try something new with the technology. Like

1:29:21

if I'm feeling scared about AI, like

1:29:23

I'll go draw a picture with AI

1:29:25

or I'll go like, you know, use

1:29:27

it to solve some like esoteric,

1:29:30

you know, problem that I'm having or teach

1:29:32

myself something. And, um, and then

1:29:34

that, that like just knowing that the technology can

1:29:36

be used for that kind of good stuff as

1:29:38

well as the scary stuff, um, just helps me

1:29:40

balance out my own perspective. I don't know, Casey,

1:29:42

how do you feel about this? I like that.

1:29:45

I mean, yes, I also absolutely feel those moments

1:29:48

of fear. I think that, uh,

1:29:51

there are some really good futures that

1:29:53

are possible. And I think there's some

1:29:55

really scary and bad futures that are

1:29:57

possible. And I think it's uncomfortably close

1:30:00

to a coin flip as to like which world

1:30:02

we wind up living in. And that's scary, right?

1:30:04

I wish I could just kind of relax knowing

1:30:06

that it was all gonna be okay, but like

1:30:09

I don't feel that way. But that's

1:30:11

like why I'm a journalist. I want to

1:30:13

try to understand this stuff better. I want

1:30:15

to explain it to other people. I want

1:30:18

other people who are in positions to act,

1:30:20

to act. I want people who are just

1:30:22

like citizens of this country to vote, right?

1:30:24

And I just want to believe that if

1:30:26

we do those things, we make the good

1:30:28

futures much likelier. And that's how I just

1:30:30

kind of manage the fears day to day.

1:30:33

I love that. That's really nice to know. And I

1:30:35

guess like thinking about it, I'm like, one

1:30:37

of the reasons why I started

1:30:39

listening to your podcast was because

1:30:42

I don't know anything about this area.

1:30:44

I've been sort of like

1:30:46

in rejection of it, but also because I'm

1:30:48

really afraid of it. And

1:30:50

it would be better to hear

1:30:52

like human beings with good personalities

1:30:54

talk about something that makes me

1:30:56

a bit uncomfortable and that I

1:30:59

do feel separate from because the

1:31:01

gap will close a bit and I

1:31:03

will be involved. I'll like be aware

1:31:05

of discussions. And I think

1:31:08

that is really, I know for me,

1:31:10

it's been worth a lot. I love that. It

1:31:12

means so much to us. Yeah, I mean a

1:31:14

lot. Honestly, like we started this show just because

1:31:16

we are so fascinated by the stuff. We wanted

1:31:18

to share it with other people and not just

1:31:20

because it was interesting in an intellectual way, but

1:31:23

because we think it's important to whatever world we

1:31:25

wind up living in. Totally. And I would just

1:31:27

say like Casey, the one thing that I'll disagree

1:31:29

with you on is like this is not a

1:31:31

coin flip because a coin flip implies that it's

1:31:33

total luck. And as we've said

1:31:35

on the show before, as I continue to believe,

1:31:37

like we are in control of

1:31:40

this technology and all technology. We build

1:31:42

it, we deploy it, we make rules

1:31:44

about it. Like it is not purely

1:31:46

a passive role that we have in

1:31:49

deciding how the future goes and a

1:31:51

lot of how the future goes will

1:31:53

depend on the decisions that people in

1:31:56

positions of authority, but also just people who use

1:31:58

this stuff and have a voice and a

1:32:00

platform feel about

1:32:02

it and what they decide to speak up about.

1:32:05

Very well said. All right, Jenny, it has been

1:32:07

the dream of our lives that you are here.

1:32:09

If people want to know what you are up

1:32:11

to next, should they follow you on Instagram or

1:32:14

where would you like to send them? Well,

1:32:16

actually that is true. They should follow

1:32:18

me on Instagram because

1:32:20

I do post about when my new work

1:32:23

is coming out and I'm about to announce

1:32:25

a couple of things that I am truly

1:32:27

thrilled about. I can't do it yet, but

1:32:31

I will soon. What a

1:32:33

tease. I know, sorry. But

1:32:36

I do, it will be there. And

1:32:42

so, yeah, that's a good place. Yeah,

1:32:44

because I don't have Twitter or

1:32:47

X anymore. And it is

1:32:49

Jenny Slate on Instagram. So super easy to find.

1:32:51

It certainly is. I just wanted to say one

1:32:53

more thing to you, Jenny, before you go. And

1:32:56

it's actually something that you once said to your

1:32:58

dear sister, you are a magnificent woman. Keep shining

1:33:00

your power out. Very

1:33:04

well said. Really good. Thank

1:33:27

you. In

1:33:37

the right hands, AI can help create a

1:33:39

safer, more equitable future. To

1:33:41

empower those who will shape our world, Intel

1:33:43

launched AI for Youth, equipping students

1:33:46

worldwide with the mindsets and skill

1:33:48

sets to create responsible AI solutions.

1:33:51

The program has already inspired one student to

1:33:53

develop an AI model that can help predict

1:33:55

depression and other mental health issues. AI

1:33:58

for Good starts with putting AI everywhere.

1:34:00

It starts with Intel. Learn

1:34:02

more at intel.com/stories. Before

1:34:09

we go, Casey, I have a special surprise for you.

1:34:11

Oh boy. Do you remember last year we sang a

1:34:13

special holiday song? I do remember that. And it was,

1:34:16

we let ChatGPT write it and it was to

1:34:18

the tune of Jingle Bells and it was all about

1:34:20

all the tech news that we covered in 2022. It

1:34:22

was a really fun bit.

1:34:24

Well, it was really fun and I decided we

1:34:26

should repeat it this year, but instead of having

1:34:28

ChatGPT write it, I went ahead and wrote

1:34:30

us a holiday song. Oh my goodness. Like you

1:34:32

wrote it yourself? I did. Okay. Okay.

1:34:35

So this is the lyrics to

1:34:38

our holiday song. Okay. We're

1:34:40

gonna sing it together. Are you ready? Yes. Okay.

1:34:42

So this is to the tune of the 12

1:34:44

Days of Christmas. Okay. And it's called Hard

1:34:47

Forkin' Christmas. And

1:34:50

we have a track that's going to come in momentarily

1:34:52

and then you and I are going to sing this

1:34:54

together. Now, are we alternating or we have to sing?

1:34:56

We're singing it all together. Okay. Okay. Is

1:34:59

your singing voice warmed up? No. On

1:35:03

a hard-forkin' Christmas, my true

1:35:05

love gave to me a Bored

1:35:08

Ape NFT. On

1:35:12

a hard-forkin' Christmas, my true

1:35:14

love gave to me two GPUs

1:35:17

and a Bored Ape

1:35:20

NFT. On

1:35:23

a hard-forkin' Christmas, my true

1:35:25

love gave to me three

1:35:28

Cybertrucks, two GPUs

1:35:30

and a Bored Ape

1:35:32

NFT. On

1:35:35

a hard-forkin' Christmas, my true

1:35:38

love gave to me four

1:35:40

Google Bards, three Cyber-

1:35:42

trucks, two GPUs and

1:35:45

a Bored Ape

1:35:47

NFT. On

1:35:49

a hard-forkin' Christmas, my true

1:35:51

love gave to me Sam

1:35:55

Bankman-Fried!

1:35:57

Four Google

1:35:59

Bards! Three Cybertrucks,

1:36:01

two GPUs, and a

1:36:04

Bored Ape NFT. On

1:36:08

a hard-forkin' Christmas, my true love

1:36:10

gave to me six

1:36:13

Meta lawsuits, Sam

1:36:16

Bankman-Fried! Four

1:36:19

Google Bards, three Cyber-

1:36:22

trucks, two GPUs, and

1:36:24

a Bored Ape NFT. Keep

1:36:27

going, this is great. On a

1:36:29

hard-forkin' Christmas, my true love gave

1:36:31

to me seven

1:36:34

robotaxis, six Meta

1:36:36

lawsuits, Sam Bankman-

1:36:38

Fried! Four

1:36:41

Google Bards, three Cyber-

1:36:43

trucks, two GPUs, and

1:36:46

a Bored Ape NFT.

1:36:50

On a hard-forkin' Christmas, my true

1:36:52

love gave to me eight

1:36:55

Bluesky invites, seven

1:36:57

robotaxis, six Meta lawsuits,

1:37:00

Sam Bankman-Fried!

1:37:04

Four Google Bards, three

1:37:06

Cybertrucks, two GPUs,

1:37:08

and a Bored Ape

1:37:11

NFT. On

1:37:14

a hard-forkin' Christmas, my true love

1:37:16

gave to me nine

1:37:18

deepfake scandals, eight

1:37:20

Bluesky invites, seven robotaxis,

1:37:23

six Meta lawsuits, Sam

1:37:25

Bankman-Fried! Four

1:37:29

Google Bards, three Cyber-

1:37:31

trucks, two GPUs, and

1:37:33

a Bored Ape NFT.

1:37:38

On a hard-forkin' Christmas,

1:37:40

my true love gave to me ten

1:37:43

boardroom dramas, nine deep-

1:37:45

fake scandals, eight Bluesky

1:37:47

invites, seven robotaxis, six

1:37:50

Meta lawsuits, Sam

1:37:52

Bankman-Fried! Four

1:37:55

Google Bards, three Cybertrucks, two GPUs,

1:37:58

and a Bored Ape NFT. And

1:38:01

a Bored Ape NFT!

1:38:05

On a hard-forkin' Christmas,

1:38:07

my true love gave to me eleven

1:38:10

VR headsets, ten boardroom

1:38:12

dramas, nine deepfake scandals, eight

1:38:14

Bluesky invites, seven

1:38:17

robotaxis, six Meta

1:38:19

lawsuits, Sam

1:38:21

Bankman-Fried! Four

1:38:23

Google Bards, three Cybertrucks,

1:38:26

two GPUs, and

1:38:28

a Bored Ape NFT. One

1:38:32

more time! On a hard-forkin'

1:38:34

Christmas, my true love gave to

1:38:36

me twelve Worldcoin

1:38:39

orbers, eleven VR headsets,

1:38:41

ten boardroom dramas, nine deepfake scandals,

1:38:43

eight Bluesky invites, seven

1:38:46

robotaxis, six Meta lawsuits, Sam

1:38:49

Bankman-Fried! Four

1:38:53

Google Bards, three Cybertrucks,

1:38:55

two GPUs, and

1:38:59

a Bored Ape NFT!

1:39:03

Happy Holidays,

1:39:05

Casey Happy

1:39:09

Holidays, everybody! Happy Holidays, everyone! Do

1:39:11

they keep data on the most

1:39:13

skipped parts of podcasts? Because I

1:39:15

think we might just set a

1:39:17

new record I think we actually

1:39:19

just got ourselves a new platform for every major podcasting

1:39:21

platform Well, we had a good run Yeah See

1:39:24

you next year See you next year Hard

1:39:27

Fork is produced by Davis Land and Rachel Cohn

1:39:30

We had help this week from Kate LoPresti. We're

1:39:32

edited by Jen Poyant. This episode

1:39:35

was fact-checked by Caitlin Love. Today's

1:39:37

show was engineered by Corey Schreppel. Original

1:39:40

music by Diane Wong, Rowan

1:39:43

Niemisto, and Dan Powell. Our

1:39:45

audience editor is Nell Gallogly. Video

1:39:48

production by Ryan Manning and Dylan Bergeson.

1:39:50

By the way, if you don't already

1:39:52

subscribe, you can check us out on

1:39:54

YouTube at youtube.com/hardfork. Special

1:39:56

thanks to Paula Szuchman, Pui-Wing Tam,

1:39:58

and Jeffrey Miranda. As

1:40:01

always, you can email us at hardfork

1:40:03

at nytimes.com. Happy

1:40:05

holidays, see you next year! Powered

1:40:32

by Snapdragon, the Samsung Galaxy

1:40:35

S23 Ultra elevates your photography

1:40:37

to epic new heights. Snapdragon

1:40:39

processors deliver a color experience

1:40:42

like no other, with sharp, industry-leading 8K

1:40:44

video capture.
