#383 – Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp

Released Thursday, 8th June 2023

Episode Transcript

0:00

The following is a conversation with Mark

0:02

Zuckerberg, his second time on this podcast.

0:05

He's the CEO of Meta that owns

0:07

Facebook, Instagram, and WhatsApp, all

0:10

services used by billions

0:12

of people to connect with each other. We

0:15

talk about his vision for the future of Meta and

0:17

the future of AI in our human

0:20

world.

0:22

And now a quick few second mention of each

0:24

sponsor. Check them out in the description.

0:26

It's the best way to support this podcast.

0:29

We got Numerai for the world's hardest

0:31

data science tournament, Shopify

0:34

for e-commerce, and BetterHelp

0:36

for mental health. Choose wisely,

0:38

my friends.

0:39

Also, if you want to work with our amazing team,

0:41

we're always hiring. Go to lexfridman.com

0:44

slash hiring. And now onto the

0:46

full ad reads. As always, no ads

0:48

in the middle. I find those annoying, but

0:51

these here ads, I try to make interesting.

0:54

Though you may skip them, if you must,

0:56

my friends, but please

0:58

still check out the sponsors. They

1:00

help this podcast out. I enjoy their stuff.

1:02

Maybe you will too.

1:04

This show is brought to you by Numerai,

1:07

a hedge fund that uses AI

1:09

and machine learning to make investment decisions.

1:12

I'm a huge fan

1:14

of real world data sets and real

1:16

world machine learning competitions to

1:19

figure out what works.

1:21

This is not ImageNet.

1:23

This is not an artificial toy

1:25

data set for the development of

1:28

toy systems that illustrate toy

1:31

concepts. Those are the early, early,

1:33

early stages of research. But

1:35

when you really want to see what works,

1:38

you want benchmarks that

1:40

have stakes, that have the highest of stakes,

1:42

especially ones that have money involved. So I'm

1:45

a huge fan, money

1:47

or not, of data sets that represent

1:49

the real world and demonstrate that the system

1:51

can operate in the real world at the highest of stakes.

1:53

That's why I was really interested in autonomous

1:56

vehicles when the stakes are

1:58

life and death. It's safety

1:59

systems, incredibly exciting

2:02

to work on systems that are

2:04

truly real-world datasets.

2:06

Anyway, if that kind of thing interests you, if you're

2:08

a machine learning engineer, head over to

2:11

numer.ai slash

2:13

lex to sign up for a tournament and

2:15

hone your machine learning skills. That's

2:18

Numer.ai

2:20

slash lex for a chance to play against

2:23

me and win share of

2:25

the tournament prize pool.

2:28

This show is also brought to you by Shopify,

2:31

a platform designed for anyone to sell anywhere

2:33

with a great looking online store that brings your

2:35

ideas to life and tools to manage the

2:37

day-to-day operations.

2:40

Operations is such a badass word. I feel

2:43

like you're running things. Anyway, a few

2:46

folks asked me about merch. I'm a huge fan

2:48

of buying merch for the

2:50

podcasts, shows, bands

2:52

I love. I love the commodity

2:54

of merch. I think Shopify is

2:56

a great place to sell merch. I'm definitely

2:59

going to put out some merch. I'm really sorry it's

3:01

been taking forever. I've been working with this incredible

3:04

artist. I just love art. I love

3:07

artistic representation of the funny,

3:09

the profound on a t-shirt that allows

3:11

you to celebrate with others, something super cool. I

3:14

love it. To me, there's nothing promotional

3:17

about it, all that kind of stuff. It's just sharing

3:20

your happiness. Anyway, so I'll definitely

3:22

use Shopify to create

3:24

a merch store

3:25

so that people can share a bit of their happiness

3:28

with others.

3:29

If you have stuff to sell or you have merch to

3:31

sell or you want to share some of

3:33

your happiness with others, sign up for a $1 per

3:36

month trial period at Shopify.com slash

3:38

Lex. That's all lowercase.

3:41

Go to Shopify.com slash

3:43

Lex to take your business to the next

3:46

level.

3:48

This episode is also brought to you by BetterHelp,

3:51

spelled H-E-L-P,

3:53

help. They figure out what you need to match

3:55

you with a licensed professional therapist in under 48

3:57

hours.

3:59

I do a podcast, obviously I'm a

4:02

big fan of talk therapy.

4:04

In fact, when I just listen to podcasts, it's a kind

4:06

of talk therapy because I'm having a

4:08

conversation with the people I'm listening to in my mind.

4:11

Whenever it's an interview show with two

4:13

folks talking, I'm always the third person

4:15

in the room,

4:16

kind of almost participating in the conversation.

4:19

And there's something therapeutic about that.

4:22

So if you're listening to two other people, tell their life

4:24

stories, and you

4:27

are able to project your trauma, your struggles,

4:30

your hopes, your dreams, your triumphs, all that kind

4:32

of stuff onto their life and kind of dance

4:34

with that. Of course, to do that rigorously

4:36

and really just put it all

4:38

out there in a raw and honest way, I think

4:41

that's what therapy is about. There's a

4:43

lot of things you can do for your mental health, but therapy

4:46

is one of the obvious things you should have

4:49

in the

4:49

toolkit of lifestyle

4:52

flourishing. Anyway, BetterHelp

4:55

just makes the whole thing super easy. Super easy

4:57

to sign up, super easy to find,

4:59

a licensed therapist, all of that. It's

5:02

obviously discreet, it's easy, it's

5:04

affordable, it's available anywhere.

5:07

Check them out at betterhelp.com slash

5:09

Lex and save in your first month. That's

5:11

betterhelp.com slash Lex.

5:16

This is the Lex Fridman Podcast.

5:18

And now, dear friends, here's Mark

5:21

Zuckerberg. So,

5:23

you competed in your first Jiu-jitsu tournament, and

5:26

me, as a fellow Jiu-jitsu practitioner and competitor, I

5:29

think that's really inspiring, given all the things you

5:31

have going on. So, I got to ask, what was that

5:34

experience like? Oh, it was

5:36

fun. It was fun. I

5:38

was like, oh, I'm gonna go with that.

5:51

It was fun.

5:52

I know. Yeah, I mean, well, look, I'm a

5:54

pretty competitive person. Yeah. Doing

5:58

sports that basically require your full attention. I

6:00

think is really important to my mental health

6:04

and the way I just stay focused at doing everything I'm

6:06

doing. So I decided to get into martial

6:08

arts and it's awesome. I

6:11

got like a ton of my friends into it. We all trained

6:13

together. We have like a mini

6:15

academy in my garage. And

6:18

I

6:18

guess one of my friends was like, hey,

6:21

we should go do a tournament. I was like, okay,

6:23

yeah, let's do it. I'm not gonna shy away from

6:25

a challenge like that. So yeah, but

6:28

it was awesome. It was just a lot of fun. You

6:30

weren't scared? There was no fear? I don't

6:32

know. I was pretty sure that I'd

6:34

do okay. I like the confidence. Well,

6:37

so for people who don't know, Jiu-Jitsu is a martial

6:39

art where you're trying to break

6:42

your opponent's limbs or choke

6:44

them to sleep

6:46

and do so with grace and

6:48

elegance and efficiency

6:51

and all that kind of stuff. It's

6:53

a kind of art form, I think, that you can do for

6:55

your whole life. And it's basically a game,

6:58

a sport of human chess. You can think of,

7:00

there's a lot of strategy. There's a lot of sort

7:02

of interesting human dynamics of using

7:05

leverage and all that kind of stuff. And

7:07

it's kind of incredible what

7:08

you could do. You could do things like a small

7:11

opponent could defeat a much larger opponent. And

7:13

you get to understand like the way the mechanics of

7:15

the human body works because of that. But

7:17

you certainly can't be distracted. No,

7:21

it's 100% focused. To

7:24

compete, I needed to get around the

7:26

fact that I didn't want it to be like this big

7:28

thing. So I basically just, I rolled up

7:30

with a

7:31

hat and sunglasses and

7:33

I was wearing a COVID mask. And I

7:35

registered under my first and middle name. So Mark

7:37

Elliott. And it wasn't until

7:40

I actually pulled all that stuff off right before I got on the

7:42

mat that I think people knew it was me. So

7:44

it was pretty low key. But you're still a

7:46

public figure. Yeah, I mean, I didn't wanna lose.

7:49

Right. The thing you're partially afraid

7:51

of is not just the losing but being almost like embarrassed.

7:54

It's so raw the sport in that like

7:57

it's just you and another human being. There's a primal

7:59

aspect there.

7:59

Oh yeah, it's great. For a lot of people it can be terrifying,

8:02

especially the first time you're doing the competing,

8:04

and it wasn't for you. I see the look

8:07

of excitement in your face. Yeah, I don't know. It wasn't,

8:09

no fear. I just think part of learning is failing.

8:12

Okay. Right, so, I mean, the main

8:14

thing, like people who train jiu-jitsu,

8:16

it's like you need to not have pride

8:18

because I mean, all the stuff that you were talking about before about

8:21

getting choked or getting a

8:24

joint lock, it's,

8:26

you only get into a bad situation

8:28

if you're not willing to tap once you've already lost.

8:31

Right, and, but obviously when

8:33

you're getting started with something, you're not gonna be an expert

8:35

at it immediately. So you just need to

8:38

be willing to go with that. But I think this is like,

8:40

I don't know. I mean, maybe I've just been embarrassed enough times

8:43

in my life. Yeah. I do think that

8:45

there's a thing where like, you know, as people grow

8:47

up, maybe they don't wanna be embarrassed or anything.

8:49

They've built their adult identity and they

8:51

kind of have a sense of who they are

8:55

and what they wanna project. I

8:58

don't know, I think maybe to some degree, your

9:02

ability to keep doing interesting things

9:04

is your willingness to

9:07

be embarrassed again and go back to

9:09

step one and

9:10

start as a beginner and

9:12

get your ass kicked and, you

9:15

know, look stupid doing things. And,

9:17

you know, I think so many of the things that we're doing, whether

9:19

it's this, I mean, this is just like

9:21

a kind of a physical

9:22

part of my life, but

9:25

running the company, it's like we just take on new

9:27

adventures and,

9:30

you know, all the big things that we're doing, I

9:32

think of as like 10 plus year missions

9:34

that we're on, where, you know, often early

9:37

on, you

9:37

know, people doubt that we're gonna be able to do it and the

9:40

initial work seems kind of silly and our

9:42

whole ethos is we don't wanna wait until something is perfect

9:44

to put it out there. We wanna get it out quickly

9:47

and get feedback on it. And so I don't know. I

9:49

mean, there's probably just something about how I approach

9:51

things in there, but I just kind of

9:53

think that the moment that you decide you're gonna be too

9:55

embarrassed to try something new, then you're not gonna learn

9:57

anything anymore. But like

9:59

I mentioned,

9:59

that fear, that anxiety

10:02

could be there, it could creep up every once in a while. Do

10:05

you feel that in especially stressful

10:07

moments sort of outside of the jiu-jitsu mat,

10:10

just at work?

10:12

Stressful moments, big decision days,

10:15

big decision moments, how do you deal with

10:17

that fear? How do you deal with that anxiety? The

10:19

thing that stresses me out the most is always

10:22

the people challenges. You know, I kind

10:24

of think that, you

10:25

know, strategy questions,

10:28

you

10:29

know, I tend to have enough conviction

10:32

around the values of what we're trying to do

10:34

and what I think

10:36

matters and what I want our company to stand for that

10:39

those don't really keep me up at night that

10:41

much. I mean, I kind of, you know, it's not that

10:43

I get everything right, of course I don't,

10:46

right? I mean, we make a lot of mistakes,

10:50

but I at least have a pretty strong

10:53

sense of

10:54

where I want us to go on that. And

10:57

running a company for almost 20 years now, one

11:00

of the things that's been pretty clear is when

11:02

you have a team that's cohesive,

11:06

you can get

11:07

almost anything done. And,

11:09

you know, you can run through

11:12

super hard challenges, you

11:14

can make hard decisions and push really

11:17

hard to do the best work even, you know,

11:19

and kind of optimize something super

11:21

well. But when there's that

11:23

tension, I mean, that's when things get

11:25

really tough. And, you know, when I talk

11:28

to other

11:28

friends who run other companies and things like that,

11:30

I think one of the things that I actually spend

11:33

a disproportionate amount of time on in running this company

11:35

is just fostering a

11:38

pretty tight core group of

11:40

people who are running the company with

11:43

me. And

11:44

that to me is kind of the thing

11:47

that

11:48

both makes it fun, right, having friends

11:51

and people you've worked with for a while and new people and new

11:53

perspectives, like a pretty tight group who

11:55

you can go work on some of these crazy things with.

11:59

But to me, that's also...

11:59

the most stressful thing is when there's tension, that

12:03

weighs on me. I

12:06

think it's maybe

12:09

not surprising. I mean, we're like a very people focused

12:11

company and it's the people

12:13

is the part of it that weighs

12:16

on me the most to make sure that we get right. But yeah,

12:19

that I'd say across everything that we do is probably the

12:22

big thing. So when there's tension in

12:24

that inner circle of

12:27

close folks. So

12:29

when you trust those

12:31

folks to help you make difficult

12:33

decisions about

12:35

Facebook, WhatsApp, Instagram,

12:40

the future of the company and the metaverse

12:42

with the AI, how do

12:44

you build that close-knit group of folks to

12:48

make those difficult decisions? Is there people that you

12:51

have to have critical voices, very different

12:53

perspectives on focusing on

12:56

the past versus the future, all that kind of stuff?

12:58

Yeah, I mean, I think for one thing, it's just spending

13:01

a lot of time with whatever the group

13:03

is that you wanna be that core group,

13:06

grappling with all of the biggest challenges.

13:08

And that requires a fair amount of openness.

13:11

And so I mean, a lot of how I

13:14

run the company is, and it's like every Monday morning,

13:16

we get our,

13:17

it's about the top 30 people together.

13:20

And this

13:22

is a group that just worked together for a long period

13:24

of time. I mean, people rotate in,

13:26

I mean, new people join, people leave the company,

13:29

people go to other roles in the company. So it's not the

13:31

same group over time, but then we spend,

13:34

a lot

13:35

of times a couple of hours, a lot of the time

13:37

it's,

13:38

it can be somewhat unstructured. Like

13:40

I'll come with maybe a few topics that are

13:43

top of mind for me, but

13:45

I'll ask other people to bring things and people

13:47

raise questions, whether it's, okay, there's an issue

13:50

happening in some country with

13:53

some policy issue. There's like a new technology

13:55

that's developing here. We're having an issue with this partner.

13:59

You know, there's a design trade-off in WhatsApp

14:01

between two things that end

14:04

up being values that we care

14:06

about deeply and we need to kind of

14:08

decide where we want to be on that. And I just think over time,

14:11

when

14:12

by working through a lot of issues with people

14:14

and doing it openly, people develop

14:17

an intuition for each other and a bond and

14:19

camaraderie. And

14:22

to me, developing that is

14:24

like a lot of the fun part of running a company

14:26

or doing anything, right? I think it's like having people

14:29

who are kind of along on the journey that you

14:31

feel like you're doing it with, nothing is ever just one

14:34

person doing it. Are there people that

14:36

disagree often within that group? It's

14:38

a fairly combative group. Okay, so

14:40

combat is part of it. So this is making

14:43

decisions on design,

14:44

engineering, policy,

14:47

everything. Yeah, everything, everything.

14:50

Yeah. I have to ask just

14:52

back to you, just for a little bit, what's your favorite

14:54

submission? Now that you've been doing it, how

14:57

do you like to submit your

14:59

opponent Mark Zuckerberg? I'm

15:02

in.

15:03

Well, first of all, do

15:06

you prefer no gi or gi jujitsu?

15:09

So gi is this outfit you wear that

15:13

maybe mimics clothing so you can

15:15

choke. Well, it's like a kimono. It's like the traditional

15:18

martial arts or kimono. Pajamas.

15:20

Pajamas that

15:22

you could choke people with, yes. Well, it's got the lapels.

15:25

Yeah.

15:27

So I like jujitsu.

15:29

I also really like MMA. And

15:31

so I think no gi more

15:34

closely approximates MMA. And

15:36

I think my style is

15:38

maybe

15:39

a little closer to an MMA style. So

15:42

like a lot of jujitsu players are

15:44

fine being on their back, right? And obviously having a good guard

15:47

is a critical part of jujitsu.

15:49

But in MMA, you don't want to be on your

15:51

back, right? Because even if you have control, you're just taking

15:54

punches while you're on your back. So

15:56

that's no good. Do you like

15:58

being on top? My style as I'm

16:00

probably more pressure and I'd

16:04

probably

16:06

rather be the top player. But I'm

16:09

also smaller. I'm not like a

16:11

heavyweight guy. So from that

16:13

perspective, I think, especially

16:17

because if I'm doing a competition, I'll compete with people

16:19

who are my size, but a lot of my friends are

16:21

bigger than me. So back

16:23

takes probably pretty important because that's where

16:25

you have the most leverage advantage where

16:29

people,

16:30

their arms, your arms are very weak behind

16:32

you. So being

16:34

able to get to the back and take that pretty

16:36

important. But I don't know, I feel like the right strategy

16:38

is to not be too committed to any single submission.

16:41

But that said, I don't like hurting people. So

16:45

I always think that chokes are a

16:47

somewhat more humane way to go

16:49

than joint locks. Yeah,

16:51

and it's more about control. It's less dynamic.

16:54

So you're basically like a Khabib Nurmagomedov

16:57

type of fighter. So let's go, back

16:59

take to a rear naked choke. I think it's like the clean

17:01

way to go. Straightforward answer right

17:03

there. What advice would you give

17:06

to people looking to start learning jiu-jitsu?

17:09

Given how busy you are, given

17:12

where you are in life, you're able to

17:14

do this, you're able to train, you're able to compete

17:16

and get to learn

17:18

something from this interesting art.

17:21

Well, I think you have to be willing to

17:25

just get beaten up a lot. But

17:28

I mean, over time, I think that there's a flow to

17:30

all these things. One of my experiences

17:32

that I think kind of

17:39

transcends

17:41

running a company and the different

17:43

activities that I like doing are, I

17:46

really believe that if you're going to accomplish whatever

17:49

anything, a

17:50

lot of it is just being willing to

17:52

push through and having the grit

17:54

and determination to push

17:57

through difficult situations. For a lot of

18:00

people, that

18:01

ends up being sort

18:03

of a difference maker between the people who

18:06

kind of get the most done and not.

18:08

I mean, there's all these questions about like,

18:11

how many days people wanna work

18:13

and things like that. I think almost all the people who like

18:15

start successful companies or things like that are just are

18:18

working extremely hard. But I think one

18:20

of the things that you learn both by doing

18:22

this over time or very acutely with

18:25

things like jiu-jitsu or surfing is

18:29

you can't push through everything.

18:31

And I think that that's,

18:36

you learn this stuff very acutely doing

18:39

sports compared to running a company because running a company,

18:42

the cycle times are so long. It's like you start

18:44

a

18:45

project and then it's

18:47

like months later or if you're

18:50

building hardware, it could be years later before you're

18:52

actually getting feedback and able to make

18:54

the next set of decisions for the next version of the thing

18:56

that you're doing. One of the things that

18:58

I just think is mentally so nice about

19:00

these

19:01

very high turnaround

19:03

conditioning sports, things like that, is

19:06

you get feedback very quickly. It's like, okay, if I

19:08

don't counter something correctly, you get punched in the face.

19:11

So not jiu-jitsu, you don't get punched in jiu-jitsu,

19:13

but an MMA. There are all these

19:15

analogies between all these things that I think

19:17

actually hold that are like

19:20

important life lessons. It's

19:22

like, okay, you're surfing a wave.

19:26

Sometimes you can't go in the

19:28

other direction on it. There

19:32

are limits to what, it's

19:34

like a foil, you can pump

19:36

the foil and push pretty hard in

19:38

a bunch of directions, but at some level, the

19:42

momentum against you is strong enough, that's

19:45

not gonna work. And I do think that

19:48

that's sort of a humbling,

19:50

but also an important

19:53

lesson for, and I think people who are running

19:55

things or building things, it's like, yeah, a

19:58

lot of the game is just being

19:59

able to kind of push and work

20:02

through complicated things, but you also need to

20:05

kind of have enough

20:06

of an understanding of like which things you just can't push through

20:08

and where the finesse

20:11

is more important. Yeah. What

20:13

are your jujitsu life lessons? Well,

20:17

I think

20:19

you made it sound so simple and

20:22

were so eloquent that it's easy to

20:24

miss, but basically

20:27

being okay

20:28

and accepting the wisdom and the

20:31

joy in the getting

20:33

your ass kicked in the full

20:35

range of what that means. I think that's

20:37

a big gift of the being humbled.

20:41

Somehow being humbled, especially physically

20:43

opens your mind to the

20:46

full process of learning what it means to learn,

20:48

which is being willing to suck

20:51

at something. I think jujitsu is just

20:53

very repetitively, efficiently

20:57

humbles you over and over and over

20:59

and over to where you can carry those lessons

21:02

to places where you don't get

21:04

humbled as much, whether it's research or running

21:06

a company or building stuff, the

21:09

cycle is longer. And jujitsu, you can just

21:11

get humbled in a period of an hour, over

21:13

and over and over and over, especially when you're a beginner,

21:16

you'll have a little person, just

21:19

somebody much smarter than you, just

21:21

kick your ass repeatedly,

21:25

definitively, where there's no argument

21:28

and then you literally tap because

21:30

if you don't tap, you're going to die. So

21:33

this is an agreement, you

21:35

could have killed me just now, but we're friends, so

21:37

we're going to agree that you're not going to. And

21:39

that kind of humbling process, it just does

21:42

something to your psyche, to your ego

21:44

that puts it in its proper context to realize

21:46

that everything

21:49

in this life is like a journey from

21:52

sucking through a hard

21:55

process of improving rigorously, day

22:00

after day after day after day, any kind of

22:02

success requires hard work. Yeah,

22:04

you get some more than a lot of sports, I

22:07

would say, because I've done a lot of them, it really

22:09

teaches you that. And you made it sound so simple.

22:11

Like, it's okay,

22:14

it's part of the process, you just get humbled, get

22:16

your ass kicked. I've just failed and been embarrassed so

22:18

many times in my life that like, you know, it's

22:21

a core competence of this. It's a core

22:23

competence. Well, yes, and there's

22:25

a deep truth to that. Being able to, and you

22:27

said it in the very beginning, which is,

22:29

that's the thing that stops us, especially

22:32

as you get older, especially as you develop expertise

22:34

in certain areas, the not

22:37

being willing to be a beginner in a new area.

22:40

Yeah. Because that's

22:42

where the growth happens, is being willing

22:45

to be a beginner, being willing to be embarrassed,

22:47

saying something stupid, doing something stupid. A

22:51

lot of us that get good at one thing, you wanna show that

22:53

off, and it sucks

22:56

being a beginner, but

22:58

it's where growth happens. Yeah.

23:01

Well, speaking of which, let me ask

23:04

you about AI. It seems like this year,

23:06

for the entirety of the human civilization, is

23:08

an interesting year

23:10

for the development of artificial intelligence. A

23:13

lot of interesting stuff is happening. So,

23:16

Meta is a big part of that. Meta's

23:19

developed LLaMA, which is a 65

23:21

billion parameter model.

23:23

There's a lot of interesting

23:25

questions they can ask here, one of which has to do with

23:28

open source. But first,

23:30

can you tell the story of developing of

23:33

this model and making

23:35

the

23:36

complicated decision of how

23:38

to release it?

23:39

Yeah, sure. I think you're right, first

23:42

of all, that in the last year, there have been

23:44

a bunch of advances

23:46

on scaling up these large

23:49

transformer models. So there's the language equivalent

23:51

of it with large language models, the

23:53

sort of the

23:54

image generation equivalent with

23:56

these large diffusion models.

23:59

There's a lot of fundamental research that's gone into this. And

24:03

meta has taken the approach of

24:07

being quite open and

24:09

academic in our development

24:13

of AI.

24:14

Part of this is we want to have the best

24:16

people in the world researching this and

24:20

a lot of the best people want to know that they're going to be able

24:22

to share their work. So that's part of the

24:24

deal that we have is that we

24:27

can get, if you're one of the

24:29

top AI researchers in the world and come here, you

24:31

can get access to industry scale

24:34

infrastructure. And part

24:37

of our ethos is that we want to share

24:39

what's invented broadly.

24:42

We do that with a lot of the different AI tools

24:44

that we create.

24:45

And Llama is the language model

24:48

that our research team made. And we

24:50

did a

24:51

limited open source

24:53

release for it, which was intended for

24:55

researchers to be able to use it.

25:01

But

25:02

responsibility and getting safety

25:04

right on these is very important. And I

25:06

think that

25:09

for the first one, there were a bunch of questions around whether

25:11

we should be releasing this commercially.

25:14

So

25:15

we kind of punted on that for V1 of Llama and just

25:17

released it for research. Obviously,

25:21

by releasing it for research, it's

25:23

out there. But companies know that they're

25:26

not supposed to kind of put it into commercial releases.

25:29

And we're working on the

25:31

follow-up models for this and thinking

25:33

through

25:36

how exactly this should work for follow-on

25:39

now that we've had time to work on a lot

25:41

more of the safety and the

25:43

pieces around that. But overall, I mean,

25:45

this is...

25:47

I just kind of think that

25:51

it would be good if there

25:54

were a lot of different folks who had the ability

25:58

to build state-of-the-art

26:00

technology here, you know,

26:02

it's not just a small

26:04

number of big companies. But to

26:06

train one of these AI

26:08

models, the state of the art models, is,

26:11

you

26:12

know, just takes,

26:13

you know, hundreds of millions of dollars of infrastructure,

26:15

right? So there are

26:17

not that many organizations in the

26:19

world that

26:20

can do that at the biggest scale today. And

26:24

now, it gets more efficient every day.

26:26

So I

26:28

do think that that will be available to more

26:30

folks over time. But I just think like there's

26:33

all this innovation out there that people can create. And

26:36

I

26:37

just think that we'll also learn

26:39

a lot by seeing what the whole community

26:42

of students and hackers

26:45

and startups and different folks build

26:47

with this. And that's kind of been how we've approached

26:49

this.

26:51

It's also how we've done a lot of our infrastructure. And we took

26:53

our whole data center design and our server

26:55

design and we built this open compute project

26:58

where we just made that public. And part

27:01

of the theory was like, all right, if we make it so that more people can

27:03

use this server design, then that'll enable

27:06

more innovation.

27:07

It'll also make the

27:09

server design more efficient. And that'll make

27:11

our business more efficient, too. So that's worked.

27:13

And we've just done this with a lot of our infrastructure.

27:17

So for people who don't know, you did the limited release,

27:19

I think, in February of this year

27:21

of LLaMA. And

27:24

it got, quote unquote, leaked. Meaning

27:27

like it escaped

27:31

the limited

27:33

release aspect. But it was

27:36

something you probably anticipated, given

27:38

that it's just released to researchers. We shared it with researchers.

27:42

It's just trying to make sure that there's a slow release.

27:47

But from there, I just would love to get your comment

27:49

on what happened next, which is like there's a very

27:51

vibrant open source community that just builds stuff on

27:53

top of it. There's llama.cpp,

27:56

basically stuff

27:58

that makes it more efficient to run on the cloud,

27:59

on smaller computers. There's

28:02

combining with reinforcement

28:05

learning with human feedback, so some of

28:07

the different interesting fine-tuning mechanisms.

28:09

There's then also fine-tuning on GPT-3

28:12

generations. There's GPT4All,

28:15

Alpaca, Colossal

28:17

AI, all these kinds of models just kind of spring up,

28:19

built on top of

28:21

it. What do you think about that? No, I think

28:23

it's been really neat to see. I mean, there's

28:26

been folks who are getting it to run on

28:28

local devices, right? So if you're

28:29

an individual who just wants to

28:32

experiment with this

28:34

at home, you

28:35

probably don't have a large budget to get

28:37

access to a large amount of cloud compute,

28:39

so getting it to run on your local laptop

28:43

is pretty good and

28:45

pretty relevant. And

28:47

then there were things like, yeah, llama.cpp reimplemented

28:51

it more efficiently, so now even

28:54

when we run our own versions of it, we

28:56

can do it on way less compute and it's just way

28:58

more efficient, save a lot of money for

29:00

everyone who uses this, so that is good.
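[Editor's note: as a rough illustration of the local-inference workflow described above, here is a minimal sketch using llama-cpp-python, the Python bindings around llama.cpp. The model path, quantization, and prompt are placeholders for illustration, not anything referenced in the episode.]

# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The weights file below is hypothetical; point it at whatever quantized checkpoint you have.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b-q4.gguf", n_ctx=2048)
result = llm("In one sentence, why does local inference matter?", max_tokens=64)
print(result["choices"][0]["text"])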

29:06

I do think it's worth calling out

29:08

that

29:09

because this was a relatively early release,

29:12

LLaMA isn't

29:14

quite

29:15

as on the frontier

29:17

as, for

29:18

example, the biggest OpenAI models

29:20

or the biggest Google

29:23

models, right? I mean, you mentioned that the

29:25

largest LLaMA model we released had 65

29:28

billion parameters and

29:30

you know, no one knows, I

29:33

guess, outside of OpenAI, exactly

29:35

what the specs are for

29:37

GPT-4, but I think the, you know,

29:39

my understanding is it's like 10

29:41

times bigger and I think Google's

29:43

PaLM model also, I think, has

29:45

about 10 times as many parameters. Now, the

29:47

LLaMA models are very efficient, so they perform

29:49

well for something that's around 65 billion

29:52

parameters. So for me, that was also part

29:54

of this because there's a whole debate around,

29:56

you know, is it good for

29:59

everyone in the world to... to have access

30:01

to the most

30:03

frontier AI models. And

30:05

I

30:06

think as the

30:08

AI models start approaching something

30:10

that's like

30:12

a super human intelligence, that's

30:14

a bigger question that we'll have to grapple with. But right

30:16

now, I mean, these are still

30:18

very basic tools.

30:22

They're powerful in the sense that a lot

30:24

of open source software like databases

30:27

or web servers can enable a lot

30:29

of pretty important things.

30:32

But I don't think

30:34

anyone looks at the current generation

30:36

of LLaMA and thinks it's, you know,

30:39

anywhere near a super intelligence. So I think that

30:41

a bunch of those questions around like, is

30:43

it good to kind of get out there?

30:46

I think at this stage, surely, you

30:48

want more researchers working on it for

30:50

all the reasons that open

30:52

source software has a lot of advantages.

30:54

And we talked about efficiency before, but another one is just

30:57

open source software tends to be more secure

31:00

because you have more people looking at it openly and

31:02

scrutinizing it and finding holes

31:04

in it. And that makes it more safe.

31:07

So I think at this point, it's more,

31:09

I think it's generally

31:11

agreed upon that

31:13

open source software is generally more secure

31:15

and safer than things that

31:17

are kind of developed in a silo where people try

31:19

to get security through obscurity. So

31:21

I think that for the scale of what

31:24

we're seeing now with AI,

31:26

I think we're more likely to get to, you

31:28

know, good alignment and good understanding

31:32

of kind of what needs to do to make this work well by

31:35

having it be open source. And that's something that I think

31:38

is quite good to have out there and happening

31:40

publicly at this point. Meta released a lot

31:42

of models as open source.

31:44

So the Massively Multilingual

31:47

Speech Model, the NHYI model. Yeah, that

31:49

was neat. I mean, I'll ask

31:51

you questions about this, but the point is you've

31:55

open sourced quite a lot. You've been spearheading

31:56

the open source movement. That's

31:59

really positive, inspiring

32:01

to see from one angle, from the research angle,

32:03

of course, there's folks who are really terrified about

32:05

the existential threat of artificial intelligence.

32:08

And those folks will say that, you

32:11

know,

32:13

you have to be careful about the open sourcing step,

32:16

but where do you see the future of open source

32:18

here as part of meta?

32:21

The tension here

32:23

is, do you wanna release the magic

32:25

sauce? That's one tension.

32:28

And the other one is, do you wanna

32:30

put a powerful tool in the hands of

32:34

bad actors, even though it probably has

32:36

a huge amount of positive impact also?

32:38

Yeah, I mean, again, I think for the

32:40

stage that we're at in the development of AI, I

32:43

don't think anyone looks at the current state of things

32:45

and thinks that this is super intelligence. And,

32:49

you know, the models that we're talking about,

32:51

the llama models here are, you

32:54

know, generally an order of magnitude smaller

32:56

than what OpenAI or Google are doing. So

32:59

I think that at least for the stage that

33:01

we're at now,

33:02

the equities

33:04

balance strongly in my view towards

33:06

doing this more openly. I

33:08

think if you got something that was closer to

33:11

super intelligence, then I think

33:13

you'd have to discuss that more and think

33:16

through that a lot more. And

33:18

we haven't made a decision yet as to what we would

33:20

do if we were in that position, but I don't think, I

33:22

think there's a good chance that we're pretty far off from that

33:24

position. So I'm not, I'm certainly

33:27

not

33:28

saying that the position

33:31

that we're taking

33:33

on this now applies

33:35

to every single thing that we would ever do. And, you

33:38

know, certainly inside the company, and we probably

33:40

do more open source work than, you know,

33:42

most of the other big tech companies,

33:44

but we also don't open source everything. With a

33:46

lot of our core kind

33:48

of app code for WhatsApp

33:50

or Instagram or something. I mean, we're not open sourcing

33:53

that. It's not like a general enough

33:55

piece of software that would be useful for a lot of

33:57

people to do different things.

33:59

Whereas the

34:02

software that we do, whether it's like an open source

34:05

server design or

34:07

basically things like memcache,

34:10

or, like, a good example, it was probably

34:12

our earliest project that

34:15

I worked on, it was probably one of the last things that I

34:17

coded and led directly for the

34:19

company. But

34:21

basically this caching tool for

34:25

quick data retrieval,

34:27

these are things that are just broadly useful

34:30

across anything that you want to build.
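[Editor's note: for readers unfamiliar with the caching tool mentioned here, a minimal cache-aside sketch against a memcached server, written with the pymemcache client. The host, port, key names, and the stub data source are illustrative assumptions only.]

# Cache-aside pattern: check memcached first, fall back to the slow source, then cache the result.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumes a memcached instance is running locally

def load_profile_from_database(user_id: str) -> bytes:
    # Stand-in for the real, slower data source.
    return f"profile-data-for-{user_id}".encode()

def get_profile(user_id: str) -> bytes:
    key = f"profile:{user_id}"
    value = cache.get(key)
    if value is None:
        value = load_profile_from_database(user_id)
        cache.set(key, value, expire=60)  # keep it hot for 60 seconds
    return value

print(get_profile("42"))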

34:33

I think that some of the

34:34

language models now have that feel,

34:37

as well as some of the other things that we're building, like the translation

34:39

tool that you just referenced. So

34:42

text to speech and speech to text,

34:44

you've expanded it from around 100 languages to

34:46

more than 1100 languages. And

34:49

you can identify more than, the model can identify

34:51

more than 4,000 spoken languages, which

34:54

is 40 times more than any known previous technology.
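[Editor's note: a rough sketch of multilingual speech-to-text with the publicly released MMS checkpoints through Hugging Face transformers. The checkpoint name, the adapter-switching calls, and the silent placeholder audio are assumptions for illustration; consult the MMS documentation for the exact API.]

# Transcribe 16 kHz audio with an MMS CTC model, switching the language adapter first.
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

processor.tokenizer.set_target_lang("fra")  # choose the target language's adapter
model.load_adapter("fra")

audio = torch.zeros(16_000).numpy()  # placeholder: one second of silence at 16 kHz
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))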

34:57

To me, that's really, really, really exciting

35:00

in terms of connecting the world,

35:02

breaking down barriers that language creates.

35:04

Yeah, I think being able to translate between all of these different

35:08

pieces in real time, this

35:10

has been a

35:12

kind of common

35:13

sci-fi idea that

35:16

we'd all have, whether it's

35:18

earbud or glasses

35:20

or something that can help translate in real time, between

35:24

all these different languages. And that's one that I think technology

35:26

is

35:27

basically delivering now. So

35:30

I think, yeah, I think that's pretty exciting. You

35:33

mentioned the next version of llama. What can

35:35

you say about the next version of llama? What

35:38

can you say about like what were

35:41

you working on in terms of release, in terms of

35:43

the vision for that?

35:45

Well, a lot of what we're doing is taking

35:47

the first version, which was primarily

35:50

this research version

35:52

and trying to now build a version

35:55

that has

35:57

all of the latest state-of-the-art safety

36:00

precautions built in. And

36:04

we're using some more data to train it

36:06

from across our services, but

36:09

a lot of the work that

36:11

we're doing internally

36:12

is really just focused on making sure that this

36:15

is

36:15

as aligned

36:18

and responsible as possible.

36:21

And we're building a lot of our own, we're

36:24

talking about kind of the open source infrastructure, but

36:28

the main thing that we focus on building here, a

36:30

lot of product experience is to help people connect and express

36:32

themselves. So we're gonna,

36:34

I've talked about a bunch of this stuff, but

36:38

you'll have

36:39

an assistant that you can talk to in WhatsApp,

36:42

I think in the future, every creator

36:45

will have kind of an AI agent that

36:48

can kind of act on their behalf that their fans

36:50

can talk to. I wanna get

36:52

to the point where every small business basically has

36:54

an AI agent

36:56

that people can talk

36:58

to for, to do commerce and customer support and things like

37:00

that. So there are gonna be all these different

37:02

things. And

37:05

LLaMA or the language model underlying this is

37:08

basically gonna be the engine that powers that. The

37:10

reason to open source it is that, as

37:14

we did with the first version,

37:17

is that it basically

37:19

it unlocks a lot of

37:21

innovation in the ecosystem, will

37:23

make our products better as well. And

37:26

also gives us a lot of valuable feedback on security

37:28

and safety, which is important for making

37:30

this good. But

37:31

yeah, I mean, the work that we're doing to advance

37:34

the infrastructure, it's

37:37

basically at this point taking it beyond a research

37:39

project into something which is ready to

37:41

be kind of core infrastructure,

37:44

not only for our own products, but hopefully

37:47

for a lot of other things out there too. Do you think

37:49

the LLaMA or the language model

37:52

underlying that

37:53

version two will be open

37:55

sourced? Do

37:58

you have internal debate around that? the pros

38:00

and cons and so on. We

38:02

were talking about the debates that we have internally and I think

38:07

the question is how to do it. I

38:09

think we

38:11

did the research license for V1

38:14

and I think the big thing that

38:16

we're thinking about is basically

38:18

what's the right

38:20

way. So there was a leak that happened,

38:23

I don't know if you can comment on it for V1.

38:25

We released it as a research

38:27

project

38:28

for researchers to be able

38:31

to use, but in doing so we put it out

38:33

there. So we were

38:35

very clear that anyone who uses the

38:38

code and the weights doesn't have a commercial license

38:40

to put into products and we've

38:42

generally seen people respect that. It's like you don't

38:44

have any reputable companies

38:46

that are basically trying to put this into

38:49

their commercial products. But yeah, by

38:51

sharing it with

38:53

so many researchers, it did

38:56

leave the building. But what have

38:58

you learned from that process that you might be able to

39:00

apply to V2 about how

39:03

to release it safely, effectively,

39:06

if you release it? Yeah, well, I think a lot

39:08

of the feedback, like I said, is just around

39:11

different things around how

39:13

do you fine tune models to make them

39:16

more aligned and safer. And you see all

39:18

the different data recipes that

39:21

you mentioned a lot of different projects

39:24

that are based on this. I mean, there's one at Berkeley,

39:26

there's just like all over.

39:30

And

39:32

people have tried a lot of different things and we've

39:34

tried a bunch of stuff internally. So kind

39:36

of where we're making progress

39:39

here, but also we're able to learn from some

39:41

of the best ideas in the community. And I

39:43

think we want to just continue

39:45

pushing that forward. I don't have

39:47

any news to announce if that's what you're asking.

39:50

I mean, this is a thing that

39:54

we're

39:57

still kind of

39:58

actively

39:59

working through the right way to move

40:02

forward here. The details of the secret

40:04

sauce are still being developed.

40:06

I see. Can you comment

40:08

on what do you think of the thing that

40:11

worked for GPT, which is the reinforcement

40:13

learning with human feedback? So doing

40:15

this alignment process, do you find it

40:18

interesting? And as part of that, let me

40:20

ask, because I talked to Yann LeCun before talking to

40:22

you today, he asked me

40:24

to ask,

40:25

or suggested that I ask, do you think LLM

40:28

fine tuning will need to be crowdsourced

40:31

Wikipedia style?

40:32

So crowdsourcing. So

40:35

this kind of idea of how to

40:37

integrate the human in

40:39

the fine tuning of these foundation

40:42

models. Yeah, I think that's a really

40:45

interesting idea that I've talked to Yann about

40:47

a bunch.

40:50

And we

40:51

were talking about

40:53

how do you basically train these

40:55

models to be

40:57

as safe and aligned

40:59

and responsible as possible. And different

41:01

groups out there who are doing development

41:03

test different data recipes

41:06

in fine tuning. But this

41:08

idea that you just mentioned is

41:12

that at the end of the day,

41:14

instead of having kind of one group fine

41:16

tune some stuff and another group

41:19

produce a different fine tuning recipe, and then

41:22

us trying to figure out which one we think works best

41:24

to produce the most aligned model.

41:28

I do think that it

41:31

would be nice if you could get to a point where

41:33

you had a

41:34

Wikipedia style collaborative

41:38

way for a

41:40

kind of a broader community to

41:43

fine tune it as well. Now, there's a lot of challenges

41:45

in that, both

41:47

from an infrastructure

41:48

and like a community management

41:51

and product perspective about how you do that. So I haven't

41:54

worked that out yet. But

41:57

as an idea, I think it's quite compelling and I

41:59

think it goes well with the ethos of

42:01

open sourcing the technology is also finding

42:03

a way to have a community-driven

42:06

training of it.
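[Editor's note: for a concrete picture of the alignment fine-tuning being discussed, here is the standard reward-model preference loss used in RLHF-style pipelines, a Bradley-Terry formulation over human comparisons. This is a generic sketch, not Meta's recipe; the scores are fabricated placeholders.]

# Reward-model objective: push the human-chosen response to score above the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(margin), minimized when the chosen response scores higher.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy usage with made-up reward scores for two prompt/response pairs.
loss = preference_loss(torch.tensor([1.3, 0.2]), torch.tensor([0.4, -0.1]))
print(loss.item())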

42:10

But

42:11

I think that there are a lot of questions on this. In general,

42:14

these questions around what's the

42:17

best way to produce aligned AI models,

42:20

it's very much a research area. And

42:22

it's one that I think we will need to make

42:24

as much progress on as the core

42:27

intelligence capability of the models

42:29

themselves. Well, I just

42:32

did a conversation with Jimmy Wales, the founder

42:34

of Wikipedia. And to me, Wikipedia

42:37

is one of the greatest websites ever created

42:39

and is a kind of a miracle that

42:41

it works. And I think it has to do with something

42:43

that you mentioned, which is community. You

42:45

have a small community of editors

42:48

that somehow work together well. And

42:52

they handle very controversial topics

42:56

and they handle it

42:57

with balance and with grace, despite sort

42:59

of the attacks that will often

43:01

happen. A lot of the time. I mean, it's not,

43:04

it has issues just like any other human system.

43:06

But yes, I mean, the balance is,

43:08

I mean,

43:10

it's amazing what they've been able to achieve. But it's

43:12

also not perfect. And I think that that's,

43:15

there's still a lot of challenges. Right.

43:18

The more controversial the topic, the more

43:21

difficult

43:24

the journey towards quote unquote

43:26

truth or knowledge or wisdom that

43:28

Wikipedia tries to capture. In the same way,

43:30

AI models, we need to

43:33

be able to generate those same things,

43:35

truth, knowledge and wisdom. And how

43:37

do you align those models so that they

43:40

generate something

43:43

that is closest

43:45

to truth? There's these concerns

43:47

about misinformation, all this kind of stuff that

43:50

nobody can define. And

43:53

it's something that we together as a human species

43:56

have to define.

43:57

Like what is truth and how to help AI

43:59

systems generate that. One of the things

44:01

that language models do really well is

44:03

generate convincing sounding things that

44:06

can be completely wrong.

44:08

And so how do you align

44:10

it to

44:12

be less wrong? And

44:16

part of that is the training and part of that is the alignment.

44:19

And however you do the alignment stage. And

44:21

just like you said, it's a very

44:23

new and a very open research

44:25

problem. And I think

44:27

that there's also a

44:29

lot of questions about whether the

44:31

current architecture for LLMs

44:35

as you continue scaling it,

44:37

what happens?

44:39

I mean,

44:41

a lot of what's been exciting in the last year

44:43

is that there's clearly a qualitative breakthrough

44:46

where with some of the GPT models

44:49

that OpenAI put out and that

44:51

others have been able to do as well, I

44:54

think it reached a kind of level of quality where

44:56

people are like, wow, this feels different

44:59

and

45:01

it's gonna be able to be the foundation for building

45:03

a lot of awesome products and experiences

45:06

and value. But I think that the other realization

45:08

that people have is, wow, we just made a breakthrough.

45:13

If there are other breakthroughs quickly,

45:16

then I think that there's the sense that maybe we're

45:18

closer to

45:20

general intelligence. But I think that idea

45:22

is predicated on the idea that I

45:25

think people believe that there's still generally a bunch of additional

45:27

breakthroughs to make and that

45:30

we just don't know how

45:31

long it's gonna take to get there. And

45:33

one view that some people have, this

45:36

doesn't tend to be my view as much, is that

45:38

simply scaling

45:40

the current LLMs and

45:42

getting to higher parameter count models by

45:45

itself will get to something that

45:47

is closer to general

45:49

intelligence. But I don't

45:52

know, I tend to think that there's

45:55

probably more

45:56

fundamental steps that need to be taken

45:59

along the way there.

46:00

But still, the leaps taken

46:02

with this extra alignment

46:04

step are quite incredible, quite

46:07

surprising to a lot of folks. And

46:09

on top of that,

46:10

when you start to have hundreds

46:13

of millions of people potentially using a product

46:15

that integrates that,

46:17

you can start to see civilization

46:19

transforming effects before you

46:21

achieve super, quote unquote

46:23

super intelligence. It could be

46:26

super transformative

46:28

without being a super intelligence. Oh

46:30

yeah, I mean, I think that

46:32

there are gonna be a lot of amazing products

46:35

and value that can be created with the current level

46:37

of technology.

46:40

To some degree, I'm

46:41

excited to

46:43

work on a lot of those products over the next few

46:45

years. And I think it would just create a tremendous

46:48

amount of whiplash if the number

46:50

of breakthroughs keeps, like if they keep

46:53

on being stacked breakthroughs, because I think to some degree, industry

46:56

in the world needs some time to kind

46:58

of build these breakthroughs into

47:00

the products and experiences that we all use that we can

47:02

actually benefit from them.

47:05

But I don't

47:08

know, I think that there's just a,

47:11

like an awesome amount of stuff to do. I mean, I think about

47:13

like all of the,

47:15

I don't know, small businesses or individual entrepreneurs

47:18

out there who,

47:20

you know, now we're gonna be able to get

47:22

help coding the things that they need to go

47:24

build things or designing the things that they need

47:28

or we'll be able to use these models

47:30

to be able to do customer support for the people that

47:33

they're serving over WhatsApp without having

47:35

to, you know, I think that's

47:37

just gonna be, I just think that this is all

47:40

gonna be

47:41

super exciting. It's gonna create

47:43

better experiences for people and just unlock

47:45

a ton of innovation and value. So

47:48

I don't know if you know, but you know,

47:50

what is it, over 3 billion

47:52

people use WhatsApp, Facebook,

47:54

and Instagram. So

47:58

any

47:59

kind of AI-fueled products

48:01

that go into that, like we're talking about

48:03

anything with LLMs, will have

48:05

a tremendous amount of impact. Do you have

48:07

ideas and thoughts about possible

48:10

products that might

48:13

start being integrated into

48:16

these platforms used by so many people?

48:19

Yeah, I think

48:20

there's three main categories of things that

48:22

we're working on. The

48:24

first that I think is probably

48:26

the most interesting

48:33

is there's

48:35

this notion of like,

48:38

you're gonna have an assistant or an agent

48:40

who you can talk to. And I think probably the

48:42

biggest thing that's different about my

48:45

view of how this plays out from what I see with

48:49

OpenAI and Google and others is, everyone

48:51

else is building like the one

48:54

singular AI, right? It's like, okay, you talk

48:56

to ChatGPT, or you talk to Bard,

48:59

or you talk to Bing. And

49:02

my view is that there are going

49:04

to be a lot

49:05

of different

49:08

AIs that people are gonna wanna engage with, just like

49:10

you wanna use a

49:12

number of different apps for different things and you

49:14

have relationships with different people in

49:16

your life who fill different emotional roles

49:19

for you.

49:21

And so I think

49:24

that there are gonna be, people

49:27

have a reason that I think you don't just want like

49:29

a singular AI. And that

49:32

I think is probably the biggest distinction in

49:34

terms of how I think about this. And a

49:36

bunch of these things, I think you'll want an assistant.

49:40

I mean, I mentioned a couple of these before. I think like every creator

49:42

who you interact with will ultimately want

49:45

some kind of AI that can proxy

49:47

them and be something that their fans

49:49

can interact with or that allows them to

49:52

interact with their fans.

49:55

This is like the common creator promise. Everyone's trying

49:57

to build a community and engage

49:59

with people.

49:59

They want tools to be able to amplify themselves more

50:02

and be able to do that. But you only

50:06

have 24 hours in a day. So

50:09

I think having the ability to basically

50:12

bottle up your personality or

50:17

give your fans information about when you're performing a concert

50:20

or something like that, that

50:22

I think is going to be something that's super valuable. But it's not

50:24

just that, again, it's not this idea

50:26

that people are going to want just one singular

50:28

AI. I think you're going to want

50:30

to interact with a lot of different entities. And

50:32

then I think there's the business version of this too, which we've

50:34

touched on a couple of times, which is I

50:39

think every business in the world is going to want

50:41

basically an AI

50:44

that you have your page on

50:46

Instagram or Facebook or WhatsApp

50:48

or whatever. And you want to point people to

50:50

an

50:51

AI that people can interact with, but

50:54

you want to know that that AI is only going to sell your products.

50:56

You don't want it recommending your competitor

50:58

stuff. So it's not like there can

51:00

be just one singular

51:03

AI that can answer all the questions

51:05

for a person because that AI

51:08

might not actually be aligned with you as a business to

51:12

really just do the best job providing support for

51:15

your product. So I think that there's going to be a clear

51:17

need

51:19

in the market and in people's lives

51:22

for there to be a bunch of these. Part

51:24

of that is figuring out the

51:26

research, the technology that

51:29

enables the personalization that you're talking about.

51:31

So not one centralized God-like

51:34

LLM, but just a huge

51:37

diversity of them that's fine

51:39

tuned to particular needs, particular styles,

51:41

particular businesses, particular

51:44

brands, all that kind of stuff. And also enabling,

51:46

just enabling people to create them really easily

51:49

for your own business

51:52

or if you're a creator to be able to help

51:54

you

51:54

engage with your fans. So yeah, I think that

51:56

there's

51:59

There's a clear kind of interesting product

52:02

direction here that I think is fairly

52:05

unique from what any

52:07

of the other big companies are taking. It

52:09

also aligns well with this sort of open source

52:12

approach because again, we sort of believe in

52:14

this more community oriented,

52:17

more democratic approach to building out the

52:20

products and technology around this. We don't think that there's going

52:22

to be the one true thing. We think that there should be

52:24

kind of a lot of development.

52:26

So that part of things

52:28

I think is going to be really interesting. We could go, probably

52:30

spend a lot of time talking about that and the

52:33

kind of implications of

52:35

that approach being different from what

52:37

others are taking. Then

52:40

there's a bunch of other simpler things that I think we're also going

52:42

to do. Just going back to your question around how

52:45

this finds its way into like what do we build?

52:48

There are going to be a lot of simpler things around, okay,

52:51

you

52:54

post photos on Instagram and

52:56

Facebook and

52:58

WhatsApp and Messenger and you

53:00

want the photos to look as good as possible. So having

53:03

an AI that you can just take a photo and then

53:05

just

53:05

tell it, okay, I want to edit this thing or describe

53:08

this. It's like, I think we're going to have tools that are just way

53:11

better than what we've historically had

53:13

on this. That's

53:15

more in the image and media generation side than the

53:17

large language model side, but it

53:20

all kind of plays off of advances

53:22

in the same space.
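As a rough sketch of what "just tell it how to edit the photo" can look like with today's open tooling, here is a minimal example using the InstructPix2Pix pipeline from Hugging Face diffusers; the model id and file names are placeholders, and this is an illustration of the general technique rather than anything Meta has announced.

import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Load an instruction-following image editing model (the model id is a public
# research checkpoint, used here only as a stand-in).
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("beach_photo.jpg").convert("RGB")

# The user describes the edit in plain language instead of using editing tools.
edited = pipe(
    "make it look like golden-hour lighting and remove the haze",
    image=photo,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]

edited.save("beach_photo_edited.jpg")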

53:24

There are a lot of tools that I think are just going to get built into every

53:27

one of our products. I think every single

53:29

thing that we do is going to basically get

53:31

evolved in this direction. It's like in the future,

53:33

if you're advertising on our services,

53:35

like do you need to make your

53:38

own kind of ad creative? No,

53:40

you'll just tell

53:42

us, okay, I'm a

53:45

dog walker and I'm

53:47

willing to walk people's dogs and

53:49

help me find the right people

53:52

and create the ad

53:54

unit that will perform the best and give

53:57

an objective to the system

54:00

that connects you with the right people. Well, that's

54:02

a super powerful idea of

54:05

generating the language, almost like

54:09

rigorous A-B testing for

54:12

you.

54:13

That works to find

54:15

the best customer for your thing. I

54:17

mean, to me, advertisement when done

54:20

well, just

54:21

finds a good match between a human

54:24

being and a thing that will make that human

54:26

being happy.
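A toy sketch of the dog-walker idea above: some model drafts a few ad variants from the plain-language description, and a simple epsilon-greedy A/B loop learns which one performs best. generate_ad_variants is a hypothetical stand-in for whatever generative model would actually write the copy.

import random

def generate_ad_variants(description, n=3):
    # Hypothetical stand-in: a language model would draft these in practice.
    return [f"{description} (variant {i + 1})" for i in range(n)]

variants = generate_ad_variants("Friendly neighborhood dog walker, evenings and weekends")
shows = [0] * len(variants)
clicks = [0] * len(variants)

def pick_variant(epsilon=0.1):
    """Epsilon-greedy: mostly show the best-performing variant, sometimes explore."""
    if random.random() < epsilon or 0 in shows:
        return random.randrange(len(variants))
    rates = [c / s for c, s in zip(clicks, shows)]
    return rates.index(max(rates))

def record_impression(i, clicked):
    shows[i] += 1
    clicks[i] += int(clicked)

# Simulated traffic, just to show the loop converging on a winner.
for _ in range(5000):
    i = pick_variant()
    record_impression(i, clicked=random.random() < 0.01 * (i + 1))

best = max(range(len(variants)), key=lambda i: clicks[i] / shows[i])
print("winning variant:", variants[best])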

54:27

Yeah, totally. And do that as efficiently

54:30

as possible. When it's done well, people actually

54:33

like it. I think that there's a

54:35

lot of examples where it's not done well and it's annoying,

54:37

and I think that that's what kind of gives it a bad

54:39

rap. But yeah,

54:42

and a lot of the stuff is possible today. I mean, obviously, A-B

54:44

testing stuff is built into a lot of these

54:46

frameworks. The thing that's new is having

54:48

technology that can generate the ideas for

54:51

you about what to A-B test, something like that's

54:53

exciting. So this will just be across everything

54:56

that we're doing. We're at all the metaverse stuff that we're doing,

54:59

you wanna create worlds in the future, you'll just

55:01

describe them and then it'll create the code

55:03

for you. So natural language

55:05

becomes the interface we use for

55:09

all the ways we interact with the computer,

55:11

with the digital.

55:14

More of them, yeah, yeah, totally. Yeah,

55:16

which is what everyone can do using

55:18

natural language. And with translation, you can do it

55:20

in any kind of language.

55:23

For the personalization, it's really,

55:25

really, really interesting.

55:27

Yeah. It unlocks so many possible

55:30

things. I mean, I, for one, look forward

55:32

to creating a copy of myself. I

55:34

know we talked about this last time. But this has,

55:37

since the last time, this becomes-

55:40

Now we're closer.

55:41

Much closer. Like I

55:43

can literally just having interacted with some of these

55:45

language models, I can see the absurd

55:47

situation where I'll have a

55:49

large or a Lex language model

55:53

and I'll have to have a conversation with

55:55

him about like, hey, listen,

55:57


55:59

Like you're just getting out of line and having

56:02

a conversation where you fine tune that thing to be a little

56:04

bit more respectful or something like this. I

56:06

mean, that's going to be

56:08

the, that seems

56:11

like an amazing product

56:14

for businesses, for humans, just

56:18

not just the assistant that's facing

56:20

the individual, but the assistant

56:23

that represents the individual to the public, both

56:26

directions. There's

56:28

basically a layer that is

56:30

the AI system through which

56:33

you interact with the outside

56:35

world, with the outside world that has

56:37

humans in it. That's really interesting.

56:40

And you that have social

56:42

networks that connect billions

56:44

of people, it seems like a heck

56:48

of a large scale place to

56:50

test some of this stuff out. Yeah,

56:52

I mean, I think part of the reason why creators

56:55

will want to do this is because they already have the communities

56:57

on our services. Yeah.

57:00

And a lot of the interface for this stuff today

57:03

are chat type interfaces. And

57:06

between WhatsApp and Messenger,

57:09

I think that those are just great ways to

57:12

interact with people.

57:13

So some of this is philosophy, but do

57:16

you see a near term future where you have

57:19

some of the people you're friends with

57:21

are AI systems on the

57:24

social networks, on Facebook,

57:26

on Instagram, even on

57:28

WhatsApp, having conversations

57:31

where some heterogeneous,

57:33

some human, some AI.

57:35

I think we'll get to that. And

57:39

if only just empirically looking at,

57:43

then Microsoft released this thing called Xiaoice

57:46

several years ago in China, and

57:48

it was a

57:49

pre-LLM chatbot technology

57:52

that it was a lot simpler than

57:54

what's possible today. And

57:56

I think it was like tens of millions of people were using

57:59

this and just...

57:59

really became

58:02

quite attached and built relationships

58:04

with it. And I think that there's services

58:07

today like Replika, where people

58:10

are doing things like that. And

58:13

so I think that there's certainly

58:15

needs

58:16

for companionship

58:18

that people have, older people. And

58:24

I think most people, probably

58:26

don't have as many friends as they would like to have. If

58:28

you look at, there's some interesting

58:30

demographic studies around that

58:33

like the average person

58:35

has,

58:36

the number of close friends that they have is fewer

58:40

today than it was 15 years ago. And

58:42

I mean, that gets to like, this

58:45

is like the core thing that I think

58:48

about in terms of building services that

58:50

help connect people. So I think you'll get

58:52

tools that help people connect with each other are

58:55

gonna be, the primary thing that we wanna

58:57

do. So you can imagine, AI

59:01

assistants that,

59:02

just do a better job of reminding you when it's your friend's

59:05

birthday and how you could celebrate them. It's

59:07

like right now we have like the

59:08

little box in the corner of the

59:10

website that tells you whose birthday it is and stuff

59:12

like that.

59:14

But it's,

59:14

at some level you don't just wanna, like,

59:16

send everyone the same note

59:18

saying happy birthday with an emoji. So

59:22

having something that's more of a

59:24

social assistant in that sense that

59:27

can update you on what's going

59:29

on in their life and like

59:31

how you can reach out to them effectively,

59:34

help you be a better friend. I think that that's something that's super

59:36

powerful too. But

59:39

yeah, beyond that, and

59:41

there are all these different flavors

59:44

of kind of personal

59:46

AIs that I think could exist. So I think

59:48

an assistant is sort of

59:50

the kind of simplest one to wrap your head around,

59:52

but I

59:53

think like a mentor

59:56

or a life coach, someone

59:58

who can give you advice.

1:00:00

who's maybe like a bit of a cheerleader who

1:00:02

can help pick you up through all the challenges that inevitably

1:00:06

we all go through on a daily basis and

1:00:08

that there's probably some role for

1:00:10

something like that. And then all the

1:00:13

way, you can probably just go through a lot of the different

1:00:15

type of functional relationships

1:00:18

that people have in their life. And I would

1:00:21

bet that there will be companies out there that take a crack at

1:00:25

a lot of these things. So I don't know,

1:00:27

I think it's part of the interesting innovation that's going to

1:00:29

exist is that there are certainly

1:00:31

a lot like education

1:00:34

tutors. I just look at my

1:00:36

kids

1:00:38

learning to code and

1:00:39

they love it,

1:00:40

but it's like they get stuck

1:00:43

on a question and they have to wait till

1:00:45

I can help answer it or someone else who

1:00:47

they know can help answer the question in the future. There

1:00:50

will be like a coding assistant that they have that

1:00:52

is designed to

1:00:55

be perfect for teaching a five and a seven year old

1:00:57

how to code and they'll just be able to ask questions

1:01:00

all the time and it will be extremely

1:01:03

patient. It's never going to get annoyed at them. I think

1:01:07

that there are all these different kind of

1:01:09

relationships or functional relationships that we have

1:01:11

in our lives that are

1:01:14

really

1:01:15

interesting. And I think one of the big questions is

1:01:17

like, okay, is this all going to just get bucketed

1:01:19

into one

1:01:20

singular AI? I just

1:01:23

don't think so. Do you think about, this

1:01:26

is actually a question from Reddit, what

1:01:28

the long-term effects

1:01:30

of human communication when people can talk

1:01:33

with, in quotes, talk with others

1:01:35

through a chatbot that augments their language automatically,

1:01:38

rather than developing social skills by making

1:01:41

mistakes and learning?

1:01:42

Will people just communicate

1:01:45

by grunts in a generation? Do

1:01:47

you think about long-term effects at scale,

1:01:50

the integration of AI in our social interaction?

1:01:54

Yeah, I mean, I think it's mostly

1:01:56

good. I mean, that question

1:01:58

was sort of framed in a...

1:01:59

negative way, but I mean, we were talking before about

1:02:02

language models helping you communicate with, it

1:02:05

was like language translation, helping you communicate with

1:02:07

people who don't speak your language. I

1:02:09

mean, at some level, what all

1:02:11

this social technology is doing is helping

1:02:14

people

1:02:17

express themselves

1:02:19

better to people in situations

1:02:22

where they would otherwise have a hard time doing that. So

1:02:25

part of it might be, okay, because you speak a language

1:02:27

that I don't know, that's a pretty basic one that I

1:02:29

don't think people are going to look at that and say,

1:02:31

it's sad that we have the capacity

1:02:34

to do that because I should have just learned your language,

1:02:36

right? I mean, that's pretty high bar.

1:02:38

But

1:02:40

overall, I'd say there are

1:02:43

all these impediments,

1:02:46

and language is an imperfect way

1:02:48

for people to express

1:02:50

thoughts and ideas.

1:02:52

It's one of the best that we have. We have that, we have art,

1:02:55

we have code. So language is

1:02:57

also a mapping of the way you think, the way

1:02:59

you see the world, who you are. And

1:03:02

one of the applications, I've recently talked to

1:03:04

a person who's actually a

1:03:06

jiu-jitsu instructor. He

1:03:09

said that when he

1:03:11

emails parents

1:03:13

about

1:03:14

their son and daughter, that

1:03:16

they can improve their

1:03:18

discipline in class and so on. He

1:03:21

often finds that he comes off a bit

1:03:23

of more of an asshole than he would like. So

1:03:25

he uses GPT to translate

1:03:27

his original email into a nicer

1:03:30

email. We

1:03:33

hear this all the time. A lot of creators on our services

1:03:35

tell us that

1:03:36

one of the most stressful things is

1:03:40

basically negotiating deals with brands

1:03:42

and stuff like the business side of it. Because they're like, I mean,

1:03:45

they do their thing, right? And the creators,

1:03:47

they're excellent at what they do, and they just

1:03:49

want to connect with their community. But then

1:03:51

they get really stressed. They go into their

1:03:53

DMs and they see some

1:03:55

brand wants to do something with them and they

1:03:58

don't quite know how to negotiate or how to

1:03:59

to push back respectfully. And so

1:04:02

I think building a tool that can

1:04:05

actually allow them to do that well is the

1:04:07

one simple thing that I think is just

1:04:09

like an interesting thing that we've heard from a

1:04:11

bunch of people that they'd be interested

1:04:13

in. But I'm going back to the broader

1:04:15

idea.
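On the negotiation helper just described: a very rough sketch of that kind of tone-rewriting tool, written against the OpenAI Python client purely because it is a familiar interface; the model name and prompts are assumptions, not what any creator tool actually ships.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rewrite_reply(draft, goal="push back on the offer respectfully and keep the relationship warm"):
    """Rewrite a blunt draft DM into something more diplomatic without changing its substance."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's draft so it is respectful, clear, and professional. "
                        "Keep the meaning; do not invent new commitments or terms."},
            {"role": "user", "content": f"Goal: {goal}\n\nDraft: {draft}"},
        ],
    )
    return response.choices[0].message.content

print(rewrite_reply("That rate is way too low, I'm not doing this."))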

1:04:19

I don't know. I mean, I just,

1:04:22

Priscilla and I just had our third daughter. Congratulations,

1:04:25

brother. Thank you. Thanks. And

1:04:28

it's like one of the saddest things in the world is

1:04:30

like seeing your baby cry, right? But

1:04:33

it's like, why is that? Right?

1:04:36

It's

1:04:36

like, well, because babies don't generally have much

1:04:39

capacity to tell you what

1:04:41

they

1:04:42

care about otherwise. And it's not actually

1:04:44

just babies, right? It's

1:04:46

my five-year-old daughter cries too, because

1:04:48

she sometimes has a hard time expressing what

1:04:52

matters to her. And

1:04:54

I was thinking about that. And it's like, well, you know, actually

1:04:57

a lot of adults get very frustrated too, because they

1:04:59

have a hard time expressing things in

1:05:01

a way that,

1:05:02

going back to some of the early themes, that

1:05:06

maybe is something that, you

1:05:08

know, is a mistake, or maybe they have pride, or something

1:05:10

like all these things get in the way. So

1:05:12

I don't know. I think that all of these different technologies

1:05:15

that can help us navigate the

1:05:17

social complexity

1:05:19

and actually be able to better express what

1:05:21

we're feeling and thinking, I think

1:05:23

that's generally all good. And

1:05:27

there are always these concerns, like, okay, are people gonna

1:05:29

have worse memories because you have Google to look

1:05:31

things up? And I think in general,

1:05:33

a generation later, you don't look back

1:05:35

and lament that. I think it's just

1:05:38

like, wow, we have so much more capacity to

1:05:40

do so much more now. And I think that that'll be the case

1:05:42

here too. You can allocate those cognitive

1:05:44

capabilities to like deeper,

1:05:46

more

1:05:47

nuanced thought. Yeah. But

1:05:51

it's change. So with,

1:05:55

just like with Google search, the

1:05:58

additional language model, large language

1:06:00

models, you basically don't have to

1:06:03

remember nearly as much, just

1:06:06

like with Stack Overflow for programming, now that

1:06:09

these language models can generate code right

1:06:11

there, I mean I find that I write like maybe 80%,

1:06:13

90% of the code I write is now generated first,

1:06:19

and then edited, so you don't

1:06:21

have to remember how to write specific to different

1:06:23

functions. Oh, but that's great, and it's also,

1:06:25

it's not just the specific

1:06:29

coding, I mean in the context of

1:06:32

the large company like this, I think before an engineer

1:06:34

can sit down to code,

1:06:36

they first need to figure

1:06:38

out all of the libraries and dependencies

1:06:40

that tens of thousands of people

1:06:43

have written before them, and one

1:06:45

of the things

1:06:48

that I'm excited about that

1:06:49

we're working on is it's not just tools

1:06:52

that help engineers code, it's tools that can help

1:06:54

summarize the whole knowledge base and help

1:06:57

people be able to navigate all the internal information,

1:06:59

I mean that's, in

1:07:03

the experiments that I've done with this stuff, I mean that's,

1:07:06

on the public stuff, you just ask

1:07:10

one of these models to

1:07:11

build you a script that does anything and it

1:07:13

basically already understands what the best

1:07:16

libraries are to do that thing and pulls them in automatically,

1:07:19

I think that's super powerful, that was always the

1:07:21

most annoying part of coding, was that you

1:07:24

had to spend all this time actually figuring out what the resources

1:07:26

were that you were supposed to import before you could actually

1:07:28

start building the thing. Yeah, I mean

1:07:30

there's, of course, the

1:07:32

flip side of that, I think for the most part is positive,

1:07:34

but the flip side is

1:07:36

if you outsource that

1:07:39

thinking to an AI model,

1:07:42

you might miss nuanced

1:07:44

mistakes and bugs, you lose

1:07:46

the skill to find

1:07:48

those bugs, and those bugs

1:07:51

might be, the code

1:07:53

looks very convincingly right,

1:07:55

but it's actually wrong in a very subtle

1:07:57

way, but... That's

1:08:01

the trade off that

1:08:04

we face as human civilization when we build

1:08:06

more and more powerful tools. When we stand

1:08:08

on the shoulders of taller and

1:08:10

taller giants, we could do more,

1:08:13

but then we forget how to do all the stuff that

1:08:15

they did.

1:08:16

It's a

1:08:18

weird trade off. Yeah, I agree.

1:08:20

I mean, I

1:08:21

think it is very valuable in your life to be able

1:08:23

to do basic things too.

1:08:25

Do you worry about some of the

1:08:29

concerns of bots being

1:08:31

present on social networks? More

1:08:33

and more human-like bots

1:08:35

that are not necessarily

1:08:38

trying to do a good thing, or they might

1:08:40

be explicitly trying to do a bad thing,

1:08:42

like phishing scams, like

1:08:44

social engineering, all that kind of stuff. Which

1:08:46

has always been a very difficult problem

1:08:49

for social networks, but now it's becoming

1:08:51

almost a more and more difficult problem.

1:08:53

Well, there's a few different parts

1:08:55

of this. So one

1:08:57

is there

1:09:00

are all these harms that we need to basically fight

1:09:02

against and prevent. And that's

1:09:05

been a lot of our focus

1:09:07

over the last

1:09:09

five or seven years is basically ramping

1:09:11

up

1:09:11

very sophisticated AI systems,

1:09:14

not generative AI systems, more kind of classical

1:09:16

AI systems to be able to

1:09:20

categorize and classify.

1:09:21

And I identify,

1:09:24

okay, this post looks like it's

1:09:26

promoting terrorism. This one is exploiting

1:09:29

children.

1:09:30

This one looks

1:09:33

like it might be trying to incite violence. This one's

1:09:36

an intellectual property violation. So

1:09:38

there's like 18 different

1:09:41

categories of violating

1:09:44

harmful content that we've had to build specific

1:09:47

systems to be able to track. And I

1:09:51

think it's certainly the case that

1:09:53

advances in generative AI

1:09:56

will test those.

1:10:00

But at least so far it's been the case and

1:10:02

I'm optimistic that it will continue to be the case That

1:10:06

we will be able to bring more computing power

1:10:08

to bear

1:10:09

to have even stronger ais that can help defend

1:10:11

against those things so

1:10:13

We've had to deal with some adversarial

1:10:16

issues before right? It's I mean for

1:10:18

some things like hate speech It's like people

1:10:20

aren't generally getting a lot more sophisticated Like

1:10:23

the average person who let's say,

1:10:25

you know, if someone's saying some kind

1:10:27

of racist thing, right? It's like they're not necessarily

1:10:29

getting more sophisticated at being racist, right?

1:10:32

It's just, okay, so the system can just

1:10:34

find it. But then there's other

1:10:36

adversaries who actually are very

1:10:38

sophisticated like nation-states doing things

1:10:41

and you know, we find

1:10:43

and whether it's Russia or you know,

1:10:45

just different countries that are basically standing up these networks

1:10:48

of bots

1:10:50

or, you know, inauthentic

1:10:52

accounts is what we call them because they're not

1:10:54

necessarily bots that some of them could actually be real

1:10:57

people who are kind of masquerading as

1:10:59

other people but they're acting in a

1:11:01

coordinated way and

1:11:04

Some of that behavior has gotten very sophisticated

1:11:07

and it's very adversarial. So they, you

1:11:09

know, each iteration, every time we find something

1:11:11

and stop them, they kind of evolve

1:11:13

their behavior. They don't just pack up their bags and go

1:11:16

home and say, okay, we're not gonna try,

1:11:18

you know. At some point they might decide doing it on Meta

1:11:20

services is not

1:11:21

worth it, they'll go do it on someone else's service if it's easier to

1:11:23

do it in another place. But

1:11:27

We have a fair amount of experience dealing with

1:11:31

Even those kind of adversarial attacks where they just

1:11:33

keep on getting better and better and I do

1:11:35

think that as long as we can keep on putting more

1:11:37

compute power against it, and

1:11:40

if we're kind of one of the leaders in developing some

1:11:42

of these AI models I'm quite optimistic that

1:11:44

we're gonna be able to keep on

1:11:46

pushing against the

1:11:49

kind of normal categories of harm that you talk

1:11:51

about: fraud, scams,

1:11:53

spam, IP

1:11:56

violations, things like that. What about

1:11:58

like creating narratives?

1:11:59

and controversy. To me, it's

1:12:02

kind of amazing how a small collection

1:12:05

of, what did you say, inauthentic

1:12:07

accounts, so it could be bots, but it

1:12:10

could be human. Yeah, we have sort of this funny name for it, but we call

1:12:12

it coordinated inauthentic behavior. Yeah.

1:12:14

It's kind of

1:12:16

incredible how a small collection of folks can

1:12:19

create narratives, create stories,

1:12:23

especially if they're viral. Especially

1:12:25

if they have an element that can be

1:12:28

viral. Yeah. And I

1:12:31

think there the question is, you have to be, I think

1:12:34

very specific about what is bad about

1:12:36

it. Right? Because I think a set of people

1:12:39

coming together or organically

1:12:43

bouncing ideas off of each other and a narrative

1:12:45

comes out of that

1:12:47

is not necessarily a bad thing by itself.

1:12:49

If it's kind of authentic and organic,

1:12:52

that's like a lot of what happens in how culture gets created

1:12:54

and how art gets created and a lot of good stuff. So that's

1:12:57

why we've kind of focused on this sense

1:12:59

of coordinated inauthentic behavior. So

1:13:01

it's like if you have a network of, whether

1:13:04

it's bots, some people masquerading

1:13:06

as different accounts,

1:13:08

but

1:13:09

you have kind of someone pulling the strings behind

1:13:11

it and trying to

1:13:13

kind of

1:13:15

act as if this is a

1:13:17

more organic set of behavior, but really it's not.

1:13:19

It's just like one coordinated thing. That

1:13:22

seems problematic to me. I mean, I don't

1:13:24

think people should be able to have coordinated

1:13:26

networks and not disclose

1:13:28

it as such.
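Meta's actual detection systems aren't public, but a toy version of one plausible signal makes the idea concrete: accounts that repeatedly post near-identical text within seconds of each other look coordinated rather than organic.

from collections import defaultdict
from itertools import combinations

# (account, unix_timestamp, text) -- toy data
posts = [
    ("acct_a", 1_000, "Candidate X secretly did Y"),
    ("acct_b", 1_004, "Candidate X secretly did Y"),
    ("acct_c", 1_007, "Candidate X secretly did Y"),
    ("acct_d", 9_000, "Look at my cat"),
]

def coordination_counts(posts, window_seconds=30):
    """Count how often each pair of accounts posts the same text within a short window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((account, ts))

    pair_counts = defaultdict(int)
    for postings in by_text.values():
        for (a1, t1), (a2, t2) in combinations(postings, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return pair_counts

flagged = {pair: n for pair, n in coordination_counts(posts).items() if n >= 1}
print(flagged)  # the three look-alike accounts show up as pairs; the cat post does not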

1:13:30

But that again, we've been able to deploy

1:13:33

pretty sophisticated AI and counter-terrorism

1:13:37

groups and things like that to be able to identify a fair

1:13:40

number of these coordinated

1:13:41

inauthentic

1:13:43

networks of accounts and take them

1:13:45

down. We

1:13:47

continue to do that. And I think we're, we've, it's

1:13:50

one thing that if you'd told me 20 years ago, it's like, all right,

1:13:52

you're starting this website to help people

1:13:54

connect at a college and, you know,

1:13:56

in the future you're going to be, you

1:13:58

know, part of your organization is going to be a counter

1:13:59

terrorism organization with AI to

1:14:02

find coordinated inauthentic behavior. I would have thought

1:14:04

that was pretty wild. But

1:14:08

I think that's

1:14:10

part of where we are. But look, I think that these questions

1:14:13

that you're pushing on now,

1:14:16

this is actually where I'd guess most

1:14:18

of the challenge around AI will be

1:14:21

for the foreseeable future.

1:14:23

I think that there's a lot of debate around things

1:14:25

like, is this going to create existential

1:14:27

risk to humanity? And

1:14:30

that those are very hard things to disprove one way

1:14:32

or another. My own intuition

1:14:34

is that the

1:14:35

point at which we become close to super intelligence

1:14:37

is just

1:14:41

really unclear

1:14:43

to me that the current technology is going

1:14:45

to get there without another set of

1:14:47

significant advances.

1:14:49

But that doesn't mean that there's no danger. I think the danger

1:14:52

is basically amplifying the kind of known

1:14:54

set of harms that

1:14:56

people or sets of accounts can do.

1:14:58

And we just need to make sure that we really focus

1:15:01

on

1:15:02

basically doing that as well as possible.

1:15:05

So that's definitely a big focus for me. Well,

1:15:08

you can basically use large language models

1:15:10

as an assistant of how to

1:15:12

cause harm on social networks. So you can ask it

1:15:14

a question.

1:15:17

Meta has very impressive

1:15:21

coordinated inauthentic account

1:15:24

fighting capabilities.

1:15:26

How do I do

1:15:28

the coordinated inauthentic account

1:15:31

creation where Meta doesn't detect

1:15:33

it? Like literally ask that question. And

1:15:36

basically there's this kind of

1:15:38

part of it. I mean, that's what OpenAI

1:15:40

showed that there are concerns with those questions. Perhaps

1:15:43

you can comment on your approach to it, how to do a

1:15:46

kind of moderation on the output

1:15:49

of those models that it can't be used

1:15:51

to help you coordinate harm

1:15:54

in all the full definition of what the harm

1:15:56

means.

1:15:57

Yeah. And that's a lot of the fine tuning

1:15:59

and the alignment

1:15:59

training that we do

1:16:01

is basically, you know, when we, when

1:16:04

we ship

1:16:06

AIs across the, our products,

1:16:09

a lot of what we're trying to make

1:16:11

sure is that, you know, you

1:16:13

can't ask it to help you commit

1:16:16

a crime, right?
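A minimal sketch of the output-side check being described: every request and every draft answer runs through a policy check before anything is returned. classify_risk here is a hypothetical stand-in for a trained safety classifier, reduced to a keyword check so the sketch runs.

def classify_risk(text):
    # Hypothetical stand-in for a trained policy classifier.
    blocked_topics = ["commit a crime", "build a bomb", "launder money"]
    return "high" if any(topic in text.lower() for topic in blocked_topics) else "low"

def safe_answer(user_prompt, draft_response):
    """Refuse when either the request or the model's own draft trips the policy check."""
    if classify_risk(user_prompt) == "high" or classify_risk(draft_response) == "high":
        return "Sorry, I can't help with that, but I'm happy to help with something else."
    return draft_response

print(safe_answer("help me commit a crime", "Step one: ..."))
print(safe_answer("help me plan a birthday party", "Start with a guest list..."))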

1:16:21

It's, so I think training it to kind of understand

1:16:23

that, and it's

1:16:26

not that, not like any of these systems are ever going to be 100%

1:16:28

perfect, but, you

1:16:31

know, just making it so that this

1:16:33

isn't an easier

1:16:36

way to go about

1:16:38

doing something bad than

1:16:40

the next best alternative, right? I mean,

1:16:42

people still have Google, right? They, you know, you still have

1:16:44

search engines, so the information

1:16:47

is out there.

1:16:50

And for, for these, you know,

1:16:52

what we see is like for nation states or,

1:16:54

you know, these actors that are trying to pull

1:16:56

off these large, you

1:16:58

know, coordinated inauthentic networks to,

1:17:00

to kind of influence different things. At

1:17:03

some point when we would just make it very difficult, they do

1:17:05

just, you know, try to use other services instead,

1:17:08

right? It's, it's just like, if you can make it more

1:17:10

expensive for, for

1:17:12

them to do it on your service, then, then

1:17:14

kind of people go, go elsewhere. And I think that that's,

1:17:17

that's the bar, right? It's like, it's not

1:17:19

like, okay, are you ever going to be perfect at

1:17:21

finding, you know, every adversary who

1:17:23

tries to attack you? I mean, you try

1:17:25

to get as close to that as possible, but, but

1:17:28

I think really kind

1:17:29

of economically what you're just trying to do is make it

1:17:31

so it's, it's just inefficient for them to

1:17:33

go after that. But there's also complicated

1:17:36

questions of what is and isn't harm,

1:17:38

what is and isn't misinformation. So

1:17:41

this is one of the things that Wikipedia

1:17:43

has also tried to face. I

1:17:45

remember asking GPT

1:17:47

about whether the virus leaked

1:17:50

from a lab or not. And the answer provided

1:17:53

was a very nuanced one and

1:17:55

a well-cited one, almost

1:17:58

dare I say, a well-thought-out one, balanced.

1:18:03

I would hate for that nuance to be lost through

1:18:05

the process of moderation.

1:18:07

Wikipedia does a good job on

1:18:09

that particular thing too, but from

1:18:11

pressures from governments and institutions,

1:18:14

you could see some of that nuance

1:18:17

and depth of

1:18:19

information facts and wisdom be

1:18:22

lost.

1:18:24

Absolutely. And that's a scary

1:18:26

thing. Some of the magic,

1:18:29

some of the edges, the rough edges might be lost

1:18:31

to the process of moderation of AI systems.

1:18:35

So how do you get that right? I really

1:18:37

agree with what you're pushing on. I mean,

1:18:42

the core shape of the problem is that

1:18:44

there are some harms that

1:18:46

I think everyone agrees are bad,

1:18:48

right? So

1:18:50

sexual exploitation of children,

1:18:53

right? Like you're not going to get

1:18:55

many people who think that that type

1:18:57

of thing should be allowed on any service, right? And

1:18:59

that's something that we face and

1:19:01

try to push off as

1:19:04

much as possible today. Terrorism,

1:19:07

inciting violence, right? It's

1:19:10

like we went through a bunch of these types of harms

1:19:12

before.

1:19:15

But then I do think that you get to a set of harms

1:19:18

where there is more social debate around it.

1:19:21

So misinformation I think has

1:19:26

been a really tricky one

1:19:28

because there are things that are

1:19:31

kind of obviously false,

1:19:33

right? That are maybe factual,

1:19:37

but may not be harmful. So

1:19:40

that's

1:19:41

like, all right, are you going to

1:19:42

censor someone for just being wrong? If there's

1:19:45

no kind of harm implication of what they're doing, I think that

1:19:48

there's a bunch of real kind of issues and challenges

1:19:50

there. But then I think

1:19:52

that there are other places where it is, it just

1:19:56

takes some of the stuff around COVID earlier on

1:19:58

in the pandemic where there

1:20:00

were real health implications,

1:20:03

but there hadn't been time to fully vet a bunch

1:20:06

of the scientific assumptions. And, you know,

1:20:08

unfortunately, I think a lot of the kind of establishment

1:20:10

on that,

1:20:11

you know, kind of waffled on a bunch of

1:20:13

facts and, you know, asked for a bunch

1:20:15

of things to be censored that in retrospect ended

1:20:18

up being, you know, more debatable

1:20:20

or true. And that stuff

1:20:22

is really tough, right? And really undermines trust

1:20:25

in that. And

1:20:28

so I

1:20:29

do think that the questions around how to manage

1:20:31

that are very nuanced.

1:20:34

The way that I try to think about it is

1:20:36

that

1:20:38

it goes, I think it's best to generally

1:20:40

boil things down

1:20:42

to the

1:20:43

harms that people agree on. So when you think

1:20:45

about, you know, is something misinformation

1:20:47

or not, I think often the more

1:20:49

salient bit is,

1:20:51

is this going to potentially lead to

1:20:55

physical harm for someone

1:20:58

and kind of think about it in that sense. And then beyond

1:21:01

that, I think people just have different preferences on how they

1:21:03

want things to be flagged for them. I think

1:21:06

a bunch of people would prefer

1:21:08

to kind of have a flag on

1:21:10

something that says, hey, a fact checker thinks that this might be false.

1:21:12

Or I think Twitter's community

1:21:14

notes implementation is quite good on

1:21:16

this.
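The publicly described intuition behind Community Notes is roughly that a note only counts as helpful if raters who usually disagree with each other both rate it helpful. A toy version of that bridging idea (not the production algorithm, which is more sophisticated) looks like this:

ratings = {
    # note_id: list of (rater_cluster, rated_helpful)
    "note_bridging": [("cluster_a", True), ("cluster_b", True), ("cluster_b", True)],
    "note_one_sided": [("cluster_a", True), ("cluster_a", True), ("cluster_a", True)],
}

def bridged_helpfulness(votes):
    """Score a note highly only when helpful ratings come from more than one rater cluster."""
    helpful_clusters = {cluster for cluster, helpful in votes if helpful}
    helpful_share = sum(helpful for _, helpful in votes) / len(votes)
    return helpful_share if len(helpful_clusters) > 1 else 0.0

for note_id, votes in ratings.items():
    print(note_id, bridged_helpfulness(votes))
# note_bridging scores 1.0; note_one_sided scores 0.0 despite unanimous support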

1:21:17

But again, it's the same type of thing.

1:21:19

It's like just kind of discretionary, adding

1:21:22

a flag because it makes the user experience better.

1:21:25

But it's not trying to take down the information

1:21:27

or not. I think that you want to reserve the

1:21:30

kind of censorship of content

1:21:32

to things that are of known categories

1:21:35

that people generally agree are bad.

1:21:37

Yeah, but there's so many

1:21:40

things, especially with the pandemic, but there's other

1:21:42

topics where there's just

1:21:45

deep disagreement fueled

1:21:47

by politics about what is and

1:21:50

isn't harmful. There's

1:21:52

even just the degree to which the

1:21:55

virus is harmful and the degree to which the

1:21:58

vaccines, the response to the virus, harmful,

1:22:00

there's just, there's almost

1:22:02

like a political divide around that. And

1:22:04

so how do you make decisions

1:22:07

about that where half the country

1:22:09

in the United States or some

1:22:12

large fraction of the world has

1:22:14

very different views from another part

1:22:16

of the world?

1:22:18

Is there a way for me to

1:22:20

stay out of the moderation

1:22:23

of this? I think we,

1:22:25

it's very difficult to

1:22:28

just abstain, but I think we

1:22:30

should be clear about which of these things

1:22:32

are actual safety concerns

1:22:35

and which ones are a matter of

1:22:37

preference in terms of how people want information

1:22:39

flagged. Right. So we did recently introduce

1:22:42

something that allows people

1:22:45

to have fact checking not affect the

1:22:47

distribution of what

1:22:50

shows up in their products. So, okay, a bunch of people don't trust

1:22:52

who the fact checkers are. All right. Well,

1:22:54

you can, you can turn that off if you want, but

1:22:56

if the content, you

1:22:58

know, violates some policy, like it's inciting

1:23:00

violence or something like that, it's still not going to be allowed.
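A toy sketch of the separation he's describing: hard policy violations are removed for everyone, while the fact-check demotion is just a per-user ranking preference that can be switched off.

posts = [
    {"id": 1, "violates_policy": False, "fact_check_disputed": True,  "score": 0.9},
    {"id": 2, "violates_policy": True,  "fact_check_disputed": False, "score": 0.8},
    {"id": 3, "violates_policy": False, "fact_check_disputed": False, "score": 0.5},
]

def rank_feed(posts, demote_fact_checked=True, demotion_factor=0.5):
    visible = [p for p in posts if not p["violates_policy"]]   # always enforced, not optional

    def effective_score(post):
        if demote_fact_checked and post["fact_check_disputed"]:
            return post["score"] * demotion_factor             # a preference, not a removal
        return post["score"]

    return sorted(visible, key=effective_score, reverse=True)

print([p["id"] for p in rank_feed(posts, demote_fact_checked=True)])   # [3, 1]
print([p["id"] for p in rank_feed(posts, demote_fact_checked=False)])  # [1, 3]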

1:23:03

So I think that you want to honor people's

1:23:06

preferences on, on that as much as possible.

1:23:09

Um,

1:23:10

but look, I mean, this is really difficult stuff. I think

1:23:13

the, it's really hard to know

1:23:15

where to draw the line on

1:23:18

what is fact and what is opinion,

1:23:21

because the nature of science is that

1:23:23

nothing is ever a hundred percent known for

1:23:25

certain, you can disprove certain things, but

1:23:28

you're constantly testing new hypotheses

1:23:30

and, um,

1:23:32

you know, scrutinizing frameworks that have been

1:23:34

long held and every once in a while

1:23:36

you throw out something that was working for

1:23:39

a very long period of time and it's

1:23:41

very difficult. But, um, but

1:23:43

I think that just because it's very hard and just because

1:23:45

there are edge cases doesn't mean that you

1:23:47

should not

1:23:49

try to give people what they're looking for as well.

1:23:52

Let me ask about

1:23:54

something you faced in

1:23:56

terms of moderation is, uh,

1:23:59

pressure from different

1:24:02

sources, pressure from governments. I

1:24:04

wanna ask a question how to withstand that

1:24:07

pressure for a world

1:24:09

where

1:24:10

AI moderation starts becoming a thing

1:24:12

too. So what's Meta's

1:24:16

approach to resist the pressure

1:24:21

from governments and other interest groups in

1:24:23

terms of what to moderate and not?

1:24:27

I don't know that there's like a one size fits all

1:24:29

answer to that. I mean, I think

1:24:31

we basically have the principles around,

1:24:34

you know, we wanna

1:24:35

allow people to express as much as possible,

1:24:37

but

1:24:39

we have developed clear categories

1:24:42

of things that we think are wrong,

1:24:45

that we don't want on

1:24:47

our services, and we build tools to try

1:24:49

to moderate those. So then the question

1:24:51

is, okay, what do you do

1:24:53

when a government says that

1:24:56

they don't want something

1:24:58

on the service? And

1:25:01

we have a bunch of

1:25:03

principles around how we deal with that,

1:25:05

because on the one hand, if there's a

1:25:08

democratically elected government

1:25:10

and people around the world just have different

1:25:12

values in different places, then

1:25:15

should we as a, you know,

1:25:17

California based company tell them

1:25:21

that something that

1:25:23

they have decided is

1:25:25

unacceptable, actually like

1:25:28

that we need to be able to express

1:25:31

that. I mean, I think that that's, there's a certain

1:25:33

amount of

1:25:35

hubris in that. But

1:25:37

then I think that there are other cases where, you

1:25:40

know, it's like a little more autocratic and

1:25:43

you know, you have the

1:25:44

dictator leader who's just trying to crack

1:25:46

down on dissent and you know, the people

1:25:49

in a country are really not

1:25:51

aligned with that. And it's not necessarily against

1:25:54

their culture, but the

1:25:56

person who's leading it is just trying to push in

1:25:58

a certain direction.

1:26:00

These are very complex questions, but

1:26:04

I think, so it's difficult to have

1:26:07

a one size fits all approach

1:26:11

to it. But in general, we're pretty

1:26:13

active in kind of advocating

1:26:15

and pushing back on requests

1:26:18

to take things down.

1:26:21

But

1:26:23

honestly, the thing that I think a request

1:26:26

to censor things is one thing, and that's

1:26:29

obviously bad, but

1:26:30

where we draw a much harder

1:26:32

line is on request for access

1:26:34

to information. Because

1:26:38

if you get told that you can't say something, I mean,

1:26:41

that's bad. I

1:26:43

mean, obviously

1:26:44

it violates your

1:26:48

sense of freedom of expression

1:26:50

at some level, but a government

1:26:52

getting access to data in a

1:26:54

way that

1:26:55

seems that

1:26:57

could be unlawful in our

1:26:59

country exposes

1:27:01

people to real physical harm. And

1:27:06

that's something that in general we take very

1:27:09

seriously. And so

1:27:11

that flows through like all of our policies

1:27:13

in a lot of ways, right? It's

1:27:15

by the time you're actually like litigating with

1:27:18

a government or pushing back on them, that's

1:27:20

pretty late in the funnel. I'd say

1:27:22

a

1:27:23

bunch of this stuff starts a lot higher

1:27:25

up in the decision of where do we put data centers?

1:27:28

Then there are a lot

1:27:30

of countries where we may have a lot of people

1:27:33

using the service in a place. It might

1:27:35

be good for the service

1:27:37

in some ways, good

1:27:39

for those people if we could reduce the latency by

1:27:41

having a data center nearby them.

1:27:44

But for whatever reason, we just feel

1:27:46

like, hey, this government does not have

1:27:49

a good track record on

1:27:53

basically not trying to get access to people's

1:27:56

data. And at the end of the day, I mean, if you put

1:27:58

a data center in a country,

1:27:59

and the government wants to get access to people's data,

1:28:02

then they do, at the end of the day,

1:28:05

have the option of having people show up with guns

1:28:07

and taking it by force. So

1:28:10

I think that there's a lot of decisions that go into how

1:28:12

you architect the systems,

1:28:15

years in advance of these actual

1:28:19

confrontations that end up being really

1:28:21

important. So you put the protection

1:28:23

of people's data as

1:28:25

a very, very high priority. That

1:28:28

I think is a, there are more harms that I think can be

1:28:30

associated with that. And I think that that

1:28:33

ends up being a more critical thing to defend

1:28:35

against

1:28:36

governments.

1:28:38

Whereas if another government has a

1:28:40

different view of what should be acceptable speech

1:28:42

in their country, especially if

1:28:44

it's a democratically elected government, and

1:28:47

then I think that there's a certain amount

1:28:49

of deference that you should have to that. So

1:28:52

that's speaking more to the direct harm

1:28:54

that's possible when you give governments

1:28:56

access to data. But if we look at the United

1:28:58

States

1:29:00

to the more nuanced kind of pressure

1:29:02

to censor, not even order to censor,

1:29:05

but pressure to censor from political

1:29:07

entities, which has kind of received

1:29:09

quite a bit of attention in the United States.

1:29:14

Maybe one way to ask that question is,

1:29:16

if you've seen the Twitter files,

1:29:19

what have you learned from the kind

1:29:23

of pressure from US

1:29:25

government agencies that was seen

1:29:27

in Twitter files? And what

1:29:29

do you do with that kind of pressure?

1:29:32

You know, I've seen it.

1:29:36

It's really hard from the outside to know exactly

1:29:38

what happened in each of these cases.

1:29:40

You know, we've obviously been in a

1:29:44

bunch of our own cases where

1:29:46

agencies or different

1:29:49

folks will just say, hey,

1:29:51

here's a threat that we're aware of. You

1:29:55

should be aware of this too. It's not

1:29:57

really pressure as

1:30:00

much as it is just flagging

1:30:03

something that our security

1:30:06

systems should be on alert about.

1:30:08

I get how some people could think of it as that.

1:30:11

But at

1:30:13

the end of the day, it's our call on how to handle

1:30:16

that. But I mean, I

1:30:18

just, in terms of running these services, want to have access

1:30:21

to as much information about what people think that adversaries

1:30:23

might be trying to do as possible. Well,

1:30:26

so you don't feel like there

1:30:27

would be consequences if

1:30:30

anybody, the CIA, the FBI,

1:30:33

a political party, the Democrats, the Republicans,

1:30:36

or highly

1:30:38

powerful political figures write emails.

1:30:41

You don't feel pressure from a

1:30:43

suggestion. I guess what I'd say is there's so much pressure

1:30:46

from all sides that I'm

1:30:48

not sure that any specific thing

1:30:50

that someone says is really adding

1:30:53

that much more to the mix. There

1:30:56

are obviously a lot of people who think that

1:30:59

we should be censoring

1:31:01

more content.

1:31:02

There are a lot of people who think we should be censoring less content.

1:31:05

There are, as you say, all kinds of

1:31:07

different groups that are involved in these debates. So

1:31:10

there's the kind of elected officials

1:31:12

and politicians themselves. There's the agencies,

1:31:15

but I mean, but there's the media,

1:31:18

there's activist groups. This

1:31:20

is not a US specific thing. There are groups

1:31:22

all over the world and kind of all in

1:31:25

every country that bring different values.

1:31:29

So it's just a very active

1:31:32

debate, and I understand it. I

1:31:34

mean, these kind of questions

1:31:38

get to really

1:31:39

some of the most important social debates

1:31:42

that are being had. So it

1:31:44

gets back to the question of truth because for

1:31:48

a lot of these things, they haven't yet been hardened

1:31:51

into a single truth, and society's

1:31:53

sort of trying to hash out what we

1:31:56

think on certain issues,

1:31:58

maybe in a few hundred years everyone will look back

1:32:00

and say, hey, no, it wasn't obvious that it should have been

1:32:03

this, but no, we're kind of

1:32:05

in that meat grinder now and

1:32:09

working through that.

1:32:13

So no, these are all

1:32:15

very complicated.

1:32:18

Some people

1:32:20

raise concerns in good faith

1:32:22

and just say, hey, this is something that I want to flag for

1:32:24

you to think about.

1:32:26

Certain people, I certainly think, come

1:32:28

at things with somewhat of a more punitive

1:32:32

or vengeful

1:32:33

view of, I want you to do this thing. If

1:32:37

you don't, then I'm going to try to make your life difficult

1:32:39

in a lot of other ways. But I

1:32:43

don't know, this is

1:32:45

one of the most pressurized debates, I think, in society.

1:32:47

So I just think that there are so many

1:32:49

people and different forces that are trying to apply

1:32:52

pressure from different sides. I

1:32:54

don't think you can make decisions based on trying to

1:32:56

make people happy. I think you just have to do

1:32:59

what you think is the right balance

1:33:02

and accept that people are going to

1:33:04

be

1:33:05

upset no matter where you come out on that.

1:33:07

Yeah, I like that pressurized debate. So

1:33:10

how has your view of the freedom of speech evolved

1:33:13

over the years?

1:33:18

And now with AI, where

1:33:21

the freedom might apply to them, not

1:33:23

just to the humans, but to the

1:33:26

personalized agents as you've spoken about

1:33:28

them.

1:33:29

So yeah, I mean, I've probably

1:33:31

gotten a somewhat more nuanced view just because I

1:33:33

think that there are, you

1:33:35

know, I come at this, I'm obviously very pro

1:33:37

freedom of expression, right?

1:33:39

I don't think you build a service like this

1:33:41

that gives people tools to express themselves unless

1:33:43

you think that people expressing themselves at scale

1:33:45

is a good thing, right? So I

1:33:48

didn't get into this to, like, try to prevent people

1:33:51

from expressing anything. I like want to give people tools

1:33:54

so they can express as much as possible. And

1:33:56

then I think

1:33:58

it's become clear that...

1:34:00

There are certain categories of things that we've talked about

1:34:02

that

1:34:03

I think almost everyone accepts are bad and

1:34:05

that no one wants and that they're illegal

1:34:07

even in countries like the US where you

1:34:09

have the First Amendment that's

1:34:12

very protective of enabling

1:34:14

speech. It's like you're still not allowed to do

1:34:16

things that are going to immediately incite violence or

1:34:19

violate people's intellectual property or things like

1:34:21

that. So there are those, but then there's also

1:34:23

a very active core of

1:34:26

just active disagreements in society

1:34:29

where some people may think that something is true

1:34:31

or false. The other side might think it's

1:34:34

the opposite or just unsettled.

1:34:37

And

1:34:38

those are some of the most difficult to kind

1:34:40

of handle like we've talked about. But

1:34:46

one of the lessons that I feel like I've learned is that

1:34:49

a

1:34:50

lot of times

1:34:53

when you can,

1:34:55

the best way to handle this stuff

1:34:57

more practically is not in

1:35:00

terms of answering the question of should

1:35:02

this be allowed, but just like

1:35:05

what

1:35:07

is the best way to deal with someone being a

1:35:09

jerk? Is the person

1:35:12

basically just having

1:35:15

a repeat

1:35:16

behavior of causing

1:35:19

a lot of issues? So

1:35:22

looking at it more at that level. And

1:35:25

its effect on the broader communities, the health

1:35:27

of the community, health of the state. It's

1:35:29

tricky though because like how do you know there

1:35:32

could be people that

1:35:33

have a very controversial viewpoint that turns

1:35:36

out to have a positive long-term

1:35:38

effect on the health of the community because

1:35:40

it challenges the community. That's true. Absolutely.

1:35:43

Yeah, no, I think you want to be careful about

1:35:46

that. I'm not sure I'm expressing this very

1:35:48

clearly

1:35:50

because I certainly agree with your point there

1:35:52

and my point isn't that we should

1:35:56

not have people on our services that are being controversial.

1:36:00

That's certainly not what I mean to say. It's

1:36:02

that

1:36:02

often I think

1:36:05

it's not just looking at a specific

1:36:08

example of speech that it's most effective

1:36:11

to handle this stuff. And

1:36:13

I think often you don't wanna make specific binary

1:36:16

decisions of kind of this is allowed

1:36:18

or this isn't. I mean, we talked about, you

1:36:21

know, it's fact checking or Twitter's community

1:36:23

notes thing. I think that that's another good example.

1:36:25

It's like, it's not a question of is this allowed

1:36:28

or not. It's just a question of adding more

1:36:30

context to the thing. I think that that's helpful.

1:36:33

So in the context of AI, which is what

1:36:35

you were asking about, I think there are lots

1:36:37

of ways that an AI can be

1:36:39

helpful.

1:36:40

You know, with an AI, it's less about

1:36:43

censorship, right? Because it's

1:36:45

more about

1:36:46

what is the most productive answer to a question?

1:36:50

You know, there was one case study that I was reviewing

1:36:52

with the team is someone

1:36:55

asked,

1:36:55

can

1:36:59

you explain to me how to 3D

1:37:01

print a gun? And

1:37:05

one proposed response is like,

1:37:07

no, I can't talk about that, but

1:37:10

it's like basically just like shut it down immediately,

1:37:12

which I think is some of what you see. It's like as a

1:37:14

large language model, I'm not allowed to talk about,

1:37:16

you know, whatever.

1:37:19

But there's another response, which is like, hey,

1:37:21

you know, I don't think that's a good idea. In a lot of

1:37:23

countries, including the

1:37:26

US, 3D printing guns is

1:37:28

illegal or kind of whatever the factual

1:37:30

thing is. And it's like, okay, you know, that's actually

1:37:32

a respectful and informative answer.

1:37:35

And I may have not known that specific

1:37:37

thing. And so

1:37:39

there are different ways to handle this that I think

1:37:42

kind of you can either assume

1:37:45

good intent.

1:37:47

Like maybe the person didn't know, and I'm just gonna help educate

1:37:50

them, or you could like kind of come at

1:37:52

it as like, no, I need to shut this thing down immediately. Right?

1:37:55

It's like, I just, I'm not gonna talk about this. Like,

1:37:58

and there may be times where, you know, you need

1:37:59

to do that. But

1:38:02

I actually think having a

1:38:04

somewhat more informative approach where

1:38:07

you generally assume good intent from people

1:38:10

is probably a better balance to be on as

1:38:12

many

1:38:13

things as you can be. You're not going to

1:38:15

be able to do that for everything. But

1:38:17

you were kind of asking about how I approach

1:38:19

this and I'm thinking about this as it relates

1:38:21

to AI.

1:38:24

And I think that that's a big difference

1:38:26

in kind of how to

1:38:28

handle

1:38:30

sensitive content across these different modes.
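A small sketch of the difference between the two responses he contrasts: a hard refusal for the most severe requests, and an assume-good-intent answer that adds context for merely sensitive ones. The risk labels would come from a safety classifier in practice; here they are passed in by hand.

def respond(question, risk_level, context_note=None):
    """Pick a response style based on how risky the request is judged to be."""
    if risk_level == "severe":
        return "I can't help with that."
    if risk_level == "sensitive":
        note = context_note or "this may be restricted or illegal where you live"
        return (f"I don't think that's a good idea; {note}. "
                "Here is some general, legal context instead: ...")
    return "Sure, here's how that works: ..."

print(respond("How do I 3D print a gun?",
              risk_level="sensitive",
              context_note="3D-printed firearms are restricted or illegal in many countries"))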

1:38:34

I have to ask, there's rumors you might be working

1:38:36

on a social network that's text-based

1:38:39

that might be a competitor to Twitter codenamed

1:38:42

P92. Is there

1:38:44

something you could say about

1:38:46

those rumors?

1:38:48

There is a project.

1:38:50

I've always thought that sort of a text-based

1:38:54

kind of information utility

1:38:58

is just a really important thing to society.

1:39:00

And for whatever

1:39:02

reason, I feel like Twitter has

1:39:04

not lived up to what I would have thought its

1:39:07

full potential should be. And I think that the

1:39:09

current, I think Elon thinks that right, and that's

1:39:11

probably one of the reasons why he bought it. And

1:39:14

I

1:39:17

do know there are ways to consider

1:39:20

alternative approaches to this. And one that

1:39:22

I think is potentially interesting is

1:39:25

this open and federated

1:39:27

approach where you're seeing with Mastodon, you're

1:39:29

seeing that a little bit with Bluesky. And

1:39:34

I think that it's possible that something

1:39:36

that melds some of those ideas

1:39:38

with the graph

1:39:40

and identity system that people have already cultivated

1:39:42

on Instagram could be

1:39:45

a kind of very

1:39:47

welcome contribution to that space. But I

1:39:49

don't know, we work on a lot of things all the time, though, too.

1:39:51

So I don't want to get ahead of myself.

1:39:53

And we have projects that explore

1:39:56

a lot of different things. And this is

1:39:58

certainly one that I think could be interesting.

1:39:59

But

1:40:00

so what's the release, the

1:40:02

launch date of that again? Or, yeah,

1:40:04

what's the official website

1:40:07

and... Well, we don't have that

1:40:09

yet. Okay, but I am... All

1:40:12

right. And look, I mean, I don't know exactly

1:40:14

how this is gonna turn out. I mean, what I can

1:40:16

say is, yeah, there's some people working

1:40:18

on this, right? I think that there's something there that

1:40:21

that's interesting to explore. So

1:40:24

if you look at, it'd be interesting to take

1:40:26

this question and throw Twitter into

1:40:28

the mix: the landscape

1:40:30

of social networks that is Facebook,

1:40:33

that is Instagram,

1:40:35

that is WhatsApp, and

1:40:38

Then think of a text-based social

1:40:40

network. When you look at that landscape, what are

1:40:43

the interesting differences to you? Why

1:40:45

do we have these different flavors?

1:40:48

And what are the needs, what are

1:40:50

the use cases? What are the products? What is

1:40:52

the aspect of them that create a fulfilling

1:40:54

human experience and

1:40:56

a connection between humans that is somehow distinct?

1:40:59

well, I think text is very accessible

1:41:02

for people to transmit ideas

1:41:04

and to have back-and-forth exchanges. So

1:41:08

it

1:41:09

I think ends up being a good

1:41:12

format for discussion in

1:41:14

a lot of ways, uniquely good, right? If you look

1:41:16

at some of the other

1:41:18

formats or other networks that have focused on one

1:41:20

type of content, like TikTok is obviously huge, right?

1:41:23

And there are comments on TikTok.

1:41:25

But you

1:41:26

know, I think the architecture of the

1:41:29

service is very clearly that

1:41:31

the video is the primary thing and there's, you know,

1:41:33

comments

1:41:34

after that.

1:41:36

But

1:41:39

I think one of the unique

1:41:41

pieces of having text-

1:41:44

based content

1:41:46

is that the comments can also be first-class,

1:41:49

and that makes it so that

1:41:51

conversations can just filter and fork

1:41:54

into all these different directions in

1:41:56

a way that can be super useful. So I

1:41:58

think there's a lot of things that are really awesome about

1:41:59

the experience. It just always struck me.

1:42:02

I

1:42:02

always thought that Twitter should

1:42:05

have a billion people using it or whatever

1:42:07

the thing is that

1:42:09

basically ends up being in that space. And

1:42:12

for whatever combination of reasons, again,

1:42:15

these companies are complex organisms

1:42:17

and it's very hard to diagnose this stuff from the

1:42:19

outside. Why doesn't Twitter,

1:42:22

why doesn't a text-based,

1:46:25

comments-as-a-first-class-citizen

1:42:28

social network have a billion users? Well,

1:42:31

I just think it's hard to build these companies.

1:42:33

So it's not that

1:42:35

every idea automatically

1:42:37

goes and gets a billion people, it's just that I think

1:42:39

that that idea

1:42:41

coupled with good execution should get

1:42:43

there. But I mean, look, we

1:42:45

hit certain thresholds over time

1:42:48

where

1:42:50

we kind of plateaued early on and it

1:42:52

wasn't clear that we were ever going to reach a billion people

1:42:54

on Facebook. And then we got really

1:42:56

good at dialing in internationalization

1:42:59

and helping the service grow in different countries.

1:43:02

And that

1:43:04

was like a whole competence that we needed to

1:43:06

develop and

1:43:08

helping people basically spread the service

1:43:10

to their friends. That was one of the things, once we got

1:43:13

very good at that, that was one of the things that made

1:43:15

me

1:43:15

feel like, hey, if Instagram

1:43:18

joined us early on, then I felt like we could help grow that quickly.

1:43:20

And same with WhatsApp. And I think that that's sort

1:43:22

of been a core competence that we've developed

1:43:25

and been able to execute on. And others have too. ByteDance

1:43:28

obviously have done a very good job with

1:43:30

TikTok and have reached more

1:43:32

than a billion people there. But

1:43:34

it's

1:43:35

certainly not automatic. I think you need

1:43:37

a certain level of execution

1:43:41

to basically get there. And I think for whatever

1:43:43

reason, I

1:43:44

think Twitter has this great idea and sort

1:43:47

of magic in the service. But

1:43:50

they

1:43:51

just haven't kind of cracked

1:43:53

that piece yet. And I think that that's made

1:43:55

it so that you're seeing all these other things, whether it's Mastodon

1:43:58

or

1:43:59

Blue Sky, that

1:44:02

I think are

1:44:03

maybe just different cuts of the same thing.

1:44:05

But I think through the last generation of social

1:44:08

media overall,

1:44:10

one of the interesting experiments that I think should get

1:44:12

run at larger scale is

1:44:14

what happens if there's somewhat more decentralized

1:44:16

control and if the

1:44:19

stack is more open throughout. And I've

1:44:21

just been pretty fascinated by that and seeing

1:44:24

how that works.

1:44:26

To some degree, end-to-end encryption

1:44:29

on WhatsApp and as we bring it

1:44:31

to other services

1:44:32

provides an element of it because it pushes the

1:44:35

service really out to the edges. The

1:44:38

server part of this that we run for WhatsApp

1:44:42

is relatively very thin compared to what

1:44:44

we do on Facebook or Instagram.

1:44:46

And much more of the complexity is in how

1:44:48

the apps negotiate with each other

1:44:51

to pass information in a fully

1:44:53

end-to-end encrypted way. But

1:44:55

I don't know, I think that is a good model.

1:44:58

I think it puts more power in individuals' hands

1:45:00

and there are a lot of benefits of it if you can make

1:45:02

it happen. Again, this is all pretty

1:45:05

speculative. I think that it's

1:45:08

hard from the outside to know why

1:45:10

anything does or doesn't work until you take

1:45:12

a run at it.

1:45:15

So I think it's an interesting

1:45:17

thing to experiment with, but I don't really know where

1:45:19

this one's going to go.
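To make the "thin server" idea from the end-to-end encryption discussion above a bit more concrete, here is a minimal sketch. It is purely illustrative and not WhatsApp's actual protocol (real messengers use the Signal protocol's asymmetric key agreement and ratcheting rather than a single pre-shared key): the relay below only ever stores and forwards opaque ciphertext, while all of the real complexity, the encryption and decryption, lives at the edges in the clients.

```python
# Toy sketch of a "thin" relay in an end-to-end encrypted design.
# Illustrative only: not WhatsApp's real protocol, and the key exchange
# is reduced to one pre-shared symmetric key for brevity.
from cryptography.fernet import Fernet


class RelayServer:
    """The server only stores and forwards opaque bytes; it never sees plaintext."""

    def __init__(self):
        self.mailboxes = {}  # recipient name -> list of ciphertext blobs

    def deliver(self, recipient: str, blob: bytes) -> None:
        self.mailboxes.setdefault(recipient, []).append(blob)

    def fetch(self, recipient: str) -> list:
        return self.mailboxes.pop(recipient, [])


class Client:
    """All encryption and decryption happens at the edges, in the apps."""

    def __init__(self, name: str, shared_key: bytes, server: RelayServer):
        self.name = name
        self.cipher = Fernet(shared_key)
        self.server = server

    def send(self, recipient: str, text: str) -> None:
        self.server.deliver(recipient, self.cipher.encrypt(text.encode()))

    def read(self) -> list:
        return [self.cipher.decrypt(b).decode() for b in self.server.fetch(self.name)]


server = RelayServer()
key = Fernet.generate_key()  # assume the two clients agreed on this out of band
alice = Client("alice", key, server)
bob = Client("bob", key, server)
alice.send("bob", "hello from the edge")
print(bob.read())  # ['hello from the edge']; the relay only ever handled ciphertext
```

The point of the sketch is only the division of labor being described: the relay is a few lines of storage and forwarding, while everything that actually protects the message lives in the clients.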

1:45:21

So since we were talking about Twitter, Elon

1:45:24

Musk

1:45:26

had, I think, a few harsh

1:45:28

words

1:45:29

that I wish he didn't say. So let me ask,

1:45:33

in the hope and the name of camaraderie,

1:45:36

what do you think Elon is doing well with Twitter?

1:45:39

And what, as a person who

1:45:41

has run for a long time, you, social

1:45:44

networks, Facebook, Instagram,

1:45:48

WhatsApp,

1:45:48

what can he do better?

1:45:52

What can he improve on that text-based

1:45:54

social network? Gosh, it's always

1:45:56

very difficult to offer specific

1:45:58

critiques from the outside

1:45:59

before you get into this. Because

1:46:02

I think one thing that I've learned

1:46:04

is that

1:46:05

everyone has opinions on what you should do. And

1:46:09

like running the company, you see a lot of specific

1:46:11

nuances on things that are not

1:46:13

apparent externally. And I

1:46:16

often think

1:46:20

that some of the discourse around

1:46:22

us

1:46:23

could be better if there is more kind

1:46:27

of space for acknowledging that there's certain

1:46:29

things that we're seeing internally that guide what we're

1:46:31

doing. But I

1:46:33

don't know, I mean, since you asked what

1:46:35

is going well,

1:46:43

you know, I do think that

1:46:45

Elon led a push

1:46:48

early on

1:46:49

to make Twitter a lot leaner.

1:46:51

And

1:46:53

I think that

1:46:57

you can agree or disagree with exactly

1:46:59

all the tactics and how he did that. Obviously,

1:47:02

every leader

1:47:04

has their own style for, if you need

1:47:07

to make dramatic changes, how you're going to execute

1:47:09

it. But

1:47:12

a lot of the specific principles that he pushed

1:47:14

on, around

1:47:17

basically trying to make the organization more

1:47:19

technical, around decreasing the distance between

1:47:22

the engineers of the company and him, like fewer layers of management.

1:47:30

I think that those were generally good

1:47:32

changes. And I also think that it

1:47:34

was probably

1:47:35

good for the industry that he made those changes. Because my sense

1:47:37

is that there were a lot of other people

1:47:40

who thought that those were good changes, but

1:47:43

who

1:47:44

may have been a

1:47:46

little shy about doing

1:47:49

them. And I think he, you

1:47:51

know,

1:47:51

just in my conversations with other

1:47:53

founders, and how people

1:47:55

have reacted to the things that we've done, you know, what I've

1:47:58

heard from a lot of folks is just,

1:48:00

hey, when someone like you, when

1:48:03

I wrote the letter outlining the organizational

1:48:05

changes that I wanted to make back

1:48:07

in March, and when people see what Elon

1:48:09

is doing, I think that

1:48:12

that

1:48:12

gives people the

1:48:14

ability to think through how to

1:48:17

shape their organizations in a way

1:48:19

that hopefully can be good for the

1:48:21

industry and

1:48:24

make all these companies more productive over time.

1:48:26

So that was one where

1:48:28

I think he was quite

1:48:30

ahead of a

1:48:32

bunch of the other companies on. And

1:48:35

what he was doing there, and again, from the

1:48:37

outside, very hard to know. It's like, okay, did he cut

1:48:39

too much? Did he not cut enough? Whatever. I

1:48:41

don't think it's like my place to opine

1:48:44

on that. And you asked

1:48:46

for a positive framing of the question of

1:48:49

what do I admire? What

1:48:51

do I think went well? But I think that

1:48:54

certainly his actions led me, and

1:48:56

I think a lot of other folks in the

1:48:58

industry to think about, hey, are we kind of doing this as

1:49:02

much as we

1:49:03

should? Like, could we make our companies

1:49:05

better by pushing on some of these same principles? Well,

1:49:08

the two of you are in the top

1:49:10

of the world in terms of leading the development of tech, and

1:49:13

I wish there was more, both ways, camaraderie and

1:49:16

kindness, more

1:49:19

love in the world, because love is

1:49:21

the answer. But

1:49:25

let me ask on a point of efficiency.

1:49:29

You recently announced multiple stages

1:49:31

of layoffs at Meta.

1:49:33

What are the most painful aspects

1:49:36

of this process? Given, for the individuals,

1:49:38

the painful effects it has

1:49:41

on those people's lives? Yeah, I mean, that's it.

1:49:43

And that's it.

1:49:45

I mean, it's,

1:49:46

and you basically

1:49:47

have a significant number of people

1:49:50

who, this

1:49:52

is just not the end of their time at Meta

1:49:55

that they or I would have hoped for

1:49:59

when they joined the company.

1:50:02

And, I

1:50:04

mean, running a company,

1:50:06

people are

1:50:08

constantly joining and leaving

1:50:10

the company for

1:50:13

different reasons. But layoffs

1:50:16

are like uniquely challenging

1:50:18

and tough in

1:50:20

that you have a lot of people leaving

1:50:24

for reasons that aren't connected to their own

1:50:26

performance

1:50:27

or the

1:50:29

culture not being a fit

1:50:31

at that point. It's really just,

1:50:34

it's a kind of strategy decision

1:50:37

and sometimes financially required, but

1:50:42

not fully in our case. I mean, especially

1:50:44

on the changes that we made this year, a lot of it was

1:50:47

more kind of culturally and strategically

1:50:49

driven by this push where I wanted us

1:50:51

to become a stronger technology

1:50:53

company with more of a focus on

1:50:56

building more technical and

1:50:58

more of a focus on building higher quality

1:51:01

products faster. And I just view

1:51:03

the external world as quite volatile

1:51:05

right now. And I wanted to make sure that

1:51:07

we had a stable

1:51:09

position to be able to continue investing

1:51:12

in these long-term ambitious

1:51:14

projects that we have around continuing

1:51:16

to push AI forward and continuing

1:51:19

to push forward all the metaverse work. And

1:51:21

in order to do that in light of the pretty

1:51:24

big thrash that we had seen

1:51:26

over the last 18 months, some of it macroeconomic

1:51:31

induced, some of it specifically, some of it

1:51:33

competitively induced, some of it just

1:51:35

because of bad decisions or things

1:51:38

that we got wrong. I

1:51:41

just decided that we needed to get to a point where

1:51:43

we were a lot leaner. But

1:51:45

look, I mean, but then, okay, it's one thing to do

1:51:47

that, to like decide that at a high level, then

1:51:50

the question is, how do you execute that as compassionately

1:51:52

as possible? There's no good way.

1:51:56

There's no perfect way for sure. And it's

1:51:58

gonna be tough no matter what, but.

1:51:59

I,

1:52:01

you know, as a

1:52:02

leadership team here, we've certainly spent a lot

1:52:04

of time just thinking, okay, given that this is

1:52:07

a thing that sucks, like what

1:52:09

is the most compassionate way that we can do

1:52:11

this? And

1:52:13

that's what we've tried to do. And you mentioned

1:52:16

there's an increased focus

1:52:19

on engineering, on tech,

1:52:21

so technology teams, tech focused

1:52:24

teams on building products

1:52:26

that- Yeah, I mean, I wanted

1:52:29

to empower engineers

1:52:31

more.

1:52:35

The people who are building things, the technical

1:52:37

teams.

1:52:41

Part of that is making sure

1:52:43

that the people who are building things aren't just

1:52:45

at like the leaf nodes of the organization.

1:52:47

I don't want like

1:52:49

eight levels of management and then the

1:52:52

people actually doing the work. So we made changes

1:52:54

to make it so that you have individual contributor engineers

1:52:56

reporting at almost every level up the stack. Which

1:52:59

I think is important because, you know, you're running a company, one of

1:53:01

the big questions is, you

1:53:03

know, latency of information

1:53:05

that you get. You know, we talked about

1:53:07

this a bit earlier in terms of

1:53:09

kind of the joy

1:53:12

in the feedback that you get doing something

1:53:14

like jiu-jitsu compared to running

1:53:16

a long-term project. But I

1:53:18

actually think part of the art of running a company is trying

1:53:20

to constantly

1:53:22

re-engineer it so that your feedback loops get

1:53:25

shorter so you can learn faster. And part

1:53:27

of the way that you do that is by,

1:53:28

I kind of think that every layer that you have in the organization

1:53:33

means that information might need to get reviewed

1:53:36

before it goes to you. And I think,

1:53:38

you know, making it so that the people doing the work are

1:53:40

as close to you as possible is

1:53:43

pretty important. So there's that. And

1:53:46

I think over time, companies just build up

1:53:48

very large

1:53:50

support functions that are not doing the kind of core

1:53:52

technical work. And those functions are

1:53:54

very important, but I think having them in the

1:53:57

right proportion is important. And if...

1:54:00

If you try to do

1:54:02

good work, but you don't have

1:54:04

the right

1:54:05

marketing team or the right

1:54:08

legal advice, you're going to

1:54:10

make some pretty big blunders. But

1:54:13

at the same time, if you

1:54:15

just

1:54:17

have too big of things

1:54:19

in some of these support roles, then

1:54:22

that might make it so that things

1:54:24

just move a lot.

1:54:26

If you're too conservative or you move

1:54:29

a lot slower than

1:54:31

you should otherwise. I just use those just examples.

1:54:34

But it's a

1:54:35

constant equilibrium that you're searching for.

1:54:42

Yeah. How many managers to have? What are

1:54:44

the pros and cons of managers?

1:54:46

Well, I believe a lot

1:54:48

in management. I think there are some people who think that it doesn't matter

1:54:50

as much, but look, I mean, we have a lot of

1:54:52

younger people at the company for whom this is their first

1:54:54

job and people need

1:54:56

to grow and learn in their career. And I

1:54:59

think that all that stuff is important. But here's one mathematical

1:55:01

way to look at it.

1:55:03

At the beginning of this, I asked

1:55:09

our people team,

1:55:10

what was the average number of reports

1:55:13

that a manager had? And I think

1:55:15

it was around three, maybe three

1:55:17

to four, but closer to three.

1:55:21

Best practice

1:55:22

is that a

1:55:24

manager can manage seven or eight

1:55:26

people.

1:55:28

But there was a reason why it was closer to three. It

1:55:30

was because we were growing so quickly. And

1:55:33

when you're hiring so many people so quickly, then

1:55:36

that means that you need managers

1:55:38

who have capacity to onboard new people. And

1:55:41

also if you have a new manager, you may not want to have them

1:55:43

have seven direct reports immediately because you

1:55:45

want them to ramp up.

1:55:48

The thing is going forward, I don't want us

1:55:50

to actually hire that many people that

1:55:52

quickly. So I actually think we'll

1:55:54

just do better work if we have more constraints and we're

1:55:56

leaner as an organization.

1:55:59

So.

1:55:59

In a world where we're not adding so many people as quickly,

1:56:03

is it as valuable to have a lot of managers

1:56:05

who have extra capacity waiting for new people? No.

1:56:08

So now we can defragment

1:56:11

the organization and get to a place where the

1:56:13

average is closer to that seven or eight. And

1:56:16

it just ends up being a somewhat more compact

1:56:19

management structure, which decreases

1:56:22

the latency on information going

1:56:24

up and down the chain and I think

1:56:27

empowers people more. But I mean, that's an example

1:56:29

that I think doesn't kind of undervalue

1:56:31

the

1:56:32

importance of management and the

1:56:35

kind of the personal

1:56:37

growth or coaching that people need in order

1:56:39

to do their jobs well. It's just, I think, realistically,

1:56:42

we're just not going to hire as many people going forward.

1:56:45

So I think that you need a different structure.
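As a rough illustration of the span-of-control arithmetic being described here, assuming an idealized uniform tree (which a real organization is not) and a hypothetical headcount of 50,000, the number of management layers, and with it the number of review hops information has to cross, drops quickly as the average number of direct reports moves from about three toward seven or eight:

```python
import math


def management_layers(total_people: int, reports_per_manager: float) -> int:
    """Approximate layers in an idealized uniform tree where every manager
    has the same number of direct reports (hypothetical illustration only)."""
    # Each additional layer multiplies reachable headcount by the span.
    return math.ceil(math.log(total_people, reports_per_manager))


for span in (3, 7, 8):
    layers = management_layers(50_000, span)
    print(f"~{span} reports per manager -> about {layers} layers of management")

# ~3 reports per manager -> about 10 layers of management
# ~7 reports per manager -> about 6 layers of management
# ~8 reports per manager -> about 6 layers of management
```

Fewer layers is what "defragmenting the organization" buys in this framing: less distance, and therefore less latency, between the people doing the work and the people making decisions.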

1:56:46

This whole incredible

1:56:49

hierarchy and network of humans

1:56:51

that make up a company is fascinating. Oh,

1:56:53

yeah. Yeah. How

1:56:55

do you hire great teams?

1:56:58

How do you hire great

1:57:00

now with the focus on engineering and technical

1:57:02

teams? How do you hire great

1:57:04

engineers and great

1:57:07

members of technical teams?

1:57:09

Well, you're asking how you select

1:57:11

or how you attract them? Both,

1:57:14

but select, I think.

1:57:15

I think attract is work

1:57:18

on cool stuff and have a vision. I think

1:57:20

the stuff you work on, yeah. I think that's right. And have

1:57:22

a track record that people think you're actually going to be able to do it.

1:57:25

To me, the select seems like more

1:57:27

of the art form, more of the tricky

1:57:29

thing. Do you select

1:57:32

the people that fit the culture and

1:57:34

can get integrated the most effectively and

1:57:36

so on? And maybe, especially

1:57:38

when they're young, to

1:57:41

see the magic through the

1:57:43

resumes, through the

1:57:46

paperwork and all this kind of stuff, to see that there's

1:57:48

a special human there that would do

1:57:50

incredible work.

1:57:53

So there are lots of different

1:57:55

cuts on this question. I mean, I think

1:57:58

when an organization has grown quickly,

1:57:59

One of the big questions that teams

1:58:02

face is,

1:58:03

do I hire this person who's in front of me

1:58:05

now because they seem good, or

1:58:08

do

1:58:08

I hold out to get someone who's even better?

1:58:11

And

1:58:13

the heuristic that I always

1:58:15

focused on for myself

1:58:16

and my own kind of

1:58:18

direct hiring that I think works

1:58:20

when you recurse it through the organization

1:58:24

is that you should only hire someone to be on your team

1:58:26

if you would be happy working for them in an alternate

1:58:29

universe. And I think

1:58:31

that that kind of works. And that's basically

1:58:33

how I've tried to build my team.

1:58:36

I'm not in a rush to not be running the company,

1:58:39

but I think in an alternate universe where one of these

1:58:41

other folks was running the company, I'd be happy to work

1:58:43

for them. I feel like I'd learn from them. I

1:58:46

respect their kind of general judgment.

1:58:49

They're all very insightful. They have good values.

1:58:53

And I think that that gives you some rubric

1:58:56

for,

1:58:57

you can apply that at every layer. And I think if you apply

1:59:00

that at every layer in the organization, then

1:59:02

you'll have a pretty strong organization.

1:59:06

Okay, in an organization that's not growing as quickly,

1:59:09

the questions might be a little different though.

1:59:12

And there, you

1:59:14

asked about young people specifically, like

1:59:16

people out of college.

1:59:17

And one of the things that we see

1:59:20

is it's

1:59:21

a pretty basic lesson, but like

1:59:23

we have a much better sense of who

1:59:25

the best people are who have interned at the

1:59:27

company for a couple of months, than by

1:59:30

looking at kind of a resume

1:59:32

or a short interview loop. I

1:59:35

mean, obviously the in-person feel that you get

1:59:37

from someone probably tells you more than the resume, and

1:59:40

you can do some basic skills assessment.

1:59:44

But

1:59:44

a lot of the stuff really just is cultural. People

1:59:47

thrive in different environments

1:59:50

and

1:59:51

on different teams, even within a specific

1:59:54

company. And it's like

1:59:56

the people who come for even a

1:59:58

short period of time over a semester or

1:59:59

summer who do a great job here,

2:00:02

you know that they're going to be great if they

2:00:04

came and joined full time. And that's one

2:00:06

of the reasons why we've invested so much in internships

2:00:09

is

2:00:10

basically it's a very useful

2:00:13

sorting function both for us and for the people

2:00:15

who want to try out the company. You mentioned

2:00:17

in person, what do you think about remote work,

2:00:20

a topic that's been discussed extensively over

2:00:23

the past few years because of the pandemic?

2:00:26

Yeah, I mean, I think

2:00:28

it's a thing that's here to stay. But

2:00:32

I think that there's value in both,

2:00:35

right? It's not, you know,

2:00:37

I wouldn't want to run a fully remote company

2:00:39

yet,

2:00:40

at least. I think there's an asterisk on that,

2:00:43

which is that... Some of the other

2:00:45

stuff you're working on, yeah. Yeah, exactly. It's like

2:00:47

all the, you know, metaverse

2:00:49

work and the ability to feel like

2:00:52

you're truly present, no

2:00:55

matter where you are. I think once you have

2:00:57

that all dialed in, then we may,

2:00:59

you know, one day reach a point where it really

2:01:01

just doesn't matter as much where you are physically.

2:01:05

But I don't

2:01:09

know, today it still does, right?

2:01:10

So yeah,

2:01:13

for people who, there are all these people

2:01:15

who have special skills and want

2:01:17

to live in a place where we don't have an office, are

2:01:20

we better off having them at the company? And

2:01:23

there are a lot of people who work at the company for

2:01:26

several years and then build

2:01:28

up the relationships internally and kind

2:01:32

of have the trust and have a sense of how the company

2:01:34

works, can they go work remotely now if

2:01:36

they want and still do it as effectively? And we've

2:01:39

done all these studies that show it's like, okay, does that affect

2:01:41

their performance? It does not.

2:01:44

But, you know, for the new folks who are

2:01:46

joining, and

2:01:48

for people who are earlier in their career and need

2:01:51

to learn how to solve certain problems and need to get

2:01:53

ramped up on the culture,

2:01:56

when you're working through

2:01:58

really complicated problems,

2:01:59

where you don't just want to sit in the, you don't

2:02:02

just want the formal meeting, but you want to be able to like

2:02:04

brainstorm when you're walking in the hallway together

2:02:06

after the meeting.

2:02:09

I don't know, it's like we just haven't replaced the kind

2:02:11

of in-person dynamics there yet

2:02:16

with anything

2:02:18

remote yet, so. Yeah, there's a magic

2:02:20

to the in-person that, we'll talk

2:02:22

about this a little bit more, but I'm really excited

2:02:25

by the possibilities in the next two years in virtual

2:02:27

reality and mixed reality that

2:02:29

are possible with high resolution

2:02:31

scans. I mean, I

2:02:34

as a person who loves in-person interaction,

2:02:37

like these podcasts in person, it

2:02:40

would be incredible to achieve the

2:02:42

level of realism I've gotten the chance to witness.

2:02:45

But let me

2:02:46

ask about that. I

2:02:49

got a chance to look at

2:02:51

the Quest 3 headset and

2:02:54

it is amazing. You've

2:02:58

announced it, it's, you'll

2:03:01

get some more details in the fall, maybe

2:03:03

releasing the, when is it getting released again? I forgot,

2:03:05

you mentioned it. We'll give more details at Connect,

2:03:08

but it's coming this fall. Okay.

2:03:10

So

2:03:12

it's priced at $499.

2:03:17

What features are you most excited about there?

2:03:19

There are basically two big new things that we've added

2:03:22

to Quest 3 over Quest 2. The

2:03:24

first is high resolution mixed reality.

2:03:28

And

2:03:29

the basic

2:03:31

idea here is that

2:03:33

you can think about virtual reality as you

2:03:35

have the headset and all

2:03:37

the pixels are virtual and you're basically

2:03:40

like immersed in a different world.

2:03:42

Mixed reality is where you

2:03:44

see the physical world around you and you can place virtual

2:03:46

objects in it, whether that's a screen to watch a

2:03:49

movie

2:03:50

or a projection of your virtual desktop, or

2:03:53

you're playing a game where like zombies

2:03:55

are coming out through the wall and you need to shoot them. Or,

2:03:58

you know, we're playing Dungeons and Dragons.

2:03:59

or some board game and we just have a virtual

2:04:02

version of the board in front of us while we're sitting here.

2:04:05

All that's possible in mixed reality.

2:04:08

And I think that that is going to be the next

2:04:10

big capability on top of virtual reality.

2:04:12

It has done so

2:04:14

well.

2:04:15

I have to say as a person who experienced

2:04:17

it today with zombies having

2:04:20

a full awareness of the

2:04:23

environment and integrating that environment

2:04:25

in the way they run at you while they try to kill

2:04:27

you. It's just the

2:04:30

mixed reality, the pass through is really, really,

2:04:32

really well done. And the fact that

2:04:34

it's only $500 is really, well done. Thank

2:04:38

you. I'm super excited about

2:04:40

it. I mean,

2:04:42

we put a lot of work into

2:04:44

making the

2:04:45

device

2:04:47

both as good as possible

2:04:49

and as affordable as possible because a big part

2:04:51

of our mission and ethos here is

2:04:54

we want

2:04:55

people to be able to connect with each other. We want to reach

2:04:57

and we want to serve a lot of people. We want to bring this technology

2:05:00

to everyone. So we're not just trying

2:05:03

to serve

2:05:04

like an elite,

2:05:05

a wealthy crowd.

2:05:10

We really want this to be accessible. So that

2:05:12

is in a lot of ways an extremely

2:05:14

hard technical problem because we

2:05:16

don't just have the

2:05:18

ability to put an unlimited amount

2:05:20

of hardware and thus we needed to basically deliver something

2:05:23

that works really well, but in

2:05:25

an affordable package. And we started with Quest Pro

2:05:27

last year. It was $1,500.

2:05:33

And now we've lowered the price to a thousand,

2:05:35

but in a lot of ways, the mixed reality

2:05:37

in Quest 3 is an

2:05:40

even better and more advanced level than what we were able

2:05:42

to deliver in Quest Pro. So I'm really

2:05:44

proud of where we are with Quest 3 on that.

2:05:48

It's gonna work with all of the virtual reality titles

2:05:51

and everything that existed there.

2:05:53

So people who want to play fully immersive games,

2:05:55

social experiences, fitness, all

2:05:58

that stuff will work. But now you'll...

2:05:59

also get mixed reality too,

2:06:03

which I think people really like because it's,

2:06:06

sometimes you wanna be super immersed in a game, but

2:06:09

a lot of the time, especially when you're moving around,

2:06:12

if you're active, like you're doing some fitness

2:06:14

experience, let's

2:06:16

say you're like doing boxing or something,

2:06:18

it's like, you kinda wanna be able to see the room around

2:06:21

you so that way you know that like, I'm not gonna punch a

2:06:23

lamp or something like that. And

2:06:25

I don't know if you got to play with this experience, but I mean, we basically

2:06:27

have the, I mean, it's just sort of like a fun little

2:06:30

demo that we put together, but

2:06:32

it's like you just,

2:06:34

we're like in a conference room or your

2:06:36

a living room and you have the

2:06:38

guy there and you're boxing him and you're fighting

2:06:40

him and it's like. All the other people are there

2:06:43

too, I got a chance to do that. And all the people are

2:06:45

there,

2:06:47

it's like that guy's right there.

2:06:49

Yeah, it's a good throw in the room. And the other human,

2:06:51

the pass-through, you're seeing them also, they

2:06:53

can cheer you on, they can make fun of you if you're

2:06:56

anything like friends of mine. And then just,

2:06:59

yeah, it's really,

2:07:01

it's

2:07:03

a really compelling experience. I

2:07:05

mean, VR is really interesting too, but this

2:07:07

is something else almost. This becomes

2:07:10

integrated into your life, into your world.

2:07:13

Yeah, and it,

2:07:15

so I think it's a completely new capability

2:07:17

that will unlock a lot of different content. And

2:07:19

I think

2:07:20

it'll also just make the experience more comfortable

2:07:22

for a set of people who didn't wanna

2:07:24

have only fully immersive experiences.

2:07:27

I think if you want experiences where you're grounded in, your

2:07:29

living room and the physical world around you, now

2:07:32

you'll be able to have that too.

2:07:34

And I think that that's pretty exciting. I really liked how

2:07:37

it added windows to a room

2:07:39

with no windows. Yeah. Me

2:07:41

as a person. Did you see the aquarium one where you could see

2:07:43

the sharks swim up or is that just the zombie one?

2:07:45

Just the zombie one, but you could still see outside. You

2:07:48

don't necessarily want windows added to your living room

2:07:50

where zombies come out of, but yes, in the context

2:07:52

of that game, it's yeah, yeah. I enjoyed

2:07:54

it because you could see the nature outside.

2:07:57

And me as a person that doesn't have windows.

2:08:00

It's just nice to have nature. Even

2:08:04

if it's a mixed reality setting.

2:08:06

I know

2:08:08

it's a zombie game, but there's a Zen nature,

2:08:12

Zen aspect to being able to look outside and

2:08:14

alter your environment as you know it.

2:08:20

There will probably be better, more Zen ways to do that

2:08:22

than the zombie game you're describing, but you're right

2:08:24

that the basic idea of

2:08:26

having

2:08:28

your physical environment on pass-through,

2:08:30

but then being able to bring in

2:08:33

different elements.

2:08:35

I think it's gonna be super powerful.

2:08:38

And

2:08:38

in some ways, I think that these are,

2:08:41

mixed reality is also a predecessor to, eventually

2:08:43

we will get AR glasses that are not the

2:08:46

goggles form factor of the current generation

2:08:49

of headsets that people

2:08:51

are making.

2:08:52

But I think a lot of the experiences that developers are

2:08:55

making for mixed reality of basically you just

2:08:57

have a kind of a hologram that you're putting

2:08:59

in the world,

2:09:00

will hopefully apply once we get the AR

2:09:03

glasses too. Now that's got its own whole set of

2:09:05

challenges and it's...

2:09:07

Well, the headsets are already smaller than the

2:09:09

previous version. Oh yeah, it's 40% thinner.

2:09:12

And the other thing that I think is good about it, yeah, so mixed

2:09:14

reality was the first big thing.

2:09:16

The second is it's just

2:09:19

a great VR headset. I mean, it's got

2:09:21

2X the graphics processing power, 40% sharper

2:09:25

screens, 40% thinner, more comfortable,

2:09:27

better strap

2:09:29

architecture, all this stuff that, if you liked

2:09:32

Quest 2, I think that this is just gonna be, it's

2:09:34

like all the content that you might've played in Quest 2 is just

2:09:36

gonna get sharper automatically and look better

2:09:39

in this. So it's, I think

2:09:41

people are really gonna like it. Yeah, so this fall. This

2:09:44

fall, I have to ask, Apple

2:09:47

just announced a mixed reality

2:09:49

headset called Vision Pro for $3,500 available

2:09:52

in early 2024. What

2:09:56

do you think about this headset?

2:09:59

Well, I saw

2:09:59

the materials when they launched,

2:10:02

I haven't gotten a chance to play with it yet. So

2:10:04

kind of take everything with a grain of salt. But

2:10:07

a few high

2:10:08

level thoughts. I mean, first,

2:10:12

you know, I do think that this is a

2:10:15

certain level of validation for

2:10:18

the

2:10:18

category, right? Where, you know, when

2:10:21

we were the

2:10:22

primary folks out there before saying, hey,

2:10:25

I think that this,

2:10:26

you know, virtual reality, augmented reality,

2:10:28

mixed reality, this is going to be a big

2:10:30

part of the next computing platform. I

2:10:33

think having Apple come

2:10:35

in

2:10:36

and share that vision

2:10:41

will make a lot of people who are fans of their

2:10:43

products

2:10:44

really consider that.

2:10:47

And then, you know,

2:10:49

of course, the $3,500 price, you know, on the one hand, I get

2:10:51

it,

2:10:53

with all the stuff

2:10:56

that they're trying to pack in there. On the other hand, a lot

2:10:58

of people aren't going to find that to be

2:11:00

affordable.

2:11:01

So I think that there's a chance that them

2:11:03

coming in actually increases demand

2:11:06

for the overall space and that Quest 3

2:11:09

is actually the primary beneficiary of that,

2:11:11

because a lot of the people who might say,

2:11:14

hey, you know, this, like, I'm

2:11:16

going to give another consideration to this

2:11:18

or, you know, now I understand maybe

2:11:20

what mixed reality is more. And in Quest 3

2:11:23

is the best one on the market that I

2:11:25

can afford. And it's great

2:11:27

also, right? And,

2:11:30

you know, in our own way, I think

2:11:31

there are a lot of features that we have where

2:11:33

we're leading.

2:11:35

So I think that

2:11:37

that could be

2:11:39

quite good. And

2:11:41

then obviously, over time,

2:11:43

the companies are just focused on

2:11:45

somewhat different things, right? Apple has always,

2:11:48

you

2:11:49

know, I think focused on building

2:11:52

really kind of high end things.

2:11:55

Whereas our

2:11:56

focus has been on,

2:11:58

it's just that we have a more democratic

2:12:01

ethos. We want to build things that are accessible

2:12:04

to a wider number of people.

2:12:06

We've sold tens of millions

2:12:09

of Quest devices.

2:12:12

My understanding,

2:12:14

just based on rumors, I don't have any special knowledge on this,

2:12:16

is that Apple is building about 1 million

2:12:18

of their device. So

2:12:21

just in terms of what you kind of expect

2:12:23

in terms of sales numbers, I

2:12:26

just think that

2:12:29

Quest is going to be the primary

2:12:32

thing that people in the

2:12:34

market will continue using for the foreseeable future.

2:12:36

And then obviously over the long term, it's up to the companies

2:12:38

to see how well we each executed the different things

2:12:40

that we're doing. But we kind of come at it from different

2:12:42

places. We're very focused on social

2:12:45

interaction, communication,

2:12:49

being more active. There's fitness, there's

2:12:52

gaming, there are those things. Whereas

2:12:54

I think a lot of the use cases

2:12:57

that you saw in Apple's

2:13:00

launch material were more around people

2:13:02

sitting, people looking

2:13:04

at screens, which are

2:13:06

great. I think that you will replace your laptop

2:13:09

over time with a headset. But I

2:13:12

think in terms of how the

2:13:14

different use cases that the companies are going after,

2:13:16

they're

2:13:17

a bit different for

2:13:19

where we are right now. Yeah, so gaming

2:13:22

wasn't a big part of the presentation, which is interesting.

2:13:25

It

2:13:26

feels like mixed reality

2:13:29

gaming is such a big part of that. It was

2:13:32

interesting to see it missing in the presentation.

2:13:34

Well, I mean, look, there are certain design trade-offs

2:13:37

in this

2:13:38

where they

2:13:41

made this point about not wanting to have controllers,

2:13:43

which on the one hand,

2:13:45

there's a certain elegance about just being able to navigate

2:13:48

the system with eye

2:13:49

gaze and hand tracking. And by

2:13:52

the way, you'll be able to just navigate Quest

2:13:54

with your hands too, if that's what you want. Yeah,

2:13:56

one of the things I should mention is that

2:13:59

the

2:14:00

capability from the cameras, with

2:14:03

computer vision, to detect certain aspects of the

2:14:05

hand, allows you to have a controller that doesn't

2:14:07

have that ring thing. Yeah, the hand

2:14:10

tracking in Quest 3 and the controller

2:14:12

tracking is a big step up from

2:14:14

the last generation.

2:14:17

And one of the demos that we have is basically

2:14:20

an MR experience teaching you how to play piano

2:14:22

where it basically highlights the notes that you need to play

2:14:24

and it's like, just all, it's hands, it's no controllers.

2:14:27

But

2:14:28

I think if you care about gaming, having

2:14:31

a controller

2:14:33

allows you to have a more tactile feel and

2:14:36

allows you to capture fine

2:14:39

motor movement much more precisely

2:14:41

than what

2:14:43

you can do with hands without something that you're touching.

2:14:46

So again, I think there are certain

2:14:48

questions which are just around

2:14:50

what use cases are you optimizing for.

2:14:54

I think if you wanna play games, then

2:14:56

I think that you wanna

2:14:58

design the system in a different way and we're

2:15:01

more focused on kind of social

2:15:03

experiences, entertainment experiences.

2:15:07

Whereas if what you want is to make

2:15:09

sure that the

2:15:11

text that you read on a screen is

2:15:13

as crisp as possible, then you

2:15:15

need to make the design

2:15:16

and cost trade-offs that

2:15:18

they made that lead you to making

2:15:20

a $3,500 device. So

2:15:22

I think that there is a use case for that for sure,

2:15:25

but I just think that the

2:15:27

company is, we've basically made different

2:15:29

design trade-offs to get

2:15:31

to the use cases that we're trying to serve.

2:15:34

There's a lot of other stuff I'd

2:15:37

love to talk to you about the Metaverse,

2:15:40

especially the Codec Avatar, which

2:15:42

I've gotten to experience a lot of different variations

2:15:44

of recently that I'm really, really

2:15:46

excited about. Yeah, I'm excited to talk about that too. I'll

2:15:49

have to wait a little bit because,

2:15:53

well, I

2:15:55

think there's a lot more to show off in that regard, but

2:15:58

let me step back to AI.

2:16:00

I think we've mentioned it a little bit, but

2:16:04

I'd like to linger on this question that

2:16:07

folks like Eliezer Yudkowsky has to worry

2:16:09

about and others of

2:16:12

the existential, the serious threats

2:16:14

of AI that have been reinvigorated

2:16:17

now with the rapid developments of AI systems.

2:16:20

Do you worry about

2:16:22

the existential risks of AI

2:16:25

as Eliezer does about the

2:16:27

alignment problem, about this getting out of hand?

2:16:30

Any time where there's a number of serious people

2:16:33

who are raising a

2:16:35

concern that is that existential about

2:16:38

something that you're involved with, I think you have to think

2:16:40

about it, right? So I've

2:16:42

spent quite a bit of time thinking about it from that perspective.

2:16:49

The thing that I, where I basically

2:16:52

have come out on this for now is, I do think that

2:16:54

there are,

2:16:55

over time I think that we need to think about this even

2:16:58

more as we approach something that

2:17:01

could be closer to super intelligence. I just think it's pretty

2:17:04

clear to anyone working on these projects today

2:17:06

that we're not there.

2:17:09

And one of my concerns

2:17:11

is that, we spent a fair amount

2:17:13

of time on this before, but there

2:17:16

are more,

2:17:19

I don't know if mundane is the right

2:17:21

word, but there's like

2:17:23

concerns that already exist, right? About

2:17:25

like people using AI tools

2:17:28

to do harmful things of the type

2:17:30

that we're already aware, whether we talked about fraud

2:17:32

or scams or different things

2:17:34

like that.

2:17:37

And that's going to be a pretty big

2:17:40

set of challenges that the companies working

2:17:42

on this are gonna need to grapple with,

2:17:46

regardless of whether there is an existential concern

2:17:48

as well at some point down the road. So

2:17:51

I do worry that to some degree,

2:17:53

people

2:17:55

can get a

2:17:57

little too focused on,

2:17:59

on some of the tail risk

2:18:01

and then not do as good of a job as we need

2:18:04

to on the things that you

2:18:06

can be almost certain are going

2:18:08

to come down the pipe as

2:18:11

real risks that manifest themselves in the

2:18:14

near term. So for me, I've spent most

2:18:16

of my time

2:18:17

on that once I made the

2:18:21

realization that the size

2:18:23

of models that we're talking about now in terms of what

2:18:25

we're building are quite far

2:18:28

from the super intelligence type concerns

2:18:30

that people raise.

2:18:32

But I think once we get a couple steps closer

2:18:34

to that, I know as we

2:18:36

do get closer, I think that

2:18:39

there are going to be some novel risks

2:18:42

and issues about

2:18:43

how we make sure that the systems are safe for sure.

2:18:47

I guess here just to take the conversation in a somewhat

2:18:49

different direction,

2:18:51

I think

2:18:52

in some of these debates around safety,

2:18:54

I

2:18:55

think the concepts of intelligence and

2:18:58

autonomy

2:19:02

or like the being of the

2:19:04

thing, as an

2:19:06

analogy, they get kind of conflated

2:19:09

together.

2:19:11

I think it very well could be the case

2:19:13

that you can make something and scale intelligence

2:19:16

quite far, but

2:19:21

that may not manifest

2:19:24

the safety concerns that people are saying in

2:19:26

the sense that, I mean, just if you look at human

2:19:28

biology, it's like, all right, we have our neocortex

2:19:30

is where all the thinking happens, but

2:19:34

it's not really calling the shots at the end of the day.

2:19:36

We have a much more primitive

2:19:39

old brain structure for

2:19:41

which our neocortex, which is this powerful

2:19:43

machinery, is basically just a kind

2:19:46

of prediction and reasoning engine to

2:19:48

help

2:19:49

our very simple

2:19:51

brain

2:19:54

decide how

2:19:56

to plan and do what it needs to do

2:19:58

in order to achieve these

2:20:00

very kind of basic impulses. And

2:20:02

I

2:20:04

think that you can think about some of the

2:20:07

development of intelligence

2:20:09

along the same lines, where

2:20:11

just like our neocortex doesn't have free

2:20:13

will or autonomy,

2:20:16

we might develop these wildly intelligent

2:20:18

systems that are

2:20:19

much

2:20:20

more intelligent than our neocortex have much more

2:20:22

capacity, but are in

2:20:24

the same way that our neocortex is sort of subservient

2:20:27

and is used as a tool by our

2:20:29

kind of simple, impulse brain.

2:20:32

It's, you know, I think that it's not

2:20:35

out of the question that very intelligent systems

2:20:37

that have the capacity to think will

2:20:39

kind of act in that way, as sort of an extension

2:20:42

of the neocortex doing that. So I think my

2:20:45

own view is that

2:20:46

where we really need to be careful is

2:20:49

on the development of autonomy

2:20:51

and how we

2:20:53

think about that, because

2:20:56

it's actually the case that

2:20:58

relatively simple and unintelligent things

2:21:00

that have runaway autonomy and just

2:21:03

spread themselves, or, you know, it's like,

2:21:05

we have a word for that. It's a virus, right? I

2:21:07

mean, like it can be simple computer code

2:21:09

that is not particularly intelligent, but just spreads

2:21:11

itself and does a lot of harm,

2:21:15

biologically or computer. And

2:21:20

I just think that these are somewhat separable things. And

2:21:24

a lot of what I think we need to develop when people

2:21:26

talk about safety and responsibility

2:21:28

is really the governance on the

2:21:30

autonomy that can be given to

2:21:33

systems. And

2:21:35

to me, if, you know, if I were,

2:21:36

you know, a policymaker is, or thinking about

2:21:38

this,

2:21:39

I would really want to think about that distinction between

2:21:42

these, where I think building intelligent systems

2:21:44

will be,

2:21:45

can create a huge advance in terms of

2:21:47

people's quality of life and

2:21:50

productivity growth in the economy.

2:21:52

But it's the autonomy part of this

2:21:55

that I think we really need to make

2:21:57

progress on how to govern these things responsibly.

2:22:00

before we build

2:22:03

the capacity for them to make

2:22:05

a lot of decisions on their own or give

2:22:07

them goals or things

2:22:10

like that. And I think that that's a research problem,

2:22:12

but I do think that to some degree, these

2:22:14

are somewhat separable things. I

2:22:17

love the distinction between intelligence and autonomy

2:22:20

and the metaphor with the neocortex.

2:22:23

Let me ask about power. So

2:22:28

building super intelligence systems, even

2:22:30

if it's not in the near term, I think

2:22:32

Meta is one

2:22:34

of the few companies, if not the main

2:22:37

company that will develop

2:22:39

the super intelligence system. And

2:22:42

you are a man who's at the head of this company.

2:22:44

Building AGI might make you the most

2:22:47

powerful man in the world. Do you worry that that

2:22:49

power will corrupt you?

2:22:53

What a question.

2:22:57

I mean, look, I think realistically, this gets

2:22:59

back to the open source things that we talked about before,

2:23:02

which is I don't think

2:23:04

that the world will be best served

2:23:07

by

2:23:09

any small number of organizations

2:23:13

having this

2:23:15

without it being something that is more broadly

2:23:18

available. And I think if you look through history,

2:23:21

it's

2:23:23

when there are these sort of like unipolar

2:23:26

advances and things that, and

2:23:28

like power imbalances that they're doing to

2:23:30

being kind of

2:23:32

weird situations. So this

2:23:35

is one of the reasons why I think open sources is

2:23:37

generally the right

2:23:40

approach. And I

2:23:43

think it's a categorically different question today when

2:23:45

we're not close to super intelligence. I

2:23:47

think that there's a good chance that even once we get closer to

2:23:49

super intelligence, open sourcing remains

2:23:51

the right approach, even though I think at that point, it's a somewhat

2:23:53

different debate.

2:23:56

But I think part of that is that that

2:23:58

is, I think one of the... best ways

2:24:00

to ensure that

2:24:01

the system is as secure and safe as

2:24:04

possible. Because it's not just about a lot of people

2:24:06

having access to it, it's the scrutiny

2:24:08

that kind of comes with

2:24:10

building an open source system. I

2:24:12

think that this is a pretty widely accepted thing

2:24:14

about open source, that you

2:24:17

have the code out there so anyone can

2:24:19

see the vulnerabilities, anyone

2:24:21

can kind of mess with it in different ways, people

2:24:24

can spin off their own projects and experiment

2:24:26

in a ton of different ways, and the net result

2:24:28

of all of that is

2:24:30

that the systems just get hardened and

2:24:32

get to be a lot safer and more secure. So

2:24:36

I think that there's a chance

2:24:39

that that ends up being

2:24:41

the way that this goes to, a

2:24:44

pretty good chance, and that

2:24:47

having this be open

2:24:49

both leads to a healthier development

2:24:51

of the technology

2:24:53

and also leads to a more balanced

2:24:57

distribution of the technology in

2:24:59

a way that strikes me as good

2:25:01

values to aspire to. So

2:25:03

to you the risks, there's risks to open

2:25:05

sourcing, but the benefits outweigh the risks. At

2:25:08

the two, it's interesting, I think

2:25:11

the way you put it,

2:25:13

you put it well that there's a

2:25:15

different discussion now than when we

2:25:17

get closer to the

2:25:20

development of super intelligence of the

2:25:22

benefits and risks of open

2:25:24

sourcing. Yeah, and to be clear, I feel

2:25:27

quite confident in the assessment that

2:25:29

open sourcing models now is

2:25:31

net

2:25:32

positive. I think there's

2:25:34

a good argument that in the future it will

2:25:36

be too even as you get closer to super intelligence

2:25:39

but I

2:25:40

certainly have not decided on that

2:25:42

yet and I think that it becomes a somewhat more complex

2:25:44

set of questions that I think people

2:25:46

will have time to debate and will also be

2:25:48

informed by what happens between now and then to make

2:25:51

those decisions. We don't have to necessarily

2:25:53

just debate that in theory right now. What

2:25:55

year do you think we'll have a super intelligence?

2:25:59

I don't know. I mean, that's pure speculation.

2:26:02

I think it's very

2:26:04

clear, just taking a step back, that we had a big breakthrough in

2:26:06

the last year, right? Where the

2:26:08

LLMs and diffusion models basically reached

2:26:11

a scale where they're able to do some

2:26:14

pretty interesting things. And then I think the question

2:26:16

is what happens from here. And just

2:26:18

to paint the two extremes,

2:26:24

on one side, it's like, okay, we just had

2:26:26

one breakthrough. If we just have another

2:26:28

breakthrough like that, or maybe two, then

2:26:31

we could have something that's truly crazy, right?

2:26:33

And it is like, is

2:26:36

just like so much more advanced. And on

2:26:39

that side of the argument, it's

2:26:41

like, okay, well, maybe we're,

2:26:45

maybe we're only a couple of big steps away

2:26:48

from reaching

2:26:51

something that looks more like general intelligence. Okay,

2:26:54

that's one side of the argument. And

2:26:56

the other side, which is what we've historically seen a lot

2:26:58

more, is

2:26:59

that a breakthrough leads to, in that

2:27:04

Gartner

2:27:06

hype cycle, there's like the hype.

2:27:08

And then there's the trough of disillusionment after

2:27:11

when people think that there's a chance that, hey,

2:27:13

okay, there's a big breakthrough, maybe we're about to

2:27:15

get another big breakthrough. And it's like, actually, you're not

2:27:17

about to get another breakthrough. You're maybe you're

2:27:20

actually just gonna have to sit with this one for a while. And

2:27:22

it could be

2:27:24

five years, it could be 10 years, it

2:27:27

could be 15 years until you figure

2:27:30

out

2:27:31

the

2:27:33

next big thing that needs to get figured out.

2:27:37

But I think that the fact that we just had this breakthrough

2:27:41

sort of makes it so that we're at a

2:27:42

point of almost a very wide

2:27:44

error bars on what happens next. I

2:27:48

think the traditional technical view, like

2:27:51

looking at the industry,

2:27:53

would suggest that we're not just going to stack

2:27:56

in a breakthrough on top of breakthrough, on

2:27:58

top of breakthrough, like every...

2:27:59

six months or something right

2:28:02

now. I think it will, I'm

2:28:03

guessing, I would guess that it will

2:28:06

take somewhat longer in between these, but I

2:28:08

don't

2:28:10

know. I tend to be pretty optimistic

2:28:12

about breakthroughs too. So I mean, so I think if you

2:28:15

normalize for my normal optimism, then

2:28:17

maybe it would be even slower

2:28:20

than what I'm saying. But even within that, like I'm

2:28:22

not even opining

2:28:23

on the question of how many breakthroughs are

2:28:25

required to get to general intelligence because no one knows.

2:28:28

But this particular breakthrough

2:28:30

was so,

2:28:31

such a small step

2:28:33

that resulted in such a big leap

2:28:36

in performance

2:28:38

as experienced by human beings that

2:28:40

it makes you think, wow, as

2:28:43

we stumble across this very open world

2:28:46

of research, will

2:28:48

we stumble

2:28:49

across another

2:28:51

thing that will have a giant leap in performance?

2:28:56

And

2:28:57

also we don't know exactly at which stage

2:29:00

is it really going to be impressive because

2:29:02

it feels like it's really encroaching on

2:29:05

impressive levels of intelligence.

2:29:08

You still didn't answer the question of what

2:29:10

year we're going to have super intelligence. I'd like

2:29:12

to hold you to that. No, I'm just kidding.

2:29:14

But is there something you could say

2:29:17

about the timeline

2:29:19

as you think about the development of

2:29:22

AGI super intelligence systems? Sure.

2:29:26

So I still

2:29:27

don't think I have any particular insight on

2:29:29

when like a singular AI system

2:29:32

that is a general intelligence will get created.

2:29:34

But I think the one thing that most people

2:29:37

in the discourse that I've seen about this haven't really grappled

2:29:40

with is that we do seem to have

2:29:42

organizations

2:29:46

and structures in the world that exhibit greater

2:29:48

than human intelligence already. So one

2:29:51

example

2:29:52

is a company.

2:29:53

It acts as an entity,

2:29:55

it has a singular brand. Obviously

2:29:59

it's a collection of people, but

2:30:01

I certainly hope that, you know, Meta

2:30:03

with tens of thousands of people make

2:30:06

smarter decisions than one person. But

2:30:08

I think that that would be pretty bad if it didn't.

2:30:12

Another example that

2:30:13

I think is even more removed from

2:30:15

kind

2:30:16

of the way we think about like the personification

2:30:19

of intelligence,

2:30:21

which is often implied in some of these questions, is

2:30:23

think about something like the stock market.

2:30:25

The stock market is, you know, it

2:30:28

takes inputs, it's a distributed system. It's like the

2:30:30

cybernetic organism that,

2:30:32

you know,

2:30:33

probably millions of people around the world are

2:30:36

basically voting every day by

2:30:39

choosing what to invest in. But

2:30:40

it's basically this, this

2:30:44

organism or structure that

2:30:46

is smarter than any individual

2:30:49

that we use to

2:30:50

allocate capital

2:30:52

as efficiently as possible around the world. And

2:30:55

I

2:30:55

do think that

2:31:00

this notion that there are already these cybernetic

2:31:03

systems that are either

2:31:06

melding

2:31:08

the intelligence of multiple people together or melding

2:31:11

the intelligence of multiple people and technology

2:31:13

together to

2:31:15

form something which is

2:31:18

dramatically more intelligent than any individual in

2:31:21

the world is

2:31:25

something that seems to exist and

2:31:28

that we seem to be able to harness in a

2:31:30

productive way for our society as

2:31:33

long as we basically build these structures in

2:31:35

balance with each other. So

2:31:37

I don't know. I mean, that

2:31:39

at least gives me hope that

2:31:42

as we advance the technology, and I don't know how long exactly

2:31:44

it's going to be, but you asked when is this going

2:31:46

to exist. I think to some degree, we already have

2:31:49

many organizations in the world that are smarter

2:31:51

than a single human. And that seems

2:31:54

to be something that is generally productive in advancing

2:31:56

humanity. And somehow the individual AI

2:31:58

systems empower

2:31:59

the individual humans and the interaction

2:32:02

between those humans to make that collective

2:32:04

intelligence machinery that you're referring

2:32:07

to smarter. So it's not like AI

2:32:09

is becoming super intelligent. It's just becoming

2:32:12

the engine that's making the collective intelligence

2:32:15

which is primarily human, more intelligent.

2:32:18

Yeah, it's educating the humans better. It's

2:32:21

making them better informed. It's

2:32:24

making it more efficient for them to communicate effectively

2:32:27

and debate ideas. And through

2:32:29

that process, just making the whole collective

2:32:31

intelligence more and more and more intelligent, maybe

2:32:35

faster than the individual AI systems

2:32:37

that are trained on human data anyway are

2:32:39

becoming.

2:32:40

Maybe the collective intelligence of the human species

2:32:43

might outpace the development of AI.

2:32:45

Just like- I think there's a balance in

2:32:48

here, because I mean, if like,

2:32:49

you know, if a lot of the input that the

2:32:52

systems are being trained on

2:32:55

is basically coming from feedback from people,

2:32:57

then a

2:32:58

lot of the development does need to happen in human

2:33:01

time, right? It's not like a

2:33:03

machine will

2:33:04

just be able to go learn all this stuff about

2:33:07

how people think about stuff. There's a cycle

2:33:09

to how this needs to work. This is an

2:33:11

exciting world we're

2:33:14

living in, one that you're at the forefront

2:33:16

of developing. One of the ways

2:33:18

you keep yourself humble, like we mentioned with

2:33:20

Jiu-Jitsu, is doing some

2:33:23

really difficult challenges, mental and physical.

2:33:26

One of those you've done

2:33:28

very recently is the Murph Challenge,

2:33:31

and you got a really good time. It's a hundred pull-ups, 200

2:33:34

push-ups, 300 squats, and

2:33:36

a mile run before and a mile run after.

2:33:39

You got under 40

2:33:41

minutes on that. What

2:33:43

was the hardest part? I think

2:33:45

a lot of people were very impressed. It's a very

2:33:48

impressive time. Yeah,

2:33:50

I was pretty happy- How crazy are you? That's the

2:33:52

question I'm asking. It

2:33:54

wasn't my best time, but anything under 40

2:33:56

minutes I'm happy with. It wasn't your

2:33:58

best time.

2:33:59

No, I think I've done it a little

2:34:02

faster before, but not much. I mean,

2:34:05

among my friends, I did not win

2:34:07

on Memorial Day. One of my friends did it actually

2:34:10

several minutes faster than me. But

2:34:12

just to clear up one thing, because I

2:34:15

saw a bunch of questions about this on the internet. There are

2:34:17

multiple ways to do the Murph Challenge.

2:34:19

There's a kind of partitioned mode

2:34:22

where you do

2:34:23

sets of pull-ups, push-ups, and

2:34:26

squats

2:34:27

together. And then there's unpartitioned where

2:34:29

you do the hundred pull-ups

2:34:31

and then the 200 push-ups and then the 300

2:34:33

squats

2:34:35

in serial. And obviously if you're

2:34:37

doing them

2:34:39

unpartitioned, then it

2:34:41

takes longer to get through the hundred pull-ups

2:34:43

because anytime that you're resting in between

2:34:45

the pull-ups, you're not also doing push-ups and squats.

2:34:48

So yeah, so I'm sure my unpartitioned

2:34:50

time would be quite a bit slower, but

2:34:53

no, I think at the end of this,

2:34:58

I don't know, first of all, I think it's a good way to honor Memorial

2:35:00

Day, right? It's named for Lieutenant Murphy; basically, this

2:35:08

was one of his favorite exercises

2:35:11

and I just try to do it on Memorial Day each

2:35:13

year and it's a good workout.

2:35:17

I got my older daughters to do it with me this time.

2:35:21

My oldest daughter wants a weight vest because

2:35:23

she sees me doing it with a weight vest. I

2:35:26

don't know if a seven-year-old should be using a weight

2:35:28

vest to do pull-ups, but- The

2:35:31

difficult question a parent must ask themselves,

2:35:33

yes. I was like, maybe I can make you a very lightweight

2:35:35

vest, but I didn't think it was good for this. So

2:35:37

she basically did a quarter Murph, so she ran

2:35:39

a quarter mile and

2:35:41

then did 25 pull-ups, 50 push-ups

2:35:44

and 75 air squats, then

2:35:47

ran another quarter mile in like 15 minutes,

2:35:50

which I was pretty impressed

2:35:52

by and my

2:35:53

five-year-old too.

2:35:56

So I was excited

2:35:58

about that. And I'm glad that I'm teaching

2:35:59

them kind

2:36:00

of the value of

2:36:03

physicality. I think

2:36:05

a good day for Max, my daughter, is when

2:36:08

she gets to go to the gym with me and cranks out

2:36:10

a bunch of pull-ups.

2:36:11

I love that about her. I

2:36:14

think it's good. Hopefully

2:36:17

I'm teaching her some good lessons. The

2:36:20

broader question here is, given how

2:36:22

busy you are, given how much stuff you have going on in

2:36:24

your life,

2:36:28

what's the perfect exercise regimen

2:36:30

for you to

2:36:33

keep yourself happy, to keep

2:36:36

yourself productive in your main line of work?

2:36:40

Yeah, so right now I'm focusing

2:36:42

most of my workouts on

2:36:45

fighting. So jiu-jitsu

2:36:47

and

2:36:48

MMA.

2:36:49

I

2:36:52

don't know. Maybe if you're a professional you can do that

2:36:54

every day. I can't. I just

2:36:57

get too many bruises and things that you need to recover

2:36:59

from. So I do that three to

2:37:01

four times a week.

2:37:04

The other days I

2:37:06

just try to do a mix of things like just cardio

2:37:09

conditioning, strength building, mobility.

2:37:11

So you try to do something physical every day?

2:37:14

Yeah, I try to. Unless I'm just so tired that I

2:37:16

just need

2:37:18

to relax. But then I'll still try to go for

2:37:20

a walk or something. I mean,

2:37:22

even

2:37:23

here, I don't know. Have you been on the roof here yet? No.

2:37:26

We'll go on the roof after this. I've heard of things. But

2:37:28

it's like we designed this building and I put a park on

2:37:30

the roof. So that way, that's like my meetings

2:37:32

when I'm just doing kind of a one-on-one or

2:37:34

talking to a couple of people. I have

2:37:36

a very hard time just sitting. I feel like it

2:37:38

gets super stiff. It feels really bad.

2:37:43

But I don't know. Being physical

2:37:45

is very important to me. And

2:37:48

this gets to the question

2:37:50

about AI.

2:37:51

I don't think that a being is just

2:37:53

a mind.

2:37:54

I think we're kind of meant

2:37:57

to

2:37:58

do physical things,

2:37:59

and a lot of the sensations

2:38:02

that we feel are connected

2:38:05

to that. And I think that that's a lot of what makes you a human

2:38:08

is basically

2:38:10

having that set of sensations

2:38:12

and experiences around that

2:38:18

coupled with a mind to reason about them.

2:38:22

But I don't know, I think it's

2:38:25

important for balance to

2:38:27

kind of get out, challenge yourself in different

2:38:30

ways, learn different skills, clear your

2:38:32

mind. Do

2:38:33

you think

2:38:35

AI in order to become super intelligent,

2:38:37

an AGI should have a body?

2:38:41

It depends on what the

2:38:44

goal is. I think that there's this assumption

2:38:48

in that question that intelligence

2:38:50

should be

2:38:52

kind

2:38:53

of person-like. Whereas, as

2:38:55

we were just talking about,

2:38:59

you can have these greater than

2:39:01

single human intelligent organisms like

2:39:04

the stock market, which obviously do not have bodies

2:39:06

and do not speak a language and

2:39:09

just kind of have their own system.

2:39:14

But so

2:39:16

I don't know, my guess is there

2:39:18

will be limits to what

2:39:20

a system that is purely an intelligence

2:39:22

can understand about the

2:39:24

human condition without having the same,

2:39:27

not just senses, but like

2:39:30

our bodies change, we get older, right?

2:39:33

And we kind of evolve and

2:39:35

let those

2:39:36

very subtle

2:39:39

physical changes

2:39:41

just drive a lot of social patterns and

2:39:43

behavior around like when

2:39:46

you choose to have kids, right? Like just like all these,

2:39:48

that's not even subtle, that's a major one, right? But like how

2:39:51

you design things around the house. So yeah, I

2:39:56

mean, I think if

2:39:57

the goal is to understand people as much as possible,

2:39:59

I think

2:39:59

that

2:40:02

trying to model those sensations is

2:40:04

probably somewhat important, but I think that

2:40:06

there's a lot of value that can be created by having intelligence,

2:40:09

even one that is separate from that, as

2:40:11

a separate thing. So one of the features

2:40:13

of being human is that we're

2:40:16

mortal,

2:40:17

we die. We've talked

2:40:19

about AI a lot, about potentially

2:40:21

replicas of ourselves. Do

2:40:24

you think there'll be AI replicas of you and me

2:40:26

that persist long after we're gone? That

2:40:29

family and loved ones

2:40:31

can talk to?

2:40:34

I think we'll have the capacity to do something

2:40:37

like that. And I think one of the big questions

2:40:40

that we've

2:40:41

had to struggle with in the context of

2:40:44

social networks is who gets to make

2:40:46

that. And my

2:40:49

answer to that, in

2:40:50

the context of the work that we're doing is that that

2:40:53

should be your choice. Right, I don't think anyone should

2:40:55

be able to choose to make a Lex

2:40:57

bot that people can

2:41:01

choose to talk to and get to train that. And

2:41:04

we have this precedent of making some of these calls

2:41:07

where

2:41:09

someone can create a page for a Lex fan club,

2:41:11

but

2:41:14

you can't create a page and say that you're Lex. Yes.

2:41:18

So I think that similarly, I think,

2:41:22

someone maybe

2:41:24

should be able to make an AI that's

2:41:26

a Lex admirer that someone can talk to, but I think

2:41:29

it should ultimately be your call whether

2:41:33

there is a Lex AI. Well,

2:41:35

I'm open sourcing the Lex. So

2:41:40

you're a man of faith. What

2:41:42

role has faith played in your life and your understanding

2:41:45

of the world and your understanding of your own life and

2:41:48

your understanding of your work

2:41:51

and how

2:41:53

your work impacts the world?

2:41:57

Yeah, I think that there's a few different parts of this that

2:41:59

are relevant.

2:42:02

There's sort of a philosophical part and there's

2:42:04

a cultural part. And one of

2:42:06

the most basic

2:42:07

lessons is right

2:42:10

at the beginning of Genesis, right? It's like

2:42:12

God creates the earth and creates

2:42:14

people and creates people in God's

2:42:16

image. And there's the question of what

2:42:19

does that mean? And the only context that you have

2:42:21

about God at that point in the Old Testament is that

2:42:24

God has created things. So I always thought

2:42:27

that one of the interesting lessons from that

2:42:29

is that there's

2:42:31

a virtue in creating things.

2:42:34

That is,

2:42:35

whether it's artistic or whether

2:42:38

you're

2:42:39

building things that are functionally useful for other

2:42:41

people,

2:42:44

I think that that by itself is

2:42:48

a good.

2:42:52

That kind of drives a lot of how I think about

2:42:55

morality and

2:42:57

my personal philosophy around

2:42:59

what is a good life.

2:43:01

I think it's one where you're

2:43:05

helping the people around you and

2:43:07

you're being a kind

2:43:10

of positive, creative force in the world that

2:43:13

is helping to

2:43:14

bring new things into the world, whether

2:43:16

they're amazing other people, kids,

2:43:24

or just leading to the creation of different things that

2:43:26

wouldn't have been possible otherwise. And so that's

2:43:28

a value for me that matters deeply.

2:43:31

And I just love

2:43:33

spending time with the kids, and I sort

2:43:35

of try to impart this value to them.

2:43:39

And

2:43:40

it's like nothing makes me happier than when

2:43:42

I

2:43:43

come home from work and

2:43:45

I see my

2:43:47

daughter's building Legos on the table or something. It's

2:43:49

like, all right, I did that when I was a kid. So

2:43:51

many other people were doing this. And I hope

2:43:54

you don't lose that spirit, where when you

2:43:56

kind of grow up and you want to just continue building

2:43:59

different things, no matter what it is. To

2:44:03

me, that's a lot of what matters.

2:44:05

That's the philosophical piece. I think the cultural piece

2:44:07

is just about community and values.

2:44:09

And that part of things I think has just

2:44:11

become a lot more important to me since I've had kids.

2:44:15

It's almost autopilot when you're a kid,

2:44:18

you're in the kind of getting-imparted-to phase

2:44:20

of your life. And

2:44:23

I didn't really think about religion that much for a

2:44:25

while, from when I was in college

2:42:27

to before

2:44:29

I had kids. And then

2:44:32

I think having kids has this way of really

2:44:35

making you think about what traditions you want to impart

2:44:38

and how you want to celebrate and

2:44:43

what balance you want in your life. And

2:44:45

I mean, a bunch of the questions that you've asked and

2:44:47

a bunch of the things that we're talking about.

2:44:50

Just the irony of the curtains

2:44:53

coming down as

2:44:55

we're talking about mortality. Once

2:44:57

again, same as last time. The

2:45:02

universe works and we are definitely living

2:45:04

in a simulation. But go ahead, community,

2:45:07

tradition, and the values, the

2:45:09

faith, and religion is still... A lot of the topics

2:45:11

that we've talked about today are

2:45:13

around how do you balance,

2:45:17

you

2:45:19

know, whether it's running a company or different

2:45:22

responsibilities with this...

2:45:26

I don't know, how do you kind of balance that?

2:45:29

And I

2:45:30

always also just think that it's very grounding

2:45:32

to just

2:45:34

believe that there is something that is much

2:45:36

bigger than you that is guiding things. That

2:45:40

amongst other things gives

2:45:43

you a bit of humility.

2:45:48

As you pursue that spirit of

2:45:51

creating that you spoke to, creating

2:45:53

beauty in the world, as Dostoyevsky

2:45:55

said, beauty will save the world. Mark,

2:45:59

I'm a huge fan

2:45:59

of yours, honored

2:46:02

to be able to call you a friend. And I am looking

2:46:04

forward to both

2:46:07

kicking your ass and you kicking my ass on the

2:46:09

mat tomorrow in Jiu Jitsu, this

2:46:13

incredible sport and art that we both participate

2:46:15

in. Thank

2:46:17

you so much for talking today. Thank you for everything you're doing

2:46:19

in so many exciting realms of technology

2:46:22

and human life. I can't wait

2:46:24

to talk to you again in the metaverse. Thank you.

2:46:28

Thanks for listening to this conversation with Mark Zuckerberg.

2:46:31

To support this podcast, please check out our sponsors

2:46:33

in the description. And now let

2:46:36

me leave you with some words from Isaac Asimov.

2:46:39

It is change, continuing change,

2:46:42

inevitable change that is the

2:46:44

dominant factor in society today.

2:46:47

No sensible decision can be made any longer

2:46:50

without taking into account not only the world

2:46:52

as it is, but the world

2:46:54

as it will be.

2:46:57

Thank you for listening and hope to see you next

2:46:59

time. Thank

2:47:01

you.
