Faster, Cheaper Drugs with AI

Released Thursday, 9th March 2023

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:15

Pushkin. Over

0:21

the past few decades, it's become more and

0:23

more expensive to develop new drugs.

0:25

It now costs over a billion dollars

0:28

on average to bring a new drug to market

0:30

in the United States, and of course

0:33

drug companies pass those high development

0:35

costs onto us in the form of higher

0:37

drug prices. This has been

0:39

going on for so long that we have sort of gotten

0:41

used to it. But when you zoom out,

0:44

it's strange because,

0:47

as I've said before on this show, and as

0:49

I will say again on this show, one

0:51

of the main things technology does

0:54

is it makes things more

0:56

efficient and therefore cheaper.

0:59

Over the past few centuries, we've seen technologies

1:01

make all kinds of things cheaper, everything from

1:04

clothes to food to TVs.

1:06

So why hasn't new technology

1:09

made drugs cheaper? I'm

1:15

Jacob Goldstein and this is What's Your Problem,

1:17

the show where I talk to people who are

1:19

trying to make technological progress.

1:22

My guest today is Alice Zhang, co-founder

1:25

and CEO of Verge Genomics.

1:27

Alice's problem is this: how

1:30

do you use artificial intelligence to

1:32

drive down the price of discovering

1:35

and developing new drugs? Why

1:38

is it getting more expensive to develop drugs,

1:40

despite the fact that we have better technology to do it?

1:42

Yeah. Absolutely. One of the reasons

1:45

is, you know, even though a lot of the new

1:47

technologies we've developed have made

1:49

us better at

1:51

testing more drugs faster,

1:54

the fundamental problem is that even

1:56

if we can get a drug all the way to clinical

1:59

trials, which is the last step of drug development,

2:02

ninety percent of those drugs still

2:04

fail. So if you think about it, we're spending millions

2:06

on each drug, and most of

2:09

those drugs are failing at the last and most

2:11

expensive stage of drug development. And

2:13

so really most of that billion

2:16

plus dollar figure you hear is

2:18

due to the cost of failure. Just

2:21

to be clear, that figure, more than a billion dollars,

2:23

you've got to include the cost of

2:25

all the drugs that don't work. Exactly.

2:28

And the ones that do, right? Exactly. So the ones

2:30

that do work have to pay for all the ones that fail.
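
A rough Python sketch of that arithmetic, with entirely made-up numbers: if each clinical-trial candidate costs a fixed amount and only a fraction succeed, the successes end up carrying the cost of the failures, so lowering the failure rate lowers the effective cost per approved drug.

# Toy calculation (not from the episode; all numbers are invented) of how the
# cost of failed candidates gets loaded onto each approved drug.

def cost_per_approved_drug(cost_per_trial_usd: float, failure_rate: float) -> float:
    """Expected clinical-trial spend per approved drug.

    If only (1 - failure_rate) of candidates succeed, each success has to
    cover itself plus its share of all the failures.
    """
    success_rate = 1.0 - failure_rate
    return cost_per_trial_usd / success_rate

# Hypothetical per-candidate trial cost of $100M:
for failure_rate in (0.90, 0.70, 0.50):
    cost = cost_per_approved_drug(100e6, failure_rate)
    print(f"failure rate {failure_rate:.0%} -> ~${cost/1e6:,.0f}M in trial spend per approved drug")

With these hypothetical inputs, a ninety percent failure rate puts roughly a billion dollars of trial spend behind each approved drug; cutting the failure rate to seventy percent cuts that to about a third.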

2:32

That's the fundamental problem. Exactly. And

2:35

you're setting out to fix that if

2:37

you can. Absolutely, we think

2:40

there's an opportunity for

2:42

AI to fundamentally

2:44

shift really the failure

2:46

rate, and the most impactful time

2:49

to do that really is the failure in clinical trials.

2:51

So can we predict

2:53

before we go into these

2:56

expensive clinical trials genes

2:58

or targets or drugs that are more likely

3:01

to work in humans, because even a

3:04

ten percent decrease in that failure

3:06

rate could have a massive impact.

3:09

I saw a number of up to fifteen

3:11

billion dollars annually in industry

3:13

cost savings. You could still

3:15

be in a universe where most of the drugs

3:18

that go into clinical trials fail, but instead

3:20

of ninety percent of them failing, seventy

3:22

percent of them fail, and that would be a huge win.

3:24

That would be a huge efficiency gain. It would save a

3:26

ton of money, absolutely, And I think that's

3:29

something that's underappreciated about

3:31

AI and really any technology, is

3:33

that oftentimes people have this expectation

3:36

that this technology is going to absolutely

3:38

transform a field overnight. And I

3:40

think what people don't appreciate is that most of

3:42

the time that doesn't happen. It's always step

3:45

by step incremental. But even a

3:47

ten percent change would have billions

3:50

of dollars of cost savings and

3:52

would be a huge win for patients

3:54

and the industry worldwide. I like that

3:56

frame, actually, I like that frame of maybe

3:59

AI can have drugs fail

4:02

most of the time, but not as much of the time

4:04

as they fail now. Like, it seems very credible,

4:07

It seems very plausible. Would you put it that way? Yeah,

4:09

it's all life is nothing but a learning process,

4:13

Yes, getting less bad at everything.

4:16

So I know you were

4:19

studying to be a doctor and a researcher not

4:21

that long ago, a few years ago before you started your company.

4:24

Like, tell me how you went from an

4:27

MD-PhD program to starting the company.

4:30

Well, my PhD research was

4:33

actually in using genomic analysis

4:35

and computational biology to

4:38

analyze large scale data sets

4:40

and find new drugs that

4:43

could improve drug development.

4:45

And we found that our

4:48

very first drug that was predicted

4:50

from our algorithms when we put it in mice

4:53

after they'd been injured, helped them walk

4:56

and recover from that injury, that

4:58

nerve injury about four times faster than the leading

5:00

standard. And that was just the first drug that was

5:03

predicted. And I looked at this technology

5:05

in this approach and I thought, Wow, there's so much

5:07

promise here. You know, am I really

5:09

going to just publish this and

5:12

let it sit on a bookshelf somewhere,

5:15

or if I'm not going to be the one to

5:17

really develop this for patients, you know

5:19

who will? And when I looked out

5:21

at the field, I did not see a

5:23

ton of biotech or pharma companies

5:26

that were truly computationally driven.

5:28

Usually within pharma companies they might bring

5:31

in computational biologists to support

5:34

their scientists or their biologists, but

5:36

there wasn't really a genomics

5:39

computationally driven company at that time. Now

5:41

there are many, but at the

5:43

time there were very few. And so I

5:45

actually, you know, it wasn't a binary decision.

5:47

People always ask me, how did you make the

5:50

courageous decision to leap?

5:52

It wasn't really like that.

5:55

I think what we did first is that we

5:57

just took a three-month leave of absence.

5:59

We joined a program, an incubator called Y

6:01

Combinator. We, as in, you and

6:04

Well, me and my co-founder Jason.

6:08

And the first question really was, you

6:11

know, can we even generate

6:13

some data that validates that

6:15

computational biology can predict targets

6:18

that work? And then when we saw

6:20

some data, the next question was can

6:23

we even hire people that want

6:25

to come on? And the next question was can we

6:27

even raise money from people

6:29

that will care? And I think that

6:32

is such an important lesson because

6:34

I think people oftentimes get caught up in

6:36

just the destination, you know, is this where

6:38

I want to be? Is this the career I want to have

6:41

that they don't take the first step,

6:43

And really it's the first step that's needed to

6:45

actually get the data to even decide if

6:48

it's the appropriate track for you. And

6:50

did you really just keep thinking, well, this might

6:52

not work, but let's do the next thing? Were

6:54

you in a place where you could have gone back to the MD

6:56

PhD program for a while? Yeah.

6:59

I took a continuous leave of absence

7:02

for probably over five years,

7:04

probably more than I should have, until

7:07

the point where a lot of my friends are like, are you really,

7:09

are you really gonna go back? And

7:12

finally the medical school is like, you're not really

7:14

going to come back, let's just terminate

7:16

your leave of absence. But it was in

7:18

the first few years a really important safety net

7:20

for me that gave me the psychological

7:23

safety to really take a risk and

7:26

really pursue a new idea that I don't

7:28

know if I would have otherwise. And I think

7:30

that's so important for universities

7:33

to provide, to recognize that there

7:35

can be more than one track for

7:37

people to do really excellent science

7:40

and make an impact more than just becoming a

7:42

professor. And sometimes that

7:44

psychological safety is what's needed

7:47

to help people find their ultimate

7:49

calling too. By the way,

7:53

what's

7:56

a very brief definition

7:58

of computational biology? It's

8:02

really, at the end of the day, in my view, just the

8:04

use of computers

8:06

and data sets to understand

8:09

biology better. By the way, what

8:12

happened to that molecule that

8:15

you were testing in mice in grad school?

8:17

That seemed useful? I

8:20

don't know. It's a good question. Actually, I

8:26

think the project was taken on by someone

8:29

else, but I'm not actually completely

8:31

sure. So, Okay, you

8:33

leave grad school, you start a company.

8:36

You now, in fact, have

8:39

a bunch of molecules that you're

8:41

working on, and that seemed promising. But there's one

8:44

that is in clinical trials now

8:47

right, to treat ALS, Lou

8:49

Gehrig's disease, a terrible disease that is very

8:51

poorly treated. And

8:53

I thought that we could talk about the

8:57

story of that molecule of that drug

8:59

as a way to understand the way your company works.

9:02

Can you just sort of take me through the life

9:05

of that drug? So far? Yeah,

9:07

absolutely. I'll start off just

9:10

by talking about ALS and

9:12

why it's been so hard to

9:15

discover the right therapy, and then you

9:17

know, how we did that differently. So,

9:19

as you might know, ALS, Lou

9:22

Gehrig's disease is a really horrible disease.

9:25

What happens is that these neurons

9:27

called motor neurons start dying, and

9:29

most patients experience paralysis

9:31

and then death, usually within three to five years

9:34

of diagnosis. A very fast progressing

9:36

disease, and there really aren't

9:39

any meaningfully effective treatments

9:42

that really slow or stop the disease today.

9:44

So a very horrible disease with

9:46

a horrible prognosis and no available treatments,

9:49

and why it's been so hard I

9:52

think to discover really effective treatments

9:54

is really just the complexity of the disease,

9:57

and really any disease of the brain, the brain

10:00

is the most complex organ in the body.

10:02

So you end up having a lot of drugs

10:04

brought into clinical trials that worked in mice.

10:07

I always like to say we've cured ALS, or cancer, or

10:09

many diseases, in mice a thousand times,

10:12

but none of them have really worked in humans.

10:15

So what we did differently was we started

10:17

from day one by collecting data

10:19

from over a thousand ALS patients

10:21

as well as controls, and specifically,

10:24

we collected samples of brain tissue

10:27

as well as spinal cords from these patients

10:29

that actually passed away from ALS.

10:32

So you got samples from

10:34

a thousand patients who

10:36

had died of ALS. How

10:38

did you do that? So what we've

10:41

done over the last seven years is we've signed

10:43

partnerships with over twenty

10:45

one different brain banks, hospitals,

10:48

labs, academic centers worldwide

10:51

that collect these brain tissues. They're usually

10:53

donated from patients that have passed

10:56

away from the disease and whose families want

10:58

to contribute to research.

11:00

So step one basically is

11:02

get tissue samples from real

11:05

patients. And you said controls

11:07

as well, right, So tissue samples from healthy people

11:09

as well, so that you can use them as a basis

11:11

of comparison. You have the samples,

11:13

Now, what's step two? So step

11:16

two is that we put an

11:18

enormous amount of effort into quality controlling

11:21

these. That's a big

11:23

underappreciated step. They can be very noisy

11:26

samples. And then step three

11:28

is that we sequence them, so

11:31

we profile,

11:33

what is the expression of all twenty thousand

11:35

genes in the genome, and we also

11:38

sometimes do DNA sequencing,

11:40

we look at genetic mutations. We

11:42

also have clinical information

11:44

about that patient, how long did they have

11:47

the disease, when did they die? And

11:51

that makes for a very rich, multidimensional

11:53

data set, and that gives us essentially

11:55

a global snapshot of what happened

11:57

in that patient.
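
A minimal Python sketch of the kind of table being described, with invented sample IDs, gene columns, and values: one row per donor, with expression measurements sitting alongside genetic and clinical fields, for patients and controls alike.

# Invented data, not Verge's actual schema: expression, genotype, and
# clinical metadata side by side for ALS donors and controls.
import pandas as pd

samples = pd.DataFrame({
    "sample_id":            ["ALS_001", "ALS_002", "CTRL_001"],
    "diagnosis":            ["ALS", "ALS", "control"],
    "tissue":               ["spinal_cord", "motor_cortex", "spinal_cord"],
    "disease_duration_yrs": [2.5, 4.0, None],        # clinical info
    "C9orf72_expansion":    [True, False, False],    # example genetic feature
    # expression columns would continue for roughly 20,000 genes
    "TARDBP_expr":          [8.1, 7.6, 9.4],
    "SOD1_expr":            [5.2, 5.9, 5.5],
})

# With patients and controls in one table, a first pass is simply comparing
# average expression between the two groups, gene by gene.
print(samples.groupby("diagnosis")[["TARDBP_expr", "SOD1_expr"]].mean())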

12:00

Okay, and presumably the

12:03

sequencing that you're doing on

12:05

the patient's tissue samples, you're doing the same

12:07

sequencing on the controls, the samples

12:09

from healthy people. So

12:11

now you have this very large

12:14

data set. What's the next

12:16

step? So then you have this snapshot

12:18

of what happened, and the tricky part is

12:20

to figure out what caused it. I often liken

12:23

it to a plane that has crashed, right?

12:25

You're looking through the rubble and you want to

12:27

figure out how the plane crashed and how

12:30

that information can be used to prevent further

12:32

planes from crashing. So that's

12:34

when our software engineers

12:37

and data scientists as well as machine

12:39

learning scientists come in and we have

12:41

algorithms essentially to integrate multiple

12:43

data types all the way from the

12:46

RNA, so how the genes were expressed to genetic

12:48

mutations to essentially

12:50

create a map of disease

12:52

biology, and within the map

12:54

our networks of genes that are all

12:57

interconnected that we believe

12:59

cause disease. And so I like to

13:01

think about it like when you're looking through

13:03

the rubble of a plane crash, you want to find the black box,

13:06

which will help you figure out the cause

13:09

of the disease. And by having all the information,

13:11

we essentially locate the black boxes

13:14

of disease, the targets that are really

13:16

at the center of those networks, and

13:18

then we design drugs against those targets

13:21

that we believe can reverse disease.

13:23

It seems like differentiating

13:25

between correlation and causality in this

13:28

particular setting would be really hard, right, Like to

13:30

use the plane metaphor, if you had a bunch of planes that

13:32

crashed and a bunch that hadn't crashed, you might say, oh,

13:34

like the wings were off all the ones

13:36

that crashed, and that's why they crashed.

13:39

But actually the wings came off because they crashed,

13:41

right, and it was something else that caused the crash. I feel like

13:43

that would be I mean, an

13:45

obvious problem. Yeah, that might

13:47

be hard. To solve Absolutely, you hit

13:50

the nail on the head, and actually the plane metaphor is

13:52

a really great one here. One of the biggest

13:54

challenges with looking at tissue

13:56

from a patient that already died is that you're

13:58

getting the crash right. You're not seeing video

14:01

of before the crash. You're

14:03

really getting the crash. And the challenge

14:05

is how do you figure out what caused the crash

14:08

versus what was just the effect

14:11

of the crash, like a burned wing, etc.

14:15

And one of the ways we do

14:17

that is we combine different

14:19

data types. So we found that looking at

14:21

one type of data, for example,

14:23

just RNA data isn't particularly

14:25

helpful, but it's actually looking

14:27

at where do you get convergence signal that

14:30

pulls through multiple types of data to

14:32

start revealing more compelling signal.

14:34

So as an example, we look at genetic data

14:37

as well. So genetic data is useful

14:39

for looking at cause versus effect

14:41

because it contains information about genetic

14:44

mutations that you were born with as a baby

14:47

that then lead to increased

14:49

risk later in life for a disease. And that's kind

14:51

of nature's human experiment

14:54

for really cause and effect. And

14:56

when we layer that information

14:58

on with the RNA data, it

15:00

actually gives us information about how the genetic

15:03

drivers are acting in these functional

15:05

pathways, which is a big issue

15:07

actually with just looking at genetic data on its

15:10

own. So I wish I had a way

15:12

to actually string that through to the plane metaphor.
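
A hedged Python sketch of that convergence idea, with invented genes and scores: a gene ranks highly only if it carries signal in more than one data type, say differential RNA expression in patient tissue plus independent genetic association.

# Toy numbers and column names; only the scoring idea is real.
import pandas as pd

evidence = pd.DataFrame({
    "gene":          ["GENE_A", "GENE_B", "GENE_C"],
    "rna_score":     [0.9, 0.8, 0.1],   # e.g. scaled differential-expression signal
    "genetic_score": [0.7, 0.0, 0.9],   # e.g. scaled disease-association signal
})

# Multiplying the scores rewards convergence: a gene that is strong in only
# one data type (GENE_B, GENE_C) ranks below one that is decent in both (GENE_A).
evidence["convergence"] = evidence["rna_score"] * evidence["genetic_score"]
print(evidence.sort_values("convergence", ascending=False))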

15:14

But

15:17

there's a time for leaving metaphors behind. Your

15:20

company uses AI in drug

15:23

discovery. I appreciate in a certain way

15:25

that you haven't said AI yet, but

15:27

also I don't want to not talk about it. I

15:29

mean in the sort of figuring out what's

15:31

going on in this step? Is that well,

15:34

is that the first instance in this process

15:37

where you're using AI? Is that what we're talking about

15:39

here? Yeah, I mean,

15:41

I think AI is a really broad term for

15:43

any kind of process

15:45

where the computer is learning from

15:47

something. So there are all sorts of applications

15:50

of AI in this entire process,

15:53

for example, how we're integrating the data

15:55

sets together, how we're inferring

15:58

what are the central nodes

16:01

or the key targets. I

16:04

would say the most classical

16:06

use of AI, in the way that most people think of it,

16:08

is then once we have this network of

16:10

say one hundred genes, how do we actually

16:13

find what the cause is? How

16:15

do we find what is the hub or the right

16:18

target to hit to turn

16:20

off or on all hundred of those genes?

16:22

And that's where machine learning and AI comes

16:25

in handy.
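
A toy Python sketch of that hub-finding step, using random numbers in place of real expression data: connect genes whose expression is correlated, then rank genes by how central they sit in the resulting network. It illustrates the general technique only, not Verge's actual models.

# Build a co-expression network from toy data and rank candidate "hub" genes.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
genes = [f"GENE_{i}" for i in range(20)]
expr = rng.normal(size=(50, 20))          # 50 samples x 20 genes (toy numbers)
corr = np.corrcoef(expr, rowvar=False)    # gene-gene co-expression matrix

# Connect genes whose expression is strongly correlated (arbitrary threshold).
G = nx.Graph()
G.add_nodes_from(genes)
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(corr[i, j]) > 0.3:
            G.add_edge(genes[i], genes[j], weight=abs(corr[i, j]))

# Rank genes by how many connections they have: the hubs, like Chicago or
# New York in the airport analogy, come out on top.
centrality = nx.degree_centrality(G)
top_hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]
print(top_hubs)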

16:29

In a minute, Alice explains how this actually

16:32

works in the case of the ALS

16:34

drug Verge is working on. Now,

16:44

back to the show. So okay,

16:46

Alice and her colleagues at Verge have collected

16:48

all these tissue samples from ALS patients.

16:51

They've used the samples to generate this huge

16:54

data set that shows genetic variation

16:56

and changes in how genes are expressed,

16:59

along with lots of clinical data about

17:01

the patients, and then they

17:03

build these basically these AI models

17:06

to try to figure out where in

17:08

this complicated biological

17:10

process that's happening in this disease, where

17:12

they should try to intervene

17:15

with a drug. Basically, where they should try and

17:17

target a drug. I think of this

17:19

oftentimes, like if you think of a map

17:22

of all the airports in the US, you want

17:24

to figure out how

17:26

to go after the hubs like Chicago

17:29

or New York. You don't want to go after an

17:31

airport in Kansas or Iowa. It wouldn't be very

17:33

effective at stopping airplane

17:36

travel in the country. So there's

17:38

a lot of different pieces of information

17:40

that we collect to then infer

17:43

what are the best genes that are not only central

17:45

within this network, but also there's

17:47

independent evidence of a disease

17:50

causal effect or a relationship to disease.

17:54

And so you do all

17:56

that in this instance, and

17:59

what do you figure out? So what the

18:02

algorithms spit out is essentially a ranked list

18:04

of targets, all right, So these are

18:06

ranked list of targets that are predicted if we could

18:08

drug them, would restore

18:11

that network back to levels of healthy

18:14

people and potentially

18:16

slow or stop the disease. And

18:19

then what we do is we take those targets and we start

18:21

testing them in the lab, all right, So

18:23

we actually what is kind of cool about the platform

18:25

is we get all these targets from human brain tissue

18:28

and we also can test them in human

18:30

brain cells in the lab. So you

18:32

get a list, it's basically genes

18:34

to target. It either says upregulate,

18:37

or make this gene express more or make this gene

18:39

express less. Is that basically what the AI

18:41

is outputting? Exactly.
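
A toy Python sketch of what such an output might look like, with invented target names and scores: a ranked table where each target carries a predicted direction, up or down, plus a model confidence.

# Invented example of a ranked target list with predicted direction.
import pandas as pd

targets = pd.DataFrame({
    "rank":       [1, 2, 3],
    "gene":       ["TARGET_A", "TARGET_B", "TARGET_C"],
    "direction":  ["inhibit (down)", "activate (up)", "inhibit (down)"],
    "confidence": [0.92, 0.88, 0.71],   # model score, higher = more confident
})

# Downstream, the top-ranked, high-confidence targets are the ones that go
# on to be tested in patient-derived cells in the lab.
print(targets[targets["confidence"] > 0.8])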

18:44

Like, so how long in the instance

18:46

of this ALS drug. How long was the list? More

18:49

or less, our initial set of targets

18:51

was twenty two high confidence

18:53

targets, and then

18:56

we actually then generated another set,

18:58

using updated data, of about

19:00

thirty more targets as well. And

19:03

what was really striking when we tested

19:05

these targets is that when we tested

19:07

them in the lab, we found

19:10

that on average over

19:12

sixty percent of them, though more recently actually

19:14

around eighty percent of them actually validated

19:17

in the lab, so they actually protected ALS

19:20

patient cells from dying, which

19:22

is very high. So we're really

19:24

excited that we're actually seeing very

19:26

robust validation of the computational

19:28

predictions, at least in the lab. Okay,

19:31

so you have this list, you're

19:33

testing it, something like half

19:35

of them seem promising, you said, sixty

19:37

percent seem promising. What happens

19:40

next? Okay, So what happens

19:42

next is that we test them

19:44

in these human brain cells. We understand

19:46

the mechanism. One

19:48

of the really interesting findings from this ALS

19:51

program specifically is that when we looked

19:53

at the network that we found in these patient

19:55

spinal cords, we found a new

19:57

cause of disease that was previously unknown.

20:00

So most of the hypotheses

20:03

in ALS, many of them to date, have really

20:05

been focused around these protein

20:07

aggregates, these clumps of proteins that we

20:09

can easily observe by eye that you see

20:11

in ALS patients. Right, A lot of them are

20:13

observational hypotheses.

20:16

But what we found by looking at a deeper

20:18

cut of the data is actually,

20:20

at baseline, most of these

20:22

patients actually had a baseline

20:24

dysfunction in their lysosomal

20:26

pathway, which I like to call the garbage

20:29

disposal pathway. It's what is critical

20:31

to clear out junk from the

20:33

cell. And because patients

20:35

were at baseline vulnerable to

20:38

these toxic insults, it wasn't so much the protein

20:40

clumps that were directly causing it. It was because

20:42

they're already vulnerable to these

20:44

clumps of proteins that their

20:47

cells started dying. And is

20:49

the idea that the gene

20:51

you're targeting is causing

20:54

the cell's garbage disposal

20:56

to not work, right? Like, you're trying to fix the garbage disposal

20:58

by targeting this particular gene.

21:02

Yeah, it's a central regulator of that pathway.

21:04

And it was also a target that was ranked I think

21:06

it was ranked number one or number two on the list. So

21:10

just to be clear, how

21:12

do you get from

21:14

you know, so you have fifty or

21:16

so things to test, fifty

21:19

or so targets, something

21:21

like thirty

21:23

of them seem promising. How

21:26

do you decide which of those thirty to proceed

21:28

with? Yeah,

21:30

so that's a great question. We get asked

21:32

that a lot. I think at that point it's

21:34

a strategic decision. Right, you're a startup,

21:38

Right, we have to be able to develop things quickly

21:40

and capital efficiently. So

21:42

we were lucky in that sense that one

21:45

of the top targets was also a target

21:47

where

21:50

the path to developing a drug was relatively

21:52

smooth. A lot was known

21:54

about that target. We could start doing chemistry

21:56

and designing molecules relatively easily,

21:59

and the target itself had actually been tested

22:02

in the clinic for other diseases, not

22:04

ALS, but things

22:07

like Crohn's disease and surrounds, so

22:10

we did know there was some safety data

22:12

around hitting that target. We

22:15

do then for targets where we

22:18

can't develop all of the targets, right, we

22:20

can only take focused bets for targets

22:22

where there's a bit more technical risk,

22:24

right? It might be a bit more exotic. People

22:27

don't really understand how it works. There's

22:30

not a lot of tools out

22:32

there to really develop drugs

22:34

against it. That's where we might partner

22:37

with a pharma company to

22:40

develop those targets. And we have such

22:42

a collaboration with Eli Lilly, where we

22:44

developed our ALS target, but actually Lilly

22:46

has the opportunity to essentially take

22:49

you know, targets number three through twenty

22:51

two plus and

22:53

choose four of them to develop themselves. Oh

22:56

interesting. So in that way, you're essentially

22:58

laying off the risk to this giant

23:00

pharma company that can afford to make more bets.

23:04

I'd say we're distributing the risk and we're

23:07

allowing us to really capitalize on the entire

23:10

opportunity all of the targets, because

23:12

it's impossible for any small startup to do,

23:14

you know, thirty different programs.

23:17

And it's actually in line with what a lot of pharma companies

23:19

are looking for. A lot of pharma companies are looking

23:21

for: what is that novel

23:24

target that no one else is working on that's

23:26

kind of unexpected, Where if

23:28

we could really get a competitive edge in here,

23:30

this would be really meaningful for a position

23:33

within drug development in the next

23:35

ten years. Well,

23:37

and I mean it also seems

23:40

compelling because even though this

23:43

seems like a more promising way to do drug

23:45

development, drug development is hard

23:47

enough that any one candidate

23:50

drug is probably not going to work, right?

23:53

Yeah. Any biotech needs to be able

23:55

to have a pipeline and the ability to withstand

23:58

I think some failures because

24:00

I think it's unrealistic to expect one hundred

24:02

percent of what you try will work. But that

24:05

doesn't reflect on the technology

24:07

itself, and that can be something

24:09

unfortunate in biotech, where you know,

24:11

if the first thing fails, everyone

24:14

can be tempted

24:17

to say, oh, the technology didn't work, but in

24:19

reality, you think about how many different

24:21

drugs that pharma companies test all the time, right?

24:24

So I think really promising technologies

24:26

need to be afforded that runway

24:28

and that ability to really take multiple shots

24:30

on goal before you can get to the end to really see

24:32

if it's working. Right. Well, I mean, if nine out of ten

24:36

traditionally developed drugs fail

24:38

once they get to clinical trials, you

24:41

could be way better but still likely

24:43

to fail on any one drug. Yeah,

24:46

Yeah, even a fifty percent would be huge, right,

24:48

but still that means one out of two drugs

24:50

will fail. Relative

24:55

to the world we live in now, a world

24:57

where one out of two drugs fail

25:00

could be a world where we get more

25:02

new drugs for less money.

25:05

In a minute, the Lightning Round including

25:08

the worst thing about being named to the

25:10

Forbes thirty Under thirty, and

25:12

the best thing about accepting that your

25:14

company might fail. That's

25:22

the end of the ads. Now we're going back to

25:24

the show. Let's

25:26

close with the Lightning Round. You

25:29

personally interviewed over a thousand

25:31

people when you were starting your company, as

25:33

I understand it, which seems very

25:36

intense. And I'm curious if there's

25:38

anything in your life outside of work where

25:40

you've been that intense. Oh,

25:42

everything. That is core

25:45

to my being. If

25:47

you ask my spouse, he would say any new

25:50

game that we start playing. And I'm very competitive

25:53

and it's just part of my being. I iterate,

25:55

I get a lot of reps in. He always

25:57

likes to make fun of me that I have an

25:59

AI in my head. I'm constantly

26:02

learning and improving the model until

26:04

eventually I become a lean

26:07

mean. We've been playing a lot of

26:09

Catan recently, and I think

26:11

I've beat him fifteen times in a row. So

26:14

yeah, I am very intense

26:16

and thorough in my life. Is

26:20

ChatGPT overrated or underrated?

26:25

Both? Actually? I think it's both over and underrated.

26:28

It's overrated for some applications

26:30

and underrated for others. I

26:32

think it's overrated for things where there isn't

26:36

a lot of information available already

26:38

on that thing. I think

26:40

it's underrated for applications

26:42

like coding, where there's already a large body of

26:44

literature out there. So it's really good at replicating

26:47

things that exist, less good at discovering

26:49

new things that don't exist. I

26:53

read an interview where you said one

26:55

of the things you've learned in

26:57

running your company is you learn to be okay

27:00

with your company dying with your company not

27:02

making it, which I found like very

27:04

surprising and interesting. Can you just tell me a

27:06

little bit about that. Yeah,

27:08

I mean, I think it gets to really the core

27:11

of how we drive our culture, which is I

27:13

think that soul for so

27:15

long companies have been driven through fear

27:18

and bravado of you know, we're crushing it,

27:20

we're pounding our chests, talking about how we're

27:22

crushing it, and less about emotional vulnerability and

27:24

introspection and self awareness, and

27:27

ultimately I found the thing that really

27:29

transformed my leadership style was

27:32

learning what I had grips over, or

27:34

where I was really attached to outcomes,

27:36

And ultimately, I think for all CEOs, a

27:39

lot of that is tying meaning to

27:41

what happens with the company. If the company fails,

27:43

this means something about me as a person,

27:46

and I think that stifles a ton of

27:49

innovation and curiosity and tends to drive

27:51

those cultures of fear. So ultimately,

27:53

the thing, for example, that got me to stop micromanaging

27:57

was really being okay with the company dying, because

27:59

ultimately, what is micromanaging if not

28:01

just fear, right, or fear or control.

28:04

And once you let go of that fear and you recognize

28:06

you're just open to learning. You can still really

28:08

want the company to succeed, and you can be passionate

28:10

about it, but you're no longer

28:13

thinking, oh, I'm screwed, or like I'm

28:16

a failure if this fails, and that just opens

28:18

a whole new level of levity and lightness.

28:21

Nice. What's the worst

28:23

thing about being named to the Forbes thirty

28:26

Under thirty list? I

28:29

think they did a photo shoot where

28:31

there was a

28:34

very revealing split on the dress, and I still

28:36

get constantly made fun of by my close friends

28:38

for that. What's

28:42

one example of a thing that went

28:45

wrong as you were building the company?

28:47

Something bad that happened? Oh

28:49

so many things. We had a whole period where there

28:51

was a ton of attrition and

28:53

people leaving, and you know, the first time

28:55

that happens to a founder,

28:58

I took it personally. It's like someone leaving your baby,

29:00

and you wonder why. That

29:03

was actually a huge growth moment for me

29:05

because I was for

29:07

so long trying to put forward the strong

29:10

face of, it's okay, it's okay. And finally,

29:12

at the end of like a month of this, I just

29:14

sat in front of the company at an all hands

29:16

and I honestly I just broke down in tears. I said,

29:19

I feel like I failed you guys. You

29:22

know I'm still grieving this. I really don't

29:24

know what to do. And it was paradoxically

29:27

in that moment, most of the team really

29:29

rose up to the occasion and I found support

29:31

in ways I didn't even know were possible from the team.

29:39

Alice Zhang is the CEO and co-founder

29:42

of Verge Genomics. Today's

29:45

show was produced by Edith Russolo.

29:47

It was edited by Sarah Nix and Lydia Geancott

29:50

and engineered by Amanda ka Wong.

29:53

We're always looking for more

29:56

guests for the show. If there's someone out there working

29:58

on an interesting technical problem with big

30:00

stakes, tell us about that person.

30:02

You can email us at problem

30:06

at Pushkin dot fm, or

30:08

you can find me on Twitter at Jacob Goldstein.

30:11

I'm Jacob Goldstein and we'll be back next week

30:13

with another episode of What's Your Problem.
