Startup Investing with George Mathew

Released Wednesday, 17th April 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

George Mathew is a managing director

0:02

at Insight Partners, where he has invested

0:04

in Weights & Biases, Jasper, and

0:06

others. He has over

0:09

20 years of experience developing

0:11

high-growth technology startups, including most

0:13

recently being CEO at Kespry.

0:16

George joins the podcast to talk

0:18

about his path to becoming an

0:20

investor, his data-first thesis about investment,

0:22

the AI business landscape, his book

0:24

recommendations, and more. This

0:26

episode of Software Engineering Daily is

0:28

hosted by Jocelyn Byrne Houle. Check

0:31

the show notes for more information on Jocelyn's work

0:33

and where to find her. George

0:48

Mathew, welcome to Software Engineering

0:50

Daily. Great, Jocelyn, great to

0:52

meet you as well. Pleasure to be here. We're

0:55

really excited to spend some time talking with

0:57

you personally, as well as for our show's

0:59

audience, our technical audience. There's

1:01

nothing hotter to talk about than AI

1:03

or data, I think. You have been

1:06

a storied investor in this space, right?

1:09

So many investments so early on in Weights

1:11

& Biases, Acceldata, which is one of

1:13

my favorites, Jasper. We're

1:15

really excited to have you come and talk with us. But

1:17

before we kind of get into the technologies that you're interested

1:20

in, I want to learn a little bit about you. Are

1:22

you just a young little

1:24

boy out in the playground thinking, like, one

1:26

day I'll be an investor in this space? Or

1:28

how did you come to realize this is what

1:30

you wanted to do? Yeah, I

1:33

would say it certainly wasn't a portion of

1:35

my journey that I thought I was going to

1:38

be an investor when I was a young

1:40

kid in the playground. I think originally my intent

1:42

when I was that young was probably to

1:44

be a pilot. When I

1:46

got a little bit older, I realized that

1:48

that wasn't going to be the right calling for

1:50

me. And then I really

1:52

wanted to be a medical doctor. And

1:55

as I was studying actually for my MCATs

1:58

my junior year, I came across

2:00

the NCSA mosaic browser, which a

2:02

certain individual named Marc Andreessen had

2:04

shipped out of the

2:06

supercomputing facility at the University of Illinois.

2:09

And I would say my heart skipped a beat

2:11

back in 1996 and really

2:13

wanted to understand what was going on with

2:16

the emergence of a tool in

2:18

technology like the World Wide Web

2:20

and the modern-day web browser

2:22

in those days was pretty early

2:24

in the overall journey. And

2:27

I just got really excited by that and followed a

2:29

lot of where my passion was there to

2:31

go find a job in California to

2:33

work in an early stage company

2:35

that was doing the first generation

2:38

of e-commerce applications

2:41

in the mid 90s working over

2:43

emerging technologies around the World Wide

2:45

Web. You know, it's such an interesting

2:47

moment that we're experiencing right now because I

2:49

do think, you know, my two big sea

2:51

changes that I've experienced, right, were when

2:53

the internet came in and replaced client-server

2:55

and then of course open source. These were

2:58

like huge moments, right, that felt like second

3:00

industrial revolutions and they're paling in comparison to

3:02

some of the excitement around generative AI at

3:04

the moment. Help me understand a

3:06

little bit about what is your perspective on investing

3:08

in generative AI right now because there's a lot

3:10

of interest on the consumer side, but

3:13

it's not clear which startups and what their

3:15

strategy should be. A

3:18

lot of my perspective from investing in

3:20

generative AI has really come from being

3:22

originally a builder and someone who's been

3:24

in and around data and AI systems for

3:27

a good part of a decade and

3:29

a half. And

3:31

as I started to see this current

3:33

generation of systems continue to evolve, it

3:36

was just very clear in the enterprise

3:38

that the modern data stack itself was

3:40

just an important underpinning

3:42

for how this next generation

3:44

of AI-based applications and systems would

3:46

emerge. And so I really started to

3:49

look at where the modern data stack

3:51

was really headed when it comes

3:53

to all the tools that were required,

3:56

not only in data management, but data

3:58

catalogs, data orchestration, data development, observability,

4:01

and really built my thesis on

4:03

just a data first view of

4:05

how this next generation of AI

4:07

and machine learning based systems would

4:10

emerge. And so naturally from that

4:12

layer of the modern data stack,

4:15

started to look at machine learning operations

4:17

and MLOps, right as a necessary

4:19

set of tools that would be required

4:21

for a machine learning practitioner to be

4:23

able to build a model

4:25

and be able to bring those models

4:28

into production. And so one

4:30

of my first investments after joining Insight about

4:32

three years ago was in a company

4:34

called Weights & Biases, which became, of

4:36

course, one of the de facto tools

4:39

for all the experiment tracking, hyperparameter tuning

4:41

version controls that were necessary for a

4:43

machine learning practitioner, to be able to

4:45

get their job done and effectively build

4:47

models and bring them into production.
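
To make the experiment-tracking workflow being described concrete, here is a minimal sketch using the wandb package from Weights & Biases; the project name, hyperparameters, and logged metric are hypothetical stand-ins.

```python
# Minimal sketch of experiment tracking with Weights & Biases (wandb).
# The project name, config values, and metric below are hypothetical.
import wandb

run = wandb.init(
    project="demo-classifier",           # hypothetical project name
    config={"lr": 3e-4, "epochs": 5},    # hyperparameters are versioned with the run
)

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)       # stand-in for a real training step
    wandb.log({"epoch": epoch, "train_loss": train_loss})  # experiment tracking

run.finish()                             # marks the run complete in the W&B UI
```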

4:50

I think as you started to see the

4:52

evolution of MLOps, it then became clear that

4:54

there was this opportunity to take the

4:57

data that was coming

4:59

from these modern data stacks and

5:02

merge them properly with the AI

5:04

systems, which are of course now

5:06

LLMs and transformer-based AI

5:08

systems and build these next generation

5:10

of generative AI applications. And of

5:13

course, one of our investments about

5:15

a year and a half ago

5:17

was a company called Jasper, which

5:19

is building a generative AI application

5:21

for how content writing could be

5:24

more naturally done with a

5:26

copilot to support how content writers

5:28

work on a day in day out

5:30

basis. So the thesis for me was

5:32

always starting with a data substrate, which

5:34

at least in my case, in the

5:36

enterprise was very much targeted towards the

5:39

modern data stack itself, and building upon

5:41

itself to go into MLOps and more

5:43

recently into generative AI applications. There

5:45

is a blog post on the website about the

5:47

difference between MLOps and LLMOps. And you know, I

5:50

interviewed Krishna Gade from Fiddler on the show as

5:52

well. So I spent a lot of time in

5:54

that MLOps space. Help me understand what

5:56

the... well, maybe we should first say

5:58

what we mean by MLOps or LLMOps. Let's

6:00

just set a business case. Why do people care

6:02

about that? And then let's kind of compare and

6:04

contrast the two. One

6:06

of the things that we started to

6:09

notice with the emergence of this category of

6:11

machine learning that was

6:13

really focused on transformers and large

6:15

language models was that you were

6:18

shifting from a model-centric

6:20

experience to getting

6:22

a model into production and getting success

6:25

out of the outcomes of model

6:27

production to almost a data-centric world

6:29

of getting models into production. And

6:32

the reason I mentioned that is

6:34

that the models weren't tremendously changing

6:36

in terms of what was going into the

6:38

algorithm, the models per se. You

6:40

might see some shifts in the weights

6:42

and biases surrounding a model, but less

6:45

so around the underpinnings of the

6:47

model itself. What was changing pretty dramatically was

6:50

the data that was coming into the models.

6:52

And this is where we

6:54

started to see this current

6:57

generation of transformers and LLMs

7:00

in particular emerge: the more

7:02

data that you brought into a

7:04

large language model, the more

7:06

human-like reasoning you started to

7:08

see the models perform in terms

7:10

of when they were put into

7:12

a general purpose scenario

7:15

like what you saw with

7:17

OpenAI's ChatGPT or even

7:19

domain-specific models that emerged, for

7:21

instance, like Bloomberg GPT. And

7:24

in all these cases, what became

7:26

clear was the techniques that you

7:29

were using to go build an

7:31

LLM were going to be similar,

7:33

but in some ways quite different

7:36

from the techniques that you would

7:38

be using to build a

7:40

model for the purposes of

7:43

what would be a traditional computer

7:47

vision, for instance, or around a

7:49

predictive statistical analysis that you might

7:51

be doing. So we started to

7:53

think about this at Insight: where

7:56

would be the delineation

7:58

of where tools and

8:00

platforms would evolve for the

8:02

systems of record versus

8:05

the systems of prediction

8:08

versus the systems of creation. And so

8:10

we've had a lot of history around

8:12

the systems of record and the systems

8:14

of analysis or prediction, but we haven't

8:17

had a lot of history in terms

8:19

of really understanding what these sort of

8:21

new systems of creation were. And

8:23

I think that's where we put a

8:25

lot of our attention more recently in

8:28

what's basically known as LLM

8:30

ops versus ML ops. And

8:33

in that regard, as we try to

8:35

navigate what those differences were

8:37

and those similarities were, there were

8:39

some things that really struck us

8:41

as clear delineations. First and foremost,

8:43

if you look at a generative

8:45

model and compare it to historically

8:49

predictive-oriented applications, there was

8:51

this tremendous benefit that

8:54

came out of transfer learning.

8:56

And the models themselves had

8:58

this ability to, with very

9:00

little data, few-shot learning, single-shot

9:03

learning, be able to transfer learn quite

9:05

a bit from what was previously taught

9:07

to the model itself. And so that

9:09

was really different from what was historically

9:11

the case with a predictive model that

9:14

was in market prior, whether you were

9:16

using a neural network or

9:18

any other algorithmic method.
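
As an illustration of that transfer-learning behavior, here is a minimal few-shot prompting sketch using the openai Python client (v1-style API); the model name and labeled examples are illustrative assumptions.

```python
# Minimal sketch of few-shot learning via prompting: two labeled examples
# steer the model on a new input. Model name and examples are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot = [
    {"role": "user", "content": "Ticket: 'App crashes on login' -> Category:"},
    {"role": "assistant", "content": "bug"},
    {"role": "user", "content": "Ticket: 'Please add dark mode' -> Category:"},
    {"role": "assistant", "content": "feature_request"},
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model; illustrative choice
    messages=few_shot + [
        {"role": "user", "content": "Ticket: 'Export to CSV is broken' -> Category:"}
    ],
)
print(resp.choices[0].message.content)  # expected: bug
```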

9:20

In a similar way, you started to see

9:22

a difference in terms of how the compute had

9:24

to be managed, right? Because there was

9:26

a tremendous amount of just

9:29

compute required, particularly GPU-based compute that

9:31

was required to be able to

9:33

not only do the heavy training

9:35

that was required to build a

9:37

model with the number of parameters

9:40

that we're now seeing in a model reaching as

9:42

much as a trillion parameters in

9:44

the GPT-4 style of model, but

9:46

also in the inference itself. It

9:48

was a compute-intensive experience to be

9:50

able to handle the inference even in

9:52

a model like what you're seeing with OpenAI's

9:55

GPT-4 or Anthropic or Cohere. And the last thing

9:57

that we saw is a difference in the data.

10:00

The difference is just the feedback loops.

10:02

If you think about the use

10:04

of RLHF, reinforcement learning from

10:06

human feedback, or RLAIF, in

10:09

all of these situations,

10:11

the feedback loops were more important

10:13

than ever just to be able

10:15

to improve models, particularly as they

10:17

were in production. And you're starting

10:19

to see this notion that the

10:21

model is almost a living,

10:25

you know, thing that continues to

10:27

improve upon itself using reinforcement learning

10:29

beyond the initial training runs themselves.
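
The collection side of that feedback loop can be surprisingly simple. Here is a minimal sketch of capturing RLHF-style pairwise preferences; the record schema and file-based storage are hypothetical, and real pipelines feed this data into reward-model training.

```python
# Minimal sketch of the human-feedback half of an RLHF loop: record which of
# two completions a user preferred. Schema and storage are hypothetical.
import json
import time

def record_preference(prompt: str, chosen: str, rejected: str,
                      path: str = "prefs.jsonl") -> None:
    """Append one pairwise preference (user preferred `chosen` over `rejected`)."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "chosen": chosen,
        "rejected": rejected,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a user gave a thumbs-up to completion A over completion B.
record_preference(
    prompt="Summarize this incident report...",
    chosen="Completion A text",
    rejected="Completion B text",
)
```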

10:31

And so those were some of

10:33

the key things that really pushed

10:35

us to push out our

10:38

perspective on what

10:40

was really importantly delineated

10:42

around building for LLM

10:44

ops versus ML

10:47

ops. And of course, there were many

10:49

things that were similar in nature. Data

10:51

privacy was very similar in terms of

10:54

how you worked with ML ops versus

10:56

LLM ops, model governance, model

10:58

security, which I'm sure you spend a

11:00

bit of time with Krishna and

11:02

one of our portfolio companies in that regard. But

11:04

in that regard, we wanted to

11:07

just really call out for anyone who

11:09

was interested in the next generation of

11:11

models as they emerged, the LLM models in particular,

11:13

what would it take to build it and how

11:16

is it different from this last generation of machine

11:18

learning. And that's really what we try to encapsulate

11:20

in that article. Okay, and so

11:22

in a couple of ways, just to summarize a

11:24

little bit what you're saying is like on the

11:26

left hand side of the diagram, it's a lot

11:28

of the same problems of data preparation, data privacy,

11:31

putting the data that from an

11:33

enterprise perspective, you feel comfortable is ready to

11:35

go in. But on the right hand side, it's

11:37

quite different. Because unlike ML, that's telling you

11:39

about the data you've already got, it's generating

11:41

new data. And there's a much more of a

11:43

role for the human in the loop on

11:45

that right hand side of iterating

11:47

and using the model. Is that a fair assessment?

11:50

Yeah, I think it's fair. And I think

11:52

there's some things that go into the compute

11:54

management, which we've never had to really think

11:56

about at the scale that we have to

11:58

think about, particularly in both the model

12:00

training as well as the model inference. And

12:03

we learned our lesson in cloud data. So

12:05

we're going to think about it early now.

12:07

Think about our expenses early. I will just

12:09

share, put it in the notes as well. You

12:11

have a really amazing landscape of LLMOps. People

12:14

are sending it to me all over, you know,

12:16

all over the LinkedIn and my friends are like,

12:18

hey, have you seen this? So I think it's

12:20

terrific. And I would definitely encourage the audience to

12:22

take a quick look at it because I think

12:24

it has this notion of end user management that

12:26

I wanted to really double-click on. I've

12:28

heard you do some other interviews where you talk a

12:30

little bit about how the ability

12:33

to integrate human feedback is an

12:35

asset in your mind. Is that what you mean

12:37

when you talk about end user management as part

12:39

of this? Yes, it is

12:41

one of the pieces, right? For sure. The

12:43

feedback loop is pretty important

12:46

because if you start to think

12:48

about how model alignment occurs, right?

12:50

Building models that have values

12:52

that are aligned with human beings. The only way to

12:55

be able to do that is to have humans in

12:57

the loop to be able to provide the feedback to

12:59

models as they continue to be aligned for our needs.

13:02

I think there's also another piece of the puzzle, which

13:04

is when you think about beyond

13:06

the model itself and how you

13:08

instill that into say, for instance,

13:11

an enterprise experience, you need more

13:13

than just a powerful model with

13:15

good feedback loops. You also happen

13:17

to need to get good private

13:19

data sets, right? It turns

13:21

out that those private data sets

13:23

are the ones that really enable

13:26

models to further target

13:28

and focus the task

13:30

at hand and hallucinate less,

13:32

right? So the more complementary

13:35

private data that you have for

13:37

the task at hand, the less

13:39

likely it will hallucinate on things

13:41

where there basically wasn't

13:43

enough data being folded

13:45

into the model in terms of the

13:47

model training itself. The third

13:50

thing I would also mention on top of

13:52

that from a user experience standpoint is

13:54

that you still need to think about this as

13:57

an enterprise application. And so great enterprise applications have

13:59

great UX and some workflow associated with them. And

14:01

if you're building an

14:04

enterprise grade application, it's

14:06

not just about the human

14:08

feedback loops in the models themselves, but

14:10

it's a great user experience for the

14:12

application front end. And it could be

14:14

as simple as a chat

14:16

interface. It could be a more complex

14:18

workflow, but nevertheless, a great user experience

14:21

surrounding that is absolutely essential for any

14:23

AI based application to prosper in the

14:25

enterprise. Have

14:31

you tried Firebase or Supabase, but

14:33

wanted a better developer experience? Convex

14:36

is a backend as a service product

14:38

that's changing the way founders think about

14:40

designing their infrastructure layer by letting anyone

14:42

write their backend like they would write

14:44

their front end. Convex is designed for

14:47

devs to write their backend in pure

14:49

TypeScript, and it features a fast database

14:51

written in Rust. Also, the self-

14:53

managed version of the platform is now

14:55

open source, and the team has been

14:57

excited to share their thoughts behind the

15:00

decision. Convex's CTO and co-founder, James Cowling,

15:02

recently joined the Software Engineering Daily podcast

15:04

to talk about the decision landscape for

15:06

teams considering the shift to open source. Check

15:08

out the conversation to hear all about it.

15:10

Be sure to visit convex.dev to see why

15:13

Convex is the best backend as a service

15:15

for developers who want to ship quickly. Yeah,

15:24

I wanna get back to that, but first

15:26

I wanna talk a little bit about this,

15:28

like high value private data, high value data

15:30

sets to train on to help give direction

15:32

to the generative AI model, LLM model. My

15:35

experience has been the people who

15:37

have the tastiest, most desirable private

15:39

data are typically the ones who

15:42

have the least capabilities often in

15:44

building their own software, creating their

15:46

own tools. Do you think

15:48

that has implications? Because I agree, you have

15:50

to have this private rich data, but do

15:52

you think that has implications for this whole

15:54

debate around whether it will be proprietary or open

15:56

source models that win

15:58

overall? It's hard to form an opinion

16:00

there when you know that the organizations with the best

16:03

data often are going to be

16:05

reaching out to maybe a more proprietary organization rather

16:07

than building their own, with say a bundle of

16:09

open source models. Have you given some thoughts

16:11

to like, what is that sort of adoption route going to

16:13

look like? Let me kind of call out

16:15

a few things in terms of what we've seen up

16:17

to this point at the sort of tail end of

16:19

2023. And

16:22

then where things continued or will continue

16:24

to evolve in 2024. First

16:27

and foremost, like anyone who's gotten

16:29

a model into production right now, it

16:31

does seem like it's pretty

16:34

much OpenAI, right? And there's probably

16:36

a few other things that are coming down

16:38

the pike, including work that Anthropic and Cohere

16:40

are doing as well as a number of

16:42

the open source providers and

16:44

particularly Llama 2 is a pretty exciting option

16:46

when you look at the compactness

16:48

of the model itself and just

16:50

how it's commercially available to be

16:52

licensed from an open source underpinning.

16:55

I think for the folks

16:57

who are building, and I

17:00

mentioned that this need to have proprietary

17:02

or private data sets, I

17:04

don't know if it matters

17:06

as much whether you're kind of

17:09

working over time with an

17:11

open source or a closed source model.

17:13

I think it matters a

17:15

lot in terms of what the model

17:17

performance is. And it matters a

17:19

lot in terms of what the likelihood

17:22

that the model hallucinates as

17:24

it is out of the box, how much you

17:26

can either fine tune it or you can use

17:29

retrieval augmented generation

17:31

surrounding the model and being in

17:33

production so that you can

17:35

consistently deliver the

17:37

results that you want. But

17:39

in almost all those cases, it's

17:41

really going to be about model

17:43

performance and feedback loops and

17:46

whether it achieves

17:48

the objective that you would want in

17:51

terms of your model being in production.
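
To make the fine-tuning-versus-RAG distinction concrete, here is a minimal retrieval augmented generation sketch; the in-memory document store, embedding model, and prompt template are simplifying assumptions (production systems use a vector database).

```python
# Minimal sketch of retrieval augmented generation (RAG): retrieve the most
# relevant private document, then ground the model's answer in it.
from openai import OpenAI

client = OpenAI()
docs = ["Our refund window is 30 days.", "Support hours are 9-5 ET."]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in out.data]

doc_vecs = embed(docs)

def answer(question: str) -> str:
    qv = embed([question])[0]
    # Dot-product ranking is enough for a sketch (these embeddings are normalized).
    best = max(range(len(docs)),
               key=lambda i: sum(a * b for a, b in zip(qv, doc_vecs[i])))
    prompt = f"Context: {docs[best]}\n\nAnswer using only the context: {question}"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("How long do customers have to request a refund?"))
```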

17:53

And right now it does seem that

17:55

the closed source, and particularly OpenAI's

17:57

closed-source model today, is

18:00

the highest performing model. And that's

18:02

where most of the fine tuning

18:04

is happening as we speak. I

18:07

think over time, we're gonna see some

18:10

amount of diversity. It's not going

18:12

to just be OpenAI closed-

18:15

source models only. When you think

18:17

about the type of narrow

18:19

AI use cases where you

18:22

look at like trade settlement and clearing

18:24

and a back office for a financial

18:26

services organization, well, some really massive opportunities

18:29

there in terms of being able to

18:31

introduce a generative model to help complement the

18:33

work that humans are doing in that regard.

18:36

But when you look at what kind of

18:38

model would you use, do you need to

18:40

have a model that understands 14th

18:43

century European history to be able to do

18:45

trade settlement and clearance? Probably not, right? And

18:48

so I think this is exactly where a

18:50

smaller form factor model, whether it be open

18:52

source or not, it seems like some of

18:54

the more capable, smaller form

18:56

factor models are coming from the open source world. Those

18:59

models could be just as relevant

19:02

for tuning and training a

19:05

very specific private data set to be

19:07

able to accomplish a very specific task

19:09

at hand from a narrow AI standpoint.
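
For a concrete picture of tuning a smaller model on a narrow private data set, here is a minimal sketch using the openai client's fine-tuning API; the file name, example format, and base model are hypothetical choices.

```python
# Minimal sketch of fine-tuning a smaller model on a narrow, private data set
# (e.g., back-office trade-settlement Q&A). File name, data, and base model
# are hypothetical.
from openai import OpenAI

client = OpenAI()

# prepared.jsonl holds chat-formatted examples, one per line:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("prepared.jsonl", "rb"), purpose="fine-tune"
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # a smaller base model; illustrative choice
)
print(job.id)  # poll this job until it yields a fine-tuned model name
```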

19:12

So I think some of that's gonna be coming

19:14

to a theater near us in 2024. We

19:17

just haven't seen it yet mainly because it

19:19

seems like most of the models in production,

19:21

at least the generative AI transformer based models

19:23

that are in production today are

19:25

very much leaning towards a fine tuned version

19:27

of OpenAI. That's

19:30

really helpful perspective. Just covering the media, it

19:32

seemed like that was a more mature discussion,

19:34

but I think what I'm hearing is we're

19:36

still early days there in terms of which

19:39

direction most enterprises are gonna go for

19:41

their enterprise adoption of LLMs. Yeah,

19:44

I mean, if we kind of cast this at the

19:46

end of the first inning, I would

19:48

agree that there's quite a bit more

19:50

of the game to be played out.

19:53

And I think we're gonna see more

19:55

variety and diversity in models as

19:57

they're in production. It just so happens, at

20:00

the end of the first inning of the ball game,

20:03

there seems to be only one that has

20:05

reached production-level value for most

20:07

enterprises. This

20:14

episode of Software Engineering Daily is brought

20:16

to you by Vantage. Do you

20:19

know what your cloud bill will be for

20:21

this month? For many companies, cloud costs are

20:23

the number two line item in their budget

20:25

and the number one fastest growing category of

20:27

spend. Vantage helps you get

20:29

a handle on your cloud bills

20:31

with self-serve reports and dashboards built

20:33

for engineers, finance, and operation teams.

20:36

With Vantage, you can put costs

20:38

in the hands of the service

20:40

owners and managers who generate them,

20:42

giving them budgets, alerts, anomaly detection,

20:44

and granular visibility into every dollar.

20:47

With native billing integrations with

20:49

dozens of cloud services, including

20:51

AWS, Azure, GCP, Datadog, Snowflake,

20:54

and Kubernetes, Vantage

20:56

is the one FinOps platform to

20:58

monitor and reduce all your cloud

21:00

bills. To get started,

21:02

head to Vantage.sh, connect your accounts,

21:04

and get a free savings estimate

21:06

as part of a 14-day free

21:08

trial. Just

21:16

from a product design perspective, if you're

21:18

designing a product today and you want to

21:20

incorporate an element of LLM, do

21:23

you think there's going to be impacts on the

21:25

way that we actually even do product design?

21:28

I think so. As a former

21:30

product manager myself and someone who's

21:32

led product management teams prior to

21:34

becoming an investor, I think

21:37

for product leaders, you have

21:39

a few things that you have to think

21:42

about. One

21:44

is when you design software, historically,

21:46

most software has had an underpinning

21:48

of a rules engine associated with

21:50

it. These are the sequences of

21:52

things that we encapsulate in our software, and as

21:54

long as it does those things, it comes to

21:57

its terminus, and here we have a beautiful

21:59

piece of potentially workflow-

22:01

based software that follows

22:03

those rules. That's how it works every time.

22:05

Seems to have worked in software for 40

22:07

plus years. Now, I think

22:09

what we are on the

22:12

precipice of is systems

22:14

that are less about these

22:16

deterministic outcomes, and more about

22:19

having a probabilistic set

22:21

of reasoning associated with it,

22:23

like human like smarts and

22:25

reasoning associated with it that is

22:28

embedded into the software itself. So

22:30

imagine the rules engines of yesteryear

22:33

now being replaced by a

22:35

generative model as the underpinning engine model

22:37

that continues to improve upon itself, a

22:39

model that learns, a model that has

22:41

reinforcement learning surrounding it.
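
A toy contrast may help here: the same routing decision expressed first as a deterministic rule and then as a call to a probabilistic model. The categories and model choice are illustrative assumptions.

```python
# Toy contrast: a deterministic rules engine versus a probabilistic,
# model-backed decision. Categories and model name are illustrative.
from openai import OpenAI

def route_rules(ticket: str) -> str:
    # Deterministic: same input, same output, every time.
    if "refund" in ticket.lower():
        return "billing"
    if "crash" in ticket.lower():
        return "engineering"
    return "general"

client = OpenAI()

def route_model(ticket: str) -> str:
    # Probabilistic: generalizes beyond enumerated keywords, but its output
    # must be validated like any untrusted input.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Route to exactly one of billing/engineering/general: {ticket}",
        }],
    )
    return resp.choices[0].message.content.strip()
```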

22:44

And so if you're a product manager thinking about the

22:46

new products and services you're coming to

22:49

the market with, you have

22:51

to almost now introduce this mindset

22:53

of what a probabilistic

22:55

reasoning system could look like

22:58

in software, either alongside of

23:00

your rules engine or completely

23:04

reimagining your rules engine one or

23:06

the other. And so people

23:09

in a funny way sometimes ask me,

23:11

what's the scale and impact of AI

23:13

in the market? Well, at the very

23:16

least, it's going to be the total

23:18

addressable market of all software. And it

23:21

could be the total addressable market

23:23

of all humanity. But at the very least

23:25

on the software end of the spectrum, you

23:28

have this really interesting moment

23:30

as a product leader, to be

23:32

able to take everything that you

23:34

have historically known in terms

23:37

of building rules based software and replacing

23:39

it with a probabilistic model that has

23:41

human like reasoning associated with it. And

23:43

I think there's some powerful things that

23:46

are coming about, particularly in enterprise software,

23:48

if you take that first principle mindset

23:50

to how you're designing the software of

23:52

the future. Yeah, I agree with that,

23:55

right? It's a different mindset, different set of

23:57

tools. And I also think subject

23:59

matter experts are going to be invited back into

24:01

the design sessions more. We kind of over indexed on

24:04

like, let's ask the users, but you know, kind of

24:06

back to what you were saying about Mosaic coming out

24:08

and when we, when the internet happens, you know, apps

24:10

kind of were the pointy end of the spear. And

24:13

to build those apps, you need the subject matter

24:15

experts who deeply understood what are the

24:17

expected outcomes? What is that workflow?

24:20

And similarly, I think for product

24:22

design, there's going to be a requirement,

24:24

right? To have these Sherpas help guide

24:27

everyone through complex, very

24:30

specific enterprise use cases. Subject

24:32

matter experts, process experts, folks

24:35

who understand how the backend

24:37

workflows between humans and machines

24:40

have historically worked. Those

24:42

are all opportunities to reimagine with

24:44

generative AI based software as

24:46

its underpinning. Absolutely. Absolutely. So

24:49

I think that's kind of exciting in a way, because

24:51

a lot of that's still broken. As

24:58

a listener of software engineering daily, you

25:00

understand the impact of generative AI

25:03

on the podcast. We've covered many exciting

25:05

aspects of gen AI technologies, as well

25:08

as the new vulnerabilities and risks they

25:10

bring. HackerOne's AI Red

25:12

teaming addresses the novel challenges of

25:14

AI safety and security for businesses

25:16

launching new AI deployments. Their

25:19

approach involves stress testing AI models and

25:21

deployments to make sure they can't be

25:23

tricked into providing information beyond their intended

25:26

use. And that security flaws can't be

25:28

exploited to access confidential data or systems.

25:31

Within the HackerOne community, over

25:33

750 active hackers specialize

25:35

in prompt hacking and other AI

25:37

security and safety testing. In

25:39

a single recent engagement, a team of 18

25:42

HackerOne hackers quickly identified 26

25:45

valid findings within the initial 24 hours

25:47

and accumulated over 100 valid

25:50

findings in the two week

25:52

engagement. HackerOne offers strategic flexibility,

25:54

rapid deployment, and a hybrid

25:56

talent strategy. Learn more at

25:59

hackerone.com. That's

26:03

hackerone.com/AI.

26:12

One thing I learned as I was looking

26:14

at the MLOps space, I'm going to ask

26:17

you some other questions, but in the MLOps

26:19

space, one thing I thought was a funny

26:21

quote is like, no model runs the same

26:23

way twice, which is sort of the same

26:25

thing that's happening in LLMs, right? There's a

26:27

proliferation of training cycles in the ML world.

26:29

In the generative AI, LLM world, there's like

26:31

a proliferation of models, right? Let's say you've

26:33

got your controls around your data, great. Like

26:36

proliferation of models, you've got bundles of models

26:38

running, not just one. How are

26:40

organizations going to adopt that in a safe way? Is

26:42

it going to be like a feature registry type situation?

26:44

Is it going to be an after the fact audit?

26:46

What do you think is going to happen there? I

26:49

think there's a few things that are happening today

26:51

and there's a few things that will emerge, particularly

26:53

with some of the regulatory frameworks that are also

26:55

coming about. A few things that are happening today

26:57

is just as you called

26:59

out multiple models running

27:02

in tandem with each other, it

27:04

really calls for more orchestration capabilities,

27:07

right? I think that's a big

27:10

need in the market and you're seeing

27:12

existing orchestration providers like Airflow now starting

27:14

to jump in. We were

27:16

investors in a company called Astronomer, which is

27:18

one of the leading purveyors of Airflow. Entering

27:23

into the space as well,

27:25

you have startups that are capable

27:27

of delivering value from a model

27:29

orchestration standpoint. Some great examples

27:31

there include both LlamaIndex

27:33

as well as LangChain. In

27:36

all of those cases, it's the

27:39

orchestration that's happening across the

27:41

models and the data fabric

27:43

that's underlying these models coming

27:45

into production that's driving the

27:48

need for orchestration tools.
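
The core pattern those orchestration tools provide can be sketched in plain Python, without any specific framework's API: the output of one model call feeds the next. The pipeline steps and model choice here are illustrative.

```python
# Minimal sketch of model orchestration: two chained calls where step one
# (extraction) feeds step two (summarization). Plain Python stands in for
# LangChain/LlamaIndex-style frameworks; model choice is illustrative.
from openai import OpenAI

client = OpenAI()

def step(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def pipeline(document: str) -> str:
    facts = step(f"Extract the key facts as bullet points:\n{document}")
    return step(f"Write a two-sentence executive summary of these facts:\n{facts}")

# Orchestration frameworks layer retries, tracing, and data connectors on top
# of exactly this pattern.
```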

27:50

In a similar way, you're seeing

27:52

the observability market continue to

27:54

evolve and grow quite significantly.

27:56

Again, you've had Krishna on

27:59

the podcast already. If you look at

28:01

Fiddler's business, if you look at some

28:03

of the other companies in that category

28:05

of model and observability, their

28:07

focus is really to be able

28:09

to do the necessary model

28:12

observability that lets

28:15

you understand the model performance,

28:17

understand model performance from an

28:20

overall monitoring standpoint, and then

28:22

introduces things like traceability and

28:25

bias detection and fairness into how

28:27

the model monitoring is occurring.
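
One concrete monitoring signal behind that kind of observability is input drift. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test from scipy; the synthetic data and alert threshold are illustrative.

```python
# Minimal sketch of one observability signal: input-distribution drift between
# a training-time reference window and a live production window, detected with
# a two-sample KS test. Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference window
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # live window (shifted)

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Drift detected (KS statistic={stat:.3f}); flag for review/retraining.")
```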

28:29

And so that's a very, call it

28:31

internal view of how models

28:33

are either observed properly or orchestrated

28:35

as two great examples. But there's

28:37

also this external factor that's coming

28:39

to a theater near us very quickly,

28:41

which is the regulations themselves. And

28:44

what I'm going to kind of see,

28:47

you know, my kind of prediction going into 2024 is

28:49

that there

28:51

is a raft of regulations that

28:53

are coming on a per industry

28:55

basis where regulators are going to

28:58

have updates in terms of what

29:00

they're expecting models to do. And

29:03

that's going to have to be managed from a

29:05

software standpoint, from a governance or compliance

29:07

perspective, a GRC software standpoint.

29:10

And so that's something that we haven't seen

29:12

yet, particularly in the world of AI. But

29:14

I think that's actually something to be really

29:17

kind of paying attention to going into 2024,

29:20

particularly the executive order and additional

29:22

other layers of regulatory frameworks coming

29:24

into the fold. Yeah, I work

29:26

with financials who are already pretty

29:28

highly regulated. And this new wave

29:30

of regulation is not only impressive, where

29:33

you see GDPR and the EU suggesting expanding it so that, you

29:35

know, you can elect not to be

29:37

part of an AI-driven

29:39

decision, right? As part of your

29:42

customer experience, like as a citizen, I think that's

29:44

great as a technical person, it makes me like

29:46

a little sick inside, because how do you do

29:48

it? Yeah, how do you do

29:50

it? It's like, okay, we know how

29:53

to put humans and machines together and

29:55

build systems and processes in the financial

29:57

services world effectively. But now if

29:59

you're being asked after you've done that and

30:01

after you've done it well, how do you

30:03

take the machine out of the loop and

30:06

only have a human-based process? Tough. Right.

30:09

We're kind of at the very beginning of that. We don't know

30:11

who's the regulator going to be. There's competing sets of proposed regulation.

30:13

One thing I do advise companies we work with is

30:15

to get started with what you know. There's some no

30:17

regrets work that should get started now

30:19

because waiting for regulators is a losing business,

30:22

right? That can be tough. So

30:24

yeah, we talked about regulatory risk. We talked a little

30:26

bit about data risk, right? And the need to orchestrate

30:28

data and make sure you have a private reliable data

30:30

source if you're doing an enterprise use case. What

30:33

about, we'll just briefly touch on these just because

30:35

if you have an expertise in this area, but

30:37

there's the other two kind of concerns that slow

30:39

down adoption in companies are bias,

30:42

right? Just model bias and then some concerns

30:44

around like just bad actors, prompt injection, that

30:46

type of thing. What are you hearing and

30:48

what are you telling people about those two

30:50

areas of risk? Yeah, by the

30:53

way, I want to hit that in a second, but I

30:55

do want to just elaborate on the data question a little

30:57

bit more and then we'll go into those

30:59

situations and detail those out.

31:02

But in the data side of the world, and

31:04

particularly the privacy preservation around data and how do

31:06

you handle it? I think there's two really compelling

31:08

things that I've seen so far. One

31:11

is where you're using synthetic data

31:13

to generate like copies that are

31:16

statistically significant for the value of

31:18

a training run, for instance, or

31:20

being able to just handle a

31:23

privacy preserved version of the data,

31:25

particularly as you're doing testing before

31:28

a model goes into production or an application

31:30

goes into production. So there are companies

31:32

like Tonic that are doing a tremendous

31:34

job in terms of just handling synthetic

31:36

data generation.
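
As a flavor of what synthetic data generation looks like at the simplest level, here is a sketch using the Faker library; the customer schema is hypothetical, and products like Tonic go further by preserving the statistical properties of the source data.

```python
# Minimal sketch of synthetic test data: structurally realistic records that
# contain no real PII. The schema is a hypothetical example.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible output for test fixtures

synthetic_customers = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "ssn": fake.ssn(),          # fake, but format-valid
        "signup": fake.date_this_decade().isoformat(),
    }
    for _ in range(3)
]
print(synthetic_customers[0])
```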

31:39

In a similar way, what we're starting to see is this notion

31:41

of like, where do you keep all

31:43

of your private data properly stored and

31:45

managed? And so there's this idea of

31:47

a privacy vault, right? And this is

31:49

where Skyflow was probably one of the

31:52

more unique players right now in this

31:54

space, really focused on just a privacy

31:56

preservation mode around keeping all the PII

31:58

data in one place, gated behind a secure

32:00

API that has a one-way in-

32:03

and-out door to get through to

32:05

get your information that's privacy preserving and everything else

32:07

is kind of left

32:10

in the hands of the application

32:12

developer. But the privacy aspect

32:15

of what you need to handle from

32:17

a PII compliance standpoint is delivered by

32:19

a service like Skyflow.
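
The vault pattern being described reduces to: PII goes in once, an opaque token comes out, and downstream systems only ever see the token. Here is a toy in-memory sketch of that idea; it is a generic illustration of the concept, not Skyflow's actual API.

```python
# Toy sketch of the privacy-vault pattern: PII is stored once behind a single
# access-controlled door; applications pass around opaque tokens instead.
# A generic illustration, not Skyflow's API; real vaults encrypt at rest.
import secrets

class PrivacyVault:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}  # token -> raw PII

    def tokenize(self, pii_value: str) -> str:
        token = f"tok_{secrets.token_hex(8)}"
        self._store[token] = pii_value
        return token

    def detokenize(self, token: str, caller_authorized: bool) -> str:
        # The "one-way in-and-out door": every read is access-controlled.
        if not caller_authorized:
            raise PermissionError("caller may not read raw PII")
        return self._store[token]

vault = PrivacyVault()
token = vault.tokenize("123-45-6789")
print(token)  # safe to log and pass to other services
print(vault.detokenize(token, caller_authorized=True))
```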

32:22

So knowing that is in

32:24

place, then let's kind of talk

32:26

a little bit about. I want to hear what

32:28

you have to say about that, but I

32:30

cannot overstate what a huge sea change it

32:32

has been. The embrace of synthetic data, the

32:35

embrace of things like privacy vaults in organizations.

32:37

I mean, it is a major step

32:39

forward for organizations that were pretty immature

32:41

on that. So I'm a big Tonic

32:43

fan as well. So we'll do

32:45

another show on synthetics another day. That's great. But

32:48

I think it actually leads nicely to the

32:50

question that you were originally asking, which is,

32:52

okay, how do you really think about just

32:54

the security concerns and what are some of

32:57

the underlying risks? I think one of the

32:59

biggest things that we continue to

33:01

see in the space is

33:03

that this is not

33:05

only an opportunity for great things

33:07

to happen, good things to happen,

33:09

but also a veritable treasure

33:12

chest for nefarious actors to

33:14

do unethical things. And

33:17

I think some of that is just in

33:19

the fact that you have these

33:21

models that are capable of doing many

33:23

things and they can be applied in

33:26

very unethical ways. Some of it is

33:28

just you're just less careful with the

33:30

security and the systems surrounding how you're

33:32

building and bringing things into production. I

33:34

think when it comes to the point

33:36

of cyber security that I have been

33:38

paying a lot more attention to more

33:41

recently is the availability of unethical models.

33:43

So an example of this is like

33:45

WormGPT. Turns out

33:47

that a number of black hat

33:49

hackers that had some very capable

33:51

skills in terms of hacking put that

33:54

all into a generative model that they

33:56

unethically trained and released that into the

33:58

wild. Now a level-one

34:00

black hat hacker can have the skill

34:03

sets that were previously only available to

34:05

someone who is way beyond that

34:07

capability and skill, all encapsulated in

34:10

a copilot that does, you know,

34:12

the work that a level five or six

34:14

black hat hacker could do, with the

34:16

tools built in. Democratizing bad actors.

34:19

Yeah, democratized into a tool that's generally

34:21

available on the dark web for anyone

34:23

to take advantage of that wants to

34:25

use it for nefarious purposes. So I

34:28

think that's one example of where

34:31

the things that we are really excited

34:33

by in terms of all this opportunity

34:35

that's emerging, the scale of human productivity that's

34:37

coming out of the emergence of generative

34:39

AI, there will be these dark corners

34:41

and these dark corners are going to

34:44

come to a theater near us and

34:46

they're going to be pretty pronounced and there's

34:49

going to be some really public incidences that

34:51

will come about in 2024 and beyond that

34:53

we just have to be much more aware

34:55

of and much more

34:58

guarded about than ever before

35:00

because the use of this is

35:02

now you know possible for

35:05

not only some of the black

35:07

hat techniques that I just mentioned

35:10

but also just impersonation. Right. You

35:12

can take three seconds of this

35:14

conversation that we're having and synthetically

35:16

generate you know my voice or

35:19

your voice Jocelyn and now put

35:21

that into an interactive voice response

35:23

system and break through the typical

35:26

IVRs that have been in place

35:28

to secure someone's input

35:31

into an online or

35:33

voice based account update for a bank

35:36

and so this is actually

35:38

a lot trickier than all the rainbows

35:40

and unicorns from all the opportunities that

35:42

are coming there's some really dark corners

35:44

that we're still about to face as

35:46

a society. We're in this

35:48

awkward liminal space between incredible

35:50

excitement about the opportunities and capabilities

35:52

but also we don't have enough

35:55

tools or thought work yet to mitigate some

35:57

of these risks. Yeah I mean the scale

35:59

of deepfakes right now that are coming to a theater

36:02

near us seems just unprecedented and

36:05

that should be troublesome for anyone who's thinking

36:07

about this stuff today. I think I read

36:09

that's one of the things that inspired Joe Biden to

36:11

do this executive order is that he saw a fake

36:13

of himself. Did you see that? It's

36:19

like up to that point where like, you know, we'll

36:21

get around to, you know, an executive order for AI

36:23

and then finally the president sees a deep fake of

36:26

himself. It's like, oh, we should probably do something about

36:28

this. I did not know that. That's

36:30

amazing. It's like such a great

36:32

feeling. It's like sometimes people are like, Oh, you

36:34

sent that email. I'm like, I did. Like I

36:36

already did. There's a million things you say and

36:38

do that you don't remember. So it could easily be fake.

36:41

So let's switch gears a little bit, because

36:43

I wanted to talk a little bit

36:45

about advising on a lot of our

36:47

audience and like technical entrepreneurs, people who

36:49

want to quit their day job and

36:51

start a company. Maybe they're already in

36:53

like a small startup. They're in that seed

36:55

or, you know, A-round space. People have

36:58

asked me, what do you do if you're

37:00

an established company? You got started as a

37:02

startup, but maybe you don't have generative

37:04

AI in your pitch. Maybe you don't have it

37:07

in your product. What do

37:09

you do? What do you do to keep

37:11

your investors engaged, to keep customers engaged, because

37:13

it's almost required to talk about

37:15

it now. I'm sure you've gotten a similar

37:17

question. What are your thoughts? Well,

37:19

we've been a little more proactive at

37:22

Insight about this, because we generally do

37:24

believe that this is not just happy

37:26

talk. And there's, you know, some genuine

37:28

benefits of reimagining your software business as

37:30

a generative AI based underpinned software business.

37:32

And so in that regard, our Insight

37:34

Onsite team, which is really

37:37

helping our portfolio companies continue to

37:39

grow and scale in their specific

37:41

journeys, whether it be in sales

37:43

and marketing or talent or product

37:45

and engineering, or finance or business

37:47

development. In all of those

37:49

functions, you can kind of see two

37:51

big opportunities emerge for generative AI. One

37:53

is just better products and services, and

37:55

then more efficiently running your business. And so in that

37:57

regard, in the last six months, we've been working on a lot of things.

38:00

months, we've spent time with all of

38:02

our portfolio companies to help them think

38:04

through exactly those two levers. Now, what

38:06

are the products and services that

38:08

you can be building with what you have

38:10

today that will enable you

38:12

to continue to expand into the opportunities

38:15

that you have in market because of

38:17

generative AI, as well as how can

38:19

you run your businesses better, mostly sales

38:21

and marketing, it turns out. And now

38:24

it is in our view, a situation

38:26

where I would be actually

38:28

delighted to be a company that's already

38:31

in market that doesn't necessarily have a

38:33

generative AI strategy. Why? It turns out

38:35

that you have incumbency. It turns

38:38

out you have private data sets. It

38:40

turns out you have a

38:42

market where there is distribution because you

38:44

are the incumbent in the space. And

38:48

in all of those

38:50

moments where incumbency, distribution,

38:52

and data pre-exists, it's

38:55

actually not that hard to build a generative model,

38:57

right? It doesn't require tremendous amounts of

39:00

data science resources and machine

39:02

learning resources because you're mostly

39:05

requiring data engineering and great software developers:

39:07

data engineering to get the data prepped

39:09

in a way that you can tokenize

39:11

and sort of train a model appropriately

39:13

or tune a model appropriately. And then

39:15

of course, the development resources

39:17

to be able to put that into

39:20

an application in the right context. And

39:22

so what we're seeing is

39:24

a lot of existing folks

39:27

who have had very little

39:29

capability or expertise from an AI

39:31

standpoint now start to build

39:34

some incredible products and services.

39:37

And if nothing else, look at how the incumbents

39:39

in the space have been so capable

39:41

of building great products. I mean, look at

39:43

what Microsoft has done with

39:45

GitHub Copilot as a starting point.

39:48

Look at what Adobe has done

39:50

with Adobe Firefly as a compliment

39:52

to being able to create a

39:55

product that enables you to

39:57

verify the sources of the

39:59

creation of generative content and plug

40:01

that into Photoshop. And so in all

40:03

of these examples, what we're seeing is,

40:05

the incumbents are actually not at a

40:07

disadvantage. In fact, they might be at

40:09

a relative advantage, even if

40:11

you are starting your AI journey very

40:14

late, you can almost sort of catch

40:16

up quickly by adopting some of

40:18

the generative AI capabilities. I love that. I'm

40:20

an optimist at heart and you're absolutely right.

40:22

Where you've got the data, you've got the know-how,

40:24

maybe you have the customer relationships, those are the

40:26

hardest parts. If you're already

40:28

established, you've got some of that in hand.

40:30

Let's say we're, because you mentioned Microsoft, you

40:33

know, when you talk to entrepreneurs who

40:36

are putting together their ideas, they're very early stage.

40:38

There's a couple of items of common wisdom,

40:41

which I'll share with you and you can

40:43

debunk it or agree. The biggest players, Google,

40:45

Microsoft, Meta, are gonna just sort of mop

40:47

up here. They've already got so much money,

40:50

they've got so much tailwind, you

40:52

know, maybe you have to have a

40:54

different idea process. What

40:56

do you think about that? Well, the

40:58

one incumbent, which just literally emerged out

41:00

of nowhere this past year that no

41:02

one would ever have thought of as

41:04

an incumbent, but now they are, you

41:06

know, within a year inclusive of the

41:08

incumbents you mentioned, is also OpenAI,

41:10

right? And if you

41:12

noticed, Developer Day, you know,

41:14

not more than a week ago, week and

41:17

a half ago, what you

41:19

saw was just a number

41:21

of startups and startup ideas

41:23

being completely eviscerated and

41:25

digested, and digested. By

41:28

just the capabilities that were introduced around

41:30

GPTs and capabilities that

41:32

were introduced around just context windows and

41:34

all the additional features that, you know,

41:36

OpenAI themselves announced. For our audience, if

41:38

you haven't Googled, just go into YouTube

41:41

and just Google, like, did OpenAI just

41:43

kill all the startups? There's

41:45

some really good rundowns on that. I don't believe

41:47

that to be true, but it was a huge

41:49

set of announcements. Yeah, and again,

41:51

can OpenAI do all those things that

41:53

kill all the startups? Probably not, because

41:55

there's so much surface area to cover.

41:57

It's impossible for one company to do.

42:00

all that and that's why a great startup

42:02

ecosystem continues to thrive particularly in the

42:04

generative AI space. I think what

42:06

you're going to have to think about

42:08

as a founder though is you can't

42:11

be working towards a

42:13

feature that can be disrupted

42:15

quickly. You have to

42:17

be working towards at the very

42:19

least a strong unique product that

42:22

maintains its value proposition if not

42:24

a broader platform that certainly has

42:26

resistance and resilience in an overall

42:28

market. I think that's harder

42:30

to do. It takes longer. It

42:32

takes more investment to build products

42:35

and platforms over time versus features.

42:37

I think the features are going

42:39

to get really folded

42:41

into the incumbents that are building already

42:44

in the markets that they're in,

42:46

but I think there's ways that you can

42:48

build great products and platforms.

42:50

I mean, just look what Jasper has done.

42:53

They started as a prosumer

42:55

offering in content creation for content

42:57

marketers around generative AI. It

43:00

turns out OpenAI came in and

43:03

really sort of ate into that original

43:05

core assumption in that part of the market,

43:07

but that enabled Jasper to take

43:10

everything that they learned and go into

43:12

the enterprise and build great workflows and

43:14

great applications and great embedded experiences for

43:16

how enterprise marketers work today. I

43:19

think that's a lesson to be taken for

43:22

all founders going through this journey that even

43:24

when you feel like you have found

43:26

product-market fit and you have success, you

43:28

will have to make some changes so

43:31

that you can find long-term durability.

43:34

My view is that as a

43:36

founder right now, you can't build

43:39

for features and you can't be

43:41

thinking just about the product

43:43

alone. You have to be thinking both

43:45

about product and go to market simultaneously.

43:48

It's interesting because if you talk to

43:50

any second time founder, that's really how

43:52

their mindset is. First time

43:55

founders always have the sensibility that it's about

43:57

product and yes, it is of course starting

43:59

with product. But second time founders are like,

44:01

yes, it's about product, but then how do I

44:03

go bring it to market? So it's like, George,

44:05

what does that mean? I hear people talk about GTM

44:07

all the time, but I would assume what you're saying

44:10

is like, hey, I've got the next great widget. This

44:12

is a cool product. Just go to market like, hey,

44:14

I have a letter of intent from a big customer,

44:16

and we're going to co-design this. Like, what are those

44:19

practically speaking? What would those be? I

44:21

mean, in the early days, it's good

44:23

design partners. It's good focus for folks

44:25

who are going to co-collaborate in building

44:28

what you're bringing to market. But I

44:30

think soon after that, like when you're

44:32

closer to first generation,

44:34

your MVP call it your first generation

44:36

of the product, then you

44:38

got to quickly move your mind towards

44:40

commercialization and how to get paid what

44:44

you're building. And I think in

44:46

previous incarnations, we've had more time

44:49

and we've had more ability to

44:51

keep building and not have

44:53

to worry as much about commercial

44:55

value and selling and

44:58

being good at finding whether

45:00

the things that you're building have some

45:02

ability to sell. Yeah, yeah. I mean,

45:04

like ruthlessly focused on what's going to

45:06

sell, not just what's like, you

45:08

know, close to your heart. Yeah, and focus

45:10

earlier on that than you might want. Yeah.

45:14

And I think that's something that in

45:16

this cycle of iteration and at

45:18

the clock speed that we're moving at right

45:20

now, you're going to have to

45:23

really find your commercial value

45:25

faster. And even

45:27

after you find your commercial value,

45:30

you might have moments where you

45:32

still have to pivot. Yeah, it's

45:34

so easy to say and hard to do like when

45:36

you're in the situation as a product person, you fall

45:38

in love with your product, you've got, you know, certain

45:40

things. And, you know, even though it's a simple statement,

45:42

I actually can't underline this enough for our listeners, how

45:45

important that is that kind of ruthless focus on

45:47

what people are going to buy. I mean,

45:49

for great founders, great founders as product people

45:51

are tortured artists, I get it. I

45:54

was certainly one of those folks myself,

45:56

but I think the best founders, more

45:56

often than not, are not just the

45:58

tortured artists, but are the world-class salespeople.

46:03

And so if you can take the

46:05

combination of tortured artists and world-class salesperson

46:07

and bring that together, those are the

46:10

founders that will go the distance over

46:12

time. Well, speaking of founders that will go

46:14

the distance, right? We talked a little bit about the overall

46:16

markets and things you're looking at in 2024, some signals.

46:18

You've got your own

46:21

point of view as an investor. You know,

46:23

you've invested in some, you've got your current

46:25

portfolio companies. Do you want to share some

46:27

stories of how some of your current portfolio

46:30

companies have checked off the boxes you think

46:32

are important in developing in this area? Yeah.

46:35

I mean, I'll give two examples of

46:37

companies that have some tremendous scale.

46:39

I mentioned Weights & Biases, but

46:41

I think it's important to understand

46:43

that when we had a great

46:45

blog post about Lucas and his

46:47

journey around weights and biases more

46:49

recently, the reason why weights and

46:51

biases has worked so well in

46:53

the market is that Lucas had

46:55

his first startup, which is CrowdFlower,

46:57

and he had all of the

46:59

challenges of building CrowdFlower. It was

47:01

that mindset as a second-time founder that

47:03

went to market and not

47:05

only built a better product,

47:08

but figured out a way that the

47:10

commercial business could scale. Insight was lucky

47:12

enough to catch Weights & Biases even

47:14

before the commercialization really started. And now

47:16

three years later, we're

47:18

starting to see the tremendous scale of

47:20

how big a business like Weights &

47:22

Biases, particularly as a leading tool in

47:24

the MLOps world, can get.

47:27

I think outside of even just the

47:29

AI space where software businesses are

47:31

becoming AI businesses, what Christine

47:33

Yen is doing with Honeycomb

47:35

is fascinating, right? Because it's not

47:38

just the fact that you're building

47:40

a next-generation observability product that is

47:42

fully differentiated from all the existing

47:44

incumbents in the space, but they're

47:47

doing innovative things like introducing generative

47:49

capabilities into a traditional

47:51

system observability market. And it turns

47:53

out that in the case

47:56

of Honeycomb, the unique

47:58

value proposition of generative

48:00

AI was that every observability product

48:02

has somewhat of an esoteric language

48:04

to be able to work in

48:06

the context of finding that observable

48:08

event. If you introduce natural

48:10

language as the front end, how

48:13

much easier and how much more democratized could

48:15

an observability product be? It's exactly

48:17

the thesis that the team went

48:20

in with at Honeycomb: to build

48:22

a natural language interface using a

48:24

fine-tuned OpenAI model into

48:27

a query builder for Honeycomb. It

48:29

was the fastest adopted feature

48:32

in Honeycomb's history where 40% of

48:34

all Honeycomb users were

48:36

on that product within the first 24

48:39

hours of it launching to give you an

48:41

idea of how fast the adoption curve occurs.
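
To make that pattern concrete, here is a minimal sketch of what George describes: natural language in, a structured query out, with an LLM in the middle. It assumes the OpenAI Python client; the model id, the JSON schema, and the function name are invented for illustration and are not Honeycomb's actual implementation.

# Hypothetical sketch only: translate a natural-language question into a
# structured observability query with an LLM. The model id and the JSON
# schema below are invented stand-ins, not Honeycomb's real API.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Translate the user's question into a JSON object with keys "
    "'calculations', 'filters', and 'time_range_minutes'."
)

def natural_language_to_query(question: str) -> dict:
    # Ask the model for structured JSON rather than free-form prose.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # a fine-tuned model id would go here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Example: natural_language_to_query("which endpoints got slower today?")
# might return {"calculations": ["P99(duration_ms)"], "filters": [...],
# "time_range_minutes": 1440} -- ready to hand to a query engine.

The appeal of this design is that the esoteric query language never disappears; it just becomes the model's output format instead of the user's input format.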

48:44

The advice I would generally give to

48:47

founders as they're going through this

48:49

is to really continue to find those

48:52

great moments of product differentiation early

48:54

as well as later in your

48:56

product development cycle and then be

48:59

willing to continue to experiment and

49:01

iterate beyond the first

49:03

commercialization successes that you have.

49:05

Don't rest on those stories.

49:08

Find different ways that you can continue to

49:10

commercialize and scale your business because the market

49:12

continues to evolve very quickly and you want

49:14

to be resilient to all those changes that

49:17

are occurring. Very interesting.

49:19

We're going to wrap up here at the end of

49:21

our discussion and I really appreciate you taking the time.

49:25

Just one last idea here. You have a

49:27

point of view as an investor. In

49:29

the last few years, a lot of investors have different ways

49:31

of approaching the market. Some of them don't have a point

49:34

of view. Some of them are kind of like, I'll take your

49:36

point of view and I'll use that too.

49:39

What do you think they're doing well in this space and

49:41

what could they be doing better? I

49:44

think most VCs are just

49:46

very good at sensing when there's a

49:48

shift in the market and really embracing

49:50

that shift. If

49:53

you think about where momentum is occurring,

49:55

VCs have some of the best sense

49:57

of momentum there could be. Now,

50:01

it's no surprise, kind of, how the

50:03

generative AI market has really evolved

50:06

because VCs have put a lot

50:08

of energy, a lot of investment

50:10

directly into that market. Now, I

50:12

think where we as VCs

50:15

can do a lot better over time

50:17

is just really, truly

50:19

understand how hard the founder journeys

50:21

are and be more

50:23

empathetic to how difficult these journeys

50:26

are because as much as you're

50:28

like, hey, you should move into this portion of the market

50:30

and you should do it. That takes work. That takes

50:33

a lot of energy. It just takes a lot

50:35

of calories. I think the best

50:37

VCs at some point have figured out a

50:39

way to have that empathy in terms

50:42

of really understanding what the founder's

50:44

journey is and being able to

50:47

be a co-pilot to their journeys. I

50:50

think for me personally, that empathy

50:52

came from just having to have

50:54

been one as a founder and

50:56

a builder in the past and that kind of gave me

50:58

a little bit of that

51:00

empathy as I went into venture capital. For

51:04

the venture capitalists that haven't necessarily done

51:06

that, that's okay, because I think you have

51:08

enough experiences of just being on boards,

51:10

being with founders that you should be

51:13

able to translate that to providing proper

51:15

empathy for them. I think in most

51:17

cases, the more that we

51:19

are empathetic as VCs and the more that

51:21

we understand the momentum that's occurring in the

51:24

market, the two of those things combined together

51:26

generally, you're going to end up with a reasonable

51:28

set of investments made as you're progressing. I

51:30

love that. I

51:33

love the idea too that you don't have to have

51:35

been a founder, but that you can just be with them. I

51:37

like that phrase, that simple phrase is great. Just

51:40

be with them for a while. Sit with them

51:42

and experience their point of view. I love that.

51:44

Alright, last question. My inbox and

51:46

every bookshelf is just topped up

51:49

with information in this space and reading

51:51

material that I often don't get through. For

51:54

our listeners, what are one or two things

51:56

that you've read or newsletters or articles that

51:59

you would recommend? Obviously, Insight has some great

52:01

ones, but if there were one or two things

52:03

that somebody wanted to read, what would you recommend

52:06

right now? Yeah. I mean,

52:08

as far as a lot of the thinking

52:10

on responsible AI, certainly the work that we've

52:13

seen in market by observability leaders

52:15

like Fiddler has been great. I

52:17

would also add the constitutional AI

52:20

frameworks that are emerging, and Anthropic

52:22

is probably one of the folks

52:24

that have been leading that work

52:26

and some of the work that's

52:28

surrounding constitutional AI and RLAIF,

52:30

the use of reinforcement learning from

52:32

AI feedback to almost guardrail

52:35

how AI behaves in the wild

52:37

is an emerging and very topical field

52:39

and I would sort of dig into

52:41

that.
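
As a toy illustration of the RLAIF-style loop George points at: a draft answer is critiqued against a written principle and then revised, with the critique coming from a model rather than a human. Everything here, the single-principle constitution, the helper names, and the model id, is an invented sketch, not Anthropic's actual pipeline.

# Toy critique-and-revise loop in the spirit of constitutional AI. In full
# RLAIF the AI critiques would become preference labels for training a
# reward model; this sketch shows only the supervised half of the idea.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

CONSTITUTION = "Responses must not help cause harm and must be honest."

def ask(prompt: str) -> str:
    # One-shot call to a chat model; the model id is a stand-in.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def critique_and_revise(question: str) -> str:
    draft = ask(question)
    critique = ask(
        f"Principle: {CONSTITUTION}\nResponse: {draft}\n"
        "Briefly note any way the response violates the principle."
    )
    return ask(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Rewrite the draft so it fully follows the principle."
    )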

52:43

I think kind of more broadly, if you think about responsible AI

52:46

and where it's going to play out in

52:48

the next few years, we just have to

52:50

build systems better aligned to human values, and

52:52

I think one of the original thoughts

52:54

on that was just what Brian Christian

52:57

wrote in a book called The Alignment Problem.

53:00

I would highly recommend taking a look at

53:02

that book as a way of really

53:04

thinking about how we can

53:06

build better aligned systems. If

53:08

you want to just have a discourse

53:10

on just how we think about AI

53:13

systems and the future of AI systems

53:16

sitting alongside of humanity, one

53:18

of my favorite recent works on this

53:21

topic was a series of

53:23

essays that were compiled into a

53:25

book written by Meghan O'Gieblyn called

53:27

God, Human, Animal, Machine. It

53:30

was a wonderful sort of

53:32

amalgamation of her thoughts

53:34

on what it took to build

53:37

this current generation of AI systems and

53:39

what are the types of people that

53:41

really are finding meaning around

53:44

the AI systems that could

53:46

be part of what I almost think

53:48

of as co-pilots to our lives. Those

53:50

are probably three books and three sources

53:52

that I've been inspired by. I almost

53:55

didn't ask that question and now I learned

53:57

at least two new things from you. Thank

53:59

you. That's great. Well, thank you

54:01

so much for taking the time. Software Engineering Daily thanks

54:03

you. I know our audience is super excited to hear

54:05

from you. Hope you have a great rest of your

54:07

day. Thanks a lot. Great. Thank you, again. Appreciate it.
