Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Hey Lily , I'm curious about how you've been using
0:02
AI at your day job . In my conversations
0:05
with product people lately , I've gotten
0:07
a few great use cases , but loads more
0:09
cautionary tales .
0:11
Ooh interesting . How are your connections using
0:13
it ?
0:14
Well, let's see. One CPO had it
0:16
write user stories for an Epic and
0:18
it did a really great job . But this
0:20
was a waterfall use case . They were implementing
0:22
a publicly documented API , so
0:25
there weren't that many unknowns in this one and
0:27
while the stories needed to be tweaked
0:29
, it saved a load of time . But
0:31
when people tell me about the bad use cases , they
0:33
tend to show up when there's lots of unknowns or
0:35
when there's unique circumstances in
0:37
the company , or when the PM doesn't
0:40
even bother to read the output and just passes
0:42
it along to the team .
0:44
OK , yeah , that might be more the fault of having
0:46
a faulty product manager than the
0:49
actual tool itself being a problem
0:51
, and we definitely tend to use
0:53
it to write first versions of things
0:55
like job descriptions or
0:57
user stories again as well , and
1:00
also getting inspiration on a piece of work
1:02
. So , for example , we are doing
1:04
some financial modelling at the moment and
1:07
I asked ChatGPT how I could
1:09
do it and what I should include , and
1:11
that was actually really helpful .
1:13
Well , that leads in really nicely
1:16
to our guest today , and she's
1:18
here to talk to us about how she and her team use AI
1:20
in their daily processes , building an
1:22
AI-enabled service for product people
1:24
.
1:25
Yes, Yana Wellender, the CEO
1:27
at Craftful , is here to talk about all
1:29
that , what she looks for in hiring
1:31
product people in this space , and whether
1:33
AI is going to replace product
1:36
managers .
1:37
There's some good news there that we'll see
1:39
how long podcast hosts actually have
1:41
.
1:43
On with the chat .
1:47
The product experience is brought to you by
1:49
mind the product . Every week on the podcast
1:51
we talk to the best product people from around
1:53
the globe .
1:54
Visit mindtheproduct.com to catch up on
1:56
past episodes and discover loads of free
1:58
resources to help you with your product practice
2:01
. You can also find more information
2:03
about Mind the Product's conferences and
2:06
their great training opportunities happening around
2:08
the world and online .
2:10
Create a free account on the website for a fully
2:12
personalised experience and to get
2:14
access to the full library of awesome
2:16
content and the weekly curated newsletter
2:19
Mind the Product also offers free
2:21
ProductTank meetups in more than 200 cities
2:23
. There's probably one near
2:25
you .
2:27
Hi Yana , welcome to the Product
2:29
Experience podcast . It's great to be
2:31
talking to you today. So glad to
2:33
be here . Thank you so much for having me . So
2:35
, before we get stuck into our topic , it would be
2:37
fab if you could give our listeners a
2:40
quick intro into who you
2:42
are and what you have been
2:44
doing in product , but also what you do today
2:46
, which is beyond the scope
2:48
of product , I guess .
2:51
Yeah, absolutely. So I am
2:54
CEO and founder of Craftful
2:56
, which is a co-pilot for
2:58
product teams to help them build better
3:00
products by listening to millions of users
3:02
in minutes. Before
3:04
Craftful, I was product
3:07
lead at various tech companies , everything
3:09
from PM number two at a fast
3:11
growing unicorn to head of products at the
3:14
company , responsible for products
3:16
that serve millions of users
3:18
. So really done a kind of a big spectrum of different
3:21
types of product work .
3:23
So we're not talking about being a CEO today
3:25
. We are talking about how
3:27
product people can use
3:29
AI in their day-to-day
3:32
lives to , I guess
3:34
, make their jobs easier
3:36
and to work more
3:38
effectively , faster and
3:40
take advantage of this new technology
3:43
that is bubbling up all around
3:45
us in lots of different ways . But
3:48
let's start with kind of what's
3:50
going on with large language
3:52
models and kind of generative AI at
3:54
the moment . How
3:57
much technical knowledge do you think product managers
4:00
need to have in this
4:02
space to be able to really understand
4:04
the capabilities there ?
4:07
I think it really depends on what you're
4:09
working on when you're
4:12
touching large language models . So if you're
4:14
going to be a PM working on foundational
4:16
models at OpenAI
4:18
or Google or any one of the companies
4:20
building foundational models, and there aren't a
4:22
whole lot of those opportunities
4:25
around, then you really need to have
4:27
in-depth understanding
4:29
of machine learning and be able to come up with
4:31
how your team
4:33
should go about
4:35
building a better model, which is really
4:38
really technical understanding
4:40
of these types
4:43
of models . If you're building an
4:45
AI solution on top of a foundational
4:47
model which is what most folks working
4:50
in this space are doing you need to be
4:52
able to understand how foundational models
4:54
work and how they evolve
4:56
, but you don't need to be able to
4:58
conceptualize how you
5:00
would build a better one , maybe
5:03
to give a little bit more concrete examples
5:06
on that you need to be able to understand how
5:08
the model works to be able to come up with that architecture
5:11
and prompts that you would use
5:13
to create your product , and
5:16
then you need to understand how the model evolves
5:18
to strategically decide
5:21
how you're investing your time . So
5:23
, for example , if you come across an edge case
5:25
that the model doesn't address
5:27
well today, you need to figure
5:29
out: do I need to adjust
5:31
my prompting or my architecture , or
5:33
do I need to fine tune the model , or do I need
5:35
to use another foundational
5:38
model , or should I just wait until
5:40
the foundational model I'm using is
5:42
going to get better and , if so , maybe I need to
5:44
just change the user experience around
5:46
how I use the model in the meantime ? So
5:49
there's a few kind of different things
5:51
that you may want to consider , and
5:54
if you don't know how
5:56
it evolves or anticipate
5:58
how the model is going to evolve , then you may
6:00
end up building something that
6:02
will take much longer to build and
6:06
may be too late , because the model has
6:08
changed in the meantime and you really didn't need to build
6:10
it.
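The edge-case triage described here can be sketched as a rough checklist. This is purely illustrative: the option names, the ordering of the checks, and the idea of trying the cheapest intervention first are my own assumptions, not Craftful's actual process.

```python
# Hypothetical sketch of the edge-case triage described above: when a
# foundational model misses a case, consider the options in rough order
# of cost. Names, ordering, and return strings are illustrative only.

def triage_edge_case(fixable_by_prompting: bool,
                     better_model_available: bool,
                     base_model_improving_fast: bool,
                     fixable_by_fine_tuning: bool) -> str:
    """Pick an intervention for an edge case the current model misses."""
    if fixable_by_prompting:
        return "adjust prompting or architecture"   # cheapest: no training
    if better_model_available:
        return "switch foundational model"          # swap the base model
    if base_model_improving_fast:
        # The base model may catch up; bridge the gap in UX for now.
        return "wait for the base model; adapt the UX in the meantime"
    if fixable_by_fine_tuning:
        return "fine-tune"                          # biggest investment
    return "redesign the feature"

print(triage_edge_case(False, False, True, True))
```

The point of the sketch is the strategic ordering: prompt and architecture changes are cheap and reversible, while fine-tuning is the big investment that may be obsoleted by the next base-model release.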
6:11
That is the most depressing answer I've
6:13
ever heard , not because of what you
6:15
were saying I think it was a brilliant
6:17
answer but it sounds exactly
6:20
like every other legacy tech
6:22
decision that I've dealt with in
6:24
the last 20 years .
6:26
You're like so many things have changed , but this hasn't
6:28
.
6:30
Yeah , okay , let's go slowly
6:33
. I'm curious . The answer you
6:35
gave , though , is that specific
6:38
to people working on
6:40
things that are using generative
6:42
AI , or people using
6:44
generative AI as
6:47
a superpower for what
6:50
they're doing ?
6:52
Oh , that's a great clarification . Yeah
6:54
, this was most definitely for people building
6:56
with large language models and generative AI , which
6:59
is sort of the space I am in . But
7:02
if you're looking at
7:04
using generative
7:07
AI tools , then
7:09
of course it's a very different equation . I
7:11
would say that you similarly need to appreciate
7:13
the evolution quite a bit , because
7:15
the space is moving so quickly that if you
7:18
try out a tool today
7:20
and you assume
7:22
that , okay , I could not use it to
7:24
solve this pain point, the
7:28
wrong conclusion would be that large
7:30
language models aren't good at that . The
7:33
right conclusion is this tool
7:35
isn't good at solving this problem today.
7:37
And tomorrow, even
7:39
this tool may be able to do it. But also
7:42
just like there's so many other solutions
7:44
out there and it's moving so incredibly quickly
7:46
that I think that's probably
7:48
what I would kind of take away the
7:50
kind of understanding that you need to
7:53
have if you're using these tools .
7:55
And how big is the learning curve ? So
7:58
, for example , when you're hiring , do you hire
8:00
people only with previous AI
8:03
you know, LLM experience
8:05
or are you hiring for talent and saying this
8:07
is something I can teach , I can get people
8:09
up to speed on ?
8:11
Yeah , I'm the latter . I'm hiring
8:13
for folks who can learn really
8:15
quickly and are very , very flexible and
8:18
you can see that they have that
8:20
kind of growth mindset. That is
8:22
kind of the piece
8:25
that I'm looking for .
8:27
And are you finding I
8:29
don't know if you're hiring at the moment or kind
8:31
of how recently you've been hiring
8:34
but are you finding
8:36
that there are people out there who do
8:38
come with experience , whether it's experience
8:40
that they've kind of gained from
8:42
playing around with stuff
8:45
or doing research that you know
8:47
in their own time , or whether it's from like work
8:49
that they've been doing ? Is
8:52
there a pool of product managers out there
8:54
that have a lot more knowledge with
8:56
this technology than others ?
8:59
Yeah, there definitely is, and I would say, you know
9:01
we're a very , very small team , so
9:05
we have been actively hiring . We have hired
9:07
quite a lot of folks recently
9:09
the most recent hire starts
9:11
on Monday but I
9:14
would say that when we are hiring , we're
9:17
really hiring for people that have played around
9:19
with the technology . You know , we fall in that
9:21
second bucket where you
9:23
know we're building on top of foundational models . Our
9:26
team , everyone on our team , needs to be
9:28
able to understand how the foundational models work
9:30
and evolve . We don't need to actually be able to build them
9:33
, and so I'm looking for people who
9:35
have ideally played around
9:37
right , because now , you know we started
9:39
building, I started playing around with GPT
9:42
technology in 2020. So
9:44
you have been able to play around with it for a while, and
9:47
so I'm looking for folks who ideally
9:49
have , but also just
9:51
looking to see are they
9:53
, will they be able to ? You know , we
9:56
don't have necessarily anyone who can build large
9:58
language models on our team , and
10:00
I would say that that may be
10:03
, in some sense, a liability, right, if you were
10:05
to have someone
10:07
with that expertise, if you're building
10:10
a product on top of foundational models
10:12
, because folks who have
10:14
that expertise, and there's very few of them,
10:16
are going to want to build
10:18
models and focus
10:21
on fine tuning and do things that
10:23
that are pretty big investments , which
10:26
could potentially distract you from
10:28
the kind of the investments that
10:30
you need to be doing in this really fast
10:32
moving space and take you away from
10:34
the work that actually is strategically
10:37
important .
10:38
Yana , part of our job as
10:41
podcast hosts is to ask the stupid question
10:43
so that you can give us the smart answer . So
10:45
I'm going to not be afraid , I'm just going to
10:47
go ahead and ask the stupid question. Do
10:50
it , do it . So we've all
10:52
worked with with stakeholders
10:55
who think they know exactly what is
10:57
needed, they know what to build, they
10:59
know what to do , and they all complain that
11:01
the product team is just not fast enough
11:03
, weird , or we don't do the right thing . So
11:06
Is AI going to
11:08
replace product management ? In organizations
11:11
like that , where they just say write the stories , write
11:13
the tickets , write the user acceptance criteria
11:15
? Is there a danger that product
11:17
management just disappears where the CEO
11:19
believes they know better ?
11:24
Well , I'm going to respond with this super corny
11:26
thing, which is: no, AI
11:28
will not replace PMs. PMs
11:31
using AI will replace PMs . I
11:34
think that that's what we're seeing: that
11:37
AI is able to handle
11:39
lots and lots of different tasks that
11:42
then can free up product
11:44
managers to be much more strategic
11:46
in their work and
11:48
be able to make better decisions with
11:51
all the data that AI
11:54
makes accessible . That's for today
11:56
. I think the really interesting question is
11:59
what happens after ? Right In the world with
12:01
AGI and superintelligence and
12:05
in a place where AI can actually think
12:07
and reason better than humans . What happens
12:09
then ? Right ? And I
12:12
believe that humans will still
12:14
want to build products for other humans
12:16
, because creating things
12:19
is what makes us human . Our
12:23
drive to create things , art
12:25
, technology , is what
12:28
distinguishes us from other species , and
12:30
I think that that's always going to
12:32
be true . That's kind of why I'm so excited to be building
12:35
for product builders , because I think
12:37
we'll always figure out how to continue doing
12:39
that and how to make sure that that's
12:41
valuable in a world where
12:43
AI can reason better
12:45
than us .
12:47
So in a world where AI replaces
12:49
product managers , it's probably also
12:51
replacing everyone else around us as well
12:53
.
12:55
I think that we product managers are actually
12:57
product builders . We're in the category of product
12:59
builders and we will continue being product
13:01
builders . We will be somehow
13:03
, in an AI-enabled way , be building
13:05
products , and
13:07
the good analogy in my mind is what
13:10
happened with art after
13:12
photography. Right, there continued
13:14
being a market for
13:16
art, people continued
13:19
making art, people continued buying
13:21
art , and it's because there's something there we
13:23
humans love to create , and when
13:26
we create something for other humans , other
13:29
humans see a value in that that's kind
13:31
of unique to that human creation
13:33
. I think that that's
13:35
what . I don't know exactly what that will look
13:37
like , but we product managers , in
13:40
the category of product builders , will continue
13:42
having a role to play .
13:44
It will be like old school . You
13:47
know , use this product . It was created
13:50
by coding manually
13:52
by hand, with no no-code
13:55
solutions and no AI used
13:57
.
13:59
It will be created by manually thinking
14:02
, you know .
14:04
Artisanal HTML .
14:07
Exactly so
14:10
. You said something earlier about how PMs
14:13
that use AI will replace PMs
14:15
. How do you see PMs
14:19
, the PMs that are using
14:21
AI , how do you see them using
14:23
, or how do you think they will be using AI
14:25
in their kind of daily
14:28
jobs ?
14:29
Yeah , so we use AI
14:32
for products and we'll
14:34
obviously use our own AI . So
14:36
Craftful AI, we use that for
14:38
all of our user research and
14:40
roadmapping today. So all
14:43
of our support chats in
14:45
Intercom and user
14:47
calls sync directly to Craftful
14:49
from Google Meet and then we
14:51
get lists of feature requests that
14:55
come with links to direct mentions
14:58
. We can ask follow-up questions and chat
15:00
with that feedback . We're a B2B
15:02
company so we mostly rely on
15:04
Intercom and
15:07
calls with users , but our B2C
15:09
customers can do the same thing with
15:12
App Store reviews and online
15:14
reviews, like G2 data, and that
15:16
kind of stuff . From there you can do
15:18
kind of deeper research . So
15:21
we have this survey tool where
15:23
you can feed in all of the past
15:25
feedback and have the survey automatically
15:28
generate follow-up questions and
15:30
format all of those questions with AI . So
15:34
that's what used to take hundreds of hours
15:36
. To kind of read through all of the feedback you already
15:38
had and then try to figure out what are
15:40
going to be my kind of follow-up questions takes
15:42
a minute and then we
15:45
send out those surveys to users
15:47
and we also have them embedded on our website
15:49
and continuously collect
15:52
feedback . That way we then use it
15:54
to generate user stories , which
15:56
are also AI generated and
15:58
get synced automatically to JIRA
16:00
when there's a feature request that we actually want to act on
16:02
.
16:03
We have AI generated projects
16:05
which are sort of like the foundation of our roadmap.
16:07
One of the things I
16:09
find quite interesting about
16:12
all of this , but about what you were
16:14
saying there was , you know , if
16:16
we can use AI to
16:18
kind of work faster , smarter , get
16:20
it to do a lot of the insight
16:23
analysis , like creating
16:25
surveys so that we can understand
16:28
our customers better , synthesizing
16:30
all of that data , writing user
16:32
stories , building roadmaps , and
16:35
I can imagine that , like I can really , you
16:37
know, seeing what ChatGPT
16:40
can do , like I can imagine
16:42
all of this kind of coming to fruition . But
16:45
I feel like some of the creativity
16:47
in the product role is in some
16:49
of those things and is in our
16:51
kind of consumption
16:53
of that data and our own kind of internal
16:56
synthesis of
16:58
that and then how
17:00
we then create our stories and
17:02
roadmaps and things. And I
17:05
think there was a story I can't remember who told
17:07
me this , but around how
17:09
they created these robots to help pick
17:11
and pack in supermarkets
17:14
and actually , instead
17:16
of it kind of helping the human
17:18
for like free up the human
17:21
to do more interesting things
17:23
, the human was kind of dumbed down
17:25
to this like now pick up
17:27
this thing , now pick up this thing , and like
17:29
we'd only kind of feed it like one
17:32
task at a time or something like that , and
17:34
it's almost like hearing you talk about what
17:37
AI can do for us is incredibly
17:39
powerful , but then it does
17:41
make me feel like , well , hang on , where
17:43
is that creativity going to come ? What
17:46
does the product manager then get
17:49
to do ? That is creative .
17:51
I think, you know, maybe there's
17:55
lots of different ways in which that can happen . I can
17:57
kind of talk about how we think about that in
17:59
our product today . I
18:02
think as these tools evolve , that's going
18:04
to be a big question around . Are
18:06
they built in a way to enable that creativity
18:09
or are they built in a way that just creates , you
18:12
know, uninteresting work for everyone
18:14
? We really thought about deeply
18:16
how do we make sure that we
18:19
surface everything that a user
18:21
or product manager needs to know so
18:24
that they can be creative , so that
18:26
when I come to Craftful , I get
18:28
a list of here's all the things
18:30
that your users said
18:33
in their prioritized dimensions
18:35
, how frequently they were mentioned . So
18:38
I don't have to go and read , but I could right Like I
18:40
can then click through and just read examples
18:42
of where it came up and how that looked . Then
18:44
it's up to me to
18:46
decide which
18:48
of these things are interesting , if any , and
18:52
when I do that I say okay
18:54
, now generate a user story about this one thing
18:56
or do not ? Actually
18:58
, do not generate user stories about any of these
19:00
things . I want to create a project , a new project
19:02
that's based off of my idea , and
19:05
then I can go and see is there some interesting
19:07
feedback I've been getting about this ? No , well
19:10
, now let me then survey
19:12
my users and try to find out what they would
19:14
think about this idea . And then the
19:16
way the survey mechanism works is I
19:18
put in a new topic that's not based
19:21
on anything that is in Craftful
19:23
, that's based on my idea
19:25
again right and it automatically
19:27
generates a survey with questions that
19:29
I have then as a foundation and I can then
19:31
tweak them and change them however I want . But it saves
19:34
me time at every step , so that the
19:37
steps where I would creatively come up with
19:39
what's the best solution to users'
19:42
problems, I have lots
19:44
of time to do that and
19:46
I have lots of tools to use
19:49
to try to figure out what exactly is going to
19:51
be the right solution . Same thing with
19:53
when I write the PRD I
19:55
get an option to just generate it based
19:57
on what I want it to be , not necessarily
20:00
based on what's already in Craftful . So
20:02
there's kind of figuring out that right
20:05
balance between the user
20:07
using the product and what data
20:09
you're surfacing to the user. I
20:12
think that that's where you hopefully
20:14
get that human creativity
20:16
.
20:18
Yana , we've all seen , and
20:20
certainly I've had the experience where playing
20:23
around with an LLM and it starts making
20:25
things up and
20:27
that may just be because of poor
20:29
prompting or it was a more immature model at the time . So
20:33
if you're asking it to project
20:35
things based
20:38
on previous responses, what's
20:42
the data ? Is this just an example of
20:44
a lazy person using bad tools
20:46
or somebody using tools poorly
20:48
? How do you protect against that
20:51
? You may be smart enough to
20:53
use the systems well , but how do you
20:55
ensure that everyone else is using them well
20:57
and not being led
20:59
down the wrong path by a
21:02
fanciful answer ?
21:05
Yeah , so we've put together
21:07
a lot of different machine
21:09
learning tools in
21:11
our analysis so it
21:13
isn't just relying on feeding
21:16
data into the model and then getting an output
21:18
back . So there's multiple
21:20
different steps and as
21:23
a result , it really is getting
21:25
insights out of the data , grouping
21:28
those insights as big
21:30
buckets of how it works , grouping
21:32
those insights into
21:34
categories and then matching it
21:36
back to original mentions . And
21:39
that matching back to original mentions
21:41
helps make
21:43
sure that there aren't hallucinations . But
21:45
even if there were , we then
21:48
show this to the human operator
21:50
and to the user so that they can see
21:52
okay , here's a category of
21:54
feedback that came up and
21:56
here's how many times the model thinks
21:58
that it came up . And then I can click
22:00
through and read original mentions and
22:02
see is this doing
22:05
what I thought it was going to do , or
22:07
maybe it's not ? And then I give feedback
22:09
on that and we use that feedback to continuously
22:12
improve the model to make sure that
22:14
it works
22:16
better and better for our users
22:19
over time .
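The match-back step described in this answer could look something like this minimal sketch. The keyword matching and field names are toy stand-ins for whatever Craftful's real pipeline does; the point is only that every surfaced insight carries its supporting mentions, and anything unsupported gets flagged rather than presented as fact.

```python
# Minimal sketch of grounding AI-extracted insights: match each insight
# back to original user mentions, count how often it came up, and flag
# anything with no supporting mention as a possible hallucination.
# Substring matching here is a toy stand-in for a real matching model.

def ground_insights(insights, mentions):
    """Attach supporting mentions to each insight; flag unsupported ones."""
    grounded = []
    for insight in insights:
        keyword = insight.lower()
        support = [m for m in mentions if keyword in m.lower()]
        grounded.append({
            "insight": insight,
            "mentions": support,      # click-through evidence for the PM
            "count": len(support),    # "how many times it came up"
            "suspect": not support,   # possible hallucination
        })
    return grounded

mentions = [
    "The export button is broken on Safari.",
    "Please fix the export button!",
    "Dark mode would be great.",
]
for row in ground_insights(["export button", "pink elephants"], mentions):
    print(row["insight"], row["count"], row["suspect"])
```

The human stays in the loop exactly as described: the PM sees the count, can click through to the raw mentions, and can feed corrections back to improve the matching over time.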
22:20
And I feel like one of the things that you've
22:22
talked about a bit as well around
22:24
how product
22:26
managers use these
22:29
AI tools is
22:31
the new skills and techniques that we need
22:33
of analysing these
22:35
AI outputs and potentially
22:38
prompt writing , the right kinds
22:40
of prompts , and things like that . How
22:43
do you see us evolving
22:45
these skills as product people
22:47
, and is there anything else that we're going
22:49
to need to learn ?
22:52
I think that the need to learn how
22:54
to prompt well is
22:57
becoming smaller and smaller over time . When
23:00
I started experimenting with
23:02
these models in early
23:04
2020 , the
23:06
prompts had to be incredibly
23:09
complicated to get something
23:11
even remotely as helpful
23:14
as the kind of things you can get by
23:16
just saying one word today to the
23:18
model , a couple of words , and
23:23
so what I think is going to happen is that it's going to be
23:25
easier and easier over time to get
23:27
good output out of these models
23:29
, and so prompting
23:31
isn't necessarily going to be needed; prompt
23:33
engineering is not necessarily
23:35
going to be a concept , and , of course
23:38
, various tools various kind of specialized
23:40
tools that also help adjust
23:42
the models to specific users
23:45
also help with that , because , even if
23:47
it is a little bit more complicated
23:50
to prompt models today
23:52
, when you use Craftful, you don't actually have to, right
23:54
. It just generates a list
23:56
of feature requests for you and it generates a survey
23:58
for you and it generates a PRD for
24:00
you and it generates a user story and acceptance criteria
24:02
and all that stuff . But I do think
24:04
that the
24:07
skill or even maybe just personality
24:10
trait that I would say is important
24:12
to use all this stuff is open-mindedness
24:15
, just this sense of
24:17
"this will
24:19
probably start working", or
24:21
being able to see the glimpses of
24:25
what's possible is , I think
24:27
, kind of what's
24:29
important , because what I see in
24:31
most of our users and we now have over
24:33
30,000 product teams using Craftful
24:35
, but it still is a tiny , tiny percentage
24:38
of each company when
24:40
you think about it , and they will often
24:42
tell me about interactions that they have
24:44
with their colleagues where someone has tried
24:47
ChatGPT when it just came out and they
24:49
tried it on some use case and it didn't work , and
24:51
so they're like , oh no , this is never going to work and
24:53
now they're not trying anything and they've been trying
24:55
to send them like a team invite and they're
24:58
not picking it up because they've
25:00
decided that AI just doesn't do this stuff
25:02
. Well , I think
25:04
that's kind of what's important
25:06
is just curiosity and
25:09
playing around with
25:11
tools and trying
25:13
to see what's
25:15
possible today based on everything that launched
25:17
yesterday , what's possible today . I
25:21
think that's kind of where I see
25:23
the biggest need right now .
25:25
So we already have a problem as
25:27
humans with lots of bias , and adding
25:30
AI to it , we know we can just supercharge
25:32
and scale bias beyond belief. Is
25:34
there a chance that we can do this better, or
25:37
are we just cursed to be worse
25:40
at this , no matter what tools we add
25:42
in .
25:43
Yeah , I think that there
25:45
is a chance for us to do this better , but
25:48
we have to be really , really thoughtful about
25:50
what we're trying to do , and that
25:53
may be tough In
25:55
an environment that's moving really fast . To
25:59
give you an example of where I've seen bias
26:02
show up in our own work is
26:04
or rather , how
26:06
we've seen AI do better when
26:08
it comes to bias is . We
26:11
run our analysis
26:13
on lots and lots of different categories
26:15
of products . At one point we
26:17
ran it on dating apps
26:19
. One thing
26:21
we discovered is that all
26:24
or most dating apps
26:26
have this list of feature requests
26:28
. Somewhere around
26:31
, feature request number seven or eight or
26:33
so is always
26:35
a request for better
26:38
racial matching . When
26:41
I first saw this , I was appalled
26:44
by this . It's like , well , app Store or Music
26:46
, you never know , there
26:48
may be some bad stuff in there . I
26:51
don't trust this . But then , as I looked at
26:54
what actually is the underlying issue
26:57
, it's a lot of complaint
26:59
from racial minorities that they're not getting as
27:01
good matches as racial
27:03
majorities . The
27:06
issue that we
27:08
were seeing, I
27:10
assume , is that when you use
27:13
analytics to improve a
27:15
product , you're really looking at the majority
27:17
. That's what most apps have
27:19
been using in the past , they really
27:21
, as a result , made
27:23
the experience really bad for minority
27:26
users . They're complaining
27:28
, but they're always the minority , so no one's really listening
27:30
to it . But when you're then using AI
27:33
to surface everything that got said
27:35
, then suddenly you're starting
27:37
to see this minority voice and
27:39
it gets surfaced . The question
27:41
is , how do we make sure that this happens ? How
27:44
do we make sure that AI
27:46
is better at it when we humans
27:48
are not , and the AI is trained on us ? So
27:50
how do we even get around this
27:52
problem ? I think that really comes from
27:55
. There's essentially
27:57
two steps of training . There's
27:59
this where the model learns how
28:01
to talk and it just uses all of
28:03
the data on the internet to
28:06
just understand what are words . It
28:08
doesn't actually understand what are words . It understands what are tokens
28:10
, what's like statistical
28:13
likelihood of them showing up again next to
28:15
each other . But then there's this fine-tuning
28:18
step , and in that fine-tuning step
28:20
is where we can teach its values
28:23
. That's really where we can
28:25
counteract some of the biases that
28:27
exist widely on the internet
28:29
and that we humans have had . I
28:33
think there's a lot of debate
28:35
around how to do that right , how to not overstep
28:37
, how to get their balance , and I think that that's
28:39
a very legitimate conversation
28:42
to be had , but I
28:44
think that there's just so much potential
28:46
to try to get it right , and
28:49
when we do , I think it will be better than
28:51
humans , hopefully .
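The pretraining intuition mentioned here, that the model learns the statistical likelihood of tokens showing up next to each other rather than "understanding words", can be shown with a toy bigram counter. Real LLMs learn this with neural networks over enormous corpora; this sketch captures only the counting idea.

```python
# Toy illustration of the pretraining intuition: count how often each
# token follows another, then read off next-token probabilities.
# This bigram model is only the statistical idea, not a real LLM.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1          # count adjacent token pairs

def next_token_probability(current, nxt):
    """P(nxt | current) estimated from bigram counts."""
    counts = following[current]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

# "cat" follows "the" in 2 of the 3 occurrences of "the"
print(next_token_probability("the", "cat"))
```

The fine-tuning step the answer contrasts this with operates on top of such a learned distribution, steering which continuations the model prefers rather than relearning the statistics from scratch.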
28:54
When you talked about fine-tuning for this
28:56
, I was almost
28:58
thinking of it as adding
29:00
the personality
29:02
to the model , but
29:05
I like your thing of giving it
29:07
values . Is that only
29:10
done at the foundational layer , or if you're
29:12
then working at
29:15
that other layer, I guess as Craftful
30:17
does, are you able to influence
29:19
those values at that point as well ?
29:22
Absolutely . At any point you
29:25
can have fine-tuning at different layers
29:27
. In our case , we're fine-tuning our models
29:29
to be the product manager , to think
29:31
as a product manager . So we're like , okay
29:33
, there's the piece
29:36
around , how do you understand language ? That's already been
29:38
done for us , then the morality
29:40
that's been , or some of these values that have
29:42
been added on top of that , where it will refuse to
29:44
look at certain words or
29:47
surface certain words that it
29:49
sees , which is great . And then
29:51
on top of that we add this
29:53
additional personality
29:56
that looks at data and
29:58
one way in which it shows up
30:00
is that it will actually not hallucinate
30:02
, like it rarely will hallucinate
30:04
. So when you're trying to get it to hallucinate , so
30:06
in our chat you can ask something like
30:09
what did users say about pink elephants ? And it will
30:11
say your data doesn't have
30:13
anything about pink elephants . In
30:16
a very matter-of-fact way , it
30:19
also says no a lot: no, it doesn't
30:21
do that . But
30:24
you definitely get that when
30:26
you're focused on a specific use case , you get
30:29
to add personality that is
30:31
better at that particular use case .
30:33
Does it have to say it depends at
30:35
least once a day?
30:38
Most certainly .
30:43
Yana, we've talked a lot about theory
30:45
. We've mixed in a couple of examples
30:47
, but can you give us a couple more examples ? What
30:49
do you see people accomplish today using
30:52
AI as an accelerator for
30:55
what they're doing ?
30:56
Yeah, I think, you know, I see
30:59
two trends in my
31:01
kind of gen AI
31:03
network . One is folks
31:05
leveraging AI to create
31:07
really innovative
31:10
solutions to existing problems that
31:12
have been unsolvable before . You
31:14
know , I think of ourselves in that category
31:16
. We're able to take lots and lots of
31:19
data and actually analyze it in
31:21
a way that keeps the substance
31:23
or the meaning but makes
31:25
it still really concise . And
31:27
then the other kind of trend
31:29
that I see is folks
31:32
using really small teams to
31:34
move fast and solve
31:36
many , many more problems than you
31:38
could previously solve because
31:41
of how our startup
31:43
ecosystem has looked in the past
31:45
, and so you have
31:47
a situation where a team
31:49
can identify a problem , they
31:52
can start building a solution based
31:54
on AI , use
31:57
AI internally as
31:59
much as possible , so they don't need a whole lot of resources
32:01
, which means that they don't have to go out and
32:03
convince investors that
32:06
this is a problem worth solving , which
32:08
is , you know , investors in many ways have been gatekeepers
32:11
too and they've looked at problems and been like
32:13
, oh , is this a venture scale problem ? And in the past
32:15
you know a lot of things that are
32:17
venture scale , like solutions for women or
32:19
minorities , haven't been
32:21
considered to be venture scale just because those
32:24
individuals weren't necessarily represented as
32:27
well in those professions
32:29
, and now I think a lot
32:31
of that is changing
32:33
and so you don't really need a whole lot of resources
32:36
to get started . You can build something
32:39
. Instead of going out and
32:41
trying to raise money , you can focus
32:43
on building
32:45
. You can generate
32:47
value , you can get revenue quickly
32:49
and probably ultimately build venture
32:51
scale businesses even though they're not technically
32:53
venture backed or maybe not to the same extent
32:56
.
32:57
This has been such an interesting conversation
32:59
, Jana . Thank you so much . We
33:02
have time for just one more . And
33:06
yeah , I'm just curious to know how
33:09
you think product leaders should support
33:11
their teams in learning how to
33:13
take advantage of AI in their work .
33:16
Yeah , I think you know teams
33:19
really should be exploring
33:22
AI as much as possible
33:24
, and so I encourage
33:26
our team to do that at
33:28
Craftful . I encourage people to
33:30
do that even if they're not at Craftful . We
33:34
as part of our hiring process , have
33:37
a task that we send out and
33:39
in that task , I specifically
33:41
tell people to use whatever AI they want
33:43
and
33:45
I'm not going to judge ,
33:49
as long as they can make sure that the
33:51
outputs that they are delivering
33:53
ultimately to me are something they want to
33:55
stand by . And I think that's
33:58
essentially kind of the
34:01
support that everyone needs right
34:03
now is to tell people go
34:06
experiment , use whatever you want
34:08
, see if you can make it work
34:10
, as long as you can put your name
34:12
next to whatever you
34:14
say is the end product , but
34:16
absolutely do not deliver anything that
34:18
you wouldn't put your name next to just because it was AI
34:20
generated , right ? So I think that's
34:22
kind of that's the way
34:25
to mentor this new
34:28
generation of AI-enabled
34:30
folks .
34:31
Jana , that was fantastic
34:33
. Thank you so much for being with us on the podcast
34:36
today .
34:37
Thank you so much for having me . This is a lot of fun .
34:46
Thanks , Jana . The
34:50
product experience is the first and
34:53
the best podcast
34:55
from Mind the Product . Our
34:57
hosts are me , Lily Smith and
34:59
me , Randy Silver . Lou
35:02
Run Pratt is our producer and Luke
35:04
Smith is our editor .
35:05
Our theme music is from Hamburg-based band
35:08
POW . That's PAU . Thanks
35:10
to Arne Kittler , who curates both ProductTank
35:12
and MTP Engage in Hamburg and
35:15
who also plays bass in the band , for letting
35:17
us use their music . You can connect
35:19
with your local product community via ProductTank ,
35:22
regular free meetups in
35:24
over 200 cities worldwide
35:26
.
35:26
If there's not one near you , maybe you should think
35:29
about starting one . To find out
35:31
more , go to mindtheproduct.com
35:33
/producttank .