Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:04
Welcome to Practical
0:07
AI.
0:08
If
0:10
you work with artificial intelligence, aspire
0:14
to, or are curious how
0:16
AI-related technologies are changing
0:18
the world, this is the show for
0:20
you. Thank you to our partners
0:23
for helping us bring you Practical AI each
0:25
and every week. What's up,
0:28
friends?
0:42
There's so much going on in the data
0:45
and machine learning space. It's just
0:47
hard to keep up. Did you know that graph technology
0:49
lets you connect the dots across your data and
0:51
ground your LLM in actual knowledge?
0:54
To learn about this new approach, don't miss Nodes
0:56
on October 26th. At this free online
0:59
conference, developers and data scientists
1:01
from around the world will share how they use graph technology
1:04
for everything from building intelligent apps and
1:06
APIs to enhancing machine learning
1:08
and improving data visualizations. There
1:11
are 90 inspiring talks
1:12
over 24 hours, so no matter where you're
1:14
at in the world, you can attend live sessions. To
1:17
register for this free conference, visit neo4j.com
1:20
slash nodes. That's N-E-O,
1:23
the number four, J dot
1:25
com slash nodes.
1:42
Hello, Jerod Santo here, Practical
1:45
AI's producer and co-host
1:47
of the Changelog podcast. Chris
1:50
and Daniel are out this week, and
1:52
I just got back from Raleigh, North Carolina,
1:54
attending the All Things Open conference. While
1:57
there, I moderated a panel all about AI's
1:59
impact
1:59
on developers, featuring keynoters
2:02
Emily Freeman and James Q. Quick.
2:05
We thought you might enjoy listening in on
2:07
that discussion so here it is. The
2:10
opening question didn't get recorded but
2:13
I asked each of them to introduce themselves
2:15
and tell us all if, long-term, they're
2:18
bearish or bullish on the
2:20
impact of AI on developers. James
2:23
Q. Quick, developer, speaker, teacher.
2:25
I've done
2:28
some combinations of those things professionally for 10
2:29
years now, which is pretty fun, and
2:32
on the AI front this is something I've actually talked a lot about.
2:35
I really enjoyed your talk, by the way. My
2:37
first pitch was an AI talk, and they're like, no, we
2:39
already have somebody that's taking that, so. My
2:42
take that I would love to get into more is a
2:45
very super positive thing and the thing that I've talked about
2:47
a lot recently is people's fear of it replacing
2:49
their jobs and kind of hopefully
2:52
maybe changing your mindset around that fear,
2:54
the fear that you might have and changing
2:56
it more into a positive thing so hopefully we can get
2:58
more into that long term.
3:00
I love that. I love that we're starting with bullish
3:02
or bearish like yes, no, go.
3:05
I'm Emily Freeman. I lead community engagement
3:08
at AWS. That means I come to communities
3:11
and conferences like these to
3:13
really
3:13
show up as a partner for the communities
3:15
that already exist.
3:16
I ran developer
3:18
relations at Microsoft prior to that
3:21
and I've certainly been in the community for a long
3:23
time. I wrote DevOps For Dummies and 97 Things Every
3:25
Cloud Engineer Should Know. I am
3:28
bullish on artificial intelligence
3:32
because it's happening, right? Like this
3:34
is happening. We have
3:36
to kind of make it our own and
3:39
lean into it rather than try
3:41
and fight it, in my opinion.
3:43
I guess you guys agree. Yeah,
3:45
we should have made that more controversial. We
3:48
should have set this up so we have a debate to kick
3:50
it off. Agreed.
3:55
Let's reel it in then. That's
3:58
long term both very positive. I
4:00
think I'm also in that in that camp so we
4:02
won't debate too harshly on that. But
4:04
what about today?
4:06
Where does it stand? I know we've had some good demos.
4:09
We have people using certain things. It's here.
4:11
We think it's staying. So to
4:13
developers it sounds like the message is it's time
4:16
to adopt. But how? How
4:18
do I get started if I'm just seeing
4:20
the demos on social media or
4:23
my colleague talks about it
4:25
and they show me what they're doing with it. What do I
4:27
do today to actually start my
4:30
AI journey?
4:31
I think getting started today
4:33
is really about acknowledging sort of where we're at
4:36
with AI and the tools that are available to
4:38
us in this moment. I think learning
4:41
as much as you can. This isn't new to
4:43
us. Right. Like we have to learn all the time
4:45
and adapt our skills and grow
4:47
as our technology grows.
4:50
So I believe that we have to again
4:52
lean into AI learn these things. I
4:55
mentioned prompt engineering earlier. I
4:58
don't think it's a permanent role, but
5:00
I think it is something that we have to engage with right
5:02
now and learning to design
5:07
our prompts to really lean into the specific
5:09
vectors of the model that you're
5:11
using is important. Learn
5:14
as much as you can about how it actually works on the
5:16
back end. I'm doing this right now. I
5:18
don't have a degree in data or
5:20
artificial intelligence. I'm
5:23
learning and I'm watching the content
5:25
that already exists and gleaning as much
5:27
as I can from it. So that's been
5:29
a great experience and
5:32
it's opening my eyes to sort of how we proceed
5:34
with this. But I think for now, it's just
5:36
exploring the tools, recognizing the strengths
5:39
and the limitations and being
5:41
ready to adapt
5:42
and change as we move forward. Perfect. I love the
5:44
adapt and change and I think if you don't
5:46
adapt and change and embrace
5:48
AI to a certain extent, this is dramatic,
5:50
but you'll get left behind. But the reason that's not
5:53
as scary as it sounds is that's been the case
5:55
with every technological advancement that we've ever
5:57
had. If you were writing machine
5:59
code years ago... If
6:01
you were still doing that, you would not be very
6:03
productive, right? Like maybe some of you are
6:06
and that's cool. But like we have
6:08
abstractions and we continue to have abstractions where
6:10
the world that we live in as developers is totally
6:12
different than it was five years ago, ten years ago, 20,
6:15
30 years ago. So this is just one of those things
6:17
and it doesn't happen overnight. It's a progression. And
6:19
so I think you look at like what's the easiest
6:22
way, can you add an extension to your text
6:24
editor to give you prompts, can you go to ChatGPT?
6:26
I use that almost on a daily basis, not just for code,
6:29
but just a creativity standpoint.
6:31
Like give me an idea of a project I can build or give me
6:34
questions to ask my Discord is actually something
6:36
that I've done. So I think that's kind of
6:38
the easy way to do it. And I think like where we are
6:40
now is really I guess very similar to what
6:42
you said about like the ironclad stage.
6:45
I forget the exact phrasing, but basically
6:47
the verification phase where everything
6:49
you do with AI has to be
6:52
verified. And that means that our jobs
6:54
don't go away because we have to be developers and have
6:56
that knowledge to be able to do that verification process.
6:59
But I think that's... you're
7:01
able to get a lot, but I think you also have to invest a pretty
7:03
good amount of time into the verification process
7:05
to make sure that it works, it works
7:07
correctly. And then if you're doing it for things outside
7:10
of code, it also fits your tone. So I use it for
7:13
blog posts and ideas for content and things,
7:15
but I have to like take that output and convert
7:17
that into something that is genuine for
7:19
me. So there's a lot that goes into just confirming,
7:21
verifying and tweaking the output that you
7:24
get.
7:24
I also just wanted to say I think there is
7:27
currently a bit of a misunderstanding about what a hype
7:29
cycle actually is. And so you'll hear this phrase
7:31
that AI, we're in a hype cycle of AI and
7:34
they're right. But the hype cycle,
7:36
if you actually go look, it was made by Gartner. Thank
7:39
you Gartner. And so it's
7:41
really just this, this sort of extreme
7:45
expectation, right? And so we're very
7:47
excited about it right now. And we haven't begun to really
7:50
see the technical limitations and the difficulties
7:52
that we will come across later. So being in a
7:54
hype cycle does not necessarily mean that AI is
7:57
going away. It is just
7:59
inflated right now.
8:02
Well to James's point I think very
8:05
few of us are writing machine code, but the ones who
8:07
are, are getting paid very well to write it.
8:09
Like ridiculous amounts of money. Don't sleep on COBOL.
8:13
Still a thing. Still will be for
8:15
some time to come. So in my experience,
8:18
I think that AI codegen
8:20
in the small is very much here at the
8:23
function level, at the line level, maybe
8:25
at the module level. As
8:28
you get into broader strokes,
8:30
understanding the system at large,
8:33
the things that really are in the
8:35
minds of the developers at this point. Do
8:38
you think it's always going to stay there? Do you
8:40
think it's going to move higher and higher up
8:42
the abstraction to where I can
8:45
say, hey AI
8:48
make me a Facebook for dogs? And
8:50
it will say, okay I'm done. Please
8:53
no. Well
8:56
that's the ridiculous endpoint, but
8:58
if we look at, like, what a... there
9:02
actually is one of those.
9:05
Or was, perhaps.
9:08
If we look at the way that a client
9:11
would hire for instance an indie developer.
9:14
Right. A contract freelance dev and
9:16
they have a business idea and
9:18
the client has some sort of idea of what that
9:21
business is. Right. And so maybe
9:23
they're at like the user story level.
9:26
Now most people aren't quite there yet. You have to help
9:28
them flesh that idea out. But
9:30
at a certain point there becomes a feature that is
9:33
given to that person and then they go and implement it.
9:36
And right now I think it's fair to say that
9:38
that person will use AI tooling in order
9:40
to do that faster, better, stronger, etc.
9:43
But is there a point and if
9:46
so please prognosticate
9:48
when that point comes when I can simply
9:50
be the writer of the user story and
9:52
we don't need anybody in between me
9:54
and the computer.
9:56
I think we're a long ways off from that. Any
10:00
time you're talking about an abstraction, even the best
10:02
developer tools on the market right now, the difficulty
10:05
really comes in plugging everything together, right?
10:08
We have access to
10:10
so many different tools that
10:12
operate wonderfully and provide incredible
10:15
benefits, but making them all integrate
10:18
and flow together is always the hard
10:20
thing. And I see artificial intelligence
10:22
as the exact same thing. It will do really
10:25
well in small sort of pockets
10:27
of where we need it to, and then plugging
10:30
it all together will be the sort of last
10:32
moment, I think,
10:34
where we're involved.
10:35
I think the abstraction just gets higher and
10:37
higher. And again, that's been the evolution of
10:40
humankind, right? That's the reason we have
10:43
technology and inventions is so that we don't have to do
10:45
the stuff that we wasted
10:47
a bunch of time doing looking back now,
10:49
like 100 years or whatever. So
10:52
all the abstractions that we see in development from you
10:54
no longer have to manage your own servers, you no longer
10:56
have to do patches, you no longer have to do firmware updates
10:58
and that kind of stuff. That's
11:01
just the continual path that will go down.
11:03
And I think I'm glad that you started with, like, it's a
11:05
very far away thing, because
11:07
people's, I think, irrational fear is like
11:09
tomorrow they lose their job because they use Chat
11:11
GPT to build the app. And that's not like anywhere near
11:14
the case. But I don't see
11:16
why the evolution of this wouldn't be
11:18
exactly
11:21
that where you say I want Facebook for dogs
11:23
and it gives it to you because that code and that logic is
11:25
out there. It takes a lot to put it together and to figure it out.
11:28
And this, like, prognosticate when?
11:30
I don't know... years, but
11:32
that could be the goal. But one interesting thing and
11:35
in doing some research for one of the talks I gave, I came
11:37
across the Jevons paradox. Anybody
11:39
heard of that? Cool. So it makes
11:41
me sound smart. So Jevons paradox
11:43
says if a
11:45
lot of people fear, like, if something
11:48
can do my job faster, that
11:50
means I'm going to lose my job because it's going to do my job.
11:52
But Jevons paradox looks at it like
11:54
we're only doing that in a
11:56
mind state of what we're capable of doing
11:58
now. We're not thinking forward
12:01
about what as a whole we're capable of doing
12:03
with these augmented tools. So we
12:05
can't even imagine what problems
12:07
we can solve in 10, 15, 20, 50 years. So even if right now
12:11
we have this idea of Facebook
12:13
in our head, we know what that is tangibly,
12:16
even if ChatGPT or whatever can do
12:18
that. We don't know what problems
12:20
we'll be solving that are infinitely more difficult than
12:22
that at the time. So it's going to be continuing like
12:25
tools are getting better, but we're continuing to do more,
12:27
I think, as an ecosystem. Okay,
12:30
so we're gonna get past Facebook is what you're saying. Yeah.
12:32
Okay. Okay,
12:34
well, how about the other I know we're both we're all optimistic
12:36
long term. But what about this very
12:38
real possibility?
12:40
I'm a C level executive.
12:43
I'm
12:45
watching TikTok. Somebody else on
12:47
TikTok,
12:48
who's a C-level executive
12:50
coach, says, look, developers are
12:52
getting more and more efficient thanks to AI. They
12:55
are now 40% more efficient. You
12:57
can just cut that directly off of your top line
12:59
and save your bottom line. We're in an economic
13:02
downturn, you need to cut your engineering team
13:04
today. Like that seems like a very real
13:06
fear and a very real possibility. What
13:08
are your thoughts? Sure, I'll take it.
13:11
No danger in that question.
13:13
No, I think
13:16
plenty of CEOs are probably watching these kinds of videos on
13:18
the TikTok. I don't know why that amuses me so much, like
13:20
a CEO. Yes, I call it the TikTok.
13:22
Because I think it's funny. Remember when Facebook was the Facebook?
13:25
And I'm a millennial. So
13:27
you know, millennials have been coming up a lot
13:29
today. In a good way. Yes,
13:32
we're good. Despite what the baby boomers
13:35
say. So I think it is a very
13:37
real possibility to cut and for
13:39
that to be the impetus
13:41
and the sort of thought around
13:42
this. And you see this throughout history
13:45
as we become more efficient and effective. Instead
13:48
of earning ourselves more time to
13:50
live the life that we want, we
13:53
prioritize work and are always chasing
13:55
that edge of the bottom line. Societally,
13:59
I think we could do better with that, but
14:01
it's always going to be a reality. And I think
14:04
this is where we have to learn and grow and adapt.
14:07
If we sit still, to James's point
14:09
earlier, that will not
14:11
behoove you long term. So learning,
14:14
adding value in different ways and adapting
14:17
to this new technology is key, I
14:19
think, to increasing our value
14:21
and having
14:22
some more longevity in
14:24
our roles.
14:25
That said, I think the roles are going to change.
14:28
And again, we're not new to this. Our
14:30
roles have changed completely. We had sysadmins,
14:33
and now you rarely see that job title. But
14:36
the population of people in technology
14:38
roles has only grown from there.
14:40
And so I think that there's extreme opportunity
14:43
if, again, we lean in and we're not approaching
14:46
this in a fear-based mentality of
14:48
trying to dig our heels in and
14:50
maintain the current system
14:52
as it stands. I
14:55
feel like we need to be more controversial. No,
14:58
I don't have it. I'm saying all those things
15:00
I agree with as well. To your point
15:02
earlier in your talk, again,
15:05
I forget the exact phrasing, but we kind
15:07
of had to go through the ironclad situation
15:11
to learn what the pitfalls were and to
15:13
then get to this next iteration of building
15:15
ships that was so much better in so many ways. And
15:17
I can see a scenario where what
15:19
you're saying happens, and I can
15:21
see them getting bit in the ass really quickly from
15:24
not having developers for when things go wrong,
15:26
because as we all know, no matter who writes the code, stuff
15:28
goes wrong. And somebody has to fully understand
15:30
that. And maybe somebody with
15:33
non-technical background can go into ChatGPT and
15:35
say, here's what I'm getting, what's going on?
15:37
But probably in that case, you really want someone with
15:39
a technical experience. I
15:41
just think it's such a slow, although
15:45
it seems super fast, I think it's a much slower
15:47
process than we give credit for. And I think
15:49
we just go down this rabbit hole of really thinking it's happening
15:52
now. And it's just not. And if that
15:54
has happened with a company, please share a story.
15:56
But I just haven't heard of that, but I can see
15:58
a time. And I think there'll just be... But I
16:00
also go back to the Jevons paradox of, like, we
16:02
still approach this conversation now with
16:05
a fairly limited mindset of what we can think about
16:07
being capable of building right now and we
16:10
just don't know what else we'll be building and 100% agree
16:12
jobs will be augmented but
16:14
not really in any different way,
16:17
although maybe slightly accelerated, than how they've augmented
16:19
over the course of time because that's what inventions
16:22
are for. So I really, I just go back to that
16:24
when I kind of go down maybe the fear
16:26
rabbit hole or question marks of the
16:28
benefits going forward. Okay.
16:29
If there was a 30% cut
16:32
and I didn't want to be a part of it, what
16:34
would I do today? Learn. We
16:36
have to learn. And you
16:39
know, ChatGPT has come up a lot, and that's
16:41
like sort of the leader right now. We
16:43
don't know that that's going to
16:44
stay that way. And so you're going to see
16:46
a ton of new tools come
16:48
forward. You're going to see a ton of startups
16:51
get funded. This is where venture
16:53
capitalists are putting their money right now. There's
16:55
going to be a lot of new
16:57
tools entering the market and a lot of churn as
17:00
we sort of hone in on
17:02
who the big players will be long-term.
17:04
So I think learn. I think you have
17:06
to sort of make demands
17:09
where you can, right? I've talked about responsible
17:11
AI. This is super critical. And
17:14
we are in the place where it is truly
17:15
our responsibility to push
17:17
for this and push against
17:19
the
17:20
sort of market forces that would
17:22
say, you know, we're moving forward
17:24
quickly with a profit-based
17:27
approach to this, a profit-first
17:30
approach. We have to
17:32
go forward with a set
17:34
of guidelines and standards that
17:37
protect
17:38
everyone and use this
17:40
in that responsible way. So
17:42
that for me is key as we proceed
17:44
and really owning that as
17:47
the people who not only build these tools
17:49
but utilize these tools that
17:52
we are clear on our approach
17:54
and our tolerance of that
17:56
behavior. I'll double down
17:58
and go a little bit deeper on the ownership
17:59
piece of learning. And
18:02
if, like, we're really honest, we're
18:04
in a really shitty time right now, like, economically
18:07
and jobs-wise, and I feel like every month I
18:10
have a friend of mine who reaches out, or
18:12
I just hear about, having got let go. I was let
18:14
go from my role before
18:16
really this started, like a year and a half ago, that summer. And
18:19
the reality is that's happening and it really
18:22
really sucks, and it's really, really hard. But
18:24
I think your skill
18:27
set has never been more important. Your
18:29
ability to communicate what you bring to the table has
18:31
never been more important. I talk about this a lot from a
18:33
career perspective: like, you have
18:35
to be able to share your benefit and
18:37
your value, and you have to be able to communicate that effectively
18:40
and also confidently when you go into potential interviews,
18:42
or just how you show up and talk to people in general. That's
18:45
never been more important. I also think, and I go
18:47
back to this a lot because it's very important to me: community.
18:50
Like, you never know when someone in this room
18:53
might be the person that helps you find your next job.
18:56
You never know what one of those connections is. And I
18:58
always clarify this, like, from a networking perspective:
19:01
it doesn't mean find people that work at a company so
19:03
that when you go to apply there, you can just have
19:05
an in. Like, that's not why you do it. You invest in
19:07
the community, you show up, you're a part of the conversations,
19:10
and you're genuine, and that will have
19:12
a significant return, or at least can. A little
19:15
personal story: when I was let go from my
19:17
job a year and a half ago, it was kind of a debate
19:19
for me of whether or not I was gonna go full-time to work
19:21
for myself, something I'd been thinking about for a while.
19:24
And so I posted on Twitter saying, like, if anyone is
19:26
hiring for DevRel or developer relations
19:29
or, like, management positions in that realm,
19:32
send me a message. And I got 50
19:35
or so DMs of people, like, not
19:37
only saying we're hiring, but also, like, kind
19:40
of, like, we'd like to hire you. And I don't say that from, like,
19:42
a braggy perspective. What I'm saying is, like, my
19:44
network at that point... I had nothing to worry about,
19:47
because I could find an opportunity, because I'd earned
19:49
trust in a community. And so all the people
19:51
that you're sitting next to, the people that you talk to, the people
19:53
on stage, you never know what that's
19:55
gonna do for you. So there's, in recent
19:57
times, never been a more important
20:00
time for your skill set to be very
20:02
sharp, and for you to be continuing to evolve that,
20:05
like you said, and then also your network and
20:07
how you show up in community, because you just never know.
20:09
I love that emphasis on community and
20:12
we are not a collection
20:14
of individuals who form a community.
20:16
We are a whole. And not everyone
20:19
should have to have the gumption or tenacity
20:21
or privilege to demand certain things from
20:24
their specific workplace
20:26
or role. And I think part of being a
20:28
community is protecting each other and
20:31
standing up for each other and showing up for
20:33
each other. And if you have the room
20:35
to do that, or the natural personality to
20:37
do that, the more that you can kind of
20:39
be a leader in this community and push for
20:41
those things in your own workplaces and
20:44
locations, the better off we will all
20:46
be.
20:47
I love like it just
20:50
sparks so much. The
20:52
automotive industry right now is going through
20:55
strikes and stuff and they did it in an interesting
20:58
way where they did it in bits and pieces
21:00
of taking more people off the line
21:02
so they can continue to budget to be able to do that
21:05
longer. There's also an equivalent
21:07
for writers, like the strikes in writing. I
21:10
think the power of community and people being able to
21:12
come together as a community to stand up for what they
21:14
think they deserve... I don't know that we are
21:17
there right now, but I think it is just an example of
21:19
what people that come together with a common goal can
21:21
do for an entire industry. And
21:24
maybe we get to a point where, like, we unionize
21:26
against AI. I don't know, like, that is maybe not it,
21:29
but, like, the power of those connections,
21:31
I think, can lead to being able to really
21:33
make a positive influence wherever
21:35
we end up.
21:36
Unionize against AI. I heard it
21:38
here first. Let's
21:41
dive into the adoption weeds
21:43
a little bit. So we talk about learning,
21:46
adopting, trying things.
21:48
What have you all found is particularly
21:52
beneficial today, how
21:54
I would go about adopting, and
21:56
things that let you down? For instance,
21:58
I will give one: because I write Elixir, which makes
22:01
me a little bit weird, AI
22:03
does not know Elixir very well.
22:06
So yes, it's here
22:08
but it's not evenly distributed. For
22:10
our more obscure technologies you're going to have worse
22:13
generations, you're going to have worse advice. It's
22:15
all good. So I use it less in that context.
22:18
When I'm writing the front end stuff it knows
22:20
JavaScript very well. So that's just an example
22:23
of what's good and what's not good. I've heard
22:25
the advice that you should use
22:27
it to generate your tests and then you write
22:30
the implementation. Maybe that's a good idea.
22:32
Maybe that's backwards. Maybe
22:34
I should write the, have
22:37
it write the implementation and I write the tests because
22:39
I am the verifier. So thoughts in the weeds of like
22:41
what's good at today, you don't have to go
22:43
into the future but like if I was actually going to go code
22:46
after this and I was going to adopt
22:48
or die, what would I do that would really
22:51
level me up? You
22:54
can probably speak more actually to how
22:56
good or bad in different scenarios or maybe I don't know. But
22:59
I can't do that. So I've used it. But
23:01
I also come from a perspective of I know nothing
23:03
about how AI works. And so it's interesting that you
23:06
were saying like you're learning about how it works
23:08
and what the underpinnings are and stuff like that. And I've taken
23:10
a different approach where it's like I'm just
23:12
a regular developer, like I have none of that
23:14
knowledge and I'm just seeing what it does for me. I think
23:16
there's a time where we continue to get better
23:18
and learn more. So I think the adoption
23:21
for me, and no specific advice of like how
23:23
well it does in different segments of the industry.
23:25
But just throwing it in there and seeing
23:27
because I think it's going to change from language to language,
23:29
framework to framework. And it's up to you to
23:32
kind of figure out what works for you and maybe
23:34
your team and just kind of figure that out for
23:36
yourself again, not super specific. So maybe you can
23:38
help me out there.
23:40
I think right now we see various
23:42
tools on the market. I
23:45
can think of about five that are sort of
23:47
leading the way. I think we're going
23:49
to see a lot more models be
23:51
developed and released and kind of see
23:53
where that goes and
23:55
experiment there. I think your point
23:58
about the languages is such a good one. You're
24:00
seeing a ton of JavaScript. Obviously,
24:03
I expect Python to be there. Python, yep. The
24:07
data people love Python, so I get
24:09
it. But I think as
24:11
we proceed, making
24:14
it an even playing field as far as code generation,
24:17
but also keep in mind that part
24:19
of the major issue with generative AI is
24:21
you take a prompt and it generates something based
24:23
on expectations. And so it produces
24:26
what we call hallucinations, right?
24:29
Gen AI is on drugs.
24:32
Lots
24:34
of breaking news on this panel. Breaking news, yes.
24:37
And so what
24:39
happens is it will just hallucinate
24:41
something, and it kind of goes off on these tangents. And
24:43
you see this when it becomes really verbose
24:46
in its language, or it kind of goes off, or
24:48
in an image, someone's missing an ear, that
24:51
type of thing. And those
24:53
exist right now, and they're fairly common
24:56
in Gen AI. And I expect as
24:58
we kind of move closer and again
25:01
hone these models, that that
25:03
becomes better and better, and we have fewer
25:05
of those. But right now, that is one of the
25:07
major challenges with Gen AI.
25:11
If you have something, if you really
25:13
want to be entertained slash trigger
25:16
warning, very weird. I
25:18
had this video in a slide, and I took it out because it's
25:20
so weird. If you're interested, search
25:22
for the Toronto Blue Jays
25:27
AI-generated hype video. That's for the baseball
25:29
team. Fair
25:32
warning if you want to. It's very
25:34
entertaining, but also extremely weird, going back
25:36
to people missing ears and stuff. Check
25:38
it out if you want.
25:40
So when we talk about it being hallucinatory,
25:45
what that really is is it's wrong. It gave
25:47
the wrong answer, right?
25:48
And as an experienced developer, I'm sure many
25:50
of you here are experienced developers, I can look at
25:53
the wrong bit of code. Maybe I'll execute
25:55
it once, but I can be like, meh.
25:57
That's not right.
25:59
What does this do to people
25:59
learning software? Because
26:02
they don't... they can't do what we can do and say, that's
26:04
not right. They're just gonna be like, all right, let's
26:06
rock and roll, and throw this into production.
26:08
Is that what you did when you were a junior? Because
26:10
I did not do
26:12
that. Okay, well, different paths. I
26:14
appreciate the
26:17
YOLO approach to production there.
26:19
No, I think you bring up so many different
26:21
things. So yes, it's wrong. It doesn't know
26:23
that it's wrong. Yet. Yeah. And you
26:26
know, when we go through and we're talking
26:28
about juniors, someone on Twitter right after the keynote
26:31
mentioned that, well, Gen AI
26:33
is getting rid of juniors. I don't believe that for
26:36
a moment, and please, please don't take
26:38
that approach into your companies. That's
26:40
going to be bad. I think
26:42
the same approach with juniors with Gen AI should
26:44
exist as we always have, which is where
26:47
the more experienced senior and principal
26:49
engineers not only review that code
26:51
but also coach the juniors on what
26:53
works and what doesn't, and why, so
26:56
that we can all learn and progress together. Again,
26:58
such an emphasis on learning and
26:59
evolving as a community. I also
27:03
think this is where, I know for Amazon
27:05
CodeWhisperer, when it generates
27:07
code, you have options. So it will give you
27:09
a few options that you can scroll through and
27:12
read, and decide which works best for you.
27:14
And I love that approach because, one,
27:16
you can see multiple ways of solving the same problem
27:19
and two, you still have some ownership
27:21
and direction that you can inject into
27:24
the code based on, again, your personal style
27:26
or approach or belief. Knowing the whole
27:28
system, right? From that one comment,
27:31
no code sidekick is going to know exactly
27:34
what is actually happening at the large scale. It can pick up
27:36
on things as it learns
27:38
but being able to see it as the whole
27:40
and not just that one piece of code
27:43
is really one of the values of you.
27:46
My initial reaction to
27:49
the impact or influence
27:51
on learning when you use AI
27:53
is several things.
27:56
First and foremost is the
27:58
fact that you have to understand
28:00
what you're accepting, whether you're copying, pasting,
28:03
or pressing enter, or tab, or whatever to get that
28:05
code, you have to understand that, because you
28:07
have to be able to decide, is it gonna work? Hopefully
28:09
you're not just shipping directly to production,
28:12
although, you know. And
28:14
in some ways it's not that different than how we've always been,
28:17
right? Like Stack Overflow has been here for
28:19
years. We have memes about Control
28:22
C, Control V keyboards, because that's all we need, right?
28:24
Like we've done that for a long time, and
28:26
we've learned sometimes to be responsible
28:29
of how we do that. So I
28:31
think we have to take time,
28:33
especially for people that are early on, to
28:36
pay attention to what's there, maybe go and do
28:38
outside research about what's there, to really have
28:40
at least a decent understanding of what's there. But
28:42
I also got a different perspective from
28:45
Rizel, who was at
28:47
GitHub as a developer advocate, and now I forget the company
28:50
name that she's at now. But she had
28:52
a different take on the learning experience, and she
28:54
was kind of going the other way
28:56
of saying like, AI
28:58
enables us to move faster and learn
29:01
some things while obscuring other things. So
29:03
if you're intentional about like, I wanna learn
29:05
this piece, I can have AI generate other
29:08
pieces that I don't need that are then enablers
29:10
for me to build the thing while focusing
29:12
my learning journey on this one individual
29:15
piece or a few different individual pieces. So
29:17
that was kind of an eye-opening thought for me. I hadn't thought
29:19
about it in the reverse of like, it still is
29:21
enabling us to do more, but I
29:23
think you do have to use it intentionally about what
29:26
is it that you don't know that you're trying to learn? What
29:28
is it that you don't know that you don't need to know yet? And
29:31
then what is it maybe down the road that you're definitely gonna need
29:33
to learn at some point too? Well
29:37
said.
29:38
All right, stereotype warning, here comes
29:40
one. Software developers
29:43
are
29:44
generally speaking, this will be generally true
29:46
and specifically false.
29:47
We're pedantic. We like,
29:50
we think about the tiny, littlest
29:53
details,
29:54
because historically we've had to.
29:57
I mean, some of us are still writing machine code, right?
30:00
So, like, that's, I know pedantic's
30:02
a pejorative, but if we just take
30:05
it literally, we think about the little things, and a lot
30:07
of times we take joy in those
30:09
little things, right? So we think
30:11
about the impact of AI on developers.
30:15
Is this stealing some of our joy? Like,
30:18
will we continue to do what we do at a higher
30:20
level and be more productive
30:22
and make more money and all
30:24
the things that are great, but actually, what
30:26
we liked to do was to write that function
30:29
to sort that array the exact way we want it to? I
30:32
think you have a point. Okay, I would
30:34
say pedantic
30:35
feels negative.
30:38
Is there a better word? Is there a better word? Jared
30:42
here in post. I thought of that
30:44
better word. Okay, ChatGPT thought of
30:46
it: meticulous. I should have said
30:48
meticulous. Pretty similar meaning, not
30:50
that negative baggage. All right, let's get back
30:53
to it. Okay, focused
30:55
and specific on those types of issues
30:58
because I think we all carry those moments that
31:00
we saw something fail spectacularly,
31:02
right? Or, you know, you're actually
31:05
looking at something, and as an expert you can notice
31:07
right away what is
31:09
wrong with something and that pattern recognition is
31:11
something that makes us really powerful. I
31:15
think as we sort of proceed with
31:17
this. I think that's the joy
31:19
for some people. It's not the joy for others.
31:22
Sure. I'll speak for myself. I'm
31:24
on my second career in tech. I was a
31:26
writer and I worked in politics and
31:29
nonprofits. And so coming
31:32
from that into tech coding
31:35
was not necessarily the thing that brought me joy.
31:38
It's not to say that when you finally hit
31:40
that thing and then it runs and it's perfect,
31:42
it's like, oh, that feels so good. But
31:45
for me it was building tools that matter to people
31:48
and that is what
31:50
brings me joy. And I think the spark
31:52
of joy is going to be different for all of us, and
31:55
finding joy in our work, no matter how
31:57
it evolves and changes, I think is important
31:59
for all
31:59
of us as humans
32:02
and for our personal growth.
32:04
But I think it's, again, we set
32:06
the standards here. This is not happening to
32:08
us. It is happening with us. It is happening
32:11
by us. And taking ownership
32:13
of that and really kind of saying, OK, well,
32:16
these are the areas that we want to maintain
32:18
and grow and evolve with, and these are the areas
32:20
that we want to give up. I don't want
32:22
to write a CRUD service again. I just don't. I've
32:25
done it 1,000 times. We're good. That
32:28
can be done away with. I
32:30
want to solve the really complex problems. I
32:32
want to think about, OK, this hasn't been
32:34
done before. It's only been done at scale by a handful
32:36
of companies. How can I apply this to my
32:39
specific constraints and resources? That's
32:41
interesting. And I think it's that kind of problem
32:44
solving and looking higher up
32:46
in the stack and having that holistic view that
32:48
will empower us along the way.
32:50
Well said. You want to add? Yeah, I think
32:53
very similar. I can speak from just
32:56
my perspective of what I enjoy. I think it's the exact
32:58
opposite. This is what I've always said.
33:01
The exact same is what I meant. Sorry. I was trying
33:03
to bring drama, and I just don't naturally have it. Can
33:05
you disagree on something? I'll
33:08
try. On the next one, I'll come up with something. But
33:10
my favorite thing about being a developer is being
33:13
able to build. And with
33:15
code, we can solve
33:18
most problems. Now, there's other aspects, like
33:20
hardware and things that come with it.
33:22
But we solve the problems
33:24
of the world on a daily basis. And that's what's cool
33:26
for me. I can't remember if it was your talk or someone else's.
33:29
The way some people look down on no code, low code,
33:32
environments, or platforms, or whatever, I don't
33:34
care. I want to build the thing and see people
33:37
use it or just build a solution to a problem
33:39
I have. So I don't know. Same perspective.
33:42
On the next one, I'll come up with something controversial,
33:44
I promise.
33:46
A nice analogy might be stick shifts
33:48
versus automatic cars, where no one's stopping
33:51
you from writing that function. Just go ahead and have fun. Write
33:53
it. But the rest of us are going
33:55
to use the thing to write the function for us. And if you
33:57
take joy from that, just go ahead and write
33:59
your function.
33:59
There you go.
34:01
Got one. Don't know how to use one, can't
34:03
drive one. So, boom, controversy. Hey,
34:10
they disagreed. All right,
34:12
let's get slightly
34:14
more philosophical and broader sweeping.
34:17
So we talked about the details. What about like
34:19
big picture changes? I'm thinking about open
34:22
source software. I'm thinking about
34:24
ownership of code. If an AI
34:26
writes 30% of my code, do I get 70% copyright
34:29
on that? Do I get 100%?
34:31
Does my employer get all the copyright probably? But
34:34
what about open source? Because this is like, you know,
34:37
these things are trained, you know, famously
34:39
and infamously on
34:41
publicly available source code. And so
34:43
that's our labor, whether we gift
34:45
it or not.
34:46
It is. And so what is this impact
34:48
of the lives of us developers who are either
34:51
working on open source or simply using
34:53
open source? It touches all of us. I
34:56
imagine some maintainers will
34:59
maybe think twice about having stuff be
35:01
truly open source. And I like, I think there's
35:04
a whole deeper conversation
35:06
about the impact of just like reading from people's code
35:08
and leveraging that to do other things and ownership
35:10
and stuff. So I could see some people
35:12
just kind of like bowing out of that and
35:15
kind of coming back into themselves, which
35:18
would be a shame, right? For that to not to be available.
35:20
I don't
35:22
know, there's there's so much that goes into it, like
35:24
from a political perspective, from an ethical perspective.
35:27
Honestly, if you asked me that, I'm overwhelmed just
35:29
thinking about it. There was someone last night at the speakers'
35:32
sponsor dinner, and he talked about how
35:34
he's worked on multiple
35:37
revisions of a pitch for
35:40
ethics in AI, or something like that, over the
35:42
last year, and he was giving another pitch
35:44
last night, and they were going to go through it. I
35:46
think we will have a lot to catch up on
35:48
to define that. I have none of those answers,
35:50
and they drastically overwhelm me, because
35:52
I can't begin to comprehend those implications.
35:55
But there has to be,
35:57
like, legally, morally,
35:59
ethically, open-source-ly,
36:02
like, there have to be things that kind of catch up
36:05
and give some sort of guidelines to the stuff
36:08
that we have going on.
36:09
Yes, and this is why I keep
36:11
pushing on responsible AI.
36:14
We have to have these conversations, and they're
36:16
gonna be hard. There is, in
36:19
economics, this concept of the tragedy
36:21
of the commons. It comes from a pamphlet of the
36:24
same title, and the focus
36:26
was really around the shared
36:28
common land that cow
36:31
herders or any kind of farmer
36:33
would utilize for their herds to
36:36
eat off of. And as individuals,
36:39
it benefits each herder to have
36:41
their cows graze the most, with
36:44
no limitations. But obviously
36:46
shared resources are finite
36:48
and they are limited. My favorite
36:51
quote from that pamphlet is, ruin is the
36:54
destination toward which all men rush. And
36:57
I think we have to be
37:00
truly careful as we proceed here. A lot
37:02
of this is a common resource and it's
37:04
based off of a common resource. And
37:07
this is where I think communities around this is
37:09
really, really important and recognizing
37:12
our own power and influence on
37:14
pushing toward a holistic
37:17
and appropriate approach to
37:19
responsible AI.
37:21
That quote I thought you were talking about my code again
37:23
for a second. Yeah, I should
37:28
probably
37:28
go revert that comment. Um,
37:31
Okay, have you guys seen this new thing you
37:33
can do? It's like robots.txt,
37:36
but this is for website
37:38
copy. So I mean, we're in the same realm, but
37:41
it's like no-GPT.txt.
37:44
I can't remember, what's the actual technology you can do? No-
37:46
crawl, maybe. I
37:47
don't know, it's a brand new thing
37:49
they're working on where the
37:52
LLM crawlers will skip
37:55
your website, much like you can tell Google not to index your
37:57
website.
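What's being described here is roughly the robots.txt opt-out pattern. As a sketch (GPTBot is the user agent OpenAI published for its crawler; other vendors use their own strings, and honoring the file is entirely voluntary on the crawler's part):

```text
# robots.txt at the site root
# Ask OpenAI's crawler to skip the whole site
User-agent: GPTBot
Disallow: /

# Regular search indexing can still be allowed
User-agent: Googlebot
Allow: /
```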
37:58
Is that something people
37:59
will do? Is that something that can
38:02
have an application into the world of open source?
38:04
I mean, maybe you said opting out of. Does
38:06
that mean not even publishing at all? Because
38:09
there's no guarantee that the
38:12
language model creators will necessarily
38:15
comply with a robots.txt, for instance.
38:18
What are your thoughts on that, the analogy there
38:20
and how it applies? I find it to
38:22
be unacceptable that
38:24
companies would push forward
38:27
with a profit only mentality and
38:30
not take these things into consideration.
38:33
And to some degree, between
38:35
our work and also where we spend our money, we
38:37
have to tell the
38:39
market that that is not acceptable. I
38:42
don't want to live in a world where we're trying to hide from
38:44
crawlers. I want to live in a
38:46
world where we have decided on standards and
38:48
guidelines that lead toward
38:51
responsible use of that information so that
38:53
we all have
38:55
some compromise around how we're
38:57
proceeding with this. I think it's super important.
39:01
Trusting people is a big ask. When I
39:04
said the thing about
39:06
people potentially retracting from open source,
39:09
as soon as I said that I kind of wanted to backtrack that in my
39:11
head and find another way, and I immediately
39:14
thought about like a flag on GitHub that says don't
39:16
look at this code if you're an LLM. Something
39:18
like that could be useful. I think longer
39:21
term, having it
39:23
all figured out is definitely better. I could
39:25
definitely see that being a thing
39:27
that people would use, I imagine, if they
39:30
don't want their code to be used in LLMs
39:34
to just be able to opt out. That seems like a reasonable
39:36
intermediary step along the way.
39:39
I think we would
39:42
start to argue around definitions of open
39:44
source, because the ability to freely use
39:47
it without restriction is
39:50
part of the tagline.
39:52
Maybe it's source-available kind of
39:54
things, where maybe the license starts to say:
39:57
I'll put my source code out there, you can do everything except
39:59
this.
39:59
and we have a new license that's not open source,
40:02
but it's something else. I think time will
40:04
tell.
40:04
And it just gets so hard to prove too,
40:07
right? Like it's like cheating on a homework
40:09
assignment in college, which I never did, question
40:11
mark. Like they had these things that
40:13
would compare your code against other people's assignments
40:15
or whatever from previous years. I'm sure that's gotten more and more
40:17
sophisticated now. So that would
40:19
be one of those things where if you had an opt
40:21
out flag and then you come across a
40:24
repo that has code that looks like yours, like
40:26
there's no way you could prove that without diving into
40:28
like the logs from the AI that generated
40:31
that kind of like you just have, I don't know, that'd
40:33
be so hard to prove. Again, coming back to like ethically
40:36
and legally, we have a lot to
40:38
figure out, I think.
40:39
Okay, how much time do we have? I think, is it 12:15?
40:43
I think so. We got five minutes.
40:45
Okay, anything that wasn't addressed that
40:47
you wanna make sure gets addressed here. I'll take the mic.
40:50
I'll run it to him.
40:51
You stand up here and answer. There's
40:56
been a lot of discussion about, you
40:58
know, how gen AI has been hyped or overhyped.
41:01
My question is, maybe this is a way for
41:03
you to disagree. What do you think is the most underhyped
41:06
technologies around AI? I
41:09
think I kind of agree with Emily that the trustworthiness
41:11
of AI is the most underhyped, but
41:14
what do you guys think? Especially
41:20
in the conversation here from a technical perspective,
41:22
I think the most underhyped
41:25
thing is how much it can be used for things
41:27
that are not just writing code. And I mentioned this earlier,
41:30
just from a spark of creativity, like I sometimes
41:33
limit myself mentally because I don't think I'm creative,
41:35
although like if you look for pieces
41:37
of things I do, like it's there, but like I can
41:40
use something to just give
41:42
me ideas for stuff when I'm stuck and it doesn't
41:44
have to be technical. And I think that's super,
41:46
super valuable. And thinking about, like, an onboarding of how
41:48
to incorporate it, what easier way
41:50
to incorporate some AI into your life than
41:53
to just, like, give me an idea for something
41:55
to do this weekend that would be fun with
41:57
my, like, partner or spouse or whatever, right? So I think, just
42:00
on a regular, outside-of-code perspective, there's
42:02
so much that you could get out of it from a creativity
42:04
spark. And I think that's a lot of fun, and I think it's
42:06
easy to get started that way.
42:08
I keep coming back to the, for
42:11
me the hype is around the speed and scope
42:13
of AI. When
42:16
I quoted Marvin Minsky, bless him,
42:17
who believed by 1980 we'd have
42:19
a human analog. Obviously
42:22
that's not true. And when
42:24
you think about how quickly this kind
42:26
of came to market, it feels really fast,
42:29
but a lot of that had to do with 2018
42:30
transformers coming about and us
42:32
being able to actually proceed with this. But
42:35
when you look at all of artificial intelligence,
42:37
it's truly been eight decades
42:40
at a minimum.
42:41
And so we're kind of coming to a place where
42:44
there is that distribution, but
42:46
I fully expect it to still take some time
42:49
before widespread adoption, before
42:51
efficient uses, certainly
42:53
affordable uses, and where
42:56
we can actually apply this
42:58
to higher risk scenarios and industries.
43:01
Time for one more, I think. Yeah, and this
43:04
use of the term tools in a kind of a neutral
43:07
way to describe AI kind of broadly,
43:09
I think what's been left out maybe
43:11
is that different tools have different side
43:14
effects.
43:16
So for instance, video games
43:18
have certain characteristics, shovels
43:20
have other characteristics, and
43:23
opiates still other characteristics. Where
43:26
do you see these tools right
43:28
now and maybe in the future where we have to
43:31
look at societally, are
43:34
they more like shovels or opiates?
43:37
Oh, I like that.
43:39
I like that last line there. Good question.
43:43
That last line took a hard left.
43:46
I think we don't know.
43:48
There's no way to know. I think we can sort of think
43:50
about the next three to five years and where we think
43:53
this will go, but I think anyone
43:55
who claims to be a sort of futurist or
43:57
believes that they can tell you in 50 years what this
43:59
looks like, they're just guessing.
44:02
You might as well throw a pin against the wall. We
44:05
just don't know. But I think, truly, I keep
44:07
coming back to this, we have ownership and responsibility
44:10
over this. And we can kind of determine what
44:12
this actually looks like in usage.
44:16
Shovel versus opiate is like
44:18
a t-shirt waiting to happen. It's
44:21
such a good and kind of easy call
44:23
out for people. It's kind of funny, but I think it's very serious.
44:27
All the ethical legal implications, we talked about that, like
44:29
there has to be catch up. I think we also just
44:32
have to acknowledge that this is also the same
44:34
as every other advancement that we've ever had. Like you think
44:36
about, I don't
44:38
know, people that want to use things for
44:40
nefarious ways, people that want to use things
44:43
for their own purpose that
44:45
hurts other people or affects other people in
44:47
negative ways. It exists,
44:50
unfortunately. And so I think it's even more
44:52
important for the concept
44:54
of responsible AI. But
44:57
also just acknowledging that there's probably
44:59
a point where we need to have limitations, like what that means
45:01
and what that looks like. I don't know. Do we get to a point
45:03
where we're in iRobot and that's where we're living
45:05
on a day to day basis and we have to prevent that? I
45:07
don't know. I
45:10
think with great, what is it, with great power
45:13
comes great responsibility. And I think that's absolutely
45:16
true here.
45:17
One more quick one.
45:18
So there's a lot of talk about AI
45:21
tools that help you write code, but as
45:23
a developer, a lot of my time was spent actually
45:25
supporting code or maintaining code. And
45:27
there aren't a lot of tools out there that help
45:30
you fix bugs, or I don't
45:32
want to read someone else's code and fix their bugs,
45:34
but that's what I spend my time doing. So
45:36
why do you think we're in the state we
45:38
are now and what can we do to build
45:41
more tools that eliminate that
45:43
tedious part of coding?
45:48
So, first, I think I have
45:50
seen at least people talking
45:52
about that use case. I don't disagree
45:54
that there's more tools focused on the generating
45:57
of code, but I have seen people post on Twitter
45:59
and things of, like, give it a code snippet,
46:01
tell me what's wrong with this or explain this piece
46:03
of code. So I think that's starting to get into
46:05
what you're saying, although the tooling may not specifically
46:08
exist as much as we may want for that use case. What
46:11
I think is really cool, and I think this goes back to probably
46:13
the most undervalued aspect of AI,
46:16
is the fact that not only does AI exist, but AI
46:19
exists in a way that we as developers can consume
46:21
it to build other things. That means that
46:23
we see a gap in tooling to
46:25
address exactly what you're saying. We don't
46:27
have to build all of that logic from scratch. We
46:29
can build a nice UI on
46:31
top of an already existing LLM
46:34
and be able to start to provide the things that you're
46:36
looking for more specifically. Now eventually you get
46:38
into more custom trained LLMs and that sort of stuff.
46:41
But I think that's the beauty of having
46:43
it be accessible, at least in certain ways, for
46:46
us as developers to build on top and go and solve
46:48
those use cases.
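The "build a nice UI on top of an already existing LLM" idea can be sketched roughly like this. Everything below (function names, prompt wording) is hypothetical, and the model call is left as a pluggable callable rather than tied to any particular vendor's API:

```python
# Sketch of a bug-review helper layered on top of an existing LLM.
# All names and prompt wording here are hypothetical; the model call
# is a pluggable callable, not any particular vendor's API.

def build_review_prompt(snippet: str,
                        question: str = "What bugs might this code have?") -> str:
    """Wrap an existing code snippet in a review-oriented prompt."""
    return (
        "You are reviewing existing code, not writing new code.\n"
        f"{question}\n"
        "Explain each issue and suggest a minimal fix.\n"
        "--- code ---\n"
        f"{snippet}\n"
        "--- end code ---"
    )

def review_code(snippet: str, ask) -> str:
    """`ask` is any callable that sends a prompt to an LLM and returns text,
    e.g. a thin wrapper around a chat-completion endpoint."""
    return ask(build_review_prompt(snippet))

# Usage with a stand-in for the model:
fake_llm = lambda prompt: "off-by-one: range(len(xs) + 1) overruns the list"
print(review_code("for i in range(len(xs) + 1): total += xs[i]", fake_llm))
```

Swapping `fake_llm` for a real chat-completion client would be the only vendor-specific piece, which is the sense in which a small UI over an existing model can cover the maintenance use case.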
46:51
That was well put, and I expect more tools
46:53
in the future. I think we led with the thing
46:55
that we knew we could execute on as
46:57
an industry, and that seemed like the most
47:00
straightforward path. And as we
47:02
diverge from there, I think you'll see a ton of tooling
47:04
around solving those problems. But yeah,
47:07
I still believe that those kinds
47:09
of fixes, the plugging
47:11
everything together, the integrations, that will be
47:14
probably something that takes a long time.
47:16
Okay, that is all the time we have. Thank
47:18
you all for coming and let's hear it for the panelists.
47:36
Special thanks to Todd Lewis
47:38
and his amazing team of organizers for bringing
47:40
us out to All Things Open. This panel
47:43
was just one of the many conversations that
47:45
we recorded from the show floor. Subscribe
47:47
to the changelog podcast if you haven't
47:49
already for more All Things Open
47:52
goodness. Thanks once again to our partners,
47:54
fastly.com, fly.io,
47:57
and typesense.org, and to our beat
47:59
freak in residence, Breakmaster Cylinder.
48:02
Daniel and Chris return next week, and
48:04
they're joined by Nathaniel Simard, the
48:06
creator of a deep learning framework in Rust,
48:09
called Burn.