Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
Use Ctrl + F to search
0:00
Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 369, recorded January 29th, 2024, and I am Brian Okken.
And I am Michael Kennedy. And this episode is sponsored by us, so check out our courses at Talk Python Training and the Complete pytest Course. And thanks to our Patreon supporters, and really everybody that just spreads the love and shares our podcast with other people. Thanks so much. The first item is going to be Granian?
0:34
Yes, before we get to that, I just want to sort of set the stage. You know, when you're running Python web apps in production, there's usually something that talks to the web browsers, and then there's the part that runs your Python code, right? The part that talks to the web browser is Nginx, Apache, Caddy, one of those things that does SSL; it has no knowledge of Python. What is behind that step? We've got uWSGI and we've got Gunicorn and those types of things, right? So that's where your Python code runs. Usually it'll create multiple of them.
1:08
And both those two that I named, I'm a big fan of. uWSGI is awesome: super fast, low memory usage, but it only does WSGI, no async stuff, and that's a huge drawback, right? It doesn't do async, doesn't allow you to properly scale with async and await. Gunicorn, on the other hand, allows us to run Uvicorn workers inside there, which is kind of like one more link in the chain, but when you deploy that way, you can do async and await, which is awesome. But
1:39
there's this newish thing that's come along called Granian, from the Emmett framework, which is a newer web framework, I thought. This is a Rust-based server for Python applications. So, a Rust version of uWSGI or Gunicorn? Yeah, cool. That's kind of cool. It has 1.5K stars on GitHub, it's been coming along for a while, and it's created by a guy named Giovanni, I believe. And Giovanni says, well, why did I build this thing? A couple of reasons. It's a correct HTTP implementation supporting version 1, 2, and working on HTTP/3, which is awesome. It avoids the Gunicorn, Uvicorn, httptools dependency composition when deploying in production. So this natively supports async and await, like right in it, along with WSGI. So whatever kind of app you've created, you can just run it right there in this thing without chaining stuff together. And one of the things that's nice
2:42
about it is it's not a ton faster, but it's way more stable. There's less jitter in its performance profile, which I think is super cool. I'll talk about that in a second. But yeah, it has HTTP/1 and 2, excellent. It supports HTTPS and WebSockets directly. I'm not going to send HTTPS traffic to it; I'm just using Nginx for that for now, whatever. It also supports RSGI, which is a Rust server gateway interface, I guess, like ASGI and WSGI. So it does all the Python things, plus it has a Rust-direct version if that was the way you went down. And it's super easy to run. But from
3:17
the performance perspective, if you look, they compare it down here against Uvicorn, and Hypercorn is another one I should have mentioned. That's like a parallel to Uvicorn plus Gunicorn, but that one, I think, handles it all directly; it's from Phil Jones, which is great. But if you look at, so let's just say, the ASGI get, it says it'll do, numbers please, 1.3 million requests, okay, that's just the total, it could be over three weeks: 85, 86,000 requests per second. Or maybe this one's a little bit better, a different get at 94,000. Compare that against the Uvicorn one, which is 19,000 versus 94,000, or Hypercorn at 12,000 versus 94,000, which is great. But if you look at the variation, like response time on, let's say, Uvicorn is on average 8.7 milliseconds, which is really good, but the max is 320 milliseconds. Whereas if you look at this one, it's 2.7, but the max is only 8.6, right? So that variability, or jitter, or however the heck you say it, is way more stable. And you just kind of look across the board, like another example, 6 versus 70, and so on. So I thought that
4:36
was pretty cool, so I switched. If you come over here, Brian, Python Bytes is running on Granian right now, over in my Docker cluster, for the moment. So I just thought I'd see how it goes, and it's been going perfectly from what I can tell. So it's pretty easy to switch then? Yeah, I mean, all you've got to do is pip install granian and then change the start command, no matter how you run it. If you're running it in systemd on, like, a VM, you change the systemd exec command. If you do it in Docker, you just change the entry point. Man, for uWSGI it's this, or Gunicorn that; it's basically just another startup command. Yeah.
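To sketch why switching is just a startup-command change, here is a minimal WSGI app; the file name `app.py` and the module path are just for illustration. The same callable can be pointed at by Gunicorn, uWSGI, or Granian (in WSGI mode), and only the command that launches the server changes.

```python
# app.py -- a minimal WSGI application (hypothetical file name).
# Any WSGI server can serve this same callable; swapping servers only
# changes the startup command, e.g. (illustrative invocations):
#   gunicorn app:application
#   granian --interface wsgi app:application

def application(environ, start_response):
    """Return a plain-text greeting for every request."""
    body = b"Hello from WSGI"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # WSGI apps return an iterable of byte strings.
    return [body]
```

If you run it in Docker or systemd, that startup command is what goes in the entry point or exec line, which is the whole switch being described.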
5:11
Okay, interesting. Cool. Indeed, indeed. One thing I did want to add for people who are considering this, I've got to move this over. So, one thing: you can set it up so it'll do logging, but it doesn't do, like, easy logging out of the box. So I actually was messing around, like, maybe I should just do my own logging. Not for my app's "started up" or "hey, somebody clicked this button," but just request/response logging, which is pretty common. So I actually ended up playing with it and using Loguru; I added some middleware that came up with color-coded request/response logging that does all sorts of cool stuff. Like, see how some of the sizes in this log are red and some are white? If it's over 500K in the request size, then it's red; or if the response time is slow, it'll color it, like, yellow; if it gets really slow, it'll color it red; or if it's a 400 or 500 code error, it'll color that part of the request red. So you can look at it and see, right away. So I've decided doing your own Loguru request/response stuff is pretty excellent, actually. So that's kind of a cool consequence of playing around with
6:21
this as well. Cool. So is it, like, your use of Loguru, is that a custom thing where you look at the response times and color it differently or something? Yeah, exactly, exactly. Like, it'd be easy enough to just go print "the code is this, the URL is that," but I said it would be a lot more useful if it was color-coded and meant stuff, right? Like, if it's a 404, it should be a different color than if it's a 200, or if it's a redirect, it should be a different color. If it's really, really slow, you see these times like 10 milliseconds, 9 milliseconds, 8 milliseconds, 12 milliseconds; if that was a second, maybe that's a problem, color that a different color. And I did that by installing middleware, and this is Pyramid, but it could also be whatever, right? You could do some FastAPI or whatever. It just says: begin the request, do something, pass it down to the framework, and then end the request, and it just times it and logs it and colors it there. Okay.
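A minimal sketch of that middleware idea, using the stdlib logging module with ANSI color codes instead of Loguru so it stays self-contained; the thresholds, colors, and the toy handler are all illustrative, not the actual code discussed here.

```python
import logging
import time

# ANSI escape codes; thresholds are made up -- tune for your app.
RED, YELLOW, GREEN, RESET = "\033[31m", "\033[33m", "\033[32m", "\033[0m"
SLOW_MS, VERY_SLOW_MS = 100.0, 1000.0

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("requests")

def log_request(handler):
    """Middleware-style wrapper: time the handler, then log a color-coded line."""
    def wrapper(method, url):
        start = time.perf_counter()
        status, body = handler(method, url)  # "pass it down to the framework"
        elapsed_ms = (time.perf_counter() - start) * 1000.0

        if status >= 400:
            color = RED              # 400/500 responses are always red
        elif elapsed_ms >= VERY_SLOW_MS:
            color = RED              # very slow responses too
        elif elapsed_ms >= SLOW_MS:
            color = YELLOW           # slow-ish gets a warning color
        else:
            color = GREEN
        log.info(f"{color}{status} {method} {url} {elapsed_ms:.1f} ms{RESET}")
        return status, body
    return wrapper

@log_request
def handler(method, url):
    # Stand-in for the real framework dispatch.
    return (404, b"") if url == "/missing" else (200, b"ok")
```

Calling `handler("GET", "/")` logs a green 200 line, while `handler("GET", "/missing")` logs a red 404 line; in a real app the wrapper would sit in the framework's middleware hook rather than as a decorator.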
7:15
Neat. Yep, indeed, indeed. Well, I also want to talk about something new and old at the same time. Nice. pytest has been around for a while, but pytest 8 is brand new. pytest 8 just came out this weekend; I'm super excited to start running with it. Actually, I've already started running with it. We're going to put in the show notes, I put up a highlighted blog post of just "pytest 8 is here," with links to the full changelog. But what they did was they spread it out; if people are running, like, old 7.x pytest, which is what I was using before, the changelog takes a little bit to parse, because they spread it along the RC1, RC2, and the final 8.0 release. The changes are all there. So I pulled the highlights out. So
8:10
the thing I'm really excited about, there's two things I'm really excited about. One is, when you had an exception, that would just be a red block of exception stuff, and there's a whole bunch of cool differences now. So there are improved diffs when you fail an assertion, especially if you do -vv, so very verbose, or verbose verbose, however you want to think about that. But you get a colored diff instead of the big chunk of red. There's also... Back to this color thing, right? Yeah, there are also more colors. Normal, like, syntax highlighting; we're used to syntax-highlighted code, so error reports are now syntax highlighted. And the different sections of the error report are separated better. And then also,
9:00
there's better support for standard library containers for diffing. There was usually a pretty good tuple diff, for instance, but if you had big lists, it was a little bit hard to read; it's a little bit better now. And then more comprehensive assertion rewrites for other comparisons, not just equal, but things like not equal, less than or equal, other comparisons. So that's really cool. It helps people debug their code. That's super nice, because you want to be able to say, are these two things the same, and not write code around: how do you do that, right? Yeah.
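For instance, in a hypothetical test file (the function and test names here are made up), plain asserts are all you write; pytest's assertion rewriting is what turns a failure into a detailed, element-by-element explanation, with colored diffs under -vv in pytest 8.

```python
# test_orders.py -- illustrative example; run with `pytest -vv test_orders.py`.

def total(prices):
    """Sum a list of item prices."""
    return sum(prices)

def test_totals_match():
    expected = [10, 20, 30]
    actual = [10, 20, 30]
    # On failure, pytest shows a list diff; pytest 8 colorizes it with -vv.
    assert actual == expected

def test_total_under_budget():
    # pytest 8 also explains non-equality comparisons like <= when they fail.
    assert total([10, 20, 30]) <= 100
```

No pytest-specific API is needed in the tests themselves; the richer output all comes from the runner.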
9:37
And also, comparisons like less than, or less than or equal; it's really nice for pytest to go out and really tell you why that check failed, and highlight the part of your data where it failed. So really, really fun to see that. There's
9:53
the thing, the next thing I wanted to talk about for pytest 8, which is probably pretty obscure for people that are not using xfail. So xfail is a way to say: I expect this test to fail. And for a lot of people, why would you expect it to fail? But for large organizations, it's pretty common to file a defect, and you don't have control over it, so you can't just go fix it; somebody else is responsible for fixing it. So that's how we use it: we mark a test as expected to fail and give it a defect number. And that's not new. But if it passes, there are decisions on how to make it xpass, or what to do if an xfail passes. I've talked about that a bunch on the other podcast. However, the change for pytest 8 is that with xfails, the traceback never showed up; it didn't used to. So the change is, now there's a way to turn that on: with the -r command, you can turn on xfail tracebacks.
11:03
So that's really nice for CI test runs, to be able to see what the failure was in CI and not have to try to rerun it again. So, really cool; excited about that. There are lots more things, so check out the changelog. The reason why they bumped to 8, I think, is because they changed the way collection works, and that backwards-incompatible behavior change made it so it made sense to bump the number. I think that's the reason; I'm not sure. But also, it's good to bump the number every once in a while so that you can deal with deprecations; you can get rid of the code that you were meaning to get rid of. So I encourage people to run pytest 8 and check it out, and turn on, you know, the full strict mode and everything, so that you know if anything breaks, and you can roll back if you need to. Awesome.
12:00
Christmas for you, right? A major version release. Yeah, and when I was looking at this, it also made me realize that I think I need to add more color to my blog. It's just black and white, and it's not very fun, so I have to add more color. Color's always fun. I mean, that's the theme of this episode. Last week it was "let's just ship and announce open source projects we're releasing"; this is, like, the color episode. The color episode, yeah. Or, as someone in the audience puts it, it's a readability episode. Very good.
12:30
Indeed. Well, let's harken back to a couple of things on my side here. So, I talked about this Docker work that I've been doing. That also kind of led me to the Granian stuff and playing with that as well. I also forgot to give a quick shout-out to some people: both Andy Shapiro and Bill Crook pointed it out, said, hey, you should check out Granian. So I want to give them credit for sending that in; thank you. But as I was doing all these Docker things, you know when you're in an unusual or unfamiliar, uncomfortable space compared to what you normally do? Like, right now my little browser in this episode has maybe nine tabs, and that's, like, kind of a lot. But when I was doing the Docker stuff, I'd have 30, 40, 50 tabs, and then I would close 30 of them because I solved some problems. It was just tabs everywhere. So I was just, like, exploring stuff all over the place, right? Like, wow, I must not know what I'm doing right now. True.
13:24
But, you know, that's how you get to where you know what's going on. Along the way, I would be like, oh, people talk about, oh my gosh, totally giving up on doing this, I'm using OrbStack, or Podman, or whether they support this OS. I'm like, what are these things? So I just want to go through a host of Docker goodies that I think people will enjoy. Not exactly Python specific, but certainly relevant if you're doing Python in any form of containers, right? So, the first one of three; there are actually more, Brian, I'm not going to cover them all here. I'm saving some for later, because it would just be a little out of control at that point. So, OrbStack. If you use Docker, and you use Docker on Mac or Windows, typically what you do is you get Docker Desktop, right? So that gives you the Docker commands, it allows you to run Docker locally. I think it might use, maybe, VirtualBox or something on Mac, and it probably uses Windows Subsystem for Linux on Windows. But you can run Linux containers on top of some hidden layer of Linux, right? Yeah. So
14:21
OrbStack is kind of that. And they say: say goodbye to slow and clunky containers and VMs. It's a light and easy way to run Docker containers and Linux on one machine, right? So basically it gives you a nice UI around it. It is 100% compatible with Docker, so you can say docker run, docker exec, docker compose up, or whatever it is you say, and instead of using the Docker engine, it'll use this OrbStack engine, which is pretty neat. It also has its own CLI if you want to work with it directly. But basically it's kind of, one, open, and two, more lightweight. They've got a bunch of cool comparisons that they show down here somewhere. Speed: you're going to open, it says, Open edX, I guess that's probably the Docker Compose setup for edX, and provision a development environment for it. It's pretty long still, because I guess that's a beast of an app: 17 minutes on OrbStack, but 45 minutes on Docker Desktop. To build PostHog, whatever that is, it's like a quarter or a third of the time. If you're on a laptop, it uses, like, less than 25% of the battery as well. It depends if you're using Supabase or Kubernetes or whatever, right? But pretty cool, right? Yeah. Yeah.
15:39
And oh, another thing, I think, let me look. Yes, this is a big one; you and I were just talking about this before we pressed record. It says you can run Linux machines without a fuss. So one of the things I can do is go to, like, Parallels, and I can run Linux, Ubuntu or whatever, on my Mac, my M2 Pro mini, however those words go together, whatever order makes the right sense there. I can run them, but I can only run the ARM64 versions, because I only have an ARM64 processor. But this one allows you to run Intel machines on Apple Silicon with Rosetta, which then allows you to run Intel-based Docker images and basically be closer to what your production environment is, if you're using Apple Silicon. So that's also a nice feature of this. That's pretty cool. Yeah, it does cost money if you're a company, but it has a free version if you're just a person. So, not an endorsement, but I thought people might find this to be pretty useful. It looks
16:36
pretty cool. Next one: Podman. Podman says it's the best free and open source container tools. So you can manage Docker containers, pods, and that term, I believe, comes from Kubernetes; the unit of execution in Kubernetes is called a pod. And images, with Podman. So yeah, it lets you seamlessly work with containers and Kubernetes from your local environment. So this is also really cool; a lot of people are doing interesting stuff with it. GitHub Action plugins, it's got a Visual Studio Code plugin, and different things. And then the third one,
17:08
actually 3.5, because it kind of is two, is this; this is pretty interesting. So one of the things that kind of is inspiring, once you get all this Docker stuff going, is like: okay, well, that big, complicated open source thing in some technology I didn't know how to run, like PHP or whatever, if it has a Docker container or a Docker Compose set of containers, I can run it. All I've got to do is just tell Docker to update it when there's a new one, and just run it. That's super easy to do, right? So, taking that kind of to the extreme, there's this thing called CasaOS. Have you heard of this? No.
17:42
Neither had I, but it has 20,000 GitHub stars, which is pretty cool. They call it your personal cloud operating system: community-based open source software focused on delivering a simple personal cloud experience around the Docker ecosystem. So basically, if there's a thing that runs Docker, this is like an OS, a platform, for running all that. So it gives you a UI into this OS that they give you. And it says, look, you can collect all your precious data; like, tie together Google Drive, Dropbox, iCloud, OneDrive, etc., etc., hard drives and everything, into just one drive view. And then you can just access it and, like, map that drive over to your TV or your computer or whatever. There's somewhere in here where they've got all these apps that it supports, that you can just go grab and install. A lot of them are unfamiliar to me, because I haven't done this enough, but, like, CouchPotato, Duck DNS, PhotoPrism, and they just plug into this thing. This is pretty interesting, right? I think. Yeah, you think, right? Yeah. I don't know what I would do with it.
18:44
But, you know, yeah. So one of the things that I think this is sort of coming out of is they have this thing called ZimaCube, which, instead of running all your stuff on the internet, is kind of like a really fancy NAS, okay, network attached storage, but it also runs Docker and all these things. So it says you can have up to 164 terabytes of SSD, but then it also runs all of these things. And this is on Kickstarter, and it was already funded at around 1.1 million US dollars, which is a lot, apparently. So this is kind of the intended destination for that, but I think you can run it anywhere. So, pretty cool, right? Yeah, I mean, some people think over a million dollars is a lot. I don't, you know, chump change, but, you know, yeah. I don't think this thing has shipped yet; I think it's still in development, now that their Kickstarter has ended. But yeah, so there's a whole bunch of different fun things: OrbStack, Podman, CasaOS, and ZimaCube. Okay,
19:44
so I have a question about OrbStack. That doesn't generate Docker images, though, does it? I mean, do I have to have OrbStack stuff on the server as well? No, it's transparent; it's an API, or a CLI, compatible with the Docker CLI. Okay. So if you just install this, then you can go to your command line and type docker build, whatever, you know, download and do all the things. But then when you ship it to production, you can have real Docker there. Okay, okay, right; I didn't understand. I think it's mostly around, yeah, it's mostly on the desktop side of things. So it's like a simpler, lighter way to do desktop stuff. Possibly you could run it on your own; like I said, it does have its own CLI for doing its things its way, but I think people will just use it as a Docker Desktop alternative. Yeah. It looks like the business and commercial use pricing is slightly cheaper than Docker Desktop right now, so there's a benefit there. That's right. I didn't remember that Docker had gone commercial on that side as well. So that makes them more comparable, right? As opposed to: yeah, there's this other free thing. It's not terrible, though. I mean, we just had to re-up our Docker at work, and I think I paid 300 bucks for five seats per year. That's not bad. Yeah, that's not bad. Not when it's not your money; it's fine. It's not my money. You're here to spend not-your-money. Yeah.
21:14
So, yeah, I love not-your-money spending. But anyway, pretty cool. I'll look forward to checking that out. I actually think it'd be fun to have one of those ZimaCubes. I do too; I would really consider it. Again, wouldn't it be awesome to just have all those cool apps running, plus, you know, a hundred terabytes of storage? Yeah. So it's like the cloud, but it's at home. Then it's more like fog; it's way lower down. Fog. It's personal, personal cloud fog. That's nice. Okay, cool.
21:51
Okay. So that's good news. And the next item, I don't know if it's good news or bad news; it depends on your perspective. So I'd like to talk about GitHub Copilot and other assistive AI stuff. Visual Studio Magazine came out with this article called "New GitHub Copilot Research Finds 'Downward Pressure on Code Quality'." So the question really was: if I'm coding with Copilot, if I'm using Copilot to help me write some stuff, is the Copilot kind of like having a junior developer? Is it more intern, or is it more senior dev? Yeah, is it more intern or more senior dev? And so their question was: is it more senior dev, or more akin to the disjointed work of a short-term contractor? Which I thought was appropriate, because a contractor might be very skilled, but they don't really care about, they might not care about, maintenance too much. Yeah, exactly. So yeah, the technical debt is not a problem when you're done next week. Yeah, and you don't have to, yeah. One
23:10
of the best ways to not deal with legacy code is to switch jobs. Anyway, the answer, as this white paper summarizes it, is: "We find disconcerting trends for maintainability. Code churn," which is the percentage of lines that are reverted or updated less than two weeks after being authored, "is projected to double in 2024 as compared to its 2021, pre-AI baseline. We further find that the percentage of added code and copy/pasted code is increasing in proportion to updated, deleted, and moved code. In this regard, AI-generated code resembles an itinerant contributor, prone to violating the DRY-ness," that's don't repeat yourself, "of the repositories visited." So, this isn't that surprising to me, but it's interesting that there was a study done, by GitClear. Also interesting that it was printed in Visual Studio Magazine, but I guess the magazine probably doesn't have any ties to Microsoft or GitHub. But anyway, interesting. So,
24:27
Bart, out in the audience, has a different analogy I'd like to adopt: this is not junior or senior, it's a parrot that recycles what it found on the web. So, I'm thinking more Stack Overflow copy-and-paste action type of stuff here. And Grant says, as I read this to you: the research makes sense, sounds right, but Copilot has benefits to senior devs, like applying good patterns faster. Yeah, it's pretty interesting. There was some comment in the article that essentially stuff still got done faster, or at least got to something working faster. It's just that people often go back later and modify and refactor the code. And that's not necessarily bad, but, you know. I get the sense that Copilot is very focused on: what prompt did you give me? I'm going to do that. Rather than: I understand what your entire project is, I've seen all 200 files, I've thought that through; now, in that context, the answer to your question is X rather than Y. I don't think it does that. I think that would be a very high level of token usage that it would have to take into account. I think it probably just goes, like: all right, well, you asked me a sentence; here's the answer, in Python. Yeah.
25:39
I actually want to play with it more. I haven't really played with Copilot too much. It's not something I can use at work, but on personal projects, I think it'd be fun to use it more. But we're still, I mean, it seems like ChatGPT is kind of old now, and Copilot is, like, really old, but really, we're just starting to use these tools in development; like they said, pre-2021. So it's still only a couple of years that we have under our belt working with these things. And I think the tools can get better. I both worry about the developers that are going to lose jobs and stuff because of this, but I also think that the tools will probably get better. Like you said, if it can look at your entire project and say: hey, in this context, this is the right call; you're repeating something, you already implemented that, let's call this function instead; that would be great. And also, if we could have AI tools to help maybe keep a style similar, or just a general philosophy similar, around a project, I think there's room for that once it gets there. Yeah, I wouldn't be surprised to see that happen. All right.
26:56
Extra time. Extra, extra. Oh, yeah. So my extras are like: did you know that pytest 8 is out? pip install -U pytest. But also, if pytest is one of those things that you've always been thinking about doing, head on over to courses.pythontest.com, and you can learn it really fast using a course, or you can grab the book, of course. Yeah, excellent. Yeah, that's very exciting. Do you have any extras? I
27:28
do have a couple of things. I want to quickly give a shout-out to one. This has been around for plenty long, although it's changed behind the scenes, not in any way really necessarily noticed, but it has. And that's just, I want to encourage people to join our newsletter. If you go to pythonbytes.fm, right below the hero image there's a thing that says newsletter. If you go to newsletter, you go over there, become a friend of the show, put your information in there. This is actually a revamp, as part of my work that I did with Listmonk, the private, self-hosted email stuff, moving away from Mailchimp and others. We talked about that last week, two weeks ago, some number of weeks ago. And that means it doesn't go anywhere; we don't share it with anyone. But Brian and I are planning some fun stuff and trying to do more with newsletters and reaching out and connecting with you all. So we would love for you to go to pythonbytes.fm, click on newsletter, and put your information in there. We won't share it, but we'll try to make it worth your while. But we also haven't emailed a lot on it in the past. So when we start using it, don't think that we bought your name off some list; it's that we're just starting to use it more. Yes, absolutely.
28:35
It drives me crazy when people mark. They'll
28:37
come sign up for your newsletter and then
28:39
they'll mark it as spam, which means other
28:42
people have a harder time getting it. It's
28:44
like you hyped your information into there and
28:46
then you marked just, it doesn't unsubscribe. Just
28:48
please use the unsubscribe. Just use the unsubscribe.
28:51
Yeah, I actually wrote a ton of software.
28:53
I have a whole separate Docker thing running
28:55
that monitors for people marking stuff as spam
28:57
because there's ways that you can receive hooks
28:59
about that information and automatically unsubscribe
29:01
people if they do that, even if they don't
29:03
unsubscribe. But by then the damage
29:05
is still done a little bit. Email is a
29:07
complete nightmare. All right, let's get away from email
29:09
because it makes me upset. All right, so
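The spam-complaint webhook approach Michael just described can be sketched roughly like this; note the event shape and field names ("type", "email") are invented for illustration and are not the actual payload of Listmonk or any particular email provider:

```python
# Hypothetical sketch of auto-unsubscribing on spam complaints.
# The payload fields here are assumptions; real providers each
# define their own webhook event formats.

def handle_webhook(event: dict, unsubscribe) -> bool:
    """Unsubscribe the address if the event is a spam complaint."""
    if event.get("type") == "spam_complaint" and "email" in event:
        unsubscribe(event["email"])
        return True
    return False

# Usage with a stand-in unsubscribe function:
removed = []
handle_webhook({"type": "spam_complaint", "email": "a@example.com"}, removed.append)
handle_webhook({"type": "delivered", "email": "b@example.com"}, removed.append)
print(removed)  # → ['a@example.com']
```

In a real deployment this function would sit behind an HTTP endpoint that the email provider posts to, which is the "whole separate Docker thing" idea in the conversation.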
29:13
Pydantic. Pydantic's awesome. Sydney
29:15
Runkle, who works at the
29:18
Pydantic company, was on
29:20
Talk Python recently. Pydantic released a
29:23
brand new version, version 2.6.0.
29:25
And Samuel Colvin said this
29:27
is probably the biggest, most
29:29
important release since 2.0. If
29:31
you scroll through the release
29:33
notes for Pydantic 2.6, there
29:35
is a lot going on
29:38
here. And even just
29:40
the new contributors is massive. But apparently
29:42
a ton of speed up, some other
29:44
things going on here that you can
29:46
check out. If you're
29:48
using Pydantic, everything is excellent. Just
29:51
no more Python 3.7 because we've already had
29:55
the thanks and goodbye to Python
29:57
3.7. We're on to 3.8 as the
29:59
minimum reasonable Python these days. Cool though, huh?
30:01
Yeah, very cool. A lot of
30:03
contributors. There's a lot going on here. It's
30:06
a popular library. If we go over here and
30:08
we see, okay, how many things depend on it,
30:10
where's the "used
30:12
by": 318,000 projects. It
30:15
lists one, two, three, four, five. It
30:17
lists like six, and it says plus, as in there's more,
30:19
it says plus 317,946. That's
30:22
not really representative, but okay.
30:25
I understand the UI. Anyway, yeah, it's used by
30:27
a lot of people. Used
30:29
by more projects than there's stars. Some
30:32
people are using it and don't like it apparently. Yeah,
30:34
come on. Star this up, people. The
30:37
only reason you don't see a star
30:39
for me is I'm not logged in.
30:41
All right, and finally, I wrote a
30:43
new essay called Use Custom Search Engines
30:45
Way More. This is not ducked up
30:47
Go versus Google versus Bing, but rather,
30:49
if you use a proper browser like
30:51
Vivaldi or Firefox or even Chrome, though.
30:54
Anyway, you can go and set custom search engines for
30:56
all sorts of cool stuff. One I
30:59
set was PyPI, Brian. If I go to my
31:01
address bar, I don't know if I've done it
31:03
on my streaming one here. Let me see. No,
31:06
only my proper one. This is like
31:08
a separate user account over here, but
31:10
I could go and just type PyPI
31:13
space Pytest and it will search using
31:15
PyPI.org's search results directly for Pytest or
31:17
whatever it is you typed there. I
31:20
didn't know you could do this. It
31:22
is awesome. If you want to search
31:24
Unsplash for stock
31:27
photos, just type U space and you type
31:29
the thing or S O space.
31:31
You directly search on Stack Overflow. Instead of
31:33
searching for it, oh, I was
31:35
looking for Stack Overflow. You scroll through
31:37
until you find the result and you
31:39
go over. You're just like boom, just
31:41
like a super short or GH for
31:43
GitHub. Just search only repositories, not users
31:46
or whatever you want to type in.
31:48
Incredibly easy. That's my essay, my quick
31:50
little. That's not built in already.
31:52
You have to, well, it's supported, but you
31:54
have to configure it on your browser. You
31:56
have to type. Yeah, you have to type
31:59
basically. You go to
32:01
Vivaldi search and then you find, just go
32:03
enter a new search engine, or there's ways
32:05
to do it in Firefox, there's a way
32:07
to do it in Chrome, they're all different.
32:09
But then you just figure out, if you
32:11
just search a site, like if you search
32:13
Stack Overflow, you'll see it's stackoverflow.com/search question mark
32:15
Q equals some string, and so you just
32:17
put percent S there and save that as
32:19
the search engine. Okay, I think I'll do
32:21
a PB for Python bytes. Oh,
32:24
you know what? I'm feeling, feeling
32:26
like we could totally do this.
32:29
I mean, the URL's right up there. Yeah. Q
32:32
equals that, yeah, why not? Just give one
32:34
step, beautiful. Anyway, that's my set of extras.
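The custom-search setup described above boils down to substituting your query into a URL template with a %s placeholder, which is exactly what the browser does; a minimal Python sketch (the Stack Overflow and PyPI search URLs are the real ones, the helper function is just for illustration):

```python
from urllib.parse import quote_plus

def custom_search(template: str, query: str) -> str:
    """Fill a browser-style search template, URL-encoding the query."""
    return template.replace("%s", quote_plus(query))

# Templates as you'd enter them in the browser's search-engine settings:
so = "https://stackoverflow.com/search?q=%s"
pypi = "https://pypi.org/search/?q=%s"

print(custom_search(so, "pytest fixtures"))
# → https://stackoverflow.com/search?q=pytest+fixtures
print(custom_search(pypi, "pytest"))
# → https://pypi.org/search/?q=pytest
```

To find a site's template, run one search by hand, copy the result URL, and replace your query with %s, as described in the conversation.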
32:36
Very cool, thanks. Well,
32:38
how about funny stuff? This one's
32:40
quick and short. Okay. It's a
32:43
picture, but you don't need to know anything
32:45
about the picture. It's just a lawyer arguing
32:47
a case. Hey, Brian, says, your honor, my
32:49
client didn't know they were pushing to the
32:51
main branch. That's funny. Yeah,
32:53
that's it. Didn't know I pushed to
32:55
the main branch. This is my defense.
32:58
I'm sorry I took down the website during
33:00
Black Friday. I thought it was my
33:02
fork. That's
33:05
funny. Yeah, how often have you like, well,
33:07
I guess you don't do this too much,
33:09
but it's a three-day weekend. I wanna make
33:11
sure that I have my stuff pushed to
33:13
the central repo, so push,
33:15
but- You're working at home, right? You wanna
33:18
like, think it back up or whatever. Yeah,
33:20
but make sure you're on a branch. So,
33:24
okay, we were talking about junior versus
33:26
senior a little bit on AI stuff.
33:28
So I wanted to share a little
33:30
picture also of, this
33:32
is, I saw this on Mastodon.
33:36
Junior versus senior developer. Junior,
33:39
it's a timeline thing. So
33:41
the junior developer working on project encompasses
33:44
the entire time. Senior developer finding the
33:46
motivation to start takes up like 90%,
33:48
80% of the time, and
33:51
then actually doing it at the end, and
33:53
the total time is equal. Yeah,
33:57
that's amazing. I would have altered it to make
33:59
the senior. like the total time is like
34:01
a little bit less. It's just, um, yeah.
34:03
Yeah. And the junior needs a little bit
34:05
of finding the motivation, but just a tiny
34:08
bit. It's
34:10
still a good one. Cool.
34:13
Well, um, thanks again for oops.
34:15
Uh, such a great episode. Good to talk to
34:18
you this week. It's good to have everybody in
34:20
the, the showing up for the live show. Thank
34:22
you very much. If you want to,
34:24
what's that link again? If you, if people want
34:27
to go watch the live show, um, they
34:29
can go to pythonbytes.fm/live. Yeah.
34:31
All right. Cool. Plus if you just go
34:33
to pythonbytes.fm, it's right at the top.
34:38
Thanks a lot. We make it easy. We make it easy for
34:40
people. Yeah. All right. See you next
34:42
week. Bye.