Serverless Limitations

Released Monday, 28th November 2022

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:01

Monday, Monday, Monday! Open

0:03

wide, dev fans. Get ready

0:05

to stuff your

0:06

face with JavaScript, CSS, node

0:08

modules, barbecue tips, workflow

0:10

breakdowns, soft skill web development.

0:12

The hastiest, the craziest, the

0:15

tastiest web development treats

0:17

coming in hot. Here is Wes

0:19

"Barracuda" Bos and Scott

0:22

"El Toro Loco" Tolinski.

0:28

Welcome to Syntax! On

0:30

this Monday Hasty Treat, we're gonna

0:32

be pushing it to the limit. And

0:34

I'm talking about the limit of serverless. We're gonna

0:36

be talking about Serverless limitations in

0:39

some situations maybe where serverless

0:42

has changed how you work

0:45

Not things you can't get around, but maybe some

0:47

things to be aware of. My name is Scott Tolinski.

0:50

I'm a developer from Denver. With me, as

0:52

always, is Wes Bos. Hey,

0:54

everybody. Hey, Wes. How's it going, man? It's

0:57

going good. I'm excited to talk about serverless

0:59

limitations. Yeah. Serverless

1:02

and we're gonna be sponsored by two amazing

1:05

companies that have absolutely zero limitations. I'm

1:07

talking about Sentry and MagicBell. Sentry

1:09

is the perfect place to log and capture all

1:11

of your errors and exceptions and keep track of

1:13

what's going on in your application at any

1:15

given point. Whether that's the UI,

1:17

the API, the app, anything you could

1:20

possibly imagine, you can collect all of that

1:22

information. Log it into Sentry.

1:24

See how many people it's affecting. See,

1:27

you know, what kind of reach this thing has. Also,

1:29

see when this was introduced, maybe this was introduced

1:31

in a specific version. maybe it was introduced

1:33

by a specific person. You

1:35

can go ahead and create

1:38

GitHub issues directly from Sentry.

1:40

You can see performance metrics for

1:42

your application to see how fast it's performing.

1:44

And overall, this is one of those tools that if

1:47

you're building anything that people are using

1:49

in the real world, you're gonna need some kind

1:51

of monitoring, and Sentry is really the

1:53

best. So check it out at sentry.io. Use

1:55

the coupon code tastytreat, all lower-

1:57

case and all one word, and you'll get two months

1:59

for

1:59

free. Let's talk about our other sponsor, Magic-

2:02

Bell. They are a notification inbox

2:04

for your application. If you wanna add

2:07

notifications to your application,

2:09

you need to think about, okay, well, there needs

2:11

to be something in the app, but

2:13

there also needs to be, like, a push

2:15

notification maybe to their phone and maybe

2:17

to Slack, maybe to email, it

2:19

gets kinda complicated. Magic Bell

2:21

makes all of that super easy.

2:24

They also rolled out this thing called

2:26

segments, which

2:28

is an entire UI for

2:30

segmenting your entire customer

2:32

base, kind of like how you would do it in an

2:34

email

2:36

newsletter program. And you can say,

2:38

like, These people are in

2:40

this segment. These people whose

2:42

email matches this. Or they can be dynamic:

2:44

when somebody switches a payment

2:47

tier, then send them this notification. And

2:49

it's a whole UI. You don't have to write code for

2:51

figuring out those complicated things. And

2:53

quite honestly, whenever you're doing a

2:55

lot of these complicated segments.

2:58

These people or these people or people

3:00

who have opened the app in the last

3:02

ten days and

3:04

they have an email address of

3:07

google dot com, then send them this.

3:09

Makes it really easy. Check it out: magicbell

3:11

dot com. Sick. Serverless

3:15

Wes, do you wanna give a little bit of background on,

3:17

like, where this idea came from and

3:19

what you're thinking here? Yeah. So

3:21

I was just going through a little bit of

3:23

updating of my personal website. And

3:26

I've got, I don't know, four or five different serverless

3:28

functions for doing things like generating

3:32

the images

3:34

for Twitter previews. I got another

3:36

one that fetches my Instagram feed, another

3:38

one that fetches my Twitter feed, and a

3:40

couple other ones in there. And

3:43

I hit a couple of road bumps here

3:45

and over the years

3:47

of writing little serverless functions here or

3:49

there, I've hit a couple little bumps.

3:51

And I thought, like, yeah --

3:54

serverless is awesome. However,

3:56

the upside of getting

3:59

something that is infinitely

4:01

scalable and cheap and all that is

4:03

there's always some sort of constraint. Right?

4:05

whenever you work yourself into it. And

4:07

we could probably extend this not just to

4:10

serverless functions, but also edge

4:12

functions. And anytime that

4:14

you get something that is better, you

4:16

generally are giving up something else.

4:18

And that's just a general rule in

4:20

life, I guess. So I

4:22

thought, like, let's just, like, rattle through a bunch of

4:24

sort of limitations. These are not necessarily

4:27

bad things about serverless, but

4:29

just things that you have to think

4:31

about your application in a little bit of a

4:33

different way than maybe

4:35

you have thought about in the past with a

4:37

traditional long running server

4:39

rendered application or server app, not

4:41

server rendered. That's something

4:43

totally different. So the first

4:45

one is a function size limit.

4:48

So there is generally a limitation

4:50

to how big a function can

4:52

be. Fifty megs is the one

4:54

for AWS. You might

4:56

think, like, fifty megs. Like, I'm never

4:58

gonna write fifty megs of JavaScript. Take

5:01

a peek in your node modules. How big is

5:03

your node modules folder for

5:05

everything you want? You can get --

5:08

out of something

5:10

as simple as, like, a text-

5:12

to-emoji library, you can

5:14

rack up sixty megs

5:17

real quick of just

5:19

libraries because server

5:22

libraries don't have to

5:24

worry about size. Well, they do,

5:26

because that's why we're talking about it. But so

5:28

there's generally not as much of a concern -- Yeah. --

5:30

historically. Yeah. Where

5:32

I hit it was I was using puppeteer,

5:35

and puppeteer is a headless chrome

5:37

browser. And what I use

5:37

Puppeteer for is

5:41

I have a page on my

5:43

website that renders thumbnails out,

5:46

and I do that so I can have

5:48

full HTML, full CSS

5:51

full JavaScript to make my thumbnails

5:53

for my Twitter previews

5:56

Open Graph previews. Right? Which is -- and I

5:58

know because I do

6:00

Cloudinary and, like -- Yeah. --

6:02

trying to -- no

6:03

shade on Cloudinary, because I love them.

6:05

But it's certainly not HTML and CSS.

6:07

I wish I was doing it that way. Yeah.

6:09

It's true. And people always send me, like,

6:11

Vercel has, like, an SVG version of

6:13

it. I want full-ass HTML

6:15

and CSS, and I just wanna take a screenshot

6:17

of it, you know, full ass

6:20

HTML ones. That

6:22

should be important. Talk

6:25

about Lumi. That's a

6:27

good domain name. So

6:30

the problem is that

6:32

Chromium

6:32

has -- it grows.

6:34

Every time they add a feature to Chrome,

6:36

the bundle of Chromium grows.

6:38

And we've hit an inflection point

6:40

now where it's forty nine

6:42

point seven megs. And

6:45

so, literally, I was running

6:47

Chromium twenty lines of

6:49

code and some

6:51

other library to load in puppeteer for it. And

6:53

I was going over by

6:55

three megs. And I was like, I brought it up

6:57

with the package author and he says, yeah. Like, that's

7:00

Chrome is just bigger now. So

7:02

you can no longer put Chromium into

7:04

a serverless function.

7:07

Okay. The solution there

7:09

is -- and AWS

7:11

has a thing for this -- anytime you have something larger,

7:13

like an entire browser that needs

7:15

to go on a serverless function, use something

7:17

called a layer in AWS Lambda.

7:20

And the layer will just kind of have it

7:22

ready for you, and then you only have to ship the

7:24

code that actually runs it, and it'll be

7:26

like ten k

7:28

instead of fifty megs. But

7:31

Vercel, Netlify, all these

7:33

companies don't -- all

7:35

these companies that make it, like, easy to host stuff,

7:37

they don't give you access to layers. That's, like, just

7:39

an AWS thing. Actually, begin dot

7:41

com does. So

7:43

I have to move to something else, or --

7:45

I talked to the author -- he's gonna bundle Chrome

7:48

with some less flags. Like, there's

7:50

stuff in Chrome that I don't need, like,

7:52

GPU stuff and 3D

7:54

stuff, so maybe they can bundle

7:56

Chrome without that. But that is the function

7:59

limit. You'll be surprised

8:01

how quickly a

8:03

serverless function can go over fifty megs

8:05

once they start getting everything in.
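
For illustration, here is roughly what that Open Graph screenshot function can look like when you pair puppeteer-core with a trimmed-down Chromium build -- a minimal sketch, assuming a package along the lines of @sparticuz/chromium (or a Lambda layer that provides the browser binary), not the exact code from the episode:

// Sketch: render an HTML/CSS thumbnail page and screenshot it from a
// Lambda-style serverless function. The page URL is a placeholder.
import chromium from "@sparticuz/chromium";
import puppeteer from "puppeteer-core";

export async function handler(event: { queryStringParameters?: { url?: string } }) {
  const url = event.queryStringParameters?.url ?? "https://example.com/thumbnail";

  const browser = await puppeteer.launch({
    args: chromium.args, // flags tuned for a serverless sandbox
    executablePath: await chromium.executablePath(), // the slim Chromium binary
    defaultViewport: { width: 1200, height: 630 }, // Open Graph image size
    headless: true,
  });

  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle0" });
    const png = (await page.screenshot({ type: "png" })) as Buffer;

    return {
      statusCode: 200,
      headers: { "Content-Type": "image/png" },
      body: png.toString("base64"),
      isBase64Encoded: true, // Lambda-style binary response
    };
  } finally {
    await browser.close();
  }
}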

8:08

Solutions to that: esbuild. That

8:10

brought it down about five megs for me

8:12

switching from webpack to esbuild.
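
A rough sketch of the kind of esbuild build script that helps here, bundling and tree-shaking each function so only the code you actually import ends up in the artifact (the entry point names and the externals list are placeholders):

// build.mjs -- bundle each serverless function with esbuild so unused
// node_modules code is tree-shaken out of the deployed bundle.
import { build } from "esbuild";

await build({
  entryPoints: ["functions/og-image.ts", "functions/instagram.ts"], // hypothetical entries
  bundle: true, // inline dependencies instead of shipping node_modules
  platform: "node",
  target: "node18",
  outdir: "dist",
  minify: true,
  treeShaking: true,
  external: ["@sparticuz/chromium"], // anything provided by a layer stays external
});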

8:15

Tree shaking will help as well. Were you

8:17

using ESM? Is that why?

8:19

Or did you stay in CJS with

8:21

that? Uh, I

8:24

was in CommonJS for the

8:26

thing. I believe that I tried to

8:28

switch it over. Because that was

8:30

gonna be my question. It's like, can ESM

8:32

help here as well? I don't think so

8:34

because esbuild

8:36

and webpack know how to

8:38

tree shake regardless of

8:41

which type that

8:43

you're in. And the dependencies are not

8:45

necessarily all shipped as ESM. So you

8:47

still have to have a conversion process

8:50

there. Next one we have here.

8:52

Node support. this is

8:54

more of an edge thing and I believe

8:56

that it will be going away soon.

8:58

But when you run a

9:01

function in -- Cloudflare Workers is probably

9:03

the big one, but also, if

9:05

you look at Vercel, Next.js

9:07

middleware, those things are running in

9:09

edge functions and they don't have

9:11

full blown node support

9:13

in there. It is just a pared

9:16

down JavaScript environment. So if

9:18

you wanna run a whole bunch of

9:20

stuff in that, you

9:22

don't have access to all of Node. So again,

9:24

there's a trade-off that comes with the

9:26

ability to run fast.
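
To make that concrete, a minimal sketch of an edge function in Cloudflare Workers-style module syntax: you get Web platform APIs like fetch, Request, Response, and URL, but not Node built-ins such as fs or net (the cf-ipcountry header is Cloudflare-specific and only here as an example):

// Sketch of a pared-down edge runtime: Web APIs only, no Node core modules.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Fine at the edge: standard Web APIs.
    const country = request.headers.get("cf-ipcountry") ?? "unknown";

    // Not fine: anything leaning on Node core (fs, child_process, and the many
    // npm packages that assume them) won't run in this environment.

    return new Response(JSON.stringify({ path: url.pathname, country }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};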

9:28

I believe I saw something the other day

9:30

that Cloudflare is rolling out full

9:32

NPM support. because that

9:34

has always been a sticking point

9:36

for me -- Mhmm. -- is that, like,

9:38

yeah, I'm fine with this, I

9:40

guess, as an edge function. Edge functions generally sit

9:42

in between your request and your

9:44

response and sort of do stuff in the middle,

9:46

kinda like middleware. That's why they use it

9:48

in Next.js

9:50

middleware. But,

9:52

yeah, it's -- you don't always

9:54

have full Node available, and that

9:56

really limits the packages that you could

9:58

possibly use inside of that

9:59

function. Totally. Next one: cron

10:02

jobs. I have a tweet

10:04

that's like years old and people reply to it

10:06

every couple months: hey, is there a solution

10:08

to this yet? Not

10:10

every serverless provider

10:12

or framework gives you

10:14

the ability to do cron jobs. Arc --

10:17

Brian LeRoux, begin dot com --

10:19

does have it, but things

10:21

like Vercel doesn't have it. --

10:23

Does

10:24

Netlify have it? Let's do a

10:25

quick Google. Yeah. So

10:27

Netlify has

10:29

scheduled

10:29

functions. Yeah. Render has something

10:32

like that as well. Yeah. Render is

10:34

not serverless though. Oh. I thought they

10:36

had -- they have functions.

10:38

They

10:38

have cron jobs.

10:41

You can have a service that is just

10:43

a straight-up cron job. I guess that's not --

10:45

Yeah. -- that's

10:45

not a limitation then. That's a thing with

10:47

serverless. A cron job is -- Yeah. -- that's

10:50

the most simple thing ever. You set up a

10:52

cron job and it runs when you want it. With

10:54

serverless functions, the

10:56

thing is they don't always have the ability to

10:58

do cron jobs. And the solution that

11:01

everybody always says is just use

11:03

this service. And that always

11:05

kills me when part of your

11:07

infrastructure is eight dollars

11:09

a month to run a cron job. That's not a

11:11

good solution to me.
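
Where a provider does support it, a scheduled function ends up looking something like this -- a sketch using Netlify's scheduled functions helper (the cron expression and the URL are placeholders):

// Sketch: a serverless function triggered on a cron schedule instead of an HTTP request.
import { schedule } from "@netlify/functions";

export const handler = schedule("0 2 * * *", async () => {
  // Runs at 2:00 every morning: refresh a feed, rebuild a cache, etc.
  const res = await fetch("https://api.example.com/instagram-feed"); // placeholder endpoint
  console.log("refresh status:", res.status);

  return { statusCode: 200 };
});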

11:13

Next one: local development isn't one-to-one. So

11:15

again, we talk about different environments. Cloudflare

11:17

has done a really good job at making

11:19

this local thing called Miniflare.

11:22

It tries to replicate it,

11:25

but I still run into issues,

11:27

not just Cloudflare alone. This

11:30

is true of all of them: your

11:32

local environment does not look like

11:34

your deployed environment. And

11:36

in the past, people would use Docker

11:38

to do that. And I guess you still

11:40

can deploy Docker, but you still

11:43

hit bumps in the

11:45

road. Mhmm. Database access

11:47

isn't straightforward. Oh, yeah. That's actually

11:49

a concern of mine. Yeah. If you wanna use

11:51

something as simple as -- what's

11:54

the text-based database

11:56

that is awesome? SQLite. If you

11:58

wanna use SQLite -- serverless

12:00

functions are spread amongst many

12:03

servers when they scale up. So there isn't, like, a

12:05

just one server that has it. That also

12:07

is the case when you

12:09

have multiple servers running on, like,

12:11

something like DigitalOcean as well. But

12:14

same with database access: you need to

12:16

pool connections because you fire up

12:18

a thousand serverless

12:20

functions at once. You're gonna make a thousand

12:22

connections to your database. and

12:24

that's not ideal. So then you have an additional

12:27

step and an additional infrastructure to

12:29

pool your connections, which is

12:31

you have one service that connects

12:33

to the database, and then all of

12:36

your serverless functions talk to

12:38

the pool instead of going

12:40

directly into the database.
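
Roughly what that looks like in practice -- a sketch with node-postgres where the function keeps a single connection in module scope (so warm invocations reuse it) and the connection string points at a pooling layer such as PgBouncer or RDS Proxy rather than straight at the database:

// Sketch: serverless functions talk to a pooler, not directly to Postgres.
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // points at the pooler
  max: 1, // each function instance holds at most one connection
});

export async function handler() {
  const { rows } = await pool.query("select id, title from posts limit 10");
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(rows),
  };
}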

12:43

Sharing code: not always easy. So

12:45

if you have six serverless

12:47

functions and you have a bunch of, like, shared code between

12:49

them, you can't just, like --

12:51

sometimes the bundlers don't do a

12:53

great job at sharing code

12:55

between the two. And this is something I've

12:57

hit many times over and over again,

12:59

where it's just like, can I just be able to

13:01

require something from a different folder and

13:03

you figure it out? it's been a pain in my

13:05

side for a while. Environmental

13:08

variables. This is something I hit

13:10

with Netlify the other day. AWS --

13:12

so I should explain. Vercel,

13:15

Netlify, Begin -- all of

13:17

these things are not running their own

13:19

serverless functions. They sit on top of what

13:21

is called AWS Lambda. And

13:23

they make it a lot easier to do this type of thing,

13:25

and they provide a whole

13:27

bunch of, like, tooling and infrastructure on

13:29

top of it. So

13:31

the environmental

13:33

variable limit on an AWS Lambda --

13:36

on a Lambda it's four K, which is

13:38

about four thousand characters total.

13:40

You might think, oh, that that's quite a bit.

13:42

But sometimes you have these very long

13:45

generated strings that need to be set

13:47

as environmental variables. and

13:49

you run over it. In my case, I was

13:51

on Netlify, and Netlify

13:54

sends all of your environmental variables

13:56

including things like production

13:58

server, dev server. I had a bunch of

14:00

URLs in there. And I ran

14:02

out. I hit that limit. I was

14:04

sitting here renaming my variables to

14:06

be the shortest variable names

14:08

as possible. Oh

14:10

my gosh. Yeah. Yeah. and

14:12

because they have to send it all to the serverless

14:14

function -- my site wouldn't

14:16

deploy. The next morning, they

14:18

announced that you can

14:20

scope things to just be

14:22

the build or just serverless functions. So

14:24

that's no longer a concern,

14:26

but it's something you think about. Keep

14:28

your environment variable names

14:30

short. Yeah. Yeah. I've never even

14:32

that's not anything that I would have thought about ever.

14:35

No, me neither. Until I hit it, and I

14:37

was frustrated.
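
If you want to see how close you are to that cap, a quick sketch you can drop into a function or a build step (AWS documents the limit as 4 KB across all of a function's environment variables, names and values included):

// Sketch: measure the total size of the current environment variables.
const totalBytes = Object.entries(process.env).reduce(
  (sum, [key, value]) => sum + Buffer.byteLength(`${key}=${value ?? ""}`),
  0
);

console.log(
  `${Object.keys(process.env).length} env vars, ~${totalBytes} bytes of the 4096-byte limit`
);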

14:40

Timeouts. Cloudflare has a ten-second --

14:42

or sorry, ten-millisecond -- time-

14:45

out. You must reply in that time. That's the reason

14:47

Cloudflare is a pared down environment. You

14:49

don't have three seconds to go to an

14:51

API, fetch something and come back on

14:53

Cloudflare. You have ten milliseconds

14:55

to do what you want because generally,

14:57

Cloudflare is sitting -- you sort

14:59

of do the work on the way to your website,

15:01

not as the endpoint. That's the

15:03

difference between an edge function and a serverless function.

15:05

Mhmm. Most serverless

15:07

functions tap out at ten

15:09

seconds. So if you wanna

15:11

do something for a longer amount of

15:13

time, scheduled functions, which are

15:15

functions that don't run when

15:17

somebody hits a URL in the browser, but

15:19

they run every thirty minutes or at

15:21

two o'clock every morning -- those have a thirty-

15:23

second timeout. So if you need to

15:25

fetch a bunch of data and you need to

15:28

wait ten seconds between the two, then you have

15:30

to split it over multiple functions

15:32

and then you're dealing with databases because there's

15:34

no shared memory between them and --

15:36

Oh. -- that could be a bit of a pain. Yeah.
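
One common workaround is to split the long job across short-lived runs: do as much as the timeout budget allows, persist a cursor, and let the next scheduled invocation pick up where this one left off. A minimal sketch -- the loadCursor, saveCursor, storeResults, and fetchPage helpers are hypothetical placeholders for your own storage and API calls:

// Sketch: chunk a long-running job so each invocation stays under its timeout.
declare function loadCursor(): Promise<string | null>;
declare function saveCursor(cursor: string | null): Promise<void>;
declare function storeResults(items: unknown[]): Promise<void>;
declare function fetchPage(
  cursor: string | null
): Promise<{ items: unknown[]; nextCursor: string | null }>;

const BUDGET_MS = 25_000; // stay safely under a ~30-second scheduled-function timeout

export async function handler() {
  const started = Date.now();
  let cursor = await loadCursor(); // where the previous run stopped

  while (Date.now() - started < BUDGET_MS) {
    const page = await fetchPage(cursor); // one slow upstream request
    await storeResults(page.items);
    cursor = page.nextCursor;
    if (!cursor) break; // the whole job is done
  }

  await saveCursor(cursor); // null means "start from the beginning next time"
  return { statusCode: 200 };
}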

15:38

Yeah. See, like, these are things that you

15:40

take for granted. in

15:42

the regular old server world

15:44

that are not

15:46

easy in -- Yeah. -- serverless

15:48

world. SaaS is

15:50

expensive. I had a tweet the other day where

15:52

everyone's like serverless is super cheap,

15:54

but here I am signing up

15:56

for my ninth nine-dollar-a-month service for

15:59

my website and you realize, oh, okay,

16:02

this can get really expensive. And that's a

16:04

joke because AWS

16:06

straight up is very, very cheap. Their

16:08

free plan is probably more than

16:10

I would need for most of my projects.

16:12

But once you

16:14

start realizing, I need a build

16:16

pipeline and a test runner and a

16:18

GUI and all that type of stuff

16:21

you realize, okay, maybe

16:23

I should just be using AWS. AWS has a bunch

16:25

of tooling around it. And quite honestly, I need

16:27

to I need to

16:29

familiarize myself

16:32

with it. because

16:32

it

16:33

seems like that's probably the way to go

16:35

with a lot of the stuff. Yeah. Just get

16:37

good at AWS. Yeah. That's

16:39

why people get good at it -- so you're not paying

16:42

somebody else who knows how it works and

16:44

is sitting on top of it. Like,

16:48

what's the minimum -- I don't know if we

16:50

should talk about pricing right now.

16:52

I'll leave that. We're getting kinda long here. But

16:54

just look at, like, a lot of these companies

16:57

that do serverless

17:00

for you. Look at how much it

17:02

costs just to own

17:04

an account with them -- not to run anything, not to do

17:06

any bandwidth or something like that. Often, you

17:08

just have to pay per seat. You got five

17:10

developers on your team. You're paying nine bucks

17:12

a month for every

17:14

developer. Cool. That adds up quickly. For

17:16

a lot of companies, maybe not.

17:18

But

17:18

for some people it is, whereas you're

17:20

used to spending five bucks a month for a Digital-

17:22

Ocean droplet and you're good to

17:24

go. Yeah. Right. I asked on Twitter as

17:26

well what people thought. Brian

17:29

LeRoux from begin dot com said infrastructure

17:31

as code is crucial. That's kind of a

17:33

really good one: you

17:36

can't rely on somebody

17:38

knowing which buttons to click in

17:40

the AWS console. You can't

17:42

rely on someone, because if you have to set it

17:44

up again, you're not gonna remember that. So

17:46

your infrastructure has to be a configuration

17:49

file. It has to be JSON

17:51

or YAML that you can easily redeploy in

17:54

the future.
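
As one flavor of that, here is a minimal sketch of defining a function with the AWS CDK in TypeScript -- the stack synthesizes down to CloudFormation, so the whole setup can be redeployed from code (the construct names, paths, and settings are placeholders):

// Sketch: infrastructure as code for a single serverless function with AWS CDK v2.
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

class SiteStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new lambda.Function(this, "ThumbnailFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("functions/thumbnail"), // bundled function code
      timeout: cdk.Duration.seconds(10),
      memorySize: 1024,
    });
  }
}

const app = new cdk.App();
new SiteStack(app, "SiteStack");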

17:56

A lot of people said cold starts

17:59

were

17:59

an issue, which is essentially

18:02

when you don't run a

18:03

serverless function for a

18:06

while, it

18:06

will go to sleep. It's not running on

18:08

any server anywhere. And that's why

18:10

serverless functions are so nice --

18:12

you're trying

18:14

to share Amazon's

18:17

servers with the rest of the world. And if your

18:20

generate PDF function

18:22

that you run once a month for

18:24

two minutes is not

18:27

gonna run for twenty nine more days

18:29

after that, then that thing goes to sleep and

18:31

it's not using any resources. That's the

18:33

benefit of it. But the

18:36

downside is that if you need to have

18:38

that spun up and

18:40

reply very quickly, then

18:42

there could be a cold start issue

18:44

there. That's not something I'm super familiar

18:46

with, but it seems like

18:48

it's becoming less and less of an

18:50

issue every single time that I talk about it. So I'm not sure

18:52

about that. Yeah.

18:54

Seems like those functions

18:55

are going to sleep really

18:57

easily. Can't we

18:59

have those functions talk to my kids

19:01

and say, yay. Love it.

19:04

And then the last one I have here:

19:06

search offerings are

19:09

not ideal.

19:10

Because with search,

19:12

you need to, like, be constantly indexing

19:15

your database for things.

19:17

And a

19:18

serverless function can only run for thirty

19:20

seconds at a time. you

19:22

need, like, a server that's constantly

19:24

doing that type of stuff. So

19:26

everybody always says, what's the big

19:28

one out there? What's

19:30

the -- now I'm forgetting. Yeah.

19:34

Algolia. That's the one.

19:36

Algolia. Yeah. Yeah. So

19:38

the solution, again, to a lot of this,

19:40

is: use this service. But -- and

19:43

I love Algolia. Yeah. I think it's amazing,

19:45

but it is very expensive once you get

19:47

going on large data sets. So you

19:49

gotta be careful there as

19:51

well. That's my thoughts on

19:53

serverless limitations. just

19:56

things you need to know about when you

19:58

are approaching a new project with

20:00

serverless, and hopefully there's some helpful stuff in

20:02

there. Yeah. I

20:02

learned a lot. Holy cow. Well, thanks so

20:05

much, Wes. Alright. No problem. Catch you

20:07

later. Peace. Peace. Head

20:09

on over to syntax dot f

20:11

m for a full archive of all of our shows. And

20:14

don't forget to subscribe in your podcast

20:16

player or drop a review

20:18

if you like this show.
