Kubernetes Cost Optimization

Released Wednesday, 14th September 2022

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.

0:00

Today's sponsor is Datadog, a real time

0:02

monitoring platform that unifies metrics,

0:04

logs, and distributed traces from your cloud

0:06

containers and orchestration software. Datadog's

0:09

container-centric monitoring features allow you

0:11

to track the health and performance of your dynamic

0:13

container environment. The container map

0:15

provides a bird's eye view of your container fleet

0:17

And the live container view searches, groups,

0:20

and filters your containers with any criteria

0:22

like tags, pods, or workspaces.

0:25

To start monitoring container clusters, sign up

0:27

for a free trial today, and Datadog will

0:29

send you a free t shirt. Visit datadog

0:31

dot com slash container dash

0:34

cloudcast to get started. That's datadog

0:36

dot com slash container dash

0:38

cloudcast.

0:38

Cloudcast media presents from the

0:41

massive studios in Raleigh, North Carolina.

0:43

This is The Cloudcast, with Aaron Delp

0:46

and Brian Gracely, bringing users

0:48

of cloud computing from around the

0:50

world.

0:52

Good

0:52

morning, good evening, wherever you are, and welcome back

0:54

to The Cloudcast. We're coming to you live from our

0:56

massive podcast studios here in Raleigh,

0:58

North Carolina. And

1:00

as I hit record on this, it is

1:02

the middle of September. And

1:05

probably from a news standpoint, there's

1:08

two big news articles we need

1:10

to talk about, and we're gonna go ahead and jump

1:12

right in. First of all, VMware

1:15

Explore, formerly known

1:18

as VMworld, happened recently. And

1:20

Alex Williams, over at The New Stack,

1:22

put together a really good piece.

1:25

Kinda highlighting some maybe

1:28

some of the less covered aspects of

1:30

it, but of particular interest to

1:32

our listeners. And so this is less

1:34

about virtualization and less about

1:37

infrastructure, but more about

1:39

cloud native, about containers,

1:42

Tanzu, and also the

1:44

rise of what they're referring

1:46

to as a platform engineer. So a platform

1:49

engineer, of course, is somebody

1:51

that is building the platform for

1:53

the developers because you see a lot of companies

1:55

these days. How do we talk to developers? I wanna

1:57

talk to developers. At the same

1:59

time, they may not be the ones

2:03

doing the purchasing at times, or

2:05

they just don't care because

2:07

they don't care about the platforms underneath it. It's

2:09

not something they think about on a daily basis.

2:12

So interesting article

2:14

that also goes in to mention just

2:16

the dev experience in general

2:18

and VMware's history there

2:21

and they also mention Backstage,

2:24

which is an open source project VMware

2:26

uses to support its developer experience.

2:29

They mention Pivotal and

2:31

the history with Pivotal and the spin

2:33

in and spin out and they mentioned Heptio

2:36

and a good friend of the

2:38

show, Chip Childers, who

2:40

is the chief open source officer

2:42

at VMware has

2:44

some things to say in the article as

2:46

well. So definitely worth

2:48

checking out if that is of interest to you.

2:51

And for our second article, the other big thing

2:53

going on in the industry right now, if

2:56

you follow crypto, is Ethereum,

2:58

and the Ethereum blockchain merges

3:00

and changes coming up. I'm

3:02

not gonna go into this too

3:05

in-depth, but I did want to mention it because

3:07

it is a pretty big news piece, because it

3:09

has been attempting

3:11

to happen for a long time now. I wanna

3:14

say, gosh, a good six months

3:16

or so now they've been trying to do this merge.

3:18

And if you're not familiar

3:20

with it, what is it? Well, there are really

3:23

two ways to do blockchain consensus.

3:25

One is called proof of stake. The

3:27

other one is called proof of work.

3:30

Proof of work is the older way. It's what

3:32

the original Bitcoin was built

3:35

around, and Ethereum as well. Most

3:37

new ones are what we call proof of stake.

3:39

Why is proof of stake necessarily better?

3:42

Each one has their benefits and downsides,

3:44

but the big benefit of proof of stake is

3:47

much, much lower energy

3:50

consumption. Proof of work requires

3:53

a lot of computation. When you think

3:55

about a Bitcoin miner, or the,

3:57

you know, the big crypto miners and the big

3:59

rigs and with big GPUs and lots

4:01

of fans and taking all the energy, they're

4:03

typically proof of work. So

4:05

by moving over to proof of stake,

4:07

hopefully that will alleviate some

4:10

of the concerns around that in Ethereum.

4:12

I mean, the two big downsides

4:15

to Ethereum have always been that it

4:17

tends to be a little slower than a lot

4:19

of the others, and it tends to be a power hog.

4:21

So this will at least

4:24

start to fix some of that, and we'll

4:26

lay the foundation for more improvements

4:28

over time as well. With

4:31

that, I'm going to wrap up cloud news of the

4:34

week. Coming up right after the break,

4:36

we are gonna be talking about Kubernetes

4:38

cost optimization.

4:42

Oh, annoying. Did

4:44

you know that seventeen out of twenty people stopped

4:46

watching a video because of stalling and rebuffering?

4:48

Don't let your users click away to a competitor

4:50

site. If your business lies in online

4:52

media, rely on CDN seventy seven

4:54

to deliver a seamless online experience to

4:56

your audience. CDN seventy

4:58

seven is one of the leading global providers of

5:00

content delivery network services. They

5:02

power the world's most popular websites

5:04

and apps such as Udemy, ESL

5:06

gaming, live sports TV, and social

5:08

media platforms. Aside from their massive and

5:10

redundant global network, you're gonna love their

5:12

no BS attitude and skilled team of engineers

5:15

ready to help twenty four seven. No

5:17

chatbots, no tickets bouncing around unresolved

5:19

for days, just people who know your use

5:21

case can immediately help you pinpoint and fix

5:23

the problems. Don't wait until your users run

5:25

out of patience. Go to CDN seventy

5:27

seven dot com slash cloudcast

5:29

and ask for a free trial with no duration or

5:31

traffic limits. That's CDN seventy

5:33

seven dot com slash cloudcast.

5:38

And we're back. And folks, as

5:40

we get closer and closer to KubeCon

5:43

North America in Detroit in

5:45

October of this year. We've been spending a little more

5:47

time this summer throughout the last

5:49

few weeks and so forth. Really kind of digging

5:51

into the things that are kind

5:53

of adjacent to the core Kubernetes

5:56

project. We've obviously talked about that quite a bit

5:58

and we've been looking at more and

6:00

more areas in which as more people

6:02

deploy Kubernetes, as more people optimize

6:04

their Kubernetes as they look to

6:06

kind of expand where it's being used and so

6:08

forth. We continue to look at areas in

6:10

which, you know, there's technologies, there

6:12

are new ways of thinking about how to make those

6:14

environments better, how to make them more efficient, how

6:16

to make them more responsive

6:19

to your business in order for you to be able

6:21

to to go solve business problems and not have to think

6:23

about Kubernetes so much. So You know, we

6:25

continue to kind of dig into that and, you know, one of the

6:27

things that we love to do on the show is not only

6:29

look at the new technologies that are making that

6:31

happen, but also you know, new companies

6:33

that are, you know, at the forefront

6:35

of making that happen. And so today, we're really excited to

6:37

kinda do both of those things. So so really

6:39

excited to have Shahar Yakhov, who is a

6:41

product manager at Granulate, which is an

6:43

Intel company. Shahar, welcome to the

6:45

show. Great to have you on. Hey, Brian.

6:47

Happy to be here. Yeah.

6:49

So excited. You know, we we love having

6:52

new companies. You know, we love having

6:54

companies focusing on new areas. Tell

6:56

us, you know, before we dive into Granulate

6:58

and all the things you guys are doing, tell us a

7:00

little bit about your background and, you know,

7:02

ultimately, what what, you know, what you

7:04

guys focus on at Granulate? Sure.

7:07

Yeah. So first of all, my name is Shahar. It's really hard to pronounce in English, because it's a Hebrew name. I'm currently working as a product manager at Granulate, and I started my journey when I decided to become a software engineer in the Israel Defense Forces. Since then, I've had the opportunity to work with different companies, specializing mostly on the dev side and the CI/CD aspect, and some things around monitoring and observability. I made the transition from software engineer to product manager somewhere in the middle. And right now, as I said, I'm working at Granulate, and I'm really excited about that, because I have the opportunity to use both the technical skills I got from my technical background and my product management skills. Yeah.

7:56

No. That's that's great. We've had a number of

7:58

of alumni of the Israel Defense Forces,

8:01

really, really smart people. They train you

8:03

very well. You you work on very challenging

8:05

problems. So, yeah, when we see

8:07

when we see those folks move into

8:09

into the technology product space, we

8:11

always know that there's really good foundation

8:13

there. Let's let's start by

8:15

talking, you know, I highlighted at the beginning, we're

8:17

gonna talk a little about cost optimization.

8:21

As you think about it as a as product

8:23

manager as to, you know, how you

8:25

help companies that are running Kubernetes

8:28

or any large compute environment,

8:30

like, how do you think about, you know, what are the

8:32

things that we can do to help people around

8:34

cost optimization? What's kinda what's the

8:36

bigger picture the way you think about it?

8:39

Yeah. So I think you highlighted it pretty well. Right now, it's pretty important to understand where your money goes, what the cost is that you are paying for your environment, whether it is an on-premises environment or a cloud environment. Yep. It doesn't matter. Because right now, the burn rate is a really important thing to understand, and the market is changing, the economics are changing. So understanding how to improve your margins, I think, is pretty important right now. Right. And as an optimization company, we have multiple solutions; we have a good solution portfolio, mostly focused on cost optimization and performance optimization. And when we are focusing on performance optimization, most of the time we translate those kinds of performance optimizations into cost optimizations. There are times when we translate them into improvement of the SLA, the Kubernetes SLA and that stuff, but mostly, for the companies that we are working with, what they really care about, and their biggest pain point, is the cost: how much they pay the cloud provider, how much they pay for their infrastructure, and they want to reduce it. And I think there are a few things on their side. The step where a company gets to the point where they need to understand how much they are paying for things sometimes comes too late, past the point where it was important to understand it. So we think the first and foremost thing is to understand that cost optimization is important and that it needs to be handled. And I think this is the most important point to understand right now. Yeah.

10:26

Yeah. Well, and and I think and you highlight

10:28

something that's really important and this isn't

10:30

always obvious. You know,

10:32

what I've found having worked around

10:34

Kubernetes for a long time is, you know,

10:36

there's there's a period of time when when people are

10:38

just getting started. They're trying to figure out

10:40

the technology. And and oftentimes,

10:42

you know, they'll they'll look at other things.

10:44

They look at cost optimization

10:46

or or even security or stuff, and they

10:48

and they say, well, we only have one

10:50

cluster right now. I don't I don't really need to

10:52

worry about those things. And

10:54

what inevitably happens, and

10:56

we we see this time and time again

10:58

at different companies. Once they

11:01

figure out the basics of making things work

11:03

and they realize, okay, I can I can

11:05

help deploy applications faster. We can

11:07

scale them automatically. Like,

11:09

their ramp their ramp up is really

11:11

fast. And I and I think what you're really

11:13

highlighting is, while the

11:15

ramp up in terms of getting applications can

11:17

go fast, like, the cost can can

11:19

grow really quickly too. And you you you

11:21

need to be thinking about that early

11:23

in the process rather than waiting till you

11:25

get those first big bills and then not

11:27

knowing what to do. Yeah.

11:28

Yeah. Exactly. And as I said, there are several ways to look at that; you can look at it from many points of view. The first one, you can look at it from a cost visibility view: where the money goes, which services cost the most, and which resources drive the most cost and waste the most money. The second way to look at it is, how do I manage my infrastructure better? How do I manage the different resources, the node pools from Kubernetes, or the autoscaling groups from a VM-based, compute-based application. And our approach at Granulate is to look at that in a really unique and different way, from the point of view of the application. Because the application is really, really dynamic, and as the company grows, with massive, rapid growth, it becomes the hardest place, the least visible area for the company, because microservices pile up everywhere in the environment. For a fast-growing company, two or three deployments very quickly become hundreds of deployments, across multiple clusters and multiple staging environments, and most of them in production. So our approach is to look at that at the application level, really from the application-level perspective. Okay.

12:54

Yeah. And I was I was gonna ask you you sort

12:56

of led me into my next question. It's,

12:59

you know, there's there's always ways you can

13:01

focus. Do you focus on infrastructure? Do you

13:03

focus on the application layer?

13:05

You know, kind of walk me

13:07

through a little bit as your as

13:09

your, you know, as your products

13:11

and platform are looking at the application

13:13

layer. what are some of the common things

13:15

that that you see? And

13:17

and, you know, some of the areas that people often

13:19

make mistakes or or have, you know,

13:21

problems that lead them to higher bills?

13:23

Yeah.

13:23

Sure. So as I said, our first solution was to get cost reductions through performance improvement. So we have the first solution in the portfolio, continuous runtime optimization, which tries to identify utilization patterns in your application; and by identifying those patterns, we know how to leverage them and open up bottlenecks in your application. Most of the time that leads to performance improvement, which we translate into cost optimization. But when we look at that from the Kubernetes point of view, and I want to dig into that a little bit, it becomes pretty different; it behaves really differently. Because of the way Kubernetes containers work, if you improve the performance and the utilization of each application, it does not necessarily mean that your compute cost improved. Because of the way Kubernetes works, the scheduler part of Kubernetes just takes those atomic boxes, which we call deployments, workloads, pods, whatever, and tries to place them onto the different nodes. And by placing those boxes onto the nodes, that is how the orchestration, the scheduling, works. Yeah. So what we actually saw, for our Kubernetes users: we did the first phase of the performance improvement and reduced the utilization, but even though we showed them the performance improvement, they didn't achieve any significant cost reduction immediately. So we put our investigator and researcher hats on and tried to understand why the immediate cost reduction hadn't been achieved. After investigating that a bit, we identified the over-provisioning gap. What I mean by over-provisioning is that there is the requested resource set in Kubernetes, the requested values that each deployment or pod sets, and there is the actual utilization of that. And there is a gap, sometimes a huge gap, sometimes a little less, but there is a gap between the reserved resources and the actually utilized resources. And without changing the requested resources, no cost reduction can be achieved, because, as I said, Kubernetes works on the atomic boxes determined by the reserved part. And that's what we actually do: we look at the application from the requested-resources point of view.
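The over-provisioning gap described here, the difference between what a container requests and what it actually uses, can be sketched in a few lines. This is a hedged illustration of the concept, not Granulate's implementation; the container names and millicore numbers are invented.

```python
# Sketch: measure the over-provisioning gap per container.
# Requested values are what the Kubernetes scheduler packs on;
# "peak" is the highest observed utilization. Numbers are hypothetical.

def overprovisioning_gap(requested_millicores, peak_millicores):
    """Return absolute and relative gap between reserved and used CPU."""
    gap = requested_millicores - peak_millicores
    return gap, gap / requested_millicores

containers = {
    "checkout-api": (1000, 240),   # requests 1 core, peaks at 0.24 cores
    "image-resizer": (2000, 1800), # close to its request: little waste
}

for name, (req, peak) in containers.items():
    gap, pct = overprovisioning_gap(req, peak)
    print(f"{name}: {gap}m idle ({pct:.0%} of the request is unused)")
```

The key point, made just above, is that this gap, not raw CPU efficiency, is what the bill tracks, because the scheduler reserves the requested amount regardless of usage.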

16:05

Yeah. It's it's a little bit, you

16:08

know, So sometimes in in technology, we

16:10

see the same problems come up again

16:12

from from one generation or the other. So

16:14

we we used to see back in the

16:16

days of virtualization people were

16:18

buying really big boxes

16:20

to run an application, and

16:22

they vastly underutilized them.

16:24

But because they just

16:26

knew they had overhead. They they

16:28

worried more about, okay, I don't want the box to be

16:30

overrun. And Kubernetes

16:32

came along and and sort of told

16:34

everybody hey, we're gonna we're gonna schedule

16:36

your stuff. We're gonna automate it. And everybody

16:38

went, oh, that that's great. That's gonna be really

16:40

good. But it's like you said, Kubernetes

16:42

does exactly what you tell it to do. So if

16:44

you tell it to take a bunch of resources to

16:47

deploy things, it's gonna do

16:49

that perfectly well, and it's gonna make sure you

16:51

always have those resources And I think

16:53

what you're really highlighting is some of

16:55

the same problems we used to have with

16:57

virtualization, which was people

16:59

tend to over provision things

17:01

just because they're not sure how the application's gonna

17:03

run, or they're worried about memory leak or

17:06

whatever. And and so we are sort of coming back

17:08

to the same same challenge again

17:10

just in a new, you know, in a new

17:12

environment. And and given

17:14

how fast Kubernetes works versus, you

17:16

know, the old days of virtual machines,

17:19

you know, you need systems paying more attention

17:21

to this than just people with a spreadsheet that look

17:23

at it once a month? Yeah.

17:26

Exactly. When you start using Kubernetes, you assume that everything will work out of the box, everything will work perfectly. Right. And nothing special will happen; no intervention from your side should be needed. But as the usage of Kubernetes has spread, there are two main challenges for Kubernetes users. The first challenge is to set those requested resources for each container, because it's not something you can understand prior to setting the value; you can't really know what the value should be. So most of the time it's set by copy-pasting from other containers, and sometimes by trial and error. Right. But with trial and error there is risk: you can get out-of-memory kills, you can get throttling. So this approach is really hard, because most Kubernetes users want to utilize both the vertical scaling concept and the horizontal scaling concept, and leveraging both of them to get the maximum cost reduction becomes really complex. So this was the first challenge that we faced. And the second challenge that we saw happening is that there are different people in each company in charge of different areas. Kubernetes allows companies to move some of that responsibility to their engineers, so they can build microservices fast and deploy them really fast; there are CI/CD tools that do it for you. But while this whole area moved to engineering, and they need to set the values, like the CPU and memory requests for each container, there are other departments, like the DevOps department, that are in charge of the budget. So as the engineers are setting the values, the DevOps department has to give the explanation: why is the budget, why is the billing, that high? This billing side has become a real challenge in many companies.
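The earlier point that Kubernetes schedules on requested resources, so the bill only drops when requests drop, can be illustrated with a toy first-fit bin-packing sketch. This is a deliberately simplified model (one resource, one node size, hypothetical pod counts); the real scheduler weighs far more than CPU requests.

```python
def nodes_needed(pod_requests_m, node_capacity_m=4000):
    """First-fit packing of pods onto nodes by CPU request (millicores).
    A toy model of the scheduler: only requests matter, not actual usage."""
    nodes = []  # remaining free capacity per node
    for req in sorted(pod_requests_m, reverse=True):
        for i, free in enumerate(nodes):
            if free >= req:
                nodes[i] -= req
                break
        else:
            nodes.append(node_capacity_m - req)  # open a new node
    return len(nodes)

over_provisioned = [1000] * 12  # each pod requests a full core
rightsized = [300] * 12         # same pods, requests near real usage
print(nodes_needed(over_provisioned))  # 3 nodes of 4 cores each
print(nodes_needed(rightsized))        # 1 node
```

Even if every over-provisioned pod idles at 10% CPU, the scheduler still opens three nodes; only shrinking the requests shrinks the fleet.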

19:32

Yeah. Absolutely. Absolutely. Granulate

19:36

just recently launched a

19:38

SaaS service

19:40

called gMaestro, your product. Tell

19:43

us a little bit about about how it works. You

19:45

know, what are the what are the

19:47

the most common ways that people start using it

19:49

and then some of the benefits they see?

19:51

Yeah. Sure. So first and foremost, as I said, the motivation was to solve those two big challenges that the Kubernetes community has. And what is our solution, the gMaestro solution? What it actually does is continuous rightsizing, continuously rightsizing your pods and your workloads. And we achieved really good results, around sixty percent cost reduction from a rightsizing operation. And actually, it's a pretty straightforward concept: we just reduce the gap between the utilized resource set and the reserved resource set. The way we do it, in technical terms, is we just deploy a single pod into a cluster. First, we provide full visibility into the Kubernetes cluster, showing all the deployment information and the deployment utilization, whether it is a Deployment, a rollout, a scheduled set, all the Kubernetes objects. And second, a few minutes after, we provide the first recommendations, a set of recommendations on how to rightsize the workloads in your Kubernetes. We generate those recommendations by checking all the peak usage of your workload, taking into consideration out-of-memory and throttling events that already happened, and checking that you don't need to compromise on your Kubernetes SLA. That's actually how we build the recommendation. We build the recommendation for you, and we show it to you. And the continuous part of that: we allow you to apply the recommendation with the click of a button, and we make sure that every time your workload changes, it's evicted, it's pretty dynamic, we rightsize your workload again, over and over. So you always use the least resources needed to handle the load that the workload needs to handle. Okay.
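A minimal sketch of the kind of rightsizing recommendation described above: start from observed peak usage, add headroom, and widen it when the container has already hit OOM kills or throttling, so cost is never traded against the SLA. The headroom factors below are invented for illustration and are not gMaestro's actual model.

```python
def recommend_request(peak_usage, oom_or_throttle_events=0, headroom=1.15):
    """Recommend a resource request from observed peak usage.
    Extra headroom is added per past OOM/throttling event, so a workload
    that has already misbehaved gets a more conservative request."""
    factor = headroom + 0.10 * oom_or_throttle_events
    return round(peak_usage * factor)

# Hypothetical workload: peaked at 520 MiB and has one past OOM kill.
print(recommend_request(520, oom_or_throttle_events=1))  # 650
# A well-behaved workload just gets the base headroom.
print(recommend_request(200))  # 230
```

The same shape works for CPU in millicores; the "continuous" part of the product then amounts to recomputing this whenever the utilization pattern shifts.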

21:46

Good. So it's it's not a not sort of a one

21:49

time scan and and, you know, it's a it's

21:51

continuous sort of it's

21:53

watching what's going on, making recommendations, and

21:56

and are the recommendations done

21:58

on a on a dynamic

22:00

basis? Does it sort of see what's there

22:02

and and fixes it for you

22:04

or or gives recommendations back to

22:06

the to the user and and they, you

22:08

know, they they can decide what changes

22:10

to make or what you know, what areas to

22:12

evolve? Yeah. So it really depends on the user. They can take those recommendations: we provide the required YAML changes, and we provide the patch command to update the workload deployment with the recommendations we generated. But if you don't want that, we can apply them for you, and ensure, as I said it is a continuous solution, that every time the workload changes, as time passes and the utilization pattern changes as well, with different peaks and different spikes, we take that into account and make sure that with every change to the workload, the workload remains rightsized.
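Applying a recommendation as a YAML change, as described, typically comes down to a `kubectl patch` against the deployment's container resources. A hedged sketch of generating such a command follows; the deployment and container names are placeholders, not gMaestro output.

```python
import json

def patch_command(deployment, container, cpu_m, mem_mi):
    """Build a kubectl patch command that rightsizes one container's
    resource requests via a strategic merge patch (containers are
    merged by name, so only the named container is touched)."""
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": container,
         "resources": {"requests": {"cpu": f"{cpu_m}m",
                                    "memory": f"{mem_mi}Mi"}}}
    ]}}}}
    return (f"kubectl patch deployment {deployment} "
            f"--patch '{json.dumps(patch)}'")

print(patch_command("checkout-api", "app", 300, 256))
```

Changing the pod template this way triggers a normal rolling update, which is why a tool can re-apply it safely as utilization drifts.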

22:55

Now do you find do

22:58

you find some optimizations or

23:00

are sort of more immediately impactful

23:02

or, you know, some some are more

23:04

focused long term. Like, how do

23:06

once once people sort of see what's

23:09

available to help them, you know,

23:11

how do they tend to to think through it? Is

23:13

it, okay, these are these are immediate

23:15

because they're just you know, they're they're too big

23:17

and bulky and then others are

23:20

you just you're making little tiny

23:22

incremental things over time. What what

23:24

tends to happen as they as they see the optimizations

23:26

presented to them. Yeah.

23:27

So most of the time, when people see a recommendation, they want to figure out how we generated it. After checking the recommendation, auditing it, and making sure that it is a quality recommendation, they can apply it. And most of the time, after applying those recommendations, if the cluster has cluster autoscaling, the autoscaler does its magic and reduces the node count. Sometimes, when the cluster is a static cluster, it needs to be changed manually, and we provide a recommendation to change that manually as well. But most of the time, Kubernetes does its magic by itself. But I think the concept of rightsizing is: after you rightsize your workload one time, you need to make sure to rightsize your workload every time, continuously. Because if your workload wasn't rightsized in the first place, it can get out-of-memory kills, it can end up throttling, and there is also a lot of waste in terms of resources. And once you rightsize the workload, you need to make sure that it remains rightsized over time. So the first impact should be immediate, and as time goes on, you just make sure that you don't waste more resources and that you're running in the most optimized way. Gotcha. Gotcha. And

24:44

gMaestro

24:44

is deployed as

24:46

a SaaS service. Correct? Yeah. Yeah.

24:48

It's a SaaS service. You can open our platform, open the gMaestro platform, and sign up with the free offering. You can use it for free, check out your recommendations, and start using it right now, with multiple clusters and multiple applications as well.

25:06

Gotcha. And and will it work for obviously, it'll

25:09

work for workloads

25:11

Kubernetes workloads in the cloud? Will it also

25:13

work if a cluster is

25:15

on premises? Yeah. So if you have an Internet connection, over a VPC from your on-premises environment, and you can allow outbound networking, it will be no problem to connect our deployment to your platform. Okay.

25:31

That's excellent. That's excellent.

25:33

So we've talked a little bit. Obviously, we could

25:35

we could spend a lot of time going

25:38

in-depth. For people that are are looking at this and they're

25:40

going, yeah, we're beginning to see this

25:42

sprawl of clusters in our Kubernetes

25:44

environment. We're starting to, you know, get some

25:46

bills that we don't love. What are

25:48

some of the best ways for,

25:50

you know, companies to engage with not

25:52

only the gMaestro platform, but

25:54

also potentially engage with

25:56

your team. Sure.

25:57

So they can open

25:59

our website and check

26:01

the different solutions we have. There's

26:03

the runtime optimization solution,

26:06

and also the rightsizing solution,

26:08

gMaestro. And gMaestro,

26:10

it's pretty simple. You definitely need

26:12

to sign up and deploy our

26:14

gMaestro pod, and then

26:16

you already get recommendations. You can use

26:19

it on multiple clusters. You can use it

26:21

on a single cluster, whatever

26:23

they desire. And there

26:25

are solutions where you just need to schedule a

26:27

demo with one of our experts.

26:29

Excellent.

26:29

Excellent. Yeah. The other thing

26:31

I think is really interesting, you

26:33

know, Granulate is an

26:36

Intel company. Back in the

26:38

day, Intel used to make a lot of

26:40

investments to drive

26:42

more and more CPU usage, you know, video

26:44

and other things. It's really interesting

26:46

to see them evolve to now looking

26:48

at, you know, how do we be really efficient

26:51

with the CPUs that you're using, the compute that you're

26:53

using because, you know, it's

26:55

it's so easy now for people to

26:57

drive more usage. There's lots of business projects

27:00

that drive more compute usage and, you

27:02

know, AI and data science and stuff. So

27:04

it's really good to see them

27:06

being on the other side and giving people

27:08

access to tools that that make them efficient

27:10

as well. So that's exciting to

27:12

see. You know, last

27:14

question, you know,

27:17

what's the best way? You know, give us an example: what are the

27:19

best ways for folks to to engage with your

27:21

team? What are some of the, you know, kind of earliest

27:24

ways to get started, beyond just

27:26

signing up for the account? Do you see

27:28

what are maybe some of the immediate types

27:30

of things that you hear feedback from from

27:32

people like, hey, we just saved a

27:34

bunch of money or, you know, hey, we were

27:36

surprised we could optimize some things. What are some of

27:38

the early, you know, stories

27:41

you hear back from people using the

27:43

platform? Yeah.

27:43

So we launched our

27:46

platform around three months ago. We got some pretty

27:48

amazing feedback from the community.

27:50

And we have a Slack channel, like

27:52

a platform for the

27:54

community, where we take

27:56

questions from the community, and we

27:58

answer and provide any support

27:59

over there. And if

28:01

someone needs to contact us, it's a

28:03

great channel to do so.

28:06

But we got great feedback.

28:08

We already checked the platform

28:10

before releasing it to general

28:12

availability. We checked it with our

28:14

current customers internally, and

28:16

we achieved amazing

28:19

amazing cost reduction, around an average

28:21

of sixty percent. It

28:23

depends on how much the

28:25

environment was over-provisioned before using our

28:27

solution. But most of the time it

28:29

was over provisioned, as I said.

28:31

And I think the

28:33

best way to contact us is just to

28:35

join the community, join the Slack community,

28:37

talk with us. You can send me a

28:39

private message. You can send me a

28:41

LinkedIn message. And

28:44

But this will be the best way to reach us. Excellent.

28:46

Excellent. Good stuff. I appreciate the time

28:48

today. It was really, really good not only to speak

28:50

to you again, but also to learn more

28:53

about what Granulate is doing and what the

28:55

gMaestro platform is doing. So folks, as

28:57

always, we've been highlighting

28:59

more and more around Kubernetes, not

29:01

only how to run the

29:03

platform, keep you up to date with the technology. But

29:05

with that, we
