Storage Spotlight with Sean Derrington and Nishant Kohli

Released Wednesday, 14th September 2022

Episode Transcript


0:12

Hey, everyone,

0:12

and welcome to episode three nineteen

0:15

of the weekly Google Cloud Platform Podcast.

0:17

This is Stephanie Wong.

0:19

And today, I'm here with

0:21

Drumroll, please. Stephanie

0:24

Wong. Yes.

0:26

You are all in for a treat because it's just

0:28

me here today hosting, but

0:31

hey, I'm here on every other week

0:33

or so anyway. And, you know, I think

0:35

my teammates needed a well deserved break.

0:38

They're either in the heat wave of California

0:40

that we just had and they're out at the pool,

0:43

or they went to Burning Man

0:45

or they just plain wanted to take an extra

0:47

few days off with Labor Day so I don't blame them.

0:50

but we have some great content up here today.

0:52

We, I should say I, just

0:54

had a great conversation with Sean and Nishant

0:57

who are the product leads for our

0:59

storage products. And

1:01

if you're thinking, storage, really?

1:03

well, we had a lot come out. We just had a storage

1:06

launch day or storage spotlight, if

1:08

you will. It's this event online. And

1:11

they both had some great presentations about some of the

1:13

biggest launches of this year so far.

1:15

And so they're here today to talk about

1:17

any and all things storage, block storage,

1:19

object storage, storage for GKE.

1:22

There's a lot going on, so we're gonna get

1:24

them in here in just a second. But

1:26

of course, even though it's just me today, I'm not

1:28

gonna skip out on the cool things of the

1:30

week.

1:36

First cool thing in the week is something that

1:38

I cannot forget to mention.

1:41

Google Cloud Next is happening. Yes.

1:43

virtually still, but

1:46

it is open and available to register

1:48

for online at cloud dot with

1:50

google dot com slash next slash

1:52

register. Yes, we're gonna have that link in the show

1:54

notes as well, or just search Cloud Next

1:57

register. And you're gonna be able

1:59

to, for free,

2:01

join us live online along

2:03

with one million builders, dreamers,

2:06

leaders for Google Cloud Next two

2:08

thousand twenty two, and we're all gonna learn and grow

2:10

together to take on what's next.

2:12

Again, access is complimentary. I am

2:14

going to be in a couple of the sessions. I'll

2:17

let you know in just a little bit about which ones

2:19

those are, but yeah, definitely don't miss it. We

2:21

have more launches on top of what we were talking

2:23

about today for the storage. Now, I

2:25

have

2:25

two other cool things because it's me and I can do

2:27

whatever I want today. So my

2:29

next cool thing is a really awesome

2:31

blog post by one of our

2:33

developer advocates for Dataflow.

2:36

Dataflow is one of our data analytics

2:39

tools for batch and streaming.

2:42

And dataflow recently announced the general

2:44

availability of support for Apache Beam's

2:47

generic machine learning prediction and inference

2:49

transform. It's called RunInference.

2:51

So in this blog post, they take a deeper dive

2:54

on that transform capability.

2:57

And you're gonna go through an

2:59

actual full-blown example and walkthrough

3:01

of how you can use RunInference to

3:03

transform and remove large chunks

3:05

of boilerplate from data pipelines that incorporate

3:07

machine learning models. So

3:09

take a look at that. It's very detailed.

3:12

And if you are in the data engineering world,

3:14

you're gonna love this because it will expand your capability

3:17

to be able to do machine learning in production.
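To make that concrete, here is a minimal, hypothetical sketch of the RunInference transform using Beam's scikit-learn model handler; the model path and the toy inputs are placeholders rather than details from the blog post, so treat it as an illustration only.

import apache_beam as beam
import numpy
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Placeholder location; point this at your own pickled scikit-learn model.
model_handler = SklearnModelHandlerNumpy(model_uri="gs://my-bucket/models/model.pkl")

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | "CreateExamples" >> beam.Create([numpy.array([1.0, 2.0]), numpy.array([3.0, 4.0])])
        | "RunInference" >> RunInference(model_handler)  # handles model loading and batching for you
        | "PrintResults" >> beam.Map(print)
    )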

3:19

And then the last cool thing of the week that

3:22

I wanna mention is just a fantastic

3:24

blog post written by Forrest Brazeal. He's

3:26

head of content here. He actually had

3:29

me featured in one of the videos that

3:31

I wrote about how to get into cloud from

3:33

non traditional backgrounds, but that is

3:35

just one video in a series. There

3:37

are a bunch of videos talking about how you

3:39

can get into cloud from IT, how you can

3:41

get into cloud from help desk, and

3:43

how you can get into cloud without a degree.

3:46

Priyanka did that video along with Forrest

3:48

and a couple others on our team.

3:50

So definitely don't forget to check that out. If you're

3:52

interested in getting a career in cloud, these videos

3:55

give you some actual tactical tips.

3:57

Forrest has a lot of background in this area. So check

3:59

out the blog post

3:59

for a series of them. Alright.

4:02

Well, that's enough cool things from one person, I would

4:04

say. So let's go ahead and dive into our conversation

4:06

about storage with Sean and

4:08

Nishant.

4:13

Sean and Nishant,

4:15

thanks so much for joining for another

4:17

exciting

4:18

storage launch day. Thanks, Stephanie.

4:20

It's great to be back on the show. Hard to believe

4:22

it's already been a year. Yeah. Thank you, Stephanie,

4:24

for having us.

4:25

Yes. We've had Sean on last year for

4:27

the inaugural storage spotlight. And this

4:29

time around is the second one,

4:32

no less exciting than last year. Now, Nishant,

4:34

I know this is your first time on the podcast, but why

4:36

don't you start with giving us a

4:37

quick intro of what you do here at Google, and then I'll pass it

4:39

back to Sean to do the same. Sure. Thank you, Stephanie.

4:41

Yeah. I'm one of the outbound product managers

4:43

for our Google Cloud Storage product.

4:45

So excited to be on and love to

4:47

chat about key exciting things we're announcing.

4:50

like

4:50

I said, Stephanie, it's great to be here. I'm on the storage

4:52

product management team as well, looking across

4:55

our portfolio of offerings. And as you said, we've got

4:57

a number of exciting announcements that we just made

4:59

last week.

5:00

We had a storage spotlight day.

5:02

If you all listening haven't checked it out, definitely

5:04

go ahead and take a listen. We've had

5:06

you both do some talks there and so

5:08

those are up on demand, and we have those links

5:10

in the show notes. But storage, why

5:13

are we talking about storage? Because we're

5:15

talking all the time about these services and

5:17

products that are coming out in the cloud, where

5:19

storage is just sort of automated and taken care of

5:21

for you in some cases. But it's still

5:23

so hugely important for a company to

5:26

really keep top of mind. So from your perspective,

5:28

Sean, why storage? Why are you in

5:30

the storage space? Storage

5:32

is one of those unsung heroes,

5:34

if you will. Everybody needs it, and not

5:36

everybody unfortunately thinks about it from the beginning.

5:38

And one of the great things about Google Cloud is

5:40

that we have a lot of experience

5:42

of delivering massive scale applications,

5:45

continental, even global scale applications. Think

5:47

YouTube and Gmail, etcetera, which is

5:49

all really fine and good. And that's all built

5:51

on our cluster file system, Colossus.

5:54

This is just a ridiculously large capacity.

5:56

Can't even mention how big it is, but it's just like,

5:58

as big as you're thinking, it's bigger than

5:59

that. Right? This is a foundation that we've been running

6:02

applications on for years. This is

6:04

now the same basis, if you will, the same technology

6:07

that we're now building solutions that enterprises

6:09

can use for their business. And it's a different delivery

6:11

model of Google delivering a service

6:13

to an application versus delivering

6:15

a storage service that enterprises can

6:18

consume, whether it be block, file, or object, and we'll get

6:20

into some of those details. But this is all just

6:22

crazy technology stuff that really is

6:24

something that can be thought of as an afterthought

6:26

in terms of I'm just gonna subscribe to so much performance

6:29

or capacity and really take advantage of a

6:31

managed service. However, there are a lot of nuances

6:33

that have to come into play when you design

6:35

a continental scale application that

6:37

there are differences and things you have to think about

6:40

with storage. As easy as we're trying to make it,

6:42

we're not done yet. And that's some of the things that we're gonna

6:44

talk about a little bit.

6:45

Yeah. I mean, to have a storage day

6:48

dedicated to it is a testament to how much

6:50

exciting stuff is happening in this space. So

6:52

what about from you, Nishant? No, I think

6:54

Sean kind of summed it up really well. It is an

6:56

unsung hero, but everything does need

6:59

storage to function, especially

7:01

I kind of talk about storage in the sense of

7:04

working on human generated data and that's

7:06

transformed now into machine generated

7:08

data and the volume of data that's being generated

7:10

is far exceeding what we can manage. So

7:13

having capacity at scale and

7:15

a continental scale is a key attribute

7:17

that definitely excites me. I've been in

7:19

the object store industry for over a decade

7:21

plus and I don't think we're going anywhere

7:24

slower. It's just going faster and faster. The Tik

7:26

Toks of the world and other capabilities are generating

7:29

more content than we can store. So

7:31

I think that just keeps it exciting for us

7:33

capability-wise, and especially focused on the enterprise

7:35

space. Yeah.

7:36

Just from a consumer standpoint, when I think

7:38

about how I'm constantly paying a little

7:40

bit more, when I have more Google photos,

7:43

backing up all my photos. I'm like, how is this trend

7:45

gonna keep continuing for eons into

7:47

the future? So it's a big problem space,

7:49

and I'm sure in the last decade, you've seen it change

7:51

in the industry so much from a technological standpoint.

7:54

So we have these launches this year.

7:56

Can you talk about some of the themes that we've

7:58

been focusing on since

7:59

last year's storage day? What have

8:01

we been

8:02

really doing in this space? Sean?

8:04

We've really been focusing on optimization.

8:07

This is clearly, as you'd expect from a product development

8:09

perspective, these don't happen overnight. Right? So this is

8:11

really long term planning that we've had in place

8:13

about how customers can optimize their environment.

8:16

And

8:16

that's optimizing not only the cost,

8:18

but also storage utilization and performance,

8:20

and how do we optimize their own resources? because

8:23

as you started off by asking about, it's just

8:25

a managed service. that's true, but somebody

8:27

still has to check a few boxes,

8:29

make a few mouse clicks, type in a few commands.

8:32

So, of course, we still have ways to go to make it

8:34

even easier than what we have been doing today. And

8:36

so as we talk to customers, there are really kind of

8:38

three overall pillars that we are

8:40

focusing on. That's number one, enterprise readiness.

8:43

companies that are gonna bring applications into

8:45

Google Cloud, how are we gonna deliver

8:47

enterprise scale applications, both in

8:49

terms of reliability, and file store

8:52

enterprise is an example with four nines of

8:54

regional SLA availability for those critical

8:56

applications like SAP and even GKE.

8:59

or high performing applications need

9:01

with Filestore High Scale, where you have high

9:03

performance computing workloads that need

9:05

twenty six gigabytes per second of throughput

9:07

for that, or improving our block storage

9:09

with persistent disk extreme. So

9:11

there are things that companies just have to do to kind

9:14

of run of the mill, if you will, the day to day things,

9:16

but there are also a lot of data driven applications,

9:18

and they're consuming all that storage you're

9:20

referencing for the photos and so forth with

9:22

cloud storage enhancements for analytics. And

9:25

if you look at what we've done to make it

9:27

easy for companies to deliver continental

9:29

scale applications with our dual

9:31

region option for cloud storage where

9:33

they can replicate between nine

9:35

different regions across three continents, all

9:38

to a single bucket, and all through the same API

9:40

access regardless of storage class, is something

9:42

that's unique to cloud and unique to Google cloud.

9:44

And this is one of the things that we're doing for

9:47

availability of those applications in addition

9:49

to making sure that they can not only have a recovery

9:51

time objective of zero, to be able to fail over from

9:54

one region in a single bucket to actually

9:56

add that option to replicate the data

9:58

what we call turbo replication with a fifteen

10:01

minute RPO.

10:01

Right? So you have an RTO of zero

10:04

and an RPO of fifteen minutes for

10:06

petabytes of capacity or tens of petabytes

10:08

of capacity is just unfathomable.
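As a rough illustration (not something from the episode), here is what a dual region bucket with turbo replication might look like with the google-cloud-storage Python client; the bucket name is a placeholder, "NAM4" is one of the predefined dual region codes, and the rpo property assumes a reasonably recent client library version.

from google.cloud import storage

client = storage.Client()

# Create a dual region bucket; "NAM4" pairs Iowa and South Carolina.
bucket = client.create_bucket("example-dual-region-bucket", location="NAM4")

# Opt the bucket into turbo replication, the roughly fifteen minute RPO tier.
bucket.rpo = "ASYNC_TURBO"
bucket.patch()

print(bucket.location, bucket.rpo)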

10:10

And then lastly, really around storage everywhere.

10:12

So this is really the notion of yes,

10:14

a lot of applications are coming to Google

10:16

Cloud, but there is still an edge presence

10:19

on premises. And that's where our

10:21

Google distributed cloud edge appliance comes into

10:23

play where you have companies like Nuro

10:25

that have distributed on premises

10:28

workloads. They have their autonomous

10:30

driving vehicles collecting data all day.

10:32

They pull into the shop, they upload the data to the appliance,

10:34

and it's really a transition into getting that data

10:36

into the cloud for further training in

10:38

AI ML workloads. And it's not a one-time

10:40

transfer; now it's more of a daily occurrence,

10:43

and so it's bridging that cloud

10:45

boundary, if you will, down to the customer's

10:48

data center.

10:49

Just wrapping my head around what you just said,

10:51

twenty six gigs of throughput needed and

10:53

also having storage options with a fifteen minute

10:55

RPO for petabytes of data. Can we

10:57

just sit with that for a second? Because it is pretty impressive.

10:59

Just thinking back to where we've started when it

11:01

comes to storage, like, when the first storage

11:03

area networks were developed and where we

11:06

are today with having multiple users and

11:08

the type of latency needs that we have

11:10

today,

11:10

it's quite impressive. And

11:13

Stephanie, those are the things we talked about largely

11:15

last year and have delivered just this year. So we're not even

11:17

talking about the new stuff from this year's storage spotlight

11:19

yet. These are all things that we've been

11:21

busy with this year, even ahead of the announcements last

11:23

week. So I think

11:24

that's a perfect segue because I was gonna ask about some

11:26

of the key launches and what are

11:28

we talking about this year? Maybe start

11:30

with things that kind of fall into the enterprise readiness

11:33

space.

11:34

It's really around what I would characterize as block

11:36

storage. Right? If you take the typical application, I'm gonna

11:38

run SQL Server, I'm gonna run SAP

11:41

in Google Cloud, how do we make it easier

11:43

for customers to optimize their deployment? so

11:45

that's where we announced the next generation of persistent

11:48

disk called Hyperdisk. So

11:50

Google Cloud Hyperdisk is really the option

11:52

that gives customers three levers to

11:54

turn, if you will: IOPS, throughput,

11:57

and capacity. And so you're going to

11:59

start to see this

11:59

rollout next quarter in Q4 this year.

12:02

particularly focused on the higher end workloads

12:04

as it'll be called Hyperdisk Extreme. As the

12:06

name implies, it's gonna fill the most demanding

12:08

database workloads where you've got

12:10

to get the sizing right. Right? Coming back to what

12:13

you said at the opening, Stephanie, about sizing an

12:15

application, what makes storage interesting is

12:17

that people have to think about it from the beginning. And

12:19

historically, They've gone through a long planning

12:21

cycle about iOPS and trying to figure out what

12:23

the application is gonna need from the

12:25

business, not only in day one, but also

12:27

day three sixty five. and how does that change

12:30

over time. And so with Hyperdisk, we're really giving

12:32

customers that option to optimize and

12:34

make it easy to provision IOPS, throughput,

12:36

and capacity on day one, but as the

12:38

application changes over time, tune

12:40

those knobs. Make those adjustments as needed without

12:43

going through a lot of arduous tasks of

12:45

migrating data around. And the other

12:47

is really around Filestore Enterprise with GKE.

12:49

You know, we're seeing more and more stateful workloads

12:51

running in Kubernetes, and that

12:53

really depends upon availability

12:56

of data because

12:56

you can't have a stateful workload that

12:58

you can't, number one, back up, or have

13:01

multiple pods have access to the same

13:03

data, right, for failover and availability reasons.

13:05

And so now that same four

13:07

nines SLA of regional availability with Filestore

13:10

Enterprise, we've brought to GKE

13:12

already, but now we're actually optimizing how

13:14

customers can maximize their storage utilization

13:17

and carve up a hundred different shares of capacity

13:19

for a hundred pods to be able to write

13:22

to the same data at the same time. And

13:23

that really changes the flexibility

13:26

that customers have to deploy stateful

13:28

workloads in GKE. Yeah,

13:30

that's been an initial challenge I think

13:32

with containerized environments as having to have these

13:34

modularized architectures that need to

13:36

access the same storage in order to have stateful

13:39

applications be deployed on Kubernetes. So

13:41

it's like we've matured a lot in

13:43

this space, and storage has been a huge part

13:45

of that calculation. Nishant, is there

13:48

anything else that stood

13:49

out to you in this storage day for enterprise

13:51

readiness? Oh, yeah. There's a lot of exciting stuff

13:53

that we've introduced in our cloud storage

13:55

world. And Sean talked about just volume

13:57

of data that we have to manage now. Customers

14:00

are taking advantage of our cloud storage at a continental

14:02

scale, but at the same time, they're worried about

14:05

cost. So the three pillars that we

14:07

talk about: cost, performance, and capacity.

14:09

One of the key aspects is cost. How do I

14:11

manage my cost at billions and trillions

14:14

of objects, or petabytes and exabytes

14:16

of data. That seems to be a daunting

14:18

task for our customers. And often, they

14:20

don't know what workloads are coming

14:22

to our object stores. They're just

14:24

turning on applications or bringing in applications

14:26

from business use case perspective. So

14:28

managing that whole aspect of

14:31

what application creates what kind

14:33

of workload, and managing what data should

14:35

go in what class of storage, tends

14:37

to be pretty daunting. So we are now introducing

14:39

what we call our Autoclass capability

14:42

for cloud storage, where it's really

14:44

our easy button. You just set it and forget

14:46

it. And now, based on your

14:48

access patterns, we'll ensure that

14:50

your data is sitting in the right storage class

14:53

based on your access, and we'll make sure

14:55

that we're optimizing your costs along the

14:57

way. So a lot of our customers have to

14:59

start to figure that out at the get go, where

15:01

they have to understand the application workload

15:04

and start to plan that. And so we

15:06

wanna take that planning away or

15:08

at least reduce that aspect of

15:10

planning. So Autoclass really

15:12

helps set us apart from our competition

15:15

and also helps our customers. Now

15:17

I think of it in two ways. We do

15:19

give our customers flexibility to

15:21

be what we call the advanced users.

15:24

Do it yourself. They can build their own

15:26

patterns and usage and decide which

15:28

storage class they wanna retain the data in.

15:30

that's perfectly fine. So you continue

15:33

having those advanced users, but the ones

15:35

that just wanna have simplicity and

15:37

ease but still want their costs managed on

15:39

their behalf, Autoclass does that

15:41

for them. So from that perspective, the

15:44

capability gives them a great

15:46

scale, but also the flexibility and

15:48

simplicity that we can provide. It'd

15:51

be hard to achieve that on your own.
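For a sense of the two approaches, here is a hypothetical sketch using the google-cloud-storage Python client; the bucket names are placeholders and the autoclass_enabled property assumes a recent library version, so check your client's documentation before relying on it.

from google.cloud import storage

client = storage.Client()

# The set-it-and-forget-it route: create a bucket with Autoclass enabled so
# objects move between storage classes based on observed access patterns.
auto_bucket = client.bucket("example-autoclass-bucket")
auto_bucket.autoclass_enabled = True
client.create_bucket(auto_bucket, location="US")

# The advanced, do-it-yourself route: an explicit lifecycle rule that demotes
# objects to COLDLINE once they are ninety days old.
diy_bucket = client.get_bucket("example-advanced-bucket")
diy_bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
diy_bucket.patch()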

15:52

And I have a lot of thoughts about what you said because

15:54

when it comes to cost optimization with storage,

15:57

we actually had an episode on FinOps,

15:59

which is a kind of growing area, especially with

16:01

regards to cloud. And so it's this intersection

16:03

between finance and operations. And so

16:06

the more that we can automate these things and do

16:08

these recommendations and Autoclass-related

16:10

features we have to optimize your

16:12

storage, which can be a larger operational

16:15

burden for OpEx. This is great.

16:17

I mean, I haven't seen it across industry,

16:19

and I think you just mentioned it might be one of our unique

16:21

launches this year, too, so that's super exciting.

16:24

And when it comes to the fifteen minute turbo

16:26

replication and having multi regional single

16:29

buckets. I heard this phrase the

16:31

other day. It's like, when startups are thinking about

16:33

choosing a cloud provider, choose a provider

16:35

and first start to think about multi region,

16:37

not multi cloud. And that's what you really

16:39

need to prioritize when it comes to having

16:42

that enterprise readiness and availability

16:44

before you start to think Okay. Let me just choose across

16:46

all these providers. It's like, can we get

16:48

one that really gives us the capability to be multi

16:50

region?

16:51

Spot on. And then lastly, another

16:53

amazing feature that you introduced is really

16:56

around our Storage Insights. Now when we talk

16:58

about the volume of data you're generating,

17:00

As much as we can automate, there are certain aspects

17:02

we cannot. So we do want to give

17:04

the customer insight into what type

17:07

of data they're storing. So our initial launch

17:09

of Storage Insights is gonna be coming this

17:11

year or early next year as well, which is

17:13

giving them knowledge of what capacity they're

17:15

storing in those trillions of objects.

17:18

What kind of data are they storing? Can we give

17:20

them the visibility into that? Are there

17:22

any patterns in that storage data

17:24

that they can then act on? And also

17:26

other aspects could tie into you talked about

17:28

FinOps and the compliance space. Customers

17:31

are looking at insight of what type of data

17:33

I'm storing in my buckets. Am I

17:35

giving access to customers that shouldn't

17:37

have access to that data? Should that data

17:39

not be publicly accessible? There could be

17:41

PII data in there? So really all

17:44

those aspects, our Storage Insights

17:46

service to begin with, can give them

17:48

kind of directional guidance, not just

17:50

from a managing capacity perspective,

17:52

but managing information. So we really

17:54

wanna grow from, of course, cloud

17:56

storage, which is all about storing data at massive

17:59

scale, but we wanna get to managing information

18:01

for our customers and then giving them insight

18:03

how they can operate their business better,

18:05

how they can manage their application better,

18:08

how they can optimize their application for

18:10

best workload. So for example, they

18:12

have analytical workloads. We talked about

18:14

dual region, how we can co locate compute

18:17

and storage together. So getting that

18:19

depth of knowledge from their data, they can then

18:21

decide what's the right storage class

18:23

for them or right storage operations

18:25

they wanna do and cool features and functionalities

18:28

that we're introducing, and when to use them.

18:30

And then also thinking about from a security

18:32

perspective, some of the challenges we've heard

18:34

from our customers are, hey, am I managing

18:37

all my data across with the right

18:39

keys, encryption keys, across the board? Are

18:41

we rotating them enough? So having

18:43

that global policy level

18:45

of information across all their buckets is another

18:48

key attribute that they can take advantage of from our Storage

18:50

Insights service. There's

18:51

a lot happening clearly. And recently,

18:53

we just had an episode on GKE because they

18:55

just turned seven. And we have much

18:57

more exciting features coming out with that product,

18:59

but I know you were talking a little bit about what

19:01

we have for storage when it comes to GKE

19:03

Sean? What are we announcing around

19:05

GKE and the theme of storage everywhere?

19:08

Yeah.

19:08

Happy belated birthday to us. Right, Stephanie?

19:10

Yeah. So I talked about

19:12

Filestore Enterprise. So that's kind of on the storage

19:14

side for those stateful workload deployments. But the

19:16

other part that goes along with this is having

19:19

to be able to recover an application that you really care

19:21

about. If you can't recreate the application or

19:23

the data easily, you've got to back it up. Right?

19:25

And we've been doing this for a long time in

19:27

VMware on prem or GCVE in

19:30

Google Cloud or GCE. But

19:32

now we're actually delivering this for Kubernetes.

19:34

And this

19:35

is actually something that is our

19:37

own development. We developed this to

19:39

be integrated directly within the cloud console.

19:41

So now, with a couple mouse clicks, you choose

19:43

which namespaces or pods you wanna protect,

19:46

whether you wanna protect them locally or in another

19:48

region. And then on the recovery side, you have

19:50

that same granularity of I'm gonna recover

19:52

everything, I'm gonna recover a portion of my

19:54

environment. And you really have to

19:56

have that forward thinking about protecting

19:59

the applications, having the Filestore

20:01

data be available across multiple

20:03

pods for availability. If you're

20:05

using persistent disk with GKE,

20:07

right, you back up that data with Backup for GKE.

20:09

And so it's a really changing of a mindset

20:11

about being able to run more and more different

20:14

applications that are typically more important,

20:16

if you will, in terms of being stateful, but not necessarily,

20:18

but it really is supporting that customer journey

20:20

throughout their transformation and optimization process

20:23

because not every customer is gonna go directly to

20:25

GKE. They're gonna have a portion of things that are gonna

20:27

run in VMs, and others are gonna be running

20:29

containers. Yeah.

20:30

That's a good point. It's usually very much a

20:32

heterogeneous environment, and so it needs to be

20:34

adaptive in terms of being able to support these

20:36

different deployments. So I know

20:38

we mentioned security just a moment ago

20:41

with storage insights, but is there any more you

20:43

can talk about when it comes to the security of

20:45

data?

20:45

Yeah, it really comes back to the heterogeneity

20:47

you mentioned, Stephanie, in that Backup for

20:49

GKE is obviously protecting that environment.

20:51

However, you've got to think about everything else.

20:53

and this is where we're announcing Google Cloud

20:56

for backup and DR. This is also now

20:58

directly integrated into the cloud console.

21:00

For those of you familiar, this is from the Actifio

21:02

acquisition that we made a little bit ago,

21:04

but now it's a fully managed service. So there's

21:06

nothing else for a customer to install. Just

21:08

point and click which applications you wanna protect,

21:11

where you wanna protect them and what that policy looks

21:13

like. And so for GCVE with

21:15

VMware, or GCE and VMs,

21:18

databases and applications, we can now protect

21:20

applications running in the cloud consistently. You

21:23

have the protection for GKE applications. So

21:25

you're really providing that consistency for

21:27

security and data protection across

21:30

all of the environments, across the heterogeneous deployment

21:32

you're mentioning. But then the other thing with

21:34

backup and DR that we're announcing is

21:36

that, yes, the in-cloud protection is there.

21:38

But if you're running VMware on premises, you

21:41

can use the same capability to protect those

21:43

VMs on premises and back them up to Google

21:45

Cloud. So as you're thinking about

21:47

potentially a hybrid deployment and

21:49

a migration strategy of what's gonna be moved when

21:51

into Google Cloud, you have that option to backup

21:53

from on prem to the cloud, and within the cloud.

21:56

You

21:56

have options. Yeah. I love it.

21:58

We talked a little bit about cost optimization

22:01

earlier in the conversation, but I wanna circle back

22:03

to it just because I know storage, as I mentioned,

22:05

is such a fundamental building block, but there's

22:07

also ballooning storage, which we said, and

22:09

that comes with ballooning cost and OpEx

22:11

especially in today's age is tightening up for a

22:13

lot of companies. So what are we doing to help with

22:15

cost optimization?

22:16

Yeah. So I would say couple of things,

22:18

Stephanie, on the operational side. Nishan

22:21

talked a little bit about auto class where we've

22:23

now given them the easy button, if you will. So

22:25

they only can apply this auto class policy

22:27

to an entire bucket very simply.

22:29

And that can manage billions or trillions of files

22:32

and move data down. And if you think about

22:34

the cost of standard all the way down

22:36

to archive, Archive

22:37

is a fraction of the cost of standard.

22:39

And as cheap as cloud storage is,

22:41

if you're taking, you know, ten, fifteen, twenty percent

22:44

the cost of standard, that's a significant cost savings

22:46

over time. Right? that's number one. Number two,

22:49

I've talked about hyper disc and how giving

22:51

customers three knobs to turn between

22:53

IOPS throughput and capacity. so they

22:55

can right size that. But then also, we're introducing

22:57

storage pools for hyperdesk. And this

22:59

is gonna be the notion of, think

23:02

of thin provisioning now in the cloud.

23:04

So we've done this on premises for years where

23:06

you provision block storage and you over provision

23:08

and you actually are using less physical storage.

23:11

Same notion here. We're now giving customers that

23:13

flexibility. to optimize the storage utilization

23:16

with hyper disc and storage pools so

23:18

that they can actually, as you'd expect, save

23:20

some money and and actually rightsize how those

23:22

applications are consuming storage. And then also

23:25

with with file store enterprise and multi shares,

23:27

carving up file store instances to

23:30

be more granular and more increase that storage

23:32

utilization with smaller shares

23:34

just helps to right size again those pod deployments.

23:37

with GKE.
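As a rough sketch of the three-knobs idea with Hyperdisk (not an official recipe from the episode), here is what provisioning might look like with the google-cloud-compute Python client; the project, zone, disk type name, and the provisioned_iops and provisioned_throughput fields are assumptions based on the public Compute Engine API, so verify them against the current reference before using them.

from google.cloud import compute_v1

project, zone = "example-project", "us-central1-a"  # placeholders

disk = compute_v1.Disk()
disk.name = "example-hyperdisk"
disk.size_gb = 1024                 # capacity knob
disk.provisioned_iops = 50000       # IOPS knob
disk.provisioned_throughput = 1200  # throughput knob (MiB/s), where the disk type supports it
disk.type_ = f"projects/{project}/zones/{zone}/diskTypes/hyperdisk-balanced"

operation = compute_v1.DisksClient().insert(project=project, zone=zone, disk_resource=disk)
operation.result()  # wait for the disk to be created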

23:38

Very nice. I love the themes of rightsizing and

23:40

auto classing. This is a trend

23:42

that is definitely super valuable.

23:45

And of course, I know that for many

23:47

of our launches, we don't do this alone.

23:49

We take so much feedback from

23:51

our customers to help guide where we wanna

23:53

take the products next. Of course,

23:54

this is not an exception. So

23:56

were there any customer use cases and stories

23:58

that you have from some of these

23:59

launches? Yeah.

24:00

No, Stephanie. That's valid. Like many launches

24:03

we had prior, we do work closely with

24:05

our customers. And similarly here, Redivis

24:07

is one of our customers; they're in the research

24:09

industry. They aggregate research data, and

24:11

then they provide that to the end user or the applications.

24:14

be it from an analytics perspective or

24:16

a research perspective. And

24:18

one of the challenges that they generally have

24:21

is they don't know when that research

24:23

data is important. Now, they have to

24:25

retain that data forever, but

24:27

in early stages of research that data

24:29

is fairly active, but then eventually becomes

24:31

dormant. So they wanna be able to have some sort

24:34

of capability, i.e., Autoclass. And

24:36

based on the access patterns where they don't have

24:38

to continuously poll the users, are you done

24:40

with this data? Just migrate or move

24:42

that data to the right storage class and

24:45

start saving them costs. And

24:47

one thing that struck us when we were talking

24:49

to them is it's more about human

24:51

failure or accidents that happen

24:53

where when we do move the data

24:56

to the lower cost storage,

24:58

but, you know, when you access that data,

25:01

it starts to incur more

25:03

cost to you because there are operation

25:05

charges there. So with our Autoclass capability,

25:07

we automatically move that data to standard

25:09

storage and ensure that on subsequent

25:12

reads, you're not paying that heavy penalty

25:14

of retrieving that data. So not only

25:16

are we automating getting you to the lowest

25:18

cost tier, but at the time of access

25:21

and when you wanna get that data, we'll

25:23

move it to the right storage class or the more

25:25

of the hotter storage class and ensure that you

25:27

can use that data at any given time.

25:29

Throughout this whole thing, you know, the application

25:32

doesn't need to change. It's a single API.

25:34

And even at our lower cost

25:36

storage, the latency to read that data

25:39

is milliseconds. So from the end

25:41

user perspective, they don't have to change their architecture.

25:44

They don't have to change their application. They

25:46

get the same functionality they were getting

25:48

regardless of what storage class it's sitting in.

25:50

So from their perspective, it

25:52

made their life as a storage admin

25:54

or the owner of the infrastructure, so

25:57

much simpler that they can then focus

25:59

those engineering resources in the

26:01

part of the business that matters, which is really

26:03

collecting that research data. One

26:05

of the things the customer said is, once this

26:07

thing is GA and ready to go, they

26:10

expect ninety percent of their data will be managed

26:12

under this capability, which is a huge

26:15

saving from their perspective, and I think they

26:17

can take huge advantage of this capability as

26:19

it comes out. Yeah.

26:20

No. That's amazing. I was gonna say that

26:22

I'm sure the red flag that people

26:25

raise when they hear Autoclass is, that sounds great,

26:27

but if it's gonna affect how easily

26:29

I'm going to be able to pull that data

26:31

back down, if there's cost differences,

26:33

if it's gonna be difficult for my application

26:36

to be compatible with such differences,

26:38

and that's gonna be a big decision factor.

26:40

But if it's easy and seamless, then

26:42

why not? Right? That's

26:44

great. So can you step

26:47

back real quick, Sean, and summarize some of

26:49

these launches and where they fit into our broader

26:51

portfolio to help

26:53

our customers secure their data, simplify

26:55

operations, and optimize the cost.

26:57

If you take a look at what we've done, it's really across

26:59

the portfolio of the storage that we have.

27:02

A lot of different applications are gonna come to the cloud.

27:04

Not everything is gonna be leveraging object storage.

27:06

Not everything is gonna be leveraging block storage. So

27:08

you have to really think about what we can do to optimize

27:10

deployments as customers are thinking about their enterprise

27:13

readiness or the data driven applications

27:16

or if you look at storage everywhere. And

27:18

so if you think about block storage, introducing

27:20

Hyperdisk, the next generation persistent disk,

27:23

gives customers knobs to tune the performance

27:25

and capacity. We've announced capabilities

27:27

around Filestore Enterprise to right size that storage

27:30

utilization within stateful workloads for

27:32

GKE and multi writer deployment scenarios.

27:34

We've got the data protection side of things.

27:36

backup for not only GKE, but also

27:39

Google Cloud backup and DR for GCVE

27:41

and GCE and other databases. then

27:43

we also have Cloud Storage and the Autoclass you were just

27:46

talking about, making it easy to tier data

27:48

back and forth between the different storage classes

27:50

without the additional charges. So optimizing that

27:52

placement and giving customers the information

27:55

that they need to make informed decisions about

27:57

what objects are where with Storage Insights. And so

27:59

we've certainly been busy again across the portfolio

28:02

of block file, object, and data protection

28:04

this year.

28:05

Yes, you certainly have. And to anyone

28:07

who's questioning, like, storage? We're talking about storage?

28:10

Yes. There's so much that has to do with

28:12

storage. It's so fundamental still. And so

28:14

thank you so much. Is there anything that you wanna mention

28:16

before we start to wrap up? I think

28:18

you mentioned at the top of the show, if you didn't

28:20

have a chance to see Storage Spotlight, it's available

28:23

on demand. Yes. For

28:24

anyone who is listening out there, We

28:26

have the links in the show notes, storage spotlight,

28:28

second year in a row. We're gonna keep with this trend because

28:31

clearly there's a ton happening. And Sean

28:33

and Nishant, I just wanna thank you

28:34

again for coming back on. No, thank you for

28:36

the time. Yeah.

28:37

Thanks Stephanie. Alright.

28:39

I wasn't wrong, right? Tons of stuff happening

28:41

around storage when it comes to optimization: cost

28:43

optimization, capacity optimization, performance

28:46

optimization. We talked a lot about

28:48

rightsizing, auto classing,

28:50

Filestore Enterprise, Hyperdisk. We

28:53

have lots going on around the theme of simplicity

28:55

and ease of use with data protection. We have

28:57

GKE backup and Google

28:59

Cloud Backup and Disaster Recovery. And

29:02

I love the use case that they went

29:04

over when talking about Redivis. So

29:06

I'm really glad that we got Sean back on and included

29:09

Nishant this time around. It's gonna be an annual

29:11

tradition that we just have Storage Spotlight and

29:13

have them jump on because I know a lot of you

29:15

might not have the time to check out

29:17

the full on demand video of that happening.

29:19

And so the podcast is such an easy way

29:21

to get the product team on and give you a great

29:24

summary of some of the exciting things that

29:26

are happening in the storage space. Now,

29:29

I know we always talk about what we're working on, and

29:31

so I just wanna give a quick update that I

29:33

am doing some exciting video content with

29:35

one of our latest partners, and

29:38

I'm not gonna give too much away, but you're

29:40

gonna see it at Next, a

29:42

cool partner announcement, and so I'm gonna

29:44

hopefully be able to showcase that video

29:46

soon. And then in terms of next, I'm

29:48

also doing two sessions. One

29:50

is for networking, which if you've

29:52

seen my videos, you know that I love talking about

29:54

networking. And this session covers how

29:56

you can simplify and secure your network

29:59

for all workloads

29:59

in the cloud. So we're gonna be covering

30:02

all of the announcements related to

30:04

networking. And then I am

30:06

also in a session for

30:09

our partner ecosystem, where I bring on

30:11

Rubrik and Exabeam to talk about how you

30:13

can take on security in the

30:15

cloud using some of their tools

30:17

alongside the analytics capabilities

30:19

in Google Cloud. So check

30:21

that out. I know it might not be everyone's

30:24

cup of tea for the sessions I'm in, but we got a

30:26

ton coming from our VPs, GMs.

30:28

Obviously, Thomas Kurian. Priyanka's in

30:30

some too, so definitely come hear from these

30:33

leaders. And I'm gonna keep

30:35

reminding you about next for the next several weeks. Sorry.

30:37

I'm not sorry. But for

30:39

now, This is Stephanie Wong, and I will

30:41

see you all next week.
