Episode Transcript
0:00
If you're somebody that wants to utilize
0:02
a fleet of H100s or, soon
0:04
, the GH200 platform , not only do you
0:07
need to commit to spending a lot
0:09
of money , but in many cases , if you
0:11
go to the bigger cloud providers
0:13
today , you need to actually commit to a year
0:15
or three years of spend with them before
0:17
you can even get access to these GPUs . And so
0:19
this is sort of that inflection point, I think, where a
0:21
solution like Akash really shines. For example, if
0:26
you were to go to akash.network/gpu right now, what
0:29
you would find is that you can
0:31
get an H100 with an SXM
0:33
interface for as low as
0:35
$1.50 , which , I believe
0:37
, is, you know, half or less than half of
0:39
what you can get at many of the other places out there .
0:59
Hi there , welcome to Buidl Crypto
1:01
Today . I have with
1:03
me Anil from Akash
1:06
. Welcome , Anil .
1:07
Hey , it's great to be here . Thanks for having
1:09
me .
1:17
Great to have you.
1:19
So, Anil, could you talk a little bit about yourself? Yeah
1:22
, sure, I'll go into that a little bit.
1:24
So my background: my education
1:26
in college was
1:28
in electrical engineering. So
1:31
I have a background in electrical engineering undergrad, and
1:33
then , even though I have a graduate
1:36
engineering degree in electrical
1:38
engineering , I actually focused
1:40
on computer networks and so a lot
1:42
of my coursework there was computer
1:45
science classes , a little bit of electrical
1:47
engineering as well , and
1:49
then for the first several years I'd
1:51
say probably about half of my career
1:54
, which is about two decades at this point I
1:57
spent my time working
1:59
on embedded software so this
2:02
is typically device
2:04
drivers for embedded devices
2:06
and so I spent
2:08
a bunch of years doing that at companies
2:10
like Motorola , working on consumer
2:12
electronic devices and writing
2:15
essentially what's called the hardware
2:17
abstraction layer for devices like
2:19
that . And then
2:21
at some point along that journey I
2:23
got more and more , you
2:27
know , towards the
2:29
customer-facing portions of the
2:31
business , and that's where I realized that
2:33
you know , understanding
2:35
the whole product lifecycle and
2:38
identifying a problem
2:40
in the existing
2:43
customer base or , you know , sort
2:45
of a gap in the market was something that
2:47
I found exciting, and that sort of took
2:49
me towards looking at product
2:52
roles as a potential transition
2:54
point in my career , and
2:56
somewhere along there I also ended up going to business
2:58
school while I was working , and
3:01
you know , put that together with the
3:03
experience that I had as an engineer I
3:06
ultimately ended up moving into
3:08
a product role , initially
3:10
working for mid-sized companies
3:13
and then startups , and then macro mid-sized
3:15
companies , and
3:17
then , over the second half of my career
3:19
as a product person , I
3:22
got to work on everything from networking
3:25
and hardware devices to cloud networking
3:28
devices to pure cloud
3:31
and then , ultimately , monitoring
3:33
and telemetry type companies . So
3:36
I spent some time at companies like New Relic
3:38
and HashiCorp prior to coming to Akash.
3:40
And
3:44
so , given that I had a whole bunch of
3:47
background
3:49
in building solutions for cloud-native
3:51
products and cloud-native customers , when
3:55
I was approached by the folks at Akash or
3:57
at Overclock Labs , which is the
3:59
parent company above Akash Network and
4:02
the creator of Akash Network
4:04
as well , the
4:07
project really attracted me because I
4:09
looked at it as a significantly new
4:11
way of imagining how infrastructure
4:14
gets utilized , and
4:16
just looking at how the clouds
4:18
have evolved over the last couple of decades
4:21
, it became very clear to me
4:23
that the original
4:25
premise that the clouds were created for
4:27
is no longer
4:29
valid in many cases
4:32
, and what
4:34
Akash was doing seemed to be in
4:36
the right direction in terms of seeing
4:40
where the industry was going to go in the next few years , and
4:42
that's what got me excited about Akash and got me
4:44
joining Overclock Labs .
4:47
I do remember Greg, in one
4:49
of his interviews, saying that the story
4:53
of tech is the story of the
4:55
cloud, and
4:57
rightly so. And
5:00
based on your
5:02
experience , you would naturally gravitate
5:04
towards the
5:07
Akash version of the cloud
5:09
. So could you talk
5:11
a little bit more about that ?
5:13
Yeah , so you know , like you said
5:16
, having been in the tech industry for
5:18
the last couple of decades , for
5:21
the first few years of my career
5:23
, you know , cloud
5:26
was either non-existent or very nascent
5:28
and most companies were running
5:30
all of their software on-prem
5:32
and offering it as a service
5:34
or just shipping software
5:37
as binary images that their customers would
5:39
utilize , and so this was like the early
5:42
to mid-2000s, right?
5:44
And then
5:46
when the concept of a cloud was invented back
5:48
in I don't know , 2006 , 2007
5:51
, by Amazon, and it really
5:53
started to gain traction , probably four or five years
5:55
after that , where you saw , you
6:03
know , there was this inflection point where , initially , cloud was primarily targeted towards , you know
6:05
, the startups and the small and medium businesses , and then , eventually
6:07
, the enterprises realized that this
6:10
is something that they can utilize as well . That
6:13
transition probably happened somewhere around 2011
6:16
, 2012 , which is kind of the
6:18
time you can consider the cloud
6:20
going mainstream and beginning enterprise
6:22
adoption from then on.
6:24
I think for the first few
6:29
years when the cloud came up , the
6:32
big draw towards the cloud was that it essentially leveled
6:34
the playing field for startup
6:37
companies . So if you were a startup
6:39
company, you
6:41
know , around the time that the dot-com boom
6:43
happened , or in the early
6:46
2000s , if you wanted to build
6:48
a software service
6:50
and offer it to your customers , you
6:52
had a fairly large upfront investment
6:55
to make , and the
6:57
cloud essentially enabled
7:00
startups to be able to , you
7:03
know , get
7:07
to market much faster at a much lower cost by taking away that capital expenditure that they
7:09
needed to spend on infrastructure , on servers
7:11
and storage and all of that , and
7:13
also for resources to manage all that
7:15
infrastructure . And
7:18
so that was great and you know , we've obviously
7:20
had a really good run in terms of a lot
7:22
of startups being able to test out products at a
7:24
really low cost , find product market
7:27
fit or , in some cases , not find product
7:29
market fit and decide to abandon the idea , do
7:31
something else, and so it really leveled
7:33
the playing field for them and allowed a lot
7:35
of startups to disrupt the status quo
7:38
and ultimately bring value to the customers
7:40
. But
7:48
what's happened in the last few years , particularly as GPUs have really taken off thanks to all the
7:50
demand from AI and machine learning workloads in the last few years
7:52
, particularly since OpenAI's ChatGPT
7:52
moment. What has happened is
7:56
that we're sort of going back to
7:58
the traditional ways where
8:01
, given the scarcity in the
8:05
availability of GPUs , particularly certain
8:07
high-end models like the A100s
8:10
and the H100s , and soon
8:12
the GH200s and the B100s
8:14
from NVIDIA , not
8:16
only are these significantly more expensive
8:18
, but in many cases
8:20
they're just really hard to get.
8:23
And so if you're somebody
8:26
that wants to utilize , you know , a
8:28
fleet of H100s or , you
8:30
know , soon , the GH200 platform , not
8:33
only do you need to commit to spending
8:37
a lot of money , but in many cases
8:39
, if you go to the bigger cloud providers
8:42
today , you need to actually commit to
8:44
a year or three years of
8:46
spend with them before you can even
8:48
get access to these GPUs . So
8:50
if you sort of go back to the start
8:53
of the cloud and compare that to
8:55
where we are today , the whole
8:57
premise of the cloud , which is remove
8:59
that capital expenditure , the upfront
9:02
expenditure that you've got to do , and
9:04
give you the flexibility to scale
9:06
up and scale down without having to take on the
9:08
ongoing expense, that
9:11
sort of goes away if you have to commit
9:13
to a year of cloud expenditure
9:15
in order to get access to a certain piece
9:17
of hardware. And so
9:19
this is sort of that inflection point, I think, where a solution
9:21
like Akash really shines, and that's what we've
9:24
been seeing with a lot of our customers and users
9:26
as well .
9:27
Thanks for that , anil . So to follow
9:30
up, a follow-up question for that is
9:32
how Akash makes
9:34
GPUs more
9:36
accessible, as you rightly pointed out
9:38
. Right, it's harder to get
9:40
hands on H100s
9:43
and the higher-end GPUs NVIDIA is
9:45
coming out with. That's
9:48
my first question. And I
9:51
remember Greg also mentioning
9:53
Akash being suitable also
9:56
for the small language models . So
9:59
yeah , if you could expand on that .
10:01
Yeah, absolutely, would love to dig into that. So
10:04
there's a few different things that are at play here
10:06
and we kind of saw this coming , you
10:08
know, a year, year and a half ago, and that's
10:10
kind of why we, you know,
10:12
specifically
10:15
focused on and really doubled down on
10:17
that strategy, and it was kind of
10:19
, I think , driven by two or three things , if I can
10:21
frame it that way . The
10:37
first was , you know , it was very clear that there was
10:39
going to be a huge amount of demand for GPU workloads because of the
10:41
growth in the amount of applications that are going to get built
10:43
in the next few years . That was very clear . I think
10:45
it became more and more clear after the ChatGPT
10:48
moment, but it was clear to many people
10:50
even before that that there is going to be some
10:52
point in time whether it was going to be
10:54
six months , one year , two years , there
10:56
was going to be some point in time where this was going to happen . So
10:59
that's kind of number one . The second
11:01
thing was that
11:09
we also realized that, even though initially when the ChatGPT
11:11
moment happened, it almost seemed
11:14
for a little bit, you know, maybe a month or two,
11:16
that OpenAI was going to be the only
11:18
game in town and they were going to basically
11:20
suck all the oxygen out of everything else
11:22
and everybody was going to be just building on OpenAI
11:24
. And that sort
11:26
of went back again to history
11:29
repeating itself , which was , if you've been
11:31
around in tech for long enough , or if you have read
11:33
about technology
11:36
history even if you've not been around for that long
11:38
, what you have seen is that
11:40
there's always been points in
11:43
technology history
11:45
where , even
11:47
if a certain technology gets invented by a really
11:49
big player and is
11:51
initially only
11:54
available through that specific player , over
11:57
time there are
12:00
enough movements in
12:02
communities around the world that
12:04
lead to open source solutions.
12:07
Arguably
12:13
the biggest example of that , historically speaking , is the Linux operating system . So you know
12:15
, way back in the day , in the 90s , obviously Windows
12:17
was the most dominant operating
12:19
system out there , and
12:21
today , if you look at most server
12:24
workloads , as well as a lot of consumer electronics
12:26
and many
12:29
other services that you access through
12:31
SaaS services , all
12:33
of them underneath run Linux, and
12:47
that's as a result of communities that build in the open and are
12:49
able to come together and create something that is, overall, going to create
12:51
a better world for people that are building. It
12:55
is pretty clear in our head that that is going to be the case
12:57
even with AI . If
13:03
you look back at Akash, Akash
13:05
has always had a significant portion
13:08
of its code base
13:10
open source right from day one . But
13:13
what we did approximately
13:16
, you know , a year and a half ago
13:18
is we decided to go 100%
13:21
open source, and this was way before even
13:21
the ChatGPT moment.
13:28
And then not only did we decide
13:30
to go 100% open source , we also decided to go to an open development
13:32
model . So we essentially
13:34
came up with this idea
13:37
of building in the open , similar to what
13:39
the Kubernetes
13:41
project does . So they have the concept
13:43
of special interest groups and work groups
13:45
where people essentially
13:48
are able to propose ideas , talk
13:51
about them in the open as part
13:53
of a community and
13:55
vote on things and then work
13:58
together to implement certain things that
14:00
make sense from a community-driven
14:03
project perspective . So this
14:05
is the switch that we made about a year and a half ago
14:07
and we , to this day , operate
14:10
the same way . So literally every single
14:13
decision that gets made gets made in the open
14:15
. It's documented in our GitHub repository . All
14:17
of our code base sits there as well , and
14:20
so we made this transition . And then , sure
14:22
enough , for the first few
14:24
months after OpenAI released ChatGPT
14:26
and that whole inflection point happened a
14:29
few months following that you saw that
14:32
there were competing open source models being
14:34
released for similar
14:37
types of functions or capabilities
14:39
as what OpenAI was releasing . And
14:42
then since then , which was about a year
14:44
or a year and three months now , we
14:46
have seen a whole bunch of new open source models
14:49
get released . Hugging Face has been
14:51
an amazing repository for all those models and
14:54
everything from image generation to
14:56
large language models to small language
14:59
models. For everything between those,
15:01
you can now almost always
15:03
find an equivalent open source
15:05
version of a closed source model, and
15:08
so our strategy of being an
15:10
open company aligns really well with that
15:12
, and so that's worked out really well for us
15:14
. And now
15:17
taking those two things and then marrying
15:19
it with one of the questions that you asked , which
15:21
is how do small language models fit and
15:23
how do large language models fit? Essentially
15:27
, the way things come together really
15:30
nicely for us is given that we
15:32
have been a crypto native company as well , or
15:34
a crypto native project as well . We
15:36
obviously have a blockchain based
15:38
mechanism for matching
15:40
supply with demand . So
15:43
, essentially , the way Akash works for folks
15:45
that are not familiar with it is that we're
15:47
essentially a two-sided marketplace . On
15:49
the one side , you have supply , which
15:51
is compute supply , whether it's
15:53
regular compute or accelerated compute in the form
15:56
of GPUs. All
15:58
of this supply is
16:01
available on the network in terms of individual
16:03
providers , so a single provider can
16:05
have a single server , they can have 10,000
16:07
servers , they can have 100,000 servers
16:09
any number and
16:11
each of these providers are
16:13
independent entities and no
16:15
single person owns the entire infrastructure . So
16:18
, even though Overclock Labs is the
16:20
company or the organization that created Akash
16:23
Network, we don't own all the infrastructure
16:25
. We own a teeny , tiny portion of it . We're one of the
16:27
providers on the network , and there's over
16:29
75 of those providers today . And
16:32
then on the other side of this is people that want to
16:34
deploy applications onto
16:37
that compute infrastructure , and
16:39
the way the matching of these two sides
16:41
happens is through a blockchain , and the
16:43
reason we use a blockchain for that is because number one
16:45
, it
16:51
lets us be able to do this in a very automated fashion , so being able to easily create a smart
16:53
contract between somebody that wants a certain resource and somebody that has that
16:55
resource to give can be done very easily in
16:57
blockchains , using smart contracts or using
16:59
programmatic ways , and so that's what
17:01
we've implemented is a two-sided marketplace
17:04
where you can get the best possible
17:06
resource in terms of price
17:08
and performance for the workload that you want to
17:10
run . And so , given that
17:12
we have a crypto background , we have
17:14
a natural affinity or we have a good portion
17:16
of our community that consists
17:18
of people that have had GPU mining
17:20
equipment. So
17:22
if you look back to you know 2017
17:25
, 2018 , 2019 , 2020
17:27
as well similar to how
17:29
NVIDIA has seen a huge boost
17:31
from AI workloads in the last one or two
17:33
years, prior
17:41
to that, the previous inflection point that NVIDIA had was from GPU mining. Some of the people that
17:43
were around in the GPU space then would probably remember that and so
17:45
there's a whole bunch of GPU capacity sitting in
17:48
miners even now , whether it
17:50
was for Bitcoin mining or it was previously for
17:52
Ethereum mining or any others , and
17:54
a lot of these chains
17:57
have either transitioned away from proof of
17:59
work type blockchains to proof of stake
18:01
or, in the case of Bitcoin, it's
18:03
getting more and more expensive with each halving to
18:06
be able to mine Bitcoin. So there
18:08
is , as a result , a lot of GPU capacity that's out
18:10
there that, you know,
18:12
people have already invested money into
18:14
and that they
18:17
would love to monetize, and so
18:19
, while those GPUs
18:21
may not be the latest
18:23
and the greatest GPUs , given
18:25
that they've been around for four , five , six years
18:27
, they still
18:29
serve in many cases to
18:32
be a really good platform for being able to do inference
18:34
. So , while you may not be able to train
18:37
the largest of the models on these
18:39
older GPUs , many of them work
18:41
really great for inference . For example
18:43
, one of the most common GPUs
18:45
that we get requests for
18:48
inference today is the RTX
18:50
4090 , believe it or not and
18:52
what people have found is that the
18:54
price to performance ratio of an
18:57
RTX 4090 is really good
18:59
when you're trying to do basic inference , whether
19:01
it's running something
19:03
like Llama or
19:09
other LLMs for language responses, as a natural language
19:11
processing engine . You
19:15
know wanting to do image
19:17
generation using Stable
19:20
Diffusion or any of the other image
19:22
generation models out there , they work as
19:24
a really good platform for that type of stuff . So
19:28
that's where sort of you know us being able to match all
19:30
of this supply from the crypto and mining
19:30
communities with people that want to do
19:34
small language models or just
19:36
pure inference on models
19:39
with fewer parameters . It works great . Now
19:42
, when you think about the higher-end GPUs , which
19:44
is primarily people that want to be able
19:47
to run models with
19:49
tens of billions of parameters or
19:52
want to be able to do large-scale training
19:55
, what we have found here
19:58
is that we are able to actually
20:00
bring in crypto-driven incentives . So
20:02
we have the concept of a community pool
20:04
within our
20:06
protocol that has
20:09
several million dollars
20:11
available for
20:13
us to deploy as part of community
20:16
incentives , and so what we're able to do is
20:18
we're able to actually source a lot
20:20
of these higher-end GPUs as well and
20:22
offer them at a significantly competitive
20:24
price relative to anybody else that's out
20:26
there in the market. So, for example, if you were
20:28
to go to akash.network/gpu
20:31
right now, what you would find is
20:34
that you can get an H100
20:36
with an SXM interface for
20:39
as low as $1.50, which
20:41
I believe is half or less than half
20:44
of what you can get at many of the other places out there
20:46
. So I hope that answers
20:48
some of the questions that you have .
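The reverse-auction matching described in this answer, where deployers request resources, independent providers bid, and the deployer accepts the best price, can be sketched in a few lines of Python. This is a hypothetical illustration of the concept only, not Akash's actual on-chain implementation; the provider names, uptime figures, and prices are invented.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str        # provider address (hypothetical)
    region: str          # where the provider is located
    uptime_7d: float     # uptime over the last seven days, 0.0-1.0
    usd_per_hour: float  # offered price for the requested resources

def pick_bid(bids, min_uptime=0.99):
    """Reverse auction: among providers meeting the uptime bar,
    take the lowest price."""
    eligible = [b for b in bids if b.uptime_7d >= min_uptime]
    if not eligible:
        raise ValueError("no provider met the requirements")
    return min(eligible, key=lambda b: b.usd_per_hour)

# Made-up bids for a single GPU request:
bids = [
    Bid("provider-a", "us-west", 0.999, 2.80),
    Bid("provider-b", "eu-central", 0.95, 1.20),  # cheap but flaky
    Bid("provider-c", "us-east", 0.995, 1.50),
]

best = pick_bid(bids)
print(best.provider, best.usd_per_hour)  # provider-c 1.5
```

Relaxing the uptime requirement changes the winner: with `min_uptime=0.9`, the cheaper but less reliable `provider-b` would be selected instead, which is the price-versus-reliability trade-off a deployer makes when accepting a bid.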
20:50
Yeah , yeah , that definitely answers my
20:52
questions and brings in more questions
20:55
for me , actually , that I was thinking while
20:57
you were talking about this
20:59
. So I have used Akash's service in the past
21:01
and it's amazing. I
21:04
hosted a
21:06
blog also, so everything is kind of containerized
21:08
. They're nice templates , it's very easy
21:10
to use and I was going to get to
21:12
the ease of use for non-crypto
21:14
native users . So that
21:17
was like a year ago and now things
21:19
might have improved even more . So
21:21
any improvements there in
21:23
terms of , like , how the GPUs
21:25
or the GPU marketplace works
21:27
, because that's relatively new right ?
21:29
Yeah , great question . So , yeah
21:32
, so the GPU marketplace was launched
21:34
. I mean time flies right . So
21:36
we actually launched the GPU marketplace in beta
21:38
, I think around May to
21:38
June of last year, May 2023. And
21:43
then, you know, GA'd it, I think a month
21:45
or two after that . So it's been around for
21:47
about six or seven months now , but
21:49
, yeah , we're quickly coming up on almost a year
21:51
. Yeah , so
21:54
, from the perspective of being
21:56
able to use GPUs or request GPU
21:58
resources on the network , the way
22:00
we have implemented GPU support
22:02
is to match it exactly the way
22:04
regular CPU resources work
22:06
, and so any sort of deployments
22:09
that you did on regular
22:11
non-accelerated compute I
22:13
don't know how long ago that was , maybe a year
22:15
ago or so or maybe two years ago you'll
22:19
find that the deployment workflow is exactly
22:21
the same even with GPUs . So , just
22:23
like how you can write this thing called
22:25
a stack definition language file , or an SDL
22:27
file , as we refer to it , which is effectively
22:30
like a Docker Compose file for
22:32
those that are infrastructure nerds listening
22:34
to this and what you do
22:37
there is you basically say , hey , these
22:39
are the services that I want to be running , and
22:41
a service could be a backend service . It could be a frontend
22:43
service , it could be a machine
22:45
learning workload , it could be an inference app , whatever
22:48
you like , and
22:50
so you can have multiple of these services specified
22:52
inside that file , and then , for each of the services
22:54
, you specify something called a compute profile
22:56
, which is basically saying these
22:59
are the resources , or this is the amount of resources that
23:01
I think the service is going to need
23:03
in order to operate . So the compute
23:06
profile typically is you know , I need six
23:08
CPUs , I need one GPU , I
23:11
need a gigabyte of RAM
23:13
, I need , you know , two gigabytes of storage
23:15
. So you specify all these
23:17
things and then submit this job
23:19
onto the network , and then what you get back is
23:21
a whole bunch of bids from various providers
23:23
. Each bid typically consists of , you
23:26
know , information about
23:28
the provider , where is it located , what's
23:30
been the uptime on the provider for the last seven days
23:32
, what's the
23:34
name of the provider and then
23:37
, of course , what is the cost . So
23:39
what is this provider going to charge you for
23:41
running this workload for one month ? And
23:45
so you get all these bids back , and then you go
23:47
ahead and accept one of the bids and then , the moment
23:49
you do that , the workload gets
23:51
deployed onto the specific provider and
23:53
what you get back is an endpoint that
23:56
gives you access into the running
23:58
container instance, and
24:00
if you expose certain ports , then
24:04
you know . If one of those ports happens to be a port 80 or 443 , then
24:06
you have essentially an HTTP
24:08
interface into that as well. So the
24:11
entire workflow is exactly the same as what it was with
24:13
CPUs . Nothing's changed , so
24:15
that should be totally familiar if you go try that . The
24:18
other aspect of that which you asked about
24:21
was how do we make
24:23
it easy for non-crypto people to be
24:25
able to access this ? And that's a really good question because
24:28
obviously
24:30
a majority , a big share of the AI
24:32
workloads today are being
24:34
built and deployed by folks
24:36
that are not crypto-native right , and
24:39
so , to that end , there's a few things
24:41
that are ongoing within
24:43
Akash and within Overclock Labs . First
24:47
and foremost , as you probably
24:49
know from past conversations and from following Akash
24:51
, we have a fairly vibrant ecosystem
24:54
and a fairly vibrant community . So
24:57
one aspect of our community is
24:59
that there is a bunch of people that are
25:01
actually building solutions
25:03
on top of Akash . So , similar
25:05
to how you know , when AWS and Azure
25:07
and all of these services took off , you
25:10
had a bunch of people building you know monitoring
25:12
solutions , building things
25:14
like Heroku or, you know, Vercel
25:17
or these kinds of things that utilize AWS
25:19
compute underneath.
25:24
There is a bunch of teams that are building similar solutions to utilize Akash compute
25:26
underneath . In fact , one of those teams the
25:30
name of the team was CloudMOS . They were
25:32
called CloudMOS because they're
25:34
essentially built on the Cosmos or
25:37
they're part of the Cosmos ecosystem
25:39
, and they were primarily targeting
25:41
Akash compute
25:43
as the platform that they would
25:45
build on top of.
25:47
That team was actually
25:49
acquired by Overclock Labs about seven
25:52
or eight months ago and they're actually part of our team now , and
25:55
so they built this client that takes
25:58
you know our basic APIs and our CLI
26:00
and implements essentially a UI
26:02
on top of that to make it easy to deploy . And
26:06
so now that those folks are part
26:08
of our team , we've rebranded
26:10
that to console.akash.network
26:13
. If you go to console.akash.network, what
26:15
you'll see is what looks like a
26:17
simpler version of AWS console , but
26:19
specifically for Akash . So that's already
26:22
there. So you can check out console.akash.network
26:24
and you can see what that looks like . What
26:26
you will see in the next few months
26:29
is us working on, you know, more
26:31
curated workflows for AI there's
26:33
already like a bunch of templates out there , but even better
26:35
curated workflows and also
26:38
potentially look at offering
26:40
a credit card based interface and not just a
26:42
crypto and a wallet driven interface as well . So
26:45
that's one aspect of it . The second aspect of it
26:48
is there are other teams out there, so there's a team called
26:50
Spheron that has built a
26:52
UI app that already has a credit
26:54
card based interface that can
26:56
be utilized for deployments and
26:58
then separate from teams that are directly building
27:00
on us . We're also in
27:02
, you know, partnership
27:04
talks with certain Web2
27:06
companies . So these are companies that have
27:09
built essentially AI
27:11
inference platforms , right ? So
27:13
these are companies that have built, like, a UI
27:16
and API layer that
27:18
allows people to
27:21
be able to utilize open source
27:23
models and abstracts away
27:25
all the infrastructure components from
27:28
that whole experience , right ? So whether you're utilizing
27:30
AWS underneath or Azure underneath or
27:32
Akash underneath , all of that is abstracted away
27:34
and what you , as a data scientist or
27:36
a machine learning engineer , get is you get
27:39
this API interface or UI
27:41
interface where you can just say , hey , this
27:44
is the model I'd like to run , I
27:46
would like to run it really fast , or I'd like to run
27:48
it medium or slow and
27:52
either run the model right now and give me the outputs or
27:56
give me a programmatic interface where I
27:58
can request that the model
28:00
be run with this data set and
28:02
with these parameters so I can tune
28:04
the model as well . So we have several
28:07
talks with companies that you'll be hearing about in the
28:09
next few months that are Web2
28:11
companies that have built these kinds of platforms
28:13
that are going to be using Akash compute.
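The kind of abstraction layer described here, where a user names a model and a speed tier and the platform hides whether AWS, Azure, or Akash runs it underneath, could look something like the sketch below. Everything in it, including the tier names, GPU classes, prices, and the `run_inference` function, is hypothetical and not any real platform's API.

```python
# Hypothetical inference-platform client: the caller picks a model and
# a speed tier; the backend (AWS, Azure, Akash, ...) is abstracted away.

TIER_SPECS = {
    # speed tier -> (GPU class, price ceiling in USD/hr); illustrative only
    "fast":   ("h100",    3.00),
    "medium": ("a100",    1.50),
    "slow":   ("rtx4090", 0.50),
}

def run_inference(model: str, prompt: str, speed: str = "medium") -> dict:
    """Translate a speed tier into a resource spec and pretend to schedule
    the job; a real platform would place it on whichever backend bids best."""
    if speed not in TIER_SPECS:
        raise ValueError(f"unknown speed tier: {speed}")
    gpu, max_price = TIER_SPECS[speed]
    return {
        "model": model,
        "gpu": gpu,
        "max_usd_per_hour": max_price,
        "status": "scheduled",
    }

job = run_inference("llama-2-7b", "Hello!", speed="slow")
print(job["gpu"])  # rtx4090
```

The "slow" tier maps to the older, cheaper inference GPUs discussed earlier (like the RTX 4090), while "fast" maps to high-end hardware, which is how a speed knob can double as a price knob.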
28:16
Yeah , so you're fully realizing the
28:18
SkyCloud concept, looks like, you
28:21
know, with this full realization of
28:23
that , where you can define
28:25
your compute parameters and then
28:27
it does what it does in the background.
28:30
And as
28:32
you described, like the fast, slow, and,
28:34
you know, it depends upon the type of jobs
28:36
you're running, right, and the time of the day. Then
28:39
you can have that price selectivity
28:41
as well . So it sounds fantastic
28:43
.
28:44
Yep , amazing . The flexibility of
28:46
Akash , I think , is that it
28:48
lets you not just be able
28:50
to choose the kinds of compute resources
28:53
and make the tradeoffs between price to performance
28:55
that is
28:57
applicable to your specific application,
29:02
but it also gives you the option of choosing to be
29:04
as decentralized or not decentralized as you want to be . So let's
29:06
say , you use Akash for a
29:08
few months and you decide that these three
29:11
of the 75 providers are the ones that I like the most
29:13
and those are the only ones I want to be deploying
29:15
to . You can programmatically
29:17
set it up so that you always default
29:20
to those providers . Or , if you're
29:22
somebody that is, you
29:24
know, a hardcore decentralization
29:26
fan,
29:28
you know,
29:31
you can be choosing a different provider
29:33
every single day and build your application
29:35
to do that . So that flexibility
29:37
in being able to decide what path you want to choose
29:39
is essentially what I think makes Akash really
29:42
unique. And so this brings me to
29:44
a bigger
29:46
question.
29:48
Akash was talking about decentralized
29:50
cloud before all this DePIN
29:53
narrative, right. So we've
29:55
, like, I had my interview with Greg almost
29:57
a year, two
29:59
years ago , and so we were talking
30:01
about these things , and so where
30:03
do you see the conversions
30:05
now of AI and
30:08
crypto , like in the broader scheme of
30:10
things ?
30:11
Yeah , that's been a really hot topic
30:13
for the last few months , hasn't it ? So
30:16
you're absolutely right . Basically , what
30:18
we have seen , at least in the last several months , is
30:20
something that we have
30:23
been passionate about for several years now . Greg
30:25
and Adam , way longer than
30:27
I have this idea of
30:30
decentralizing the
30:32
infrastructure , or the compute infrastructure
30:35
and the cloud , which you know in many
30:37
ways , is a public utility at this point . That's
30:40
something that really has taken on
30:42
a narrative for this specific crypto
30:44
cycle coming up , and so , as
30:46
with all narratives you know , similar to you know , in
30:48
the last crypto cycle , DeFi
30:51
and NFTs and you
30:53
know a few other things were pretty
30:55
hot and everybody wanted to jump on them . You have a bunch
30:57
of people trying to jump on this decentralized
31:00
physical infrastructure narrative
31:03
or the DePIN narrative now , and
31:06
a bunch of people trying to claim that they
31:08
are quote unquote decentralized compute marketplaces
31:10
. What's
31:12
been interesting to watch is that many of
31:14
these projects , in
31:17
the absolute worst case , don't have an
31:19
actual product underneath and they're just
31:21
talking about things that you know
31:23
, in many ways are just copying messages
31:25
from projects like Akash
31:27
and others that have been at this for several
31:29
years now . And that's
31:31
sort of the worst case scenario , where they haven't
31:33
really built anything , but they're just talking about
31:36
it . And then , in the best case scenario
31:38
, is projects that have
31:40
legitimately built something but
31:43
they've not taken the effort to truly
31:46
think about decentralization at the core . So
31:48
they may have gone and acquired
31:51
compute from one or two or three
31:53
sources and then offering
31:55
that as a decentralized solution . You
31:57
know , that's not the true definition of decentralization
32:00
. That's just taking a regular
32:02
good old approach of
32:04
you know going and sourcing compute , but just sourcing it from multiple
32:07
sources yourself , right ? So
32:09
I'm not saying
32:11
it's a bad solution . It works
32:13
. It's better than the first one , which is
32:15
just claiming things when you don't really have anything
32:18
, but it's also not really decentralized
32:20
. What's also
32:22
interesting to see about a lot of these solutions
32:24
is that they're
32:26
all closed source , so they
32:29
don't open up . They definitely
32:31
don't open up the source code for others to look at
32:33
, similar to what Akash has done but
32:37
they don't even open up their metrics
32:39
. In the case of Akash , you can actually go to a
32:41
web page called stats.akash.network
32:44
. What it does , basically , is
32:46
show you all the statistics of
32:48
things that are happening on Akash
32:50
. Every single second , every single
32:52
minute , every time a block is created . What
32:56
you can see there is you can see the total number of providers on
32:58
the network . You can see the total amount of compute in
33:00
terms of GPU , CPU , storage and memory . You
33:03
can see the total number of
33:05
leases being created . A lease is basically
33:07
when one workload
33:09
gets deployed onto one provider . That's
33:11
typically a lease , so it's like one application
33:14
being deployed and you can
33:16
see the total number of compute
33:18
resources being spent , amount of
33:20
money being spent on the network , and all
33:22
of these metrics are basically stored
33:24
on the blockchain . So
33:29
it's not something that we as Overclock Labs , or anybody in the
33:31
community , can go and sort of spoof
33:33
or mess with or fake
33:35
in any way , because anybody
33:37
can query these parameters from the blockchain and
33:40
prove us wrong if we try to do that . So
33:47
in that sense , I don't know of any other project out there other than
33:50
Akash that is fully open source , fully decentralized
33:53
and exposes all of its statistics on
33:56
a blockchain for anyone to query within
33:59
the compute DePIN marketplace
34:01
. So I don't know of any other compute
34:03
DePIN project that is doing
34:05
those three things , and if there is , then I would
34:07
love to learn about it .
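Because the numbers described here are stored on chain and served by the stats site, pulling them programmatically is straightforward. Below is a minimal Python sketch (the endpoint URL and JSON field names are hypothetical placeholders for illustration, not the actual stats API):

```python
# Sketch: fetching and summarizing Akash network statistics.
# NOTE: the URL and field names are hypothetical placeholders;
# consult the real stats API for actual endpoints and schemas.
import json
import urllib.request

STATS_URL = "https://example-akash-indexer.invalid/api/dashboard"  # placeholder

def summarize(stats: dict) -> str:
    """Turn a raw stats payload into a one-line summary.

    uakt is micro-AKT, so divide by 1e6 to report whole AKT.
    """
    return (f"{stats['activeProviders']} providers, "
            f"{stats['activeLeases']} active leases, "
            f"{stats['totalUAktSpent'] / 1_000_000:.0f} AKT spent")

def fetch_stats(url: str = STATS_URL) -> dict:
    """Fetch the stats JSON from the (placeholder) indexer endpoint."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# fetch_stats() would perform the HTTP request; summarize() formats the result.
```

Because the raw figures also live on the blockchain itself, anyone can cross-check a summary like this against a node's query API, which is the "prove us wrong" property described above.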
34:10
Yeah , well said . Even though
34:13
I've interviewed
34:15
some folks
34:17
competing with you guys , in
34:20
healthy competition ,
34:22
I haven't seen statistics
34:24
this clear from anyone so far , so
34:27
that's
34:29
good to see . So , as far as the convergence
34:32
of AI and crypto goes , I mean , clearly
34:35
there is one solid
34:37
use case that Akash is building for
34:39
, which is providing GPU accessibility
34:42
, which is at the inference
34:44
level as
34:46
well .
34:50
With more efficient
34:52
models coming , and
34:54
inferencing getting
34:56
better on commodity hardware
34:58
, you can definitely see
35:01
utilization of GPUs going even
35:03
higher . Right , I'm currently looking
35:06
at the stats and I do see a lot
35:08
of utilization happening here month
35:10
over month , so that's good to see
35:12
. Okay , one question
35:15
that's
35:17
one of the last
35:19
questions , and
35:21
I've started doing this with my speakers ,
35:24
is
35:26
, what advice would
35:28
you give to somebody who comes next
35:30
on my show , and this
35:33
could be , say ,
35:35
regarding something in the crypto
35:38
sphere that you have learned so far ?
35:41
And actually , just before I answer that , I just realized
35:44
I didn't answer the previous question completely , so
35:46
I'll just quickly answer that as well . I
35:48
didn't quite touch on the AI-crypto narrative
35:51
, so I talked a lot about , you know
35:53
, the new projects coming up and how Akash
35:55
potentially is different from those , but
35:57
really the AI-crypto
36:02
narrative , if you
36:04
want to call it that . That makes
36:04
complete sense to me , because one
36:06
of the biggest things that people talk about in the
36:08
non-crypto world today with
36:10
regards to AI is how AI
36:12
is being controlled by a handful of companies , right
36:15
? So there is this huge outcry
36:17
among a lot of people that you
36:19
know a few companies have enough
36:22
compute capacity and
36:25
are capable of acquiring a lot of compute capacity
36:27
into the future , and these
36:29
are the companies that are going to be able to train the
36:31
best models , run the best models and all of that
36:33
. And I think this
36:35
is where crypto really makes
36:37
sense to me , because it's
36:40
the one way that we can build systems
36:42
in the open , allow
36:44
easy or programmatic
36:46
aggregation of capacity compute
36:48
capacity , the way Akash is doing and
36:51
be able to crowdsource
36:53
not just
36:56
the development of models but also , you know , the
36:58
accessibility to models as well as compute
37:01
in an open fashion , and
37:03
so being able to do this with
37:05
crypto is a lot
37:07
easier to make programmatic
37:09
and a lot easier to , you know ,
37:11
sort of source from a community or crowdsource
37:14
than it is to do without crypto , and that's
37:16
why that thing makes complete sense to me . Now
37:18
to answer your last question , which is you know what sort
37:20
of advice I might have for the next person that comes along
37:22
, I
37:24
think the biggest advice I would give
37:26
to someone and this is coming from
37:28
me as someone who was
37:31
not in crypto before I started working
37:33
on Akash and joined Overclock Labs is
37:37
when you think about a crypto project . I
37:39
know there's a lot of people out there that build
37:41
crypto projects with
37:43
the pure intention of shilling a token
37:45
or making a quick buck
37:47
and calling it a day . I think
37:50
it'd be really nice to see more people think
37:52
about the real utility of crypto and
37:55
how it can be applied specifically
37:57
to areas of our
37:59
life that require
38:01
decentralization or require
38:04
things that need
38:06
an incentive mechanism to
38:09
make them more like a public utility , without
38:12
actually making them a public utility
38:14
in the sense of , you know , making
38:16
a fixed cost or a fixed price and no competition . So
38:19
I think what crypto is really good at is being
38:21
able to sustain
38:23
innovation while
38:25
, at the same time , leveling
38:28
the playing field and giving people access to technology
38:31
that they otherwise would not have access to , while
38:33
at the same time , you know , allowing entrepreneurs
38:35
to be able to generate wealth , retain
38:38
value or capture a certain portion of value that
38:40
they create , without having to get
38:42
democratized completely or without
38:44
having to get commoditized completely
38:46
. So I think that's kind of my overall advice
38:49
is to think of solutions that could
38:51
really uniquely be solved only with crypto
38:53
, as opposed to being solved by a
38:55
Web2 solution , or just doing crypto for the sake of it .
38:58
Great . Thanks , Anil ,
39:00
for chatting with me . And folks
39:03
listening , where
39:06
can they find you ?
39:07
Yeah , so , akash , you can find us at
39:09
akash.network on the web . And
39:12
then I'm on Twitter . My handle
39:14
is underscore Anil underscore
39:16
Murti underscore ( _Anil_Murti_ ) . And
39:20
then I'm also on LinkedIn and the usual
39:22
places that you'll find someone . Awesome .
39:24
Thank you .
39:25
Thank you .