Episode Transcript
0:00
Today
0:01
on the Techbytes podcast, we talk with Palo
0:03
Alto Networks about the latest AI
0:05
ops features. First: AI
0:07
ops is a broad term for software that analyzes
0:10
network data and then makes recommendations
0:12
on issues and problems that it can see
0:15
from the data that's been collected. This proactive
0:17
capability has become somewhat necessary
0:20
as the complexity of WAN networks in particular grows:
0:22
underlays, overlays, encryption.
0:24
You're going on prem. You're going off prem.
0:27
You've got remote working. You might have integrated
0:29
VPNs. You might be doing content scanning.
0:32
You might have filters, you might have all the
0:34
threat detection engines, you might be going
0:36
through a SASE
0:39
cloud product. All that complexity
0:41
adds up into a whole lot
0:43
more stuff that you need to do as a network
0:45
engineer. The old days of simple OSPF
0:47
are long behind us, and
0:49
the new environment's really difficult. And that's where
0:52
tools like AI ops come in to help you
0:54
keep that under control. So today, we're
0:55
talking with Bill Pruitt. He's a
0:57
senior product manager at Palo Alto Networks,
0:59
and he's gonna be talking about how Palo Alto
1:01
Networks AI ops builds on the existing capabilities
1:04
that are in the product today, and then how
1:06
AI ops can help you with an easier work life.
1:08
So, Bill, let's get started with talking
1:10
about how the network insight products
1:12
work, which is where you are today, and how we build
1:14
on that. So what's network insights?
1:16
Yeah.
1:17
So network insights basically takes what we built
1:19
over the past couple years and adds another
1:21
layer of analysis on top of it. So previously
1:23
we focused on how can we reduce the noise
1:25
through event correlation and event management
1:28
identifying flapping conditions and things
1:30
like that.
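To illustrate that flap-identification idea, here is a minimal Python sketch; the five-changes-in-sixty-seconds rule is an invented threshold, not Palo Alto Networks' actual logic.

```python
from collections import deque
import time

# Toy flap detector: an interface that changes state too many times inside
# a short window is "flapping", so its individual up/down events can be
# collapsed into a single alert. The 5-changes-in-60-seconds rule is an
# invented threshold, not the product's.
class FlapDetector:
    def __init__(self, max_changes=5, window_s=60.0):
        self.max_changes = max_changes
        self.window_s = window_s
        self.changes = deque()  # timestamps of recent state changes

    def on_state_change(self) -> bool:
        """Record a state change; return True if the interface is flapping."""
        now = time.monotonic()
        self.changes.append(now)
        # Drop changes that have fallen out of the observation window.
        while self.changes and now - self.changes[0] > self.window_s:
            self.changes.popleft()
        return len(self.changes) >= self.max_changes
```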
1:33
But now we're kind of taking some really big steps to do some proactive
1:35
analysis outside of just an event.
1:37
So network insights, they
1:39
really operate on the axiom
1:42
that, you know, that we use internally, which is,
1:44
you know, day one is only one day, but day
1:46
two is forever. So we take all
1:48
of these things that we would
1:50
normally do as a network administrator.
1:53
Capacity planning: we used to pay huge
1:55
teams of people to do these things as network operators.
1:58
And we perform those tasks automatically. And
2:00
I'll give you some examples of things that we're doing
2:02
that really resonate with me. I used to
2:04
be a network operator. Right? So these
2:07
things I spent a large amount of time doing
2:09
between me and my team. And some things
2:11
that we're doing are, for example, looking
2:13
across every single circuit in the network,
2:15
proactively, week in, week out, to
2:17
identify quite a few different conditions,
2:19
up to five of them on a given
2:21
circuit. And what we did is we took
2:24
a baseline of the hundreds of
2:26
thousands of circuits whose data we have
2:28
access to in our system. And
2:30
we said: how do we expect a
2:32
circuit to perform? Simply put, let's talk about
2:34
latency and loss. So if you look
2:36
at latency and loss, two,
2:39
you know, conditions that plague any
2:41
particular, you know, circuit or connection
2:43
as those things get out of spec. We
2:45
said, you know, we expect latency and
2:47
loss to increase as load
2:49
increases, marginally. But
2:51
when a circuit deviates from that amount,
2:54
that prescribed model
2:56
that we trained, then we flag that
2:58
as a condition where the circuit has excessive
3:00
latency and or excessive loss. And
3:02
we roll it up into a simple insight.
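To make that concrete, here is a minimal sketch of this kind of baseline-and-deviation check, assuming a simple linear latency-versus-load baseline and a fixed tolerance; the product's real model, features, and thresholds aren't public, so every number here is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented training data: samples of circuit load (% utilization) versus
# observed latency (ms). The real baseline is trained across hundreds of
# thousands of circuits; this is a tiny stand-in.
load = np.array([[10], [30], [50], [70], [90]])
latency_ms = np.array([20.0, 21.0, 23.0, 26.0, 31.0])

# Expected behavior: latency rises only marginally as load increases.
baseline = LinearRegression().fit(load, latency_ms)

def has_excessive_latency(load_pct, observed_ms, tolerance_ms=5.0):
    """Flag a circuit whose latency sits above the trained baseline by
    more than an (assumed) fixed tolerance."""
    expected_ms = baseline.predict([[load_pct]])[0]
    return observed_ms - expected_ms > tolerance_ms

# A circuit at 40% load showing 45 ms latency is well outside the model.
print(has_excessive_latency(40, 45.0))  # True -> "excessive latency" insight
```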
3:05
So you
3:05
know when you talk about blackouts and, you know,
3:07
systems not working, then you have an electricity blackout.
3:09
But you can also have a brownout, and that's where you
3:11
run out of bandwidth. Maybe the backbone
3:13
is congested or latency is increasing
3:16
for some reason usually because of backbone
3:18
congestion. And that's very hard
3:20
to monitor for unless you're
3:22
actually doing some sort of application monitoring. We'll
3:24
talk more about digital experience
3:26
monitoring, like the Prisma Digital Experience
3:28
Monitoring, at the end here. But
3:31
that brownout is the same for everybody
3:33
because it's roughly the same condition
3:35
for all networks, because everybody's
3:37
using the same networks across the same providers
3:40
and, you know, a brownout is the same
3:43
all over the world. You know, in any network.
3:45
Right? Absolutely.
3:46
Those happen, and
3:48
they're difficult to troubleshoot. I've spent
3:51
days and hours and, you know, weeks
3:53
troubleshooting these things. They're totally
3:55
difficult. So we asked the
3:57
question: why can't we give these things to the customer
3:59
on a silver platter? Right? These pop up
4:01
as notifications: hey, this circuit
4:03
or this set of circuits is having these problems.
4:05
Mhmm. Now,
4:07
I guess, how is this different
4:09
from what I would normally be getting from an SD
4:11
WAN device, which is supposed to be monitoring, you
4:13
know, my link performance anyway. What's
4:15
the difference here? What makes it AI
4:17
ops? What makes it AI ops is
4:19
that we've actually taken all the
4:21
data that we have and we've trained the models.
4:23
And in the models, the
4:26
the artificial intelligence piece
4:28
looks at the trained model and
4:30
performs a historical analysis,
4:33
looking back over several weeks of performance.
4:35
So it's not just a right now condition.
4:38
These are things that occur over time. Because
4:40
let's be honest. We don't wanna be notified when
4:42
we have a single blip in the network or
4:44
maybe it's only bad for a few minutes
4:47
and then it's good. Right? That's noise. But
4:49
when these things become chronic, that's when
4:51
we need to know about them. We need to work with our carriers
4:54
to troubleshoot the connections or increase
4:56
bandwidth.
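The chronic-versus-blip distinction could be as simple as a rolling-window vote. A minimal sketch, with the window length and the 80% fraction as invented assumptions:

```python
from collections import deque

# Sketch of "chronic, not a blip": raise the insight only when the degraded
# condition holds on most days of a multi-week window. The window length
# and the 80% fraction are illustrative assumptions, not product values.
WINDOW_DAYS = 21
CHRONIC_FRACTION = 0.8

daily_flags = deque(maxlen=WINDOW_DAYS)  # True = circuit degraded that day

def record_day(degraded: bool) -> bool:
    """Append today's verdict; return True once the problem looks chronic."""
    daily_flags.append(degraded)
    if len(daily_flags) < WINDOW_DAYS:
        return False  # not enough history yet
    return sum(daily_flags) / WINDOW_DAYS >= CHRONIC_FRACTION
```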
4:56
And I think it's also useful to point
4:59
out that bad performance, say,
5:01
for an Azure hosted SaaS application.
5:03
Like, I don't know, Salesforce, you
5:05
know, or some sort of help desk application,
5:07
is very different to an application
5:09
that might be hosted in your own data center.
5:11
And you actually need to build a baseline
5:14
around those things. You can't just
5:16
say, you know, eighty percent packet loss,
5:18
ninety percent packet loss with this amount of latency.
5:20
And suddenly, it's a problem. It's actually
5:22
a variable. It's a mutable constant
5:25
around that. Right? It's not a fixed thing.
5:27
Absolutely.
5:27
Yeah. So it's all relative. I think that's
5:29
what you're trying to say. It's all
5:31
relative. So, yeah, the
5:33
model has to be trained very specifically to
5:36
identify those types of anomalies, a
5:38
hundred percent. And so what it sounds
5:40
like I'm getting then is as opposed to just
5:42
sort of in the immediate moment where the
5:44
SD WAN could say, okay, maybe we should switch this flow
5:46
to a different link because of performance
5:48
issue x, I can over time see this
5:50
link is having consistent problems and maybe I
5:52
need to go talk to my provider and get something
5:54
new or change my SLA or something.
5:56
That's right. Yeah. So the SD
5:58
WAN system will obviously in real time
6:00
react to those problems. This is really
6:02
focused around that day two operations and
6:04
making sure that all the underlying infrastructure,
6:07
including the transport, is working well.
6:09
So what about other capabilities? There's
6:11
a note here in our document about predictive
6:13
circuit capacity exhaustion.
6:16
Yes. Yeah. So in that first
6:18
batch of insights that we did
6:20
last fall, we actually introduced one called
6:22
bandwidth upgrade recommended. And what
6:24
that did was look at a historical analysis
6:27
of a circuit's performance. And
6:29
basically, if it was redlining, you know, above
6:31
a certain amount, then we would
6:33
take that and say, you know what? You should probably
6:35
consider a bandwidth upgrade. Here's the time of day
6:37
this occurs. Here's all the apps and
6:39
IPs that are causing it, but you may wanna consider
6:41
an upgrade. Now we're taking that to the next
6:43
level this summer by looking at it
6:45
from a predictive standpoint. So no longer is it gonna
6:47
be just historical analysis. Yes,
6:49
that trains the model, but it'll be predictive
6:51
that says, hey, in two, three, four
6:53
weeks, based on your current trend,
6:55
you're likely to be hitting bandwidth caps at
6:57
this circuit and this site, and you'll be able to know.
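That trend-extrapolation idea can be sketched in a few lines; the linear fit and the 95% cap are assumptions for illustration, not the product's actual model:

```python
import numpy as np

# Sketch of predictive capacity exhaustion: fit a linear trend to daily
# peak utilization and project when the circuit crosses its cap.
def days_until_exhaustion(daily_peak_pct, cap_pct=95.0):
    days = np.arange(len(daily_peak_pct))
    slope, intercept = np.polyfit(days, daily_peak_pct, 1)
    if slope <= 0:
        return None  # flat or falling utilization: no exhaustion predicted
    crossing_day = (cap_pct - intercept) / slope
    return max(0.0, crossing_day - days[-1])

# Peak utilization creeping up ~0.5% a day from 70%:
history = [70 + 0.5 * d for d in range(30)]
print(days_until_exhaustion(history))  # ~21 days, i.e. about three weeks out
```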
7:00
Now, this predictive
7:02
capacity, and all of these AI ops insights, actually
7:04
build on the visibility tooling that's actually
7:06
in the product today. You've already got quite
7:09
a wide suite in the SD WAN portfolio
7:12
of, you know, monitoring the network,
7:14
tracking the bandwidth, asset management, configuration
7:16
management, and that all runs as
7:18
a cloud-hosted service that you can see.
7:20
These are all services that build
7:22
on what I thought of as, like, table
7:24
stakes for SD WAN. This isn't like
7:26
a simple thing. These are complex new tools.
7:29
Absolutely.
7:30
Yeah. And those tools, as
7:32
we discussed before, give you
7:34
just an unprecedented amount of data that we
7:36
never had operating our
7:38
network historically. And what we're trying to
7:40
help the customer do is
7:42
make sense of all that data and really
7:44
bubble those problems up to the top so they
7:46
don't have to sift through it manually. Even though they
7:48
can, that's great. And it's all really rich
7:50
and helps them, you know, look through the historical
7:52
analysis themselves. We wanna give it to them automatically.
7:55
Yeah. Because I'm thinking, in the Palo
7:57
Alto Networks SD WAN tooling,
7:59
you get
7:59
things like heat maps and
8:02
WAN utilization charts for sure, but
8:04
you actually get them in really
8:06
quite complex graphical formats. Like, you
8:08
have the utilization quadrant
8:11
report, which actually maps
8:13
the utilization against the
8:16
link quality. So there's a quality
8:18
versus utilization map. And some of
8:20
that's really sophisticated, but that's all rather
8:22
passive. This is much more predictive, and
8:24
it's saying: instead of reacting, where you go and
8:26
have a look at hotspot reports and you say where
8:28
are the problems in the network because there's a problem,
8:30
this is us telling you what's gonna
8:32
happen before it happens. It's not totally that,
8:34
but it's getting in that direction.
8:37
That's right.
8:37
Yeah. I think of it like Maslow's
8:39
hierarchy for networking. Right? So we'll
8:41
call it Greg's hierarchy for networking.
8:44
We're up there at the self-actualization
8:46
level of networking. Right? Where we're
8:48
providing just an extreme amount of
8:50
value built upon all these other foundational
8:52
layers of the technology. So this
8:54
predictive notion means that, you know, say, I
8:56
open a new branch or a new retail site and it's
8:58
sort of, you know, standard operating
9:00
procedures to provision x amount of bandwidth.
9:02
But if this retail site is particularly
9:04
popular, I can see, you know, the
9:06
bandwidth needs or the consumption increasing over
9:08
time. And before customers have a bad
9:10
experience, maybe I can upgrade that circuit to
9:12
prevent that. That's exactly the
9:14
point, exactly: to give
9:16
the administrator or operator time
9:18
to go and upgrade the capacity if they need
9:20
to before it becomes a problem.
9:22
So let's
9:22
move on to talking about autonomous digital
9:25
experience management. Now, digital experience monitoring is
9:27
where you actually monitor the user
9:29
experience, or rather monitor
9:31
the application experience
9:33
via a separate process. Instead of monitoring the
9:35
network, you're monitoring using
9:37
probes or tools in the network. Now you've
9:39
had Prisma Autonomous Digital Experience Management in
9:42
the tool for a while, but this is being
9:44
extended? Yeah.
9:45
So if you think about it, we've all
9:48
seen that movie Vantage Point, where
9:50
you've got several different versions of a
9:52
story depending on the perspective. And
9:54
all of those different aspects
9:56
add up to a really complete view of
9:58
an event that happened. So the
10:00
Prisma ADEM solution is
10:02
not just monitoring from one particular
10:04
position in the network. It monitors on
10:06
the client side, from a
10:08
GlobalProtect client no matter where you're at.
10:10
We also have the agents embedded in
10:12
Prisma SD WAN and also
10:14
inside of the Prisma Access Cloud. So really
10:16
from end to end, in the complete
10:18
solution, we have monitoring from every single
10:20
component. And we take all that data
10:22
that we're gathering from every single perspective, and
10:24
then we combine that together in our
10:26
cloud to provide that
10:28
true end to end view
10:30
of the application health.
10:32
And what this does is, you know, it
10:34
to your point, we're now monitoring up at
10:36
the application layer. So we're running a
10:39
series of probes in addition to monitoring real
10:41
user traffic to say for these
10:43
key applications, what is my
10:45
performance.
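For a sense of what a synthetic application probe looks like, here is a minimal sketch that times an HTTPS fetch the way a DEM agent might, alongside real user traffic; the target URL is a placeholder, not a product endpoint.

```python
import time
import urllib.request

# Sketch of a synthetic application probe: time an HTTPS fetch and report
# success plus latency, the raw material for per-application health scores.
def probe(url="https://example.com/", timeout=5.0):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read(1024)  # pull some data through the full stack
            status = resp.status
    except OSError as exc:  # DNS failure, timeout, refused connection, ...
        return {"url": url, "ok": False, "error": str(exc)}
    return {"url": url, "ok": status == 200,
            "latency_ms": (time.monotonic() - start) * 1000}

print(probe())
```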
10:47
So that's one aspect you can look at, but then we actually
10:49
break it down into something we call segment-
10:51
wise insights. And this
10:53
takes all that data and merges it
10:55
together and says, okay, in a hop-by-
10:57
hop fashion, starting all the way
10:59
from the client PC, their
11:01
laptop, and proceeds from
11:03
there all the way through to the far
11:05
end where traffic comes out of the security
11:07
cloud: what
11:09
is my experience all the way from there
11:11
into the actual
11:13
application. So we break all of
11:15
that down. You can see it in incredible
11:18
detail, hop by hop, at the network and
11:20
application layer to break down what
11:22
is happening with that application.
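As a toy illustration of merging those segment-wise measurements (the segment names and numbers are invented), the degraded hop stands out immediately once the per-vantage-point data is combined:

```python
# One latency figure per segment, gathered from different vantage points
# (client agent, SD-WAN edge, security cloud). Values are invented.
measurements_ms = {
    "client -> wifi/router": 48.0,
    "router -> ISP edge": 9.0,
    "ISP edge -> security cloud": 12.0,
    "security cloud -> application": 7.0,
}

total_ms = sum(measurements_ms.values())
worst = max(measurements_ms, key=measurements_ms.get)
print(f"end to end: {total_ms:.0f} ms; worst hop: {worst} "
      f"({measurements_ms[worst]:.0f} ms)")
```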
11:23
Yeah. Because this isn't just, you know, taking flow
11:26
data from the network or taking statistics
11:28
from the SD WAN edge device. These
11:30
are actually agents that can be installed on
11:32
devices, but also on user laptops and
11:35
devices. And then you
11:37
configure probes to actually monitor
11:39
independent of the network. And
11:41
that gives you a completely different view
11:43
of the experience rather than a view of
11:45
the performance. I know there's some
11:47
semantics, but that's the idea or is
11:49
it not? That's
11:50
right. Yeah. We want to monitor the user
11:52
experience. So network performance is
11:54
great. And, you know, that's
11:56
great information to have. But then when you up-
11:58
level it, what am I really trying to do with the network? I'm
12:00
trying to ensure a user experience. So
12:02
now that I have this proactive twenty
12:04
four hours a day visibility from
12:08
across the entire set of WAN
12:10
infrastructure, WAN security infrastructure,
12:12
I've got the ability to
12:14
carefully measure
12:16
and quantify that user experience
12:18
from end to end. And that's really
12:20
critical in this new, you know, distributed work
12:22
environment where a user could
12:24
be on a home network where the
12:26
problem may be, you know, their
12:28
Wi-Fi or their router, or it's on a local
12:30
ISP, or it's on a backbone. And
12:32
unless you're measuring each of those segments,
12:34
when the user says, hey, the network is slow, you
12:36
don't really know what that means. That's
12:38
right. Say we're accessing a SaaS service
12:40
like Salesforce or Microsoft Office. How
12:42
do you measure that? You know? Right.
12:44
And that goes back to the
12:46
mean time to innocence for the network operator.
12:48
Right? We're at the bottom of the OSI
12:50
model. So I always like to say things roll
12:52
downhill to us. So now we have all of
12:54
the tools to be able to prove
12:56
ourselves innocent. Right? And one of
12:58
our customers actually coined that
13:00
term mean time to innocence. Yeah.
13:02
I've been using it, I don't know, for a few
13:04
years. It's also a mean time to
13:06
denial. There you go. There's another
13:08
one I love. So I'm curious what kind
13:10
of information I can get at on the
13:12
endpoint. Can I look at, you know,
13:14
wireless signal strength, can I look at processes on
13:16
the actual device that may be affecting,
13:18
you know, CPU or memory? The
13:21
answer is yes, all of the above. The
13:24
agent, when you actually look at it from the
13:26
client perspective, monitors the
13:28
Wi-Fi RSSI and SNR,
13:31
monitors all the processes and their
13:33
health that could be affecting, you know, CPU
13:35
and memory overhead on that client. All
13:37
those things can impact the
13:39
user experience, and we give you the tools to
13:41
be able to figure out where in the chain
13:43
that it's being impacted.
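For a rough sense of what such a client agent can gather, here is a hedged sketch using Python's psutil for process, CPU, and memory stats; the Wi-Fi RSSI reader is stubbed out since it is OS-specific, and none of this is the actual agent's code.

```python
import psutil  # cross-platform process/CPU/memory statistics

# Sketch of a client-side health snapshot: sample the local resource
# pressure that could explain a "slow network" complaint.
def client_health_snapshot():
    procs = sorted(psutil.process_iter(["name", "cpu_percent"]),
                   key=lambda p: p.info["cpu_percent"] or 0.0,
                   reverse=True)[:3]
    return {
        "cpu_pct": psutil.cpu_percent(interval=1),
        "mem_pct": psutil.virtual_memory().percent,
        "top_processes": [(p.info["name"], p.info["cpu_percent"])
                          for p in procs],
        # "wifi_rssi_dbm": read_rssi(),  # hypothetical OS-specific helper
    }

print(client_health_snapshot())
```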
13:45
I liken this
13:46
to the early days of SD-WAN.
13:48
We had so many
13:50
problems with people saying, oh, we couldn't
13:52
possibly run SD WAN overlays. And,
13:54
of course, the challenge there was how to get visibility
13:57
into the overlay. What's the performance of the
13:59
user application? What's the performance
14:01
of, you know... how do we know it
14:03
works? Right? And, of course, the answer was you didn't know how
14:05
it worked before. So why would this be any different? Right.
14:09
But now things have come a long way.
14:11
Things have come a long way, and the
14:13
intelligence we've got from cloud
14:15
hosted applications and from these very
14:17
powerful edge devices gives us
14:19
application grade visibility. And
14:21
combining that with digital experience tools,
14:23
the DEM, you know,
14:25
it just changes the way we
14:27
see applications and
14:29
get visibility and telemetry from
14:31
the network. That's really what this is about, I
14:33
think. I agree
14:34
a hundred percent. Obviously,
14:37
the world has changed. And
14:40
our goal in the Prisma SASE Business
14:42
Unit is to provide the tools and
14:44
the capabilities necessary to provide
14:46
you know, a consistent end user
14:49
experience no matter where they're consuming
14:51
the services they need, whether it be a coffee
14:53
shop, in the office, in
14:55
the corporate headquarters, no matter where it's
14:57
at. And do remember
14:57
that Palo Alto does actually provide a whole stack
15:00
solution. So there's all the security
15:02
that you want there, all the SD WAN features
15:04
that you need. There's even integration with a wide
15:06
range of other tools, which we've talked about in
15:08
lots of other shows on the Packet Pushers network.
15:10
So if you wanna find out more about those, contact
15:12
Palo Alto Networks to talk about those,
15:14
or go and do a search for them on our
15:16
website, and you'll be able to get more
15:18
on that. Unfortunately, that's all that we have
15:20
time for today. Thanks very much to Bill
15:22
for coming on and talking about Palo Alto
15:24
Networks and the new features around AI
15:26
ops using machine learning and artificial
15:28
intelligence to improve the performance of
15:30
your job. Right? No. It
15:32
doesn't improve the network. It improves your job because
15:34
you know what's happening. And also talking a little
15:36
bit about the Prisma Autonomous Digital
15:38
Experience Management, which does the
15:40
experience monitoring; it has been extended since our
15:42
previous conversations. Thanks to
15:44
Palo Alto Networks for sponsoring. Without
15:46
them, we couldn't be here. And remember that there
15:48
are many, many more fine free technical podcasts
15:50
just like this on our community blog
15:52
at packetpushers.net. Follow us
15:54
on Twitter at @packetpushers. Find us on
15:56
LinkedIn. And if you've got the
15:58
time, please rate us on your favorite
16:00
podcast tool, because that helps us
16:02
stay around. And last but never ever
16:04
least, remember that too much
16:06
networking would never be
16:08
enough.