Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
0:00
It's time for Security Now. Steve Gibson
0:02
is here. We're gonna
0:04
talk, in fact, at
0:07
length about the EU's new
0:09
legislation to monitor
0:12
citizens' communications. This
0:15
is a bad one, folks. Steve's
0:17
got the details. He'll tell you why he
0:19
doesn't like QNAP, but he does like
0:21
Synology. If you're looking for a NAS,
0:23
you wanna hear that. And then a look at VMware's
0:26
ESXi servers, a
0:28
massive exploit. It's already claimed
0:30
thousands of victims and it's just a couple of days
0:33
old. All that and more coming up next.
0:35
That's Security Now.
0:39
Podcasts you love. From people
0:41
you trust. This
0:44
is TWiT.
0:49
This is Security Now with Steve Gibson,
0:51
episode 909, recorded
0:54
Tuesday, February seventh twenty
0:56
twenty three: How ESXi
0:59
fell. Security
1:02
Now is brought to you by Drata.
1:05
Too often, security professionals are
1:07
undergoing the tedious, arduous
1:09
task of manually collecting evidence.
1:11
With Drata, say goodbye to the days
1:13
of manual evidence collection and hello to
1:16
automation, all done at Drata speed.
1:19
Visit drata dot com slash twit to get a
1:21
demo and ten percent off implementation.
1:25
And by Barracuda, Barracuda
1:27
has identified thirteen types
1:30
of email threats, and how cyber
1:32
criminals use them every day: phishing,
1:34
conversation hacking, ransomware
1:36
plus ten more tricks cyber criminals
1:39
use to steal money from your company or
1:41
personal information from your employees and
1:43
customers. Get your free e-book at
1:45
barracuda dot com slash
1:47
security now. And by
1:50
Thinkst Canary, detect
1:52
attackers on your network, while avoiding
1:55
irritating false alarms. Get
1:57
the alerts that matter. For ten percent
1:59
off and a sixty day money back
2:01
guarantee, go to canary dot tools
2:03
slash twit and enter the code TWiT
2:06
in the How did you hear about us box. It's
2:09
time for security now. Yay. You've been
2:11
waiting all week for the best show on the network.
2:14
Mister Steve Gibson makes it so.
2:16
Hello, Steve. Yo,
2:18
Leo. Good to be with you.
2:22
For 909, our
2:25
first show of February. And,
2:28
of course, we've got questions.
2:30
Now, you used to say at the top
2:32
of our q and a
2:33
episodes, you have questions, we
2:35
have answers. Yes.
2:36
Whatever happened? We stopped those.
2:39
Well, because now we're teasing most of the
2:41
questions and then providing the answers. You're
2:43
asking the questions and answering them.
2:45
That's right. We take care of the whole job
2:47
here. So this week, we wonder
2:50
what is about to happen with
2:52
the EU's legislation to monitor
2:54
its citizens' communications. Why
2:57
would a Finnish psychotherapy clinic
3:00
be keeping thirty thousand old
3:03
patient records online and
3:05
who stole them? Which top
3:07
level domains insist upon and
3:10
enforce HTTPS? How
3:13
is Chrome's release pace about to
3:15
change? And when you
3:17
say Russia shoots the messenger,
3:20
is that only an expression? Why
3:23
are a fool and his crypto soon parted,
3:25
or should that be 'were'? Exactly
3:28
why is QNAP back in the news?
3:31
And what do I really think
3:33
about Synology? Would
3:35
companies actually claim unreasonably
3:37
low CVSS scores for their own
3:39
vulnerabilities? No. What
3:42
questions have our listeners been asking
3:44
after all this recent talk about passwords?
3:47
What's the whole
3:50
unvarnished story behind
3:52
this week's massive global
3:55
attack on VMware's
3:57
ESXi servers
4:00
and who's really at fault? These
4:03
questions and more will probably
4:05
be answered before you fall
4:06
asleep, but no guarantees.
4:09
No guarantees. Some
4:11
of them rhetorical, I might add.
4:14
Great. I'm excited. It's gonna be a good show. We also have
4:16
a great picture of the week. Fitting in
4:18
with the usual topic of our pictures of
4:21
the
4:21
week.
4:22
But first, a word from
4:25
one of our fine sponsors. And this
4:27
week, I'd love to talk to you a little bit
4:29
about Drata. This
4:31
is an area of security that I was
4:34
not really fully aware
4:36
of, but obviously everybody in enterprise
4:38
is this whole notion of
4:41
auditing and
4:43
compliance. Right? It's one thing to be secure.
4:45
It's another to prove it. And in
4:47
many cases, you've got to prove it.
4:50
you've got to have continuous
4:52
compliance, but many organizations are
4:54
still doing this manually,
4:59
and that can be a big problem as you
5:01
grow and scale. Manual collection
5:04
of evidence for compliance can
5:06
really be a bottleneck. That's
5:09
why you need to know about Drata, a leader
5:12
in cloud compliance software. G
5:14
two says they're the leader.
5:16
They streamline your SOC two,
5:18
your ISO 27001,
5:21
your PCI DSS, your GDPR,
5:23
your HIPAA, and all those other compliance
5:26
frameworks that you've got to be responsive
5:28
about. They give you twenty four hour
5:30
continuous control so
5:33
you could focus on scaling and all the other parts
5:35
of your business and make sure and know
5:38
that your compliance is handled.
5:41
And one of the ways Drata works and
5:43
does so well is because they have more than
5:45
seventy five integrations with the tools
5:47
you're already using. It's already part of your
5:49
tech stack. Like AWS and
5:51
Azure or GitHub or
5:53
Okta or Cloudflare and on and on and
5:55
on seventy five different integrations.
5:58
Countless security professionals from
6:00
companies including Lemonade and
6:02
Notion and Bamboo HR have
6:05
shared how crucial it has been to have Drata
6:08
as a trusted partner in the compliance process.
6:11
Drata is personally backed by SVCI.
6:14
Why does that matter? Well, it's
6:16
actually a testimony to
6:18
how great Drata is. SVCI is
6:21
a syndicate of CISO Angel Investors.
6:23
So people who really know how important this is
6:26
from some of the world's most influential companies,
6:29
they backed Drata because they saw that Drata
6:31
solves a problem that so
6:33
many others haven't. With
6:35
Drata, your company can see
6:37
all of its controls. You can easily
6:39
map to compliance frameworks
6:42
to gain immediate insight into overlap,
6:44
so that saves you money. Right? You can
6:46
start building a solid security posture you
6:48
can achieve and maintain compliance, and
6:51
you can expand your security assurance
6:53
efforts. Drata's automated dynamic
6:56
policy templates, support
6:58
companies new to compliance and
7:00
help alleviate hours of manual
7:02
labor. Their integrated security awareness training
7:05
program and automated reminders
7:07
ensure smooth employee onboarding.
7:10
They're the only player in the industry that builds on
7:12
a private database architecture. That ought to
7:14
be really important to you. It means your
7:16
data can never be accessed by anyone
7:19
outside your organization. It's truly
7:21
private. And Drata
7:23
is with you every step of the way. Every customer
7:25
gets a team of compliance experts, including
7:28
a designated customer support manager.
7:30
But even more importantly, they have a team
7:33
of former auditors who have
7:35
conducted more than five hundred audits
7:38
and are available for you
7:40
to talk to for support, for counsel,
7:43
you can say, am I doing this right? With
7:45
a consistent meeting cadence too, they keep
7:47
you on track to make sure there are no surprises, no
7:50
barriers. And when it's time for that
7:52
audit, their pre audit calls prepare
7:54
you for when those audits begin.
7:56
With Drata's risk management solution,
7:59
you can manage end to end risk assessment
8:01
and treatment workflows. You can flag
8:03
risks. You could score them. You could decide
8:05
whether to accept them, to mitigate them, to
8:07
transfer them, or avoid them. Drata
8:10
maps appropriate controls to risks,
8:12
which simplifies risk management, automates
8:15
the process, too. And Drata's
8:17
trust center provides real time transparency
8:19
into security and compliance postures
8:22
that helps you in sales
8:24
and security reviews and just
8:26
better relationships with customers and
8:29
partners. Say
8:31
goodbye to manual evidence collection
8:33
and say hello to automated compliance
8:36
visit drata dot com slash
8:38
twit. D R A T A,
8:40
drata dot com slash
8:42
twit. Bringing automation to
8:44
compliance at Drata speed.
8:46
Drata dot com
8:49
slash twit. We thank you so much for supporting
8:51
our work here, especially what Steve's up to,
8:54
and we hope you will take a
8:56
look at Drata. And when you do make sure you
8:58
use that address so they know you saw it here,
9:00
drata dot com slash
9:03
twit.
9:04
Picture of the week time, mister
9:07
G. So this one,
9:09
you could spend some time
9:11
visually parsing this picture. It
9:13
really begs many questions.
9:16
So without further ado,
9:18
what we have is a close-up of
9:21
a chain which has been wrapped
9:23
around the opening
9:25
side of a fence, like, you know, to
9:27
keep the fence closed. Now
9:30
and to call this a chain, it
9:32
really doesn't do it justice. A chain is what you
9:35
wear around your neck. This thing looks
9:37
like it could have been the anchor for the Titanic.
9:41
You know, just in terms of the the beefiness
9:43
of this chain. But what's odd
9:46
is that there are
9:49
actually two pieces of chain:
9:51
there's a center three links
9:53
which are actually a little smaller than
9:56
the main chain which
9:58
goes around in order to
10:00
keep this fence closed. And
10:03
for reasons not at all
10:05
clear, we've got, you know, your
10:07
traditional master lock a
10:10
standard, you know, hasp style lock
10:13
that is interlinking the
10:17
chain that goes around
10:20
the opening to this
10:22
little three link subchain,
10:26
and then there's
10:28
a white nylon zip
10:30
tie, which is
10:33
connecting the small chain to,
10:35
you know, this monster chain
10:37
or the smaller chain, they're all big chains.
10:40
And so it's like, oh, so
10:44
now and anyone who's ever, like, tried
10:47
to manually pull one of
10:49
those nylon zip ties
10:50
apart, knows they are really strong.
10:52
In fact, I think aren't police now using them
10:54
as handcuffs. Yeah. Just disposable
10:57
handcuffs. Yeah. Yeah. So you're
10:59
not getting out of this. But
11:01
at the same time, if you had you need
11:03
a knife or something? Some toenail
11:05
clippers. Yeah. Toenail clippers work well. Yeah.
11:08
Nothing. You know, now you're able
11:10
to get in here. So and
11:12
Leo, it's not like you
11:15
couldn't use only
11:17
the big chain with a
11:19
master padlock to bridge across
11:22
--
11:22
Yeah. -- the large chain. Yeah.
11:24
It would work just fine. You don't need
11:25
this little three link
11:27
chain. I don't know what that's there for. Right.
11:30
And no hokey
11:33
white nylon zip tie to
11:35
connect the two chains together. Really?
11:38
I... the more
11:40
pictures of this we see, the less
11:42
faith I have in humanity. And
11:45
I really, you know, I would like
11:47
to get the backstory behind some of these.
11:49
Like, what we had a couple weeks ago
11:51
has been haunting me. That
11:53
piece of fence across the
11:55
sidewalk that had
11:57
a sign on it. Sidewalk closed
12:00
except there was a sidewalk that was
12:02
just fine on the other side. And
12:06
and you could go around it in either direction.
12:08
It's just
12:08
like, what is it? Who? What?
12:11
A couple of people in our
12:13
chat rooms say, well, truthfully, it'd be
12:15
harder. The Master Lock's
12:17
easier to pick than the zip ties. So maybe
12:20
Yeah. Maybe the zip tie is actually not the
12:22
weakest
12:22
link. If you do have any sharp cutting
12:25
tool -- Yeah. -- you cut right through. In your
12:27
purse. Yeah. Then yeah. That's true. Okay.
12:30
So, we are
12:32
back to protecting the
12:34
children. And I'm
12:36
not making light of that at all.
12:38
CSAM, as we know, child sexual
12:41
abuse material and online exploitation
12:43
of children is so distasteful that
12:45
it is difficult to talk about because
12:47
that requires imagining something
12:49
that you'd much rather not. But
12:52
it's that power that
12:55
gives this a bit of a Trojan horse
12:57
ability to slip past our
12:59
defenses, or at least past the
13:01
politicians because there's
13:04
also a very valid worry
13:06
surrounding, you know, this
13:08
whole issue that once we've agreed to
13:11
compromise our privacy for
13:13
the very best of reasons, protecting
13:16
children, our government or
13:19
a foreign government or law enforcement
13:21
might use their then available
13:24
access to our no longer
13:26
truly private communications against
13:28
us. Now,
13:31
nowhere in the EU's pending
13:33
legislation, this pending surveillance
13:35
legislation that I'll get to in a second,
13:39
is there any mention of
13:41
terrorists or terrorism? But
13:44
it's been voiced before and you can
13:46
bet that it will come marching out again. And
13:49
once everyone's communications are
13:51
being screened for seductive
13:53
content that might be considered grooming,
13:56
you know, photos that might be naughty
13:59
and other content that some automated
14:01
bot thinks should be brought to a human's
14:03
attention, then what's next?
14:06
So this is, you know, the very
14:08
definition of a slippery slope.
14:11
Document number 52022PC0209
14:19
is titled Proposal
14:22
for a regulation of the European
14:25
Parliament and of the council laying
14:28
down rules to prevent
14:30
and combat child sexual
14:33
abuse. Okay. First of all,
14:35
it won't prevent it. Right? Nothing
14:38
will. What it will do is
14:40
drive that material to seek other
14:42
channels. And that's not a bad thing,
14:44
and I agree that it would likely combat
14:47
the problem, though, you know,
14:50
okay, to some degree. Right? The
14:52
question is, is this the
14:54
best solution? And what real price are
14:56
we paying to make that possible? And
14:58
of course, what could possibly
15:01
go wrong? So what is
15:03
essentially happening is that
15:05
the EU is taking
15:07
the next step. Over
15:09
and ignoring the loud and
15:11
recently polled objections of
15:13
seventy two percent of European citizens,
15:17
EU legislators are preparing to
15:19
move their current content screening
15:21
Internet communication surveillance, which
15:24
until now has been voluntary. And
15:27
as a consequence, somewhat limited in
15:29
its application, to mandatory
15:32
and therefore universal. Okay.
15:35
So now just to recap a bit about how we
15:37
got to where we are now. Three
15:39
years ago, in twenty twenty, the
15:42
European Commission proposed temporary
15:45
legislation, which allowed
15:47
for automated Internet communications
15:50
surveillance for the purpose
15:52
of screening content for CSAM,
15:55
child sexual abuse material. The
15:58
following summer, on July sixth
16:00
of twenty twenty one, the European
16:02
Parliament adopted the legislation to
16:05
allow for this voluntary screening.
16:07
And as a result of this adoption, which
16:10
they refer to as an ePrivacy
16:13
derogation, in other words,
16:15
creating a deliberate exception to
16:17
privacy for this purpose.
16:21
US based providers, like
16:23
Gmail, Outlook dot com and
16:25
Meta's Facebook began
16:28
voluntarily screening for
16:30
this content on some of their platforms. Notably
16:34
however, only those very few
16:36
providers did anything. The other providers
16:38
of, for example, explicitly secure
16:41
communications, you know,
16:43
Telegram, Signal, they've
16:45
not done anything. And
16:47
so last summer, on May eleventh
16:49
of twenty twenty two, the Commission
16:52
presented a proposal to move this Internet
16:54
surveillance from this
16:57
is no longer gonna be temporary and
16:59
is no longer gonna be voluntary. It
17:01
will become mandatory for
17:04
all service providers. As
17:06
we noted, when this was last discussed, in
17:08
the context of Apple's hastily
17:11
abandoned proposal to provide
17:13
client local image analysis
17:16
by storing the hashes of known
17:18
illegal images on the user's phone,
17:21
the content to be examined includes not
17:24
only images but also textual
17:26
content, which might be considered solicitous
17:29
of minors. You know, that's the
17:31
grooming term. And
17:33
most controversially, all
17:36
of this would impact every
17:38
EU citizen regardless
17:40
of whether there was any preceding suspicion
17:43
of wrongdoing. Everyone's visual
17:47
and textual communications would
17:49
be and apparently will soon
17:52
be surveilled. Interestingly,
17:56
the legality of this surveillance
17:58
in the EU has already been challenged,
18:01
and according to a judgment by
18:03
the European Court of Justice, the
18:05
permanent and general automatic
18:08
analysis of private communications violates
18:12
fundamental rights. Nevertheless,
18:15
the EU now intends to
18:17
adopt such legislation. For
18:20
the court to subsequently annul
18:22
it can take years.
18:25
By which time the mandated systems
18:27
will be established and in place. Currently,
18:31
meetings and hearings are underway. They're
18:34
gonna be going on through the
18:36
rest of the year. A parliamentary vote
18:39
is being held next month, in March,
18:41
followed by various actions being taken
18:43
throughout the rest of the year as required
18:46
to move the, you know, the
18:48
shorter passage of this legislation
18:50
through a large bureaucracy. Why
18:53
sure. After all, how
18:56
does any politician defend
19:00
not wishing to
19:02
protect the children? I
19:04
read a great deal of this proposal and
19:06
it has been clearly written
19:08
to be rigorously defensible
19:11
as a child protection act
19:14
period. So
19:16
how do you stand up and vote against
19:19
that? It shows
19:21
every indication of being adopted.
19:23
With this surveillance set to become mandatory
19:26
in April of next year,
19:28
twenty twenty four. So
19:31
some pieces from this legislation. Quote,
19:34
by introducing an obligation
19:37
for providers to detect, report,
19:40
block, and remove child
19:43
sexual abuse material from
19:45
their services, the proposal
19:47
enables improved detection,
19:49
investigation and prosecution
19:52
of offenses under the child
19:54
sexual abuse directive. Another
19:57
piece. This proposal sets out
19:59
targeted measures that are proportionate
20:02
to the risk of misuse of
20:04
a given service for online sexual
20:07
abuse and are subject to robust
20:09
conditions and safeguards. It
20:12
also seeks to ensure that
20:14
providers can meet their responsibilities by
20:17
establishing a European center
20:20
to prevent and counter child
20:22
sexual abuse, you know, herein
20:25
after referred to as the EU Center.
20:28
To facilitate and support
20:30
implementation of this regulation,
20:33
and thus help remove obstacles
20:36
to the internal market, especially
20:39
in connection with the obligations of
20:41
providers under this regulation
20:44
to detect online
20:46
child sexual abuse, report
20:49
it, and remove child
20:51
sexual abuse material. In
20:53
particular, the EU Center
20:55
will create, maintain, and operate
20:58
databases of indicators of
21:01
online child sexual abuse that providers
21:03
will be required to
21:05
use to comply with
21:08
the detection obligations. Okay.
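To make that indicator-database idea concrete: systems of this kind (Microsoft's PhotoDNA, Apple's abandoned NeuralHash scheme) typically compute a perceptual hash of each image and then look for near matches, within a few bits, in the database of indicators. Here's a minimal Python sketch of the matching step only; the hash values and the distance threshold below are invented for illustration.

```python
# Toy sketch of matching against a database of indicators. The values
# are made up; real systems derive 64-bit (or larger) perceptual hashes
# from image content so that visually similar images hash nearby.

KNOWN_INDICATORS = {0x9F3A6C2E11D48B07, 0x0123456789ABCDEF}  # hypothetical

def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_indicator(image_hash: int, threshold: int = 4) -> bool:
    """Near matches count too: resized or re-encoded copies of an image
    produce hashes only a few bits away from the original's."""
    return any(hamming_distance(image_hash, h) <= threshold
               for h in KNOWN_INDICATORS)

print(matches_indicator(0x9F3A6C2E11D48B07))  # True: exact copy
print(matches_indicator(0x9F3A6C2E11D48B06))  # True: 1 bit off, a re-encode
print(matches_indicator(0x0000000000000000))  # False: unrelated image
```

Note that the fuzziness which makes re-encoded copies detectable is also where false positives come from: an unrelated image whose hash happens to fall within the threshold will be flagged.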
21:12
Why mandatory? They say
21:14
the impact assessment shows
21:17
that voluntary actions alone
21:19
against online child sexual abuse
21:22
have proven insufficient. By
21:25
virtue of their adoption, by a small
21:27
number of providers only, of
21:30
the considerable challenges encountered
21:32
in the context of public
21:35
private cooperation in this field
21:37
as well as of the difficulties faced
21:40
by member states, meaning EU
21:42
member states, in preventing the
21:44
phenomenon and guaranteeing an
21:47
adequate level of assistance to
21:49
victims. This situation
21:51
has led to the adoption of divergent
21:54
sets of measures to fight online child
21:56
sexual abuse in different
21:58
member states. In the absence
22:00
of Union action, legal
22:03
fragmentation can be expected to develop
22:05
further as member states introduce
22:08
additional measures to address
22:10
the problem at national level, creating
22:13
barriers to cross border
22:15
service provision on
22:17
the digital single market. And
22:20
as to why they think this is a good thing,
22:23
quote, these measures would
22:25
significantly reduce the
22:27
violation of victims' rights
22:30
inherent in the circulation of
22:32
material depicting their
22:34
abuse. These obligations
22:36
in particular the requirement to detect
22:39
new child sexual abuse materials
22:41
and grooming would result
22:43
in the identification of new victims
22:46
and create a possibility for their rescue
22:49
from ongoing abuse, leading
22:51
to a significant positive impact
22:53
on their rights and society at large.
22:56
The provision of a clear legal basis
22:59
for the mandatory detection and
23:01
reporting of grooming would
23:04
also positively impact these rights.
23:07
Increased and more effective prevention
23:09
efforts will also reduce the prevalence
23:12
of child sexual abuse, supporting
23:14
the rights of children by preventing them from
23:16
being victimized. Measures to
23:18
support victims in removing their images
23:21
and videos would safeguard
23:23
their rights to protection of
23:25
private and family life, privacy,
23:27
and of personal data. Okay.
23:32
So this is clearly
23:34
something that the EU is
23:36
focused upon and is committed
23:39
to seeing it put into action, to
23:41
be in effect in the spring of next
23:44
year, twenty twenty four. And
23:46
apparently, the EU has a legal system
23:48
much like the one which has evolved or
23:51
devolved here in the US, where
23:53
the court system has been layered
23:55
with so many checks, balances, and
23:58
safeguards against misjudgments, that
24:01
years will then pass while
24:03
challenges make their way through the courts.
24:06
Meanwhile, this is mandatory
24:09
starting in April. Conspicuously
24:12
missing from any of this proposed
24:14
legislation is any apparent
24:16
thought to how exactly
24:20
this will be accomplished from a
24:22
technological standpoint,
24:24
which of course is what interests us. If
24:27
I have an Android phone, whose
24:31
job is it to watch
24:34
and analyze what images
24:37
my camera captures, what
24:39
images my phone receives, what
24:41
textual content I exchange? Is
24:45
it,
24:47
you know, is it the phone hardware provider's job?
24:50
Or is it the underlying Android
24:53
OS's job? Or is
24:55
it the individual messaging application?
24:58
It's difficult to see how Signal
25:00
and Telegram are ever
25:03
going to capitulate to this. And
25:06
is it the possession of the
25:08
content? Or
25:11
the transmission, reception,
25:13
and communication of the content?
25:16
You know, can you record your own
25:19
movies for local use, never
25:21
with any intention to do anything else with
25:23
them? The proposal establishes
25:26
and funds this so called
25:28
EU Center to serve as
25:30
a central clearinghouse for suspected
25:33
illegal content and providing
25:36
in some fashion, the
25:39
samples against which
25:41
material that is seen
25:44
on devices, on consumer devices
25:47
in the EU is checked.
25:50
So when an EU based
25:52
provider somehow detects
25:54
something which may be proscribed,
25:57
the identity and the current location
26:00
of the suspected perpetrator along
26:03
with the content in question will be
26:05
forwarded to the EU Center for
26:07
their analysis and further
26:09
action, if any.
26:12
Wow. So as I've been saying
26:15
for years, this battle over
26:17
the collision of cryptography and
26:19
the state's belief in its
26:21
need for surveillance is
26:23
gonna be a mess, and it's far
26:25
from over. So Leo,
26:29
it moves forward. It makes
26:31
me really think about
26:36
the long term consequences of
26:38
that. And if I were Apple or
26:40
Google or Samsung, well,
26:42
I would be fighting this tooth
26:44
and nail because in the long run,
26:47
they're gonna be forced to
26:49
enforce it, essentially.
26:52
Right? To compromise. They're gonna
26:54
have to do something. Yeah. And
26:57
and if they do, then
27:00
you're gonna see migration away
27:02
from their platforms to nonproprietary open
27:05
platforms. So that people
27:07
don't have to subjugate themselves to this.
27:09
So I think it hurts them badly. First,
27:12
because they're gonna have a battle over how to
27:14
enforce it, Apple's already turned on advanced
27:16
data protection in the US, which
27:19
is so and here's another
27:21
question. And now
27:23
globally. It went global a couple weeks ago. Okay.
27:25
Well, with iOS sixteen point
27:27
three, they'll be
27:28
now universal. They'll be, you
27:30
know, non compliant in the EU.
27:32
And then the other question is,
27:35
they haven't done this yet, but how long before they
27:37
then make it illegal for me to encrypt
27:39
everything? Right? Because
27:42
they're gonna stop the vendors. But
27:44
what if I decide, well, I'm gonna figure out
27:46
a way that I'm gonna do
27:48
what you call PIE, a pre-Internet encryption
27:50
of everything. Have
27:53
I now been found
27:56
guilty because I must
27:58
be hiding something? I
28:00
know. I think it pushes people into
28:03
a position where they do have to now start
28:05
being responsible for their own encryption. They
28:07
only would choose end to end encryption choices.
28:09
It's gonna end end up driving people underground
28:12
in the dark, not just criminals,
28:14
but everybody who wants privacy. I
28:17
think
28:19
the long term implications of this
28:21
are bad all around.
28:23
I know. And so from
28:25
a technology standpoint, we
28:27
have Signal and Telegram. There's
28:30
just no way that Moxie is
28:32
gonna compromise -- Right. -- Signal
28:35
in order to allow the EU, like, and
28:37
be responsible for having
28:39
a connection to the EU Center
28:42
to get a database of
28:44
things, it has to check its users'
28:46
messages. And that's why I'm saying -- Yep. --
28:48
the burden of this is ending up on Apple and Google
28:50
because and Samsung. Because what they'll
28:53
have to do is take them out of the store. They'll
28:55
have to say, well, we can't have signal in the App Store,
28:57
and then they've washed their hands of it.
29:01
But signal will continue to be distributed
29:03
underground. And this
29:05
is what I'm saying is, ultimately, if you care
29:07
about privacy, you're gonna run an open
29:09
platform that you control, that you put
29:11
your own
29:12
software, you're not gonna be relying on an Apple store
29:14
or an Android store. And -- Well,
29:16
it goes a little bit further though because
29:18
the
29:21
Apple could be compelled
29:24
to do the filtering before signal
29:26
gets
29:26
it. Remember that Signal -- You know, no.
29:29
I understand. You can't use an Apple device
29:31
is what I'm saying. The burden will end up being
29:33
on Apple. And Apple
29:35
will if they comply, which they probably
29:37
will have to in the long run, lose
29:39
customers like you and me who
29:41
will say, well, I'm gonna use signal,
29:43
I'm gonna do encryption, and it ain't gonna
29:46
be on a device where I can't. So
29:49
you're exactly right. That's why I'm saying,
29:51
this is who should be fighting this tooth and nail right
29:53
now is Apple and Google because this is
29:55
this is gonna be not only a burden
29:58
on them, it's gonna require
30:00
them to reverse things they've been doing, but also
30:02
it's gonna lose some customers. I
30:04
don't know. Do most people care enough about this that
30:06
they would actually you said seventy two
30:08
percent of the EU is against
30:10
it?
30:10
They're saying, yeah, we do not want this.
30:14
So I think you can't stop encryption. Right?
30:17
You can only stop it on commercial platforms.
30:19
It's already escaped. Yeah.
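Leo's point is easy to demonstrate: encryption is just arithmetic that anyone can run locally before a message ever touches the network. As a toy sketch only (the construction below is a homemade SHA-256 counter-mode stream cipher for illustration; real "pre-Internet encryption" would use a vetted AEAD cipher such as AES-GCM or XChaCha20-Poly1305 from a real library), a few lines of stock Python suffice:

```python
# Toy illustration: encrypt locally before transmission, so a provider
# or network observer only ever sees ciphertext. Demonstration only --
# do not use this homemade cipher for anything real.
import hashlib
import itertools
import os

def keystream(key: bytes, nonce: bytes):
    """Yield pseudorandom bytes: SHA-256 over key, nonce, and a counter."""
    for counter in itertools.count():
        yield from hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)                  # fresh random nonce per message
    ks = keystream(key, nonce)
    return nonce + bytes(b ^ next(ks) for b in plaintext)

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = keystream(key, nonce)
    return bytes(b ^ next(ks) for b in body)

key = os.urandom(32)          # shared out of band; never transits the network
message = b"meet at noon"
wire = encrypt(key, message)  # this opaque blob is all any carrier sees
assert decrypt(key, wire) == message
```

No app store, platform, or carrier participates in that round trip, which is exactly why mandates aimed at providers can't reach it.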
30:22
So they can't stop
30:24
it. They can only tell companies, Internet
30:26
service providers, carriers, cell
30:29
phone manufacturers to do it. So
30:31
then people just say,
30:32
well, I think that just creates a brisk market.
30:35
For now, if you remember, I was all
30:37
geared up to do a product called
30:39
CryptoLink -- Yeah. -- years ago -- didn't want
30:41
it. -- and I saw the handwriting on the wall.
30:43
It wasn't taken. It's much slower
30:45
march, but I didn't wanna be you know,
30:48
in a position where, you know, where
30:50
governments are saying, you you know, we have to have
30:52
a backdoor to your secure communications
30:56
too. About twenty
30:58
years ago, there was a documentary
31:00
which has since been suppressed about hacking.
31:02
In which I gave an interview and I said, really, it's gonna
31:05
be the hackers that'll be the freedom fighters. They're gonna be
31:07
the ones who are gonna be protecting us from governments.
31:10
And corporations who are gonna wanna invade our
31:12
privacy, take over our lives.
31:14
And that open source software
31:17
and hackers people who know how to use it are
31:19
gonna be the heroes. They're
31:21
gonna be the heroes. We're gonna it's gonna
31:23
be up to us to protect ourselves. I don't think we should
31:25
all turn into the Unabomber. But
31:27
but I think we're
31:30
all gonna have to embrace open software
31:32
because they can't stop open
31:34
software. No. That's
31:36
very true. So that would
31:38
mean hacking an Android device
31:40
in order to side load your
31:43
own software. Not
31:45
necessarily. There are already companies
31:47
like Pine that make phones that
31:49
are not Android or iOS that run Linux.
31:53
Okay. So I they're not very good.
31:55
I keep buying them in hopes, and they're
31:57
terrible. But this
31:59
will stimulate their development. And eventually, just
32:02
as you can buy a computer that,
32:04
you know, you don't have to have, you know,
32:06
TPM on a computer. You can buy a computer
32:08
that is not
32:11
you know, locked down. Locked
32:13
down. Yeah. And put open stuff on it and control
32:15
it. And
32:16
that's what's gonna happen, I think. At least
32:18
for people who care. Maybe that's a
32:20
tiny minority. And obviously, that is the
32:22
like, yeah, a diminishing minority.
32:25
I mean, maybe they'll, you know,
32:28
they'll, you you know, once upon a
32:30
time, you know, Uncle Willie was asking
32:32
his nephew who
32:34
was the geek, you know, what was
32:36
the best computer to buy and and what should
32:38
you do? And so maybe it'll be like, hey, I heard
32:40
about, you know -- Yeah. -- governments are spying
32:43
on everybody with their phone, you know,
32:45
junior, what phone should I get?
32:47
And then, you know, junior will know because
32:49
he's in college and he's up on all these stuff.
32:51
There'll be a brisk market in open hardware and software,
32:53
I think. And then
32:55
you the sad thing is then you've completely lost
32:57
control. You know?
33:00
Be huge. Well, you know, there's nothing they could do about
33:02
it. Yes. And and it will be
33:04
as we've already seen, it'll be the bad guys
33:06
that are driven to that platform. Yeah.
33:09
And sadly, mean, there
33:11
is a level of false positives
33:14
that occur with this. There are images which,
33:17
you know, someone who's sitting there, clicking
33:19
buttons, snapping through
31:21
images, or, you know, the
31:24
human reviewer who is
33:27
sitting there saying, whoa, what's that?
33:29
And, you know, go, you know, go
33:31
question this person. I mean, it's gonna
33:33
be horrible if that's
33:34
happening. Yeah. I you know, it's
33:36
always I've always felt like there
33:39
would come a time when this stuff
33:41
was this tech computer technology
33:44
was too powerful and that governments would wanna
33:46
try to control it and shut it down. And that there
33:48
would always be a group of us They're
33:51
called hackers, but there would always be a group of us who
33:53
said, no. No. We're gonna keep it open. We're gonna keep it
33:55
ours. We're gonna keep their prying eyes out.
33:57
Like like Neo, in the matrix.
33:59
Like the matrix, you know. Yeah. That Yep.
34:02
Wow. And they're and they're
34:04
pushing us that way, you know? It's too bad.
34:06
Yeah. Yeah.
34:08
Okay. So thirty thousand patient
34:10
records online. This
34:14
interesting and sobering cyber hacking
34:17
news caught my eye and raised an
34:19
interesting question. Okay. First, I'll share the
34:21
story and then the question that it brought
34:23
to mind. The news was that
34:25
French authorities have detained
34:28
a twenty five year old Finnish
34:30
national who is accused of
34:32
hacking the Vastaamo psychotherapy
34:35
center. For reasons we'll see,
34:38
this hack of Vastaamo is
34:40
considered to be one of the worst in the country's
34:43
history. Okay? Now it occurred
34:45
back in twenty eighteen and twenty nineteen. So
34:47
I guess this kid was, what, twenty years old then.
34:50
When he allegedly
34:54
stole the personal medical records
34:56
of the clinic's patients and attempted to
34:59
extort the clinic. To put
35:01
pressure on the company, the hacker
35:03
leaked extremely sensitive client
35:05
files on the dark web. When
35:08
that failed, he sent emails
35:10
with ransom demands to more
35:12
than thirty thousand of
35:15
the clinic's patients, asking
35:18
them each
35:22
for two hundred euros
35:25
and threatening to publish their medical records
35:28
if they did not pay up. Oh,
35:30
boy. Uh-huh. Finnish
35:32
authorities formally identified the hacker
35:35
in October last year when they issued
35:37
a European arrest warrant for his arrest,
35:39
and they detained him last
35:41
week. Okay. So This
35:43
is brazen and bad. Right?
35:46
The hacker obtained extremely
35:48
sensitive personal medical information
35:51
and chose to use it to both extort
35:54
the clinic and its past
35:56
patients, all thirty thousand
35:59
of them. And it was that number
36:01
of files and patient histories that raised
36:03
my eyebrows: thirty thousand. Okay,
36:06
no matter how large and
36:08
busy this clinic might be,
36:11
they cannot be currently treating
36:14
thirty thousand patients. And
36:16
in fact, you know, there are two sixty
36:19
working days a year, five times fifty
36:21
two. So if the clinic
36:23
averaged ten new patients
36:26
per day, which seems like a high side
36:28
number, thirty thousand patient
36:30
records would be eleven and a half
36:32
years' worth of patient files
36:35
at the rate of ten per day. So
36:38
I'm sure there's some requirement for
36:41
retaining medical files for
36:43
some length of time. You know,
36:46
HIPAA regulations have that here in the US.
36:48
But even so, they certainly
36:50
don't need to be kept in a hot online
36:53
storage. If it was
36:55
burdensomely expensive to
36:57
store all that aging data online,
37:00
then it would not be stored online, because
37:02
it doesn't need to be. It would be spooled
37:05
onto some form of offline cold
37:07
storage. Still indexed and
37:09
available if needed. But offline
37:12
and therefore not available to
37:15
remote online attackers. This
37:18
is one of the things that we're gonna
37:20
need to get much better at handling
37:23
as a society. Excessive
37:25
data retention is a problem,
37:28
and it's exacerbated by the reality
37:30
that storing data costs
37:33
next to nothing. So why
37:35
not store it on the off chance that it might
37:37
be useful for something? You know,
37:39
it doesn't delete itself unless you
37:41
actually create some technology so that
37:43
it does, but no one seems to do that.
37:46
The problem is, even if all
37:48
that old data was of no
37:50
use to the clinic in this instance, it was
37:53
certainly useful to the hacker who
37:55
obtained a far larger
37:57
pile of extortable victims
38:00
as a consequence. So,
38:02
you know, it's unclear how we move
38:04
past this where we are stuck
38:07
now. There needs to be some
38:09
form of incentive for
38:11
inducing deletion or at
38:13
least for the migration of old
38:15
records into offline archival
38:17
storage for varying periods
38:19
of time. And such records
38:22
should be destroyed once their retention
38:24
period has lapsed, you know,
38:27
but "should" was
38:29
the strongest word I could
38:31
find. I dug into
38:33
medical records
36:36
retention legislation and
38:38
requirements. You know, I couldn't
38:40
find any clear requirement under
38:43
HIPAA for mandatory
38:45
deletion. It's not there.
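The archival idea Steve describes, migrating records past a retention window out of hot online storage, can be sketched in a few lines. This is a hypothetical illustration; the seven-year window and the record fields are assumptions, not anything the episode or HIPAA specifies.

```python
from datetime import date, timedelta

# Assumed retention window; actual requirements vary by jurisdiction.
RETENTION_YEARS = 7

def partition_records(records, today):
    """Split records into (keep_online, archive_offline) by age."""
    cutoff = today - timedelta(days=365 * RETENTION_YEARS)
    keep, archive = [], []
    for rec in records:
        # Records last touched before the cutoff move to cold storage.
        (keep if rec["last_visit"] >= cutoff else archive).append(rec)
    return keep, archive

records = [
    {"id": 1, "last_visit": date(2022, 5, 1)},
    {"id": 2, "last_visit": date(2009, 3, 14)},  # ~14 years old: archive
]
keep, archive = partition_records(records, today=date(2023, 2, 7))
print([r["id"] for r in keep], [r["id"] for r in archive])  # [1] [2]
```

The point of the sketch is that the policy is trivial to automate; what's missing, as Steve says, is any incentive to run it.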
38:48
So if an organization acts irresponsibly,
38:51
it's not clear whether they would be
38:54
in any legal jeopardy at least
38:56
in the US. You
38:58
know? God help you if
39:00
you're in the EU. But still,
39:03
clearly, you know, and we've
39:05
talked about data retention before. It is
39:08
a real problem. Dot
39:13
dev, it
39:16
turns out, to my surprise,
39:19
is always https.
39:23
I I know. Mhmm. I encountered
39:25
something the other day that
39:28
that I didn't realize happened.
39:30
I was over at Hover registering spinrite
39:34
dot dev because I thought it
39:36
might come in handy since I'm planning
39:38
to be spending the rest of my active coding
39:40
life on what promises to be a very
39:42
exciting and worthwhile project. So
39:46
as I was checking out, I was
39:48
presented with a pop up confirmation
39:51
notice the likes of which I had never
39:53
seen. It read, And
39:55
it was number three of things I had to
39:57
check off. It said TLD
39:59
info for
40:01
dot dev. And of course, TLD stands for
40:03
top level domain. And
40:05
it says registration of
40:08
dot dev domains is
40:10
open to anyone. You
40:12
should be aware that Dot
40:14
dev is an encrypted
40:17
by default, TLD.
40:20
By virtue of being inscribed
40:23
in the HSTS preload
40:26
list found in all
40:28
modern web browsers. Websites
40:32
hosted on dot dev will
40:34
not load unless they
40:36
are served over https.
40:39
Wow. I.e., have a valid
40:42
SSL certificate installed. And
40:44
I had to check "I have read and
40:47
understood the requirements for dot
40:49
dev domains" in order to
40:51
proceed with the purchase. Isn't
40:53
that cool? Yeah. So
40:56
star dot dev is permanently
40:59
preloaded into the HTTP
41:02
Strict transport security, that's
41:05
HSTS list
41:07
for all modern web browsers. Okay.
41:11
Now before I go any further, let me quickly
41:13
review HSTS. As
41:15
I just said, it stands for HTTP
43:18
Strict Transport Security. HSTS
41:22
is an HTTP response
41:25
header, which web servers
41:27
can send to browsers telling
41:29
them to treat the site with
41:32
strict transport security.
41:35
This means to only
41:37
use secure HTTPS
41:40
TLS connections no
41:42
matter what. If the browser
41:44
receives a non secured
41:47
HTTP link, the
41:50
HSTS status instructs
41:53
the browser to automatically upgrade
41:56
it, without asking anybody,
41:58
to https. The
42:01
header specifies a max
42:03
age which tells the browser
42:06
how long this security upgrade
42:08
directive is to remain in effect.
42:11
It's also possible to add an
42:13
includeSubDomains parameter
42:16
so that everything below that root
42:18
domain will also be covered.
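The header mechanics just described, a max-age plus the optional includeSubDomains token, are simple enough to parse by hand. A minimal sketch in Python, not tied to any particular browser's implementation:

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value into a dict."""
    result = {"max_age": None, "include_subdomains": False}
    for part in header.split(";"):
        part = part.strip().lower()
        if part.startswith("max-age="):
            # Seconds the upgrade directive remains in effect.
            result["max_age"] = int(part.split("=", 1)[1])
        elif part == "includesubdomains":
            # Everything below the root domain is covered too.
            result["include_subdomains"] = True
    return result

# A typical header value, as an HSTS site might send it:
print(parse_hsts("max-age=31536000; includeSubDomains"))
# {'max_age': 31536000, 'include_subdomains': True}
```

Note that `max-age=0`, as mentioned later in the discussion, parses to zero and immediately expires the cached directive.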
42:22
The first time a site is
42:24
accessed using HTTPS
42:27
and the site returns the strict transport
42:29
security header, the browser records
42:32
and caches this information so
42:34
that all future attempts to load
42:37
that site using HTTP will
42:40
automatically be promoted to using
42:42
https instead. When
42:44
the expiration time specified
42:47
by the strict transport security header
42:49
elapses, the next attempt
42:52
to load the site via HTTP
42:54
will proceed
42:56
as normal instead of automatically
42:58
using HTTPS. Whenever
43:01
the strict transport security header is delivered
43:03
to the browser, however, it will
43:05
update the expiration time
43:08
for that site. Essentially, you know,
43:10
continually pushing it forward. So
43:13
sites can refresh this information and
43:15
prevent the time out from expiring. Should
43:18
it be necessary for some reason
43:20
to disable strict transport
43:22
security, setting the max
43:25
age in that header to
43:27
zero over an HTTPS
43:30
connection, of course, will immediately
43:32
expire the strict transport
43:35
security header, allowing access
43:37
then via HTTP. But
43:41
All this cleverness still
43:44
leaves us with one problem. What
43:46
about the very first time
43:49
a browser visits a site? If
43:51
that visit were initiated,
43:54
for example, by following an
43:56
HTTP link maybe from a
43:58
malicious email The
44:00
initial connection will be insecure:
44:03
in plain text, unauthenticated, and
44:06
susceptible to interception and on
44:09
the fly modification of the traffic.
44:12
Even if the web server is
44:14
sending out HSTS headers,
44:17
they could be stripped from the insecure
44:19
connection so that the browser never receives
44:22
them. The solution
44:24
to this problem, this first
44:26
contact problem, is
44:29
the HSTS preload
44:31
list. All modern
44:33
browsers carry a large list
44:35
of web domains, which have previously
44:38
proven to be HSTS capable
44:42
by offering https TLS
44:46
connections. Redirecting
44:49
any HTTP request over
44:51
to https and
44:53
sending an HSTS response
44:56
header with an expiration time of at
44:58
least a year. Those are the requirements in
45:01
order to qualify for inclusion
45:03
in the browser's master list.
45:06
If all of those criteria are met,
45:08
The domain qualifies for
45:10
permanent HSTS registration.
45:14
At that point, the HSTS preload
45:17
site, you can go to HSTS
45:20
preload dot org, can
45:22
be used to submit a domain
45:25
for inclusion in the
45:27
global browser HSTS
45:29
preload list. GRC dot com has
45:31
been on that list since the
45:34
earliest days, when we first discussed
45:36
this on the podcast many years ago.
45:39
And once on that list, any
45:41
attempt to ever connect
45:44
to port eighty will
45:46
be redirected by the browser.
45:48
It'll just ignore that and
45:51
go to port 443 for the
45:53
establishment of a TLS connection,
45:55
period. Okay.
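The browser-side decision Steve just walked through, check the static preload list (including whole-TLD wildcard entries like star dot dev), then any cached header-derived entry that hasn't expired, can be sketched like this. The host names and entries here are illustrative assumptions, not the real preload data:

```python
import time
from urllib.parse import urlsplit, urlunsplit

# Static preload entries: exact hosts plus whole TLDs like "*.dev".
PRELOAD = {"grc.com", "*.dev"}

# Dynamic entries learned from Strict-Transport-Security headers:
# host -> expiry timestamp (now + max-age).
dynamic = {"example.org": time.time() + 31536000}

def should_upgrade(host, now=None):
    now = now or time.time()
    if host in PRELOAD:
        return True
    tld = host.rsplit(".", 1)[-1]
    if f"*.{tld}" in PRELOAD:  # whole-TLD rule, e.g. anything under .dev
        return True
    expiry = dynamic.get(host)
    return expiry is not None and expiry > now

def rewrite(url):
    """Promote an http:// URL to https:// if HSTS applies."""
    parts = urlsplit(url)
    if parts.scheme == "http" and should_upgrade(parts.hostname):
        return urlunsplit(("https",) + parts[1:])
    return url

print(rewrite("http://spinrite.dev/"))    # https://spinrite.dev/
print(rewrite("http://legacy-site.net/")) # unchanged
```

The wildcard check is exactly why a single `*.dev` entry covers every registrable name under that TLD.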
45:58
So with that bit of a refresher, just
46:00
imagine the number
46:03
of domains, the dot
46:05
coms, of which GRC
46:07
dot com is one, how many more
46:10
there must be on the list with
46:13
those common top level domains
46:16
dot com, you know, and the others. As
46:18
I said, GRC dot com has always
46:20
been there. But
46:22
so there
46:25
must be an incredible number
46:27
of other domains. What's so super
46:30
cool about the idea
46:33
that the dot dev top
46:35
level domain is
46:38
by universal agreement all
46:41
HTTPS, is
46:43
that it avoids any
46:46
need for sub domains
46:48
of dot dev being on the list.
46:50
Instead of needing to have a list that enumerates
46:53
all of those domains, like, for example,
46:55
spinrite dot dev, there's only
46:58
one entry on the list
47:00
star dot dev. Down
47:03
at the bottom of that HSTS preload
47:06
page, It talks about this.
47:08
It says under
47:10
the heading TLD preloading,
47:13
they they say, owners of
47:15
gTLDs, you know, generic
47:18
top level domains, ccTLDs,
47:21
or any other public suffix
47:23
domains are welcome
47:26
to preload HSTS across
47:29
all their registrable domains.
47:32
This ensures robust security
47:35
for the whole TLD. And
47:38
is much simpler than preloading
47:40
each individual domain. They
47:42
finish: please contact us if
47:44
you're interested or would like to learn more.
47:47
So not only is this much simpler,
47:49
but it is vastly more
47:52
efficient. Since pretty much
47:54
everything, you know, needs to be
47:56
https these days anyway. It's
47:59
such a cool idea when
48:01
a new TLD is created, to
48:04
simply declare the entire thing
48:06
as HTTPS
48:09
only, and place that
48:12
single entry, star dot whatever,
48:15
into the global browser preload
48:17
list. So much better
48:19
than having every subdomain need
48:22
to do that individually. And
48:25
everybody's protected even if they don't
48:27
do the whole HSTS
48:30
header routine. Okay. So I thought, what
48:33
else might be on the list? I
48:35
posed that question to the gang, who
48:38
hangs out in GRC's Security Now
48:40
News Group, noting that it would be possible
48:43
to pull the current list from the
48:45
open source Chromium repo and
48:48
run a regular expression on it
48:50
to extract only top level domains.
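The extraction Steve describes is straightforward, because the Chromium preload data is JSON with one entry per name, and a TLD entry is simply a name with no dot in it. A small sketch against sample data in that shape (a few hand-written entries, not the real, enormous file):

```python
import json

# A few entries in the shape used by Chromium's
# transport_security_state_static.json (sample data only):
sample = json.loads("""
{"entries": [
  {"name": "grc.com", "mode": "force-https", "include_subdomains": true},
  {"name": "dev",     "mode": "force-https", "include_subdomains": true},
  {"name": "app",     "mode": "force-https", "include_subdomains": true},
  {"name": "www.example.net", "mode": "force-https", "include_subdomains": false}
]}
""")

# A top-level-domain entry is a name containing no dot at all.
tlds = sorted(
    e["name"] for e in sample["entries"]
    if "." not in e["name"] and e.get("mode") == "force-https"
)
print(tlds)  # ['app', 'dev']
```

Run against the full Chromium file, the same filter is what produced the forty-TLD list read out below.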
48:53
One of our very active contributors, Colby
48:56
Boomer, who actually, he's the one
48:59
who got me into Gitlab and has been helping
49:01
incredibly to keep our GitLab
49:03
instance organized during all the SpinRite
49:06
work. He stepped up, grabbed,
49:08
parsed, and filtered the current
49:11
Chromium HSTS
49:14
file. And sure enough, the dot
49:16
dev domain has a great deal of
49:18
company. There are presently forty
49:22
(four zero) top
49:24
level domains in the global
49:26
browser HSTS list,
49:29
meaning that any subdomain of
49:31
any of those top level
49:33
domains will only be accessible
49:36
by web browsers using authenticated
49:39
and encrypted TLS connections. Okay.
49:41
In alphabetical order, they are.
49:44
Android. And so
49:47
in every case, this is, you know, something
49:49
dot android. Right? App,
49:52
azure, bank, bing,
49:56
boo, channel, chrome,
49:59
dad, day, dev,
50:02
eat, esq as in esquire,
50:05
fly,
50:08
foo, gle, G-L-E --
50:10
Ooh,
50:11
I'm gonna register -- steve
50:13
dot boo. Some
50:15
people register steve dot boo, I guess.
50:18
Bet it's taken. Gmail,
50:21
google, hangout, hotmail,
50:25
ing, insurance, meet,
50:29
meme, microsoft, mov,
50:32
M-O-V, new, nexus,
50:36
office, page, phd,
50:39
play, prof, P-R-O-F,
50:43
rsvp, search,
50:46
skype, windows, xbox,
50:49
youtube, and zip. Okay.
50:53
So zip, dot
50:55
zip, is there. Along with thirty
50:57
nine others. We see that Google and
51:00
Microsoft who each own several
51:02
of their own TLDs have placed
51:04
them on that list. And why not?
51:07
You know, as desirable as it would
51:09
be to be able to place dot com
51:11
dot org dot net dot edu dot
51:13
gov, you know, the original bunch
51:16
onto this list or or really just
51:18
to abandon HTTP for
51:21
user client web browsing altogether.
51:23
I don't see how we're ever gonna get there from
51:25
here. You know, doing so would immediately
51:27
make any HTTP only
51:30
sites inaccessible, and that's
51:32
not something I can ever see happening in our lifetimes.
51:35
But what I think must be happening
51:37
because, you know, come on. And
51:40
gle. It's just so new.
51:41
And dad,
51:43
exactly. Yeah. And that's the point.
51:45
Any new registration of
51:48
a TLD is probably
51:51
automatically saying, put us
51:53
on the global
51:55
HSTS list for the entire
51:58
TLD. Why not? That
52:00
way, you're just saying to anybody
52:02
who wants to set up a web server, great.
52:05
Love to have you. Happy to take your
52:07
fourteen ninety five per year
52:10
to maintain registration for you.
52:12
Oh, by the way, you're gonna
52:14
have to get a certificate. But of course,
52:16
that's free now too, with
52:18
Let's Encrypt and the ACME protocol,
52:21
or even, I think, DigiCert is now doing
52:23
the same thing. So, you know,
52:26
it's no longer the case that that's a problem. So
52:29
Yeah. Let's make it
52:30
mandatory. Anyway, I just never knew
52:32
that. I thought that was very cool. And
52:35
Leo, We're next gonna talk about
52:37
the changes Chrome is making in
52:39
their release schedule, but I
52:41
need to take a sip of a drink first. Indeed,
52:44
you do. And This would be an excellent
52:46
time for me to talk about the thirteen email
52:50
threat types that are lurking
52:52
around every corner Our sponsor
52:55
for this section of security now is
52:57
Barracuda in a recent
53:00
email trends survey forty three
53:02
percent of the respondents said
53:04
they had been victims
53:06
of a spear phishing attack. Even
53:09
more scary, only twenty three percent
53:11
said they have dedicated spear phishing protection.
53:14
If you don't have it, that means you're relying
53:16
on your employees to be smart enough,
53:19
to see it, recognize it, and ignore
53:21
it. Maybe
53:24
you wanna think about protecting yourself better
53:26
than
53:26
that? How are you keeping
53:28
your emails secure? Barracuda has identified
53:30
thirteen different types of email
53:33
threats. And how cyber criminals
53:35
use them every single day? Phishing, of
53:37
course, conversation hacking.
53:40
Yeah. Ransomware,
53:42
ten more tricks to steal
53:45
money from your company, personal information
53:47
from your employees, your customers,
53:49
your patients? Are
53:51
you protected against all thirteen types
53:54
every day? Email, cybercrime, is
53:56
becoming more sophisticated, and those attacks
53:58
are more difficult to prevent. Because
54:01
these emails are, you know, they they use social
54:03
engineering, they use you know,
54:05
strong emotions like urgency and fear
54:07
to prey on victims, your
54:09
employees. Social
54:12
engineering attacks, including spear phishing
54:14
and business email compromised cost businesses
54:16
a lot, on average about a hundred thirty thousand
54:19
dollars per incident. And
54:21
you know, it's always tied or often
54:23
tied to something your employees
54:26
are kind of already thinking about.
54:29
At the beginning of last year, when the demand
54:31
for COVID-nineteen tests ramped up,
54:33
barracuda researchers saw a massive
54:36
increase, a five hundred twenty one percent increase,
54:38
in COVID-nineteen test related
54:40
phishing attacks. Because
54:42
they know, you know, employees got in their mind and maybe
54:44
there's some anxiety about that. It's much
54:47
more likely they're gonna click without
54:49
thinking and that is the end of the line
54:51
for your business. Crypto
54:53
currency, when, you know, that's a
54:55
constant topic. When the price
54:57
of Bitcoin increased, you know, what was it? Four hundred
54:59
percent between October twenty twenty
55:01
and April twenty twenty one, impersonation
55:04
attacks taking, you know,
55:06
taking advantage of that increased
55:09
a hundred ninety two percent in that period.
55:11
In twenty twenty, the Internet crime complaint
55:14
center IC three received nineteen thousand
55:16
three hundred and sixty nine business email
55:18
compromise or email account compromise
55:20
complaints adjusted losses over
55:23
one point eight billion dollars. That's
55:26
enough stats. Let's let's talk
55:28
about what you are gonna do to protect yourself
55:30
against it. You might say, well, I am, we secure
55:32
email at the gateway, right? Sure, that's fine
55:34
for ransomware or spam. Maybe,
55:38
you know, inbound viruses, it's not
55:40
gonna work against targeted attacks,
55:42
spear phishing attacks, you
55:45
know, emails that seem to come from
55:47
company management to named
55:49
employees. You know,
55:52
for that, you need protection at the
55:54
inbox level, and that's hard. That's
55:58
really hard to do, and it's gonna need AI
56:00
and machine learning to adjust as attacks
56:03
differ. You
56:05
know, these threats are very sophisticated and
56:08
it costs the bad guys nothing
56:10
to try new approaches, so they evolve
56:12
very quickly. Here's probably
56:15
the first step for you. Get a free
56:17
copy of the Barracuda report. It's called thirteen
56:19
email threat types to know about right now.
56:22
They update it constantly. As you know, they're
56:24
always out there looking and finding
56:26
these new threats. So they're they're very
56:28
aware of what's going on right now.
56:31
In this report, you'll see how the cyber criminals
56:33
are getting more and more sophisticated every day and
56:35
what you can do to build the best protection for
56:37
your business, your data, your customers, your
56:40
people. With Barracuda.
56:42
Find out about the thirteen email threat
56:44
types you need know about
56:46
and how Barracuda can provide complete
56:48
email protection at the inbox
56:51
level for your teams, your customers,
56:53
and your reputation. Get your free
56:55
ebook, Barracuda, dot com
56:57
slash security now, BARRACUDA,
57:01
barracuda dot com
57:03
slash security now,
57:05
barracuda. Your journey
57:07
secured. Remember secured
57:10
and use that address so they know you saw it
57:11
here. Barracuda dot com slash
57:14
security now.
57:16
Steve? So we
57:18
were just talking about the idea
57:20
of staged releases of software
57:22
updates to minimize the fallout from
57:25
previously undetected problems.
57:28
As a matter of fact, given the number of
57:30
wacky problems I've been encountering with
57:32
SpinRite. As our early prerelease
57:35
testers find ever more bizarre machines
57:37
to torture it with, I've decided
57:40
that the only sane thing for me to do
57:42
will be to inform everyone here
57:45
who's following this podcast, when
57:47
and where it's available in final
57:49
beta, and then in final release.
57:52
Anxious as I am to inform SpinRite's
57:55
entire broader user community of
57:57
what has grown to become a
57:59
major free upgrade, I'm
58:02
gonna wait a while to see
58:04
how, you know, how a more local
58:07
release goes before a larger release. Smart. Yeah.
58:09
Yeah. Yeah. Especially because these are
58:11
these are the more sophisticated listeners that
58:13
-- Yeah. -- they're gonna be the great people
58:16
to try it out with and let you know.
58:17
Yes. And and I can say go
58:19
to the forum and they'll be able to get online
58:21
and communicate and so forth. So yeah.
58:24
And there, you know, people who waited eighteen
58:26
years they can wait another month or two.
58:28
So yeah. And
58:30
apparently, Google has decided to
58:32
do the same with Chrome. Back
58:35
a few days before Christmas, they
58:38
posted the news. Change
58:40
in release schedule from
58:42
Chrome one ten. With
58:45
a subhead from Chrome one
58:47
ten, an early stable
58:49
version will be released to a small
58:51
percentage of users.
58:54
And of course, as I just
58:56
said, I can relate to that. Chrome
58:59
is just about at one
59:01
ten. Yesterday, the Chrome
59:03
beta channel was updated to one
59:05
ten. There are four channels
59:07
which stage the progressive rollout
59:10
of each new major release. The most
59:12
bleeding edge is the canary channel
59:15
followed by the dev channel, then the beta
59:17
channel, and then finally
59:19
the main release channel. So one
59:22
ten, where they're gonna start, you
59:24
know, staggering, staging the
59:26
release, just went into beta
59:28
yesterday. Its next
59:30
move then will be to release.
59:33
And that's where the timing will be changing a
59:35
bit. What Google is now explaining
59:37
is that one ten will be appearing
59:39
more slowly in the release channel
59:42
than before. They wrote quote,
59:45
we are making a change to the release
59:47
schedule for Chrome. From
59:50
Chrome one ten, the initial
59:52
release date to stable will
59:54
be one week earlier. This
59:56
early stable version will be released
59:59
to a small percentage of users.
1:00:01
With the majority of people getting the
1:00:03
release a week later, at
1:00:05
the normal scheduled date. This
1:00:08
will also be the date the new version is
1:00:10
available from the Chrome download page.
1:00:13
By releasing stable to
1:00:15
a small percentage of early users,
1:00:18
we get a chance to monitor the release
1:00:20
before it rolls out to
1:00:22
all of our users. If any show
1:00:24
stopping issue is discovered, it
1:00:26
could be addressed while the impact is relatively
1:00:29
small. So, again,
1:00:31
if you think about the number
1:00:33
of Chrome users there are, it's
1:00:35
just an unimaginable number. So,
1:00:38
yeah, I I think that makes absolute
1:00:40
sense not to have everybody having the same
1:00:42
problem all at once in the world. We've
1:00:47
been tracking the gradual increase
1:00:50
in accountability for cyber
1:00:52
intrusions and data breaches, with,
1:00:55
more recently, IT employees even
1:00:58
increasingly being held accountable. In
1:01:00
another bit of just surfaced news,
1:01:03
we learned that Russia is moving
1:01:05
forward with its own legislation to impose
1:01:07
major fines and even prison
1:01:09
sentences for IT administrators
1:01:12
and their managers. Following
1:01:14
major data breaches. Yes.
1:01:18
You know, nothing
1:01:20
encourages the quick and
1:01:23
full public disclosure of
1:01:25
data breaches more than the prospect
1:01:27
of some prison time at the other
1:01:29
end. Now, the idea first surfaced
1:01:31
last May in Russia, and
1:01:34
once this legislation is passed, the
1:01:36
Russian government will be able to fine individuals
1:01:39
anywhere from three
1:01:41
hundred thousand to
1:01:44
two million rubles
1:01:46
Of course, three hundred thousand rubles won't
1:01:48
buy you very much. Maybe
1:01:51
a Russian car. That's
1:01:53
forty two hundred dollars equivalent. Up
1:01:55
to two million rubles, which is twenty
1:01:58
eight thousand. Or
1:02:00
and or imprison them for up to ten
1:02:02
years, if their companies get
1:02:04
hacked and user data is stolen.
1:02:07
Now, okay, that's that's
1:02:09
brutal. I I'm all for accountability.
1:02:13
But this could well devolve into
1:02:16
shooting the messenger rather than the
1:02:18
source of the message. You know?
1:02:20
Sure. There could be misconfiguration that
1:02:23
IT should have known better
1:02:25
and done more
1:02:27
to secure. But there were also
1:02:30
plenty of zero day vulnerabilities that
1:02:32
no one should be held to account
1:02:34
for, you know, more than
1:02:37
the original source of the vulnerability, which
1:02:39
is where the zero day came from in the first
1:02:41
place. I'm not gonna dwell upon this further
1:02:44
now because this week's primary topic
1:02:46
winds up posing some serious
1:02:48
questions about accountability. In
1:02:51
this case, the VMware issue.
1:02:54
But this additional news demonstrates that
1:02:57
we're continuing to see and
1:02:59
not surprisingly mounting
1:03:01
pressure to hold someone
1:03:04
accountable for cyber security
1:03:06
incidents. And this
1:03:09
isn't over by a long shot. I
1:03:12
had to shake my head at this little
1:03:14
piece. There's a new scam
1:03:17
that's growing in popularity in the cyber
1:03:19
underground where there
1:03:21
are templates for carrying it out.
1:03:24
Generically, they're known as CryptoDrainers.
1:03:29
They're custom phishing pages that
1:03:31
entice victims into
1:03:33
connecting their Crypto
1:03:35
Wallet with an offer
1:03:38
to mint NFTs on
1:03:40
their behalf. And
1:03:43
of course, this is where we all collectively chant
1:03:45
in unison. What could
1:03:47
possibly go wrong?
1:03:50
To no one's surprise. Other
1:03:52
than the hapless victims, as soon
1:03:54
as victims attempt to mint NFTs,
1:03:57
The CryptoDrainer page siphons
1:04:01
both the user's cryptocurrency
1:04:03
and the desired NFT into
1:04:06
an attacker's wallet. The
1:04:08
name was kind of a giveaway, the CryptoDrainer.
1:04:11
CryptoDrainer. Yeah. I wanna I
1:04:13
wanna sign up for the CryptoDrainer page.
1:04:16
Yeah. What could possibly
1:04:18
go wrong? What could possibly
1:04:20
go wrong? According to Recorded Future,
1:04:23
there are several crypto drainer templates
1:04:26
Currently being advertised on underground
1:04:28
cybercrime forums, and they're growing
1:04:31
in popularity. Of course. Okay.
1:04:33
Now, apparently, it's the
1:04:35
it's the bible's proverbs twenty
1:04:38
one twenty, which is the original
1:04:40
source of the expression
1:04:42
"a fool and his money are
1:04:44
soon parted." Now, Steve, I
1:04:46
didn't know you were so up on the Bible. Oh,
1:04:48
honey. I'll tell you there's nothing you can't
1:04:51
There's nothing you can't find on Google.
1:04:53
Oh, yeah. Good. I didn't
1:04:55
even ask ChatGPT. That
1:04:58
now that proverb, however, speaks
1:05:00
of wealth being capriciously spent.
1:05:03
In this case, of course, the outcome
1:05:05
is the same. And you've really got
1:05:07
to wonder. That there are
1:05:09
people willing to connect their
1:05:11
wallets to some random
1:05:13
page on the Internet, which states,
1:05:16
you know, we'll mint NFTs
1:05:19
for you and auto deposit
1:05:21
your profits into your wallet. Because
1:05:24
--
1:05:24
-- you know, you could trust us
1:05:26
and our broken English.
1:05:28
Mhmm. Oh, god.
1:05:31
Okay. Unfortunately, And
1:05:33
then Leo, remember, Proverbs twenty
1:05:35
one twenty.
1:05:36
Twenty one twenty. Oh, keep that in mind.
1:05:38
Yes. The Taiwanese NAS,
1:05:41
network attached storage, vendor QNAP,
1:05:44
is back in the news and,
1:05:47
you know, with with them, The
1:05:49
news is never pretty. This time,
1:05:51
QNAP has recently patched a
1:05:53
SQL injection vulnerability tracked
1:05:55
as CVE twenty twenty two
1:05:58
twenty seven five ninety six. That's
1:06:00
the end of the good news of
1:06:02
this story. A week later, Sensus,
1:06:06
CENSYS, that's that
1:06:08
newer IoT search engine
1:06:10
group. Sensus says
1:06:13
that roughly ninety eight
1:06:15
percent of the thirty
1:06:18
thousand QNAP NAS
1:06:20
devices it currently
1:06:22
tracks. Remain unpatched.
1:06:25
What? Yes. Oh.
1:06:29
No. No. Nobody patches their QNAP
1:06:31
NASes. It's just sitting in a closet. Yeah.
1:06:34
Exactly. That's ninety oh,
1:06:37
ninety eight percent. Of thirty thousand
1:06:39
are unpatched. And turns
1:06:41
out, because it's trivial
1:06:44
to exploit and the exploitation
1:06:46
process does not require any authentication.
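The episode doesn't detail QNAP's specific flaw, but the class of bug, unauthenticated SQL injection, is easy to illustrate. A hypothetical sketch using an in-memory SQLite table, showing a vulnerable string-built query versus a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable pattern: string-building the query lets the attacker's
# quote break out of the literal and match every row.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % attacker_input
).fetchall()
print(rows)  # every user leaks: [('alice',), ('bob',)]

# Safe pattern: a parameterized query treats the input as pure data.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # []
```

This is only a generic illustration of the bug class; the real CVE's injection point and payload were not described in the episode.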
1:06:49
Sensus expects the vulnerability to
1:06:51
be quickly abused by ransomware
1:06:53
gangs. As has happened many
1:06:56
times previously, like all the
1:06:58
many times we've talked about this before.
1:07:00
And the number of vulnerable devices
1:07:03
could possibly be much higher since
1:07:05
census said that there are
1:07:07
another thirty thousand sorry,
1:07:09
thirty seven thousand QNAP
1:07:12
systems online for which
1:07:14
it could not obtain a version number,
1:07:16
but which are also likely vulnerable as
1:07:19
well. So maybe
1:07:21
ninety eight percent of sixty seven
1:07:24
thousand QNAP
1:07:26
devices. Okay.
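Since the QNAP bug here is a SQL injection, a quick generic illustration may help. This is a sketch in Python with sqlite3, not QNAP's actual code; the table and the malicious input are made up for the demonstration:

```python
import sqlite3

# Generic SQL injection illustration (hypothetical schema, not QNAP's code).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "nobody' OR '1'='1"

# UNSAFE: attacker input is spliced into the SQL text, so the
# trailing OR '1'='1' clause matches every row in the table.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'").fetchall()

# SAFE: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(unsafe)  # [('alice',)] -- the injection matched everything
print(safe)    # [] -- no user is literally named that
```

The fix is always the same: never build SQL by string concatenation; bind values as parameters.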
1:07:28
And speaking of NASes, I
1:07:31
just wanted to give a shout out
1:07:33
to Synology. I
1:07:36
own one and I've just ordered
1:07:38
another. They're back ordered right now
1:07:41
and I'm not surprised because... Damn.
1:07:44
They are amazing. I'm so glad you say
1:07:46
that. Yeah. I am so impressed.
1:07:48
Yep. I had been running a pair
1:07:51
of colocated Drobos which
1:07:53
were running just fine. But the oldest
1:07:55
one of the pair, which is now more than
1:07:57
ten years old, started acting a
1:07:59
little flaky, and it finally went
1:08:02
belly up. Since the company,
1:08:04
Drobo's, future is a bit uncertain,
1:08:07
I decided to switch to Synology, which
1:08:10
I kept hearing about, and oh my
1:08:12
god, what a fabulous experience.
1:08:15
What I got are the DS418s.
1:08:17
It only has
1:08:19
four bays as opposed to the Drobo's
1:08:22
five. But my storage needs are
1:08:24
not excessive, and the management experience
1:08:27
is so good. Since
1:08:30
I have two work locations, I plan
1:08:32
to use their integrated Synology
1:08:34
synchronization system to
1:08:36
have the two boxes mirror
1:08:39
each other, and then I'll be keeping
1:08:41
my local work synchronized locally.
1:08:44
Anyway, I just wanted to say, for what it's worth,
1:08:46
just one user's experience of Synology,
1:08:49
it's been one hundred percent positive.
1:08:51
And I you know,
1:08:54
these guys,
1:08:55
they they should have the market because
1:08:57
they've done it right. And I know you feel the same.
1:08:59
Oh, yeah. I have three of them. I love them. Yeah.
1:09:04
To no one's surprise, after
1:09:06
the vulnerability intelligence company,
1:09:09
VulnCheck, analyzed
1:09:12
more than twenty five thousand entries
1:09:14
from the NIST vulnerability database
1:09:18
that contain CVSS ratings
1:09:21
from both NIST and the
1:09:23
product vendor. VulnCheck
1:09:26
discovered that more
1:09:28
than half of those analyzed, fourteen
1:09:31
thousand of the twenty five thousand
1:09:33
vulnerabilities had
1:09:36
conflicting scores where
1:09:39
the vendors and NIST had
1:09:41
assigned different ratings for
1:09:44
the vulnerability severity. Imagine
1:09:46
that. VulnCheck says
1:09:49
that despite the large number
1:09:51
of entries, most of
1:09:53
these came from thirty nine
1:09:55
vendors whom they did not name,
1:09:58
suggesting that some companies
1:10:01
are intentionally downgrading
1:10:04
the severity of their own vulnerabilities.
1:10:07
And the trouble with this is, you know, not
1:10:09
just public relations, which, of course,
1:10:11
is why they're trying to, you know, that's
1:10:14
what's driving them to to falsely
1:10:17
claim things are less serious than they
1:10:19
are. At the high level,
1:10:21
the vulnerability ratings are
1:10:24
being used to set patching priorities.
1:10:27
You know, if you can't patch everything,
1:10:29
patch the bad things. So it's
1:10:31
natural to patch the most important
1:10:33
problems first. You know,
1:10:36
and so intentional vulnerability downgrading
1:10:39
messes with the ability to do any
1:10:41
of that correctly. And now we have some numbers.
1:10:43
Fifty eight percent of
1:10:46
the twenty five thousand where there
1:10:49
are private listings
1:10:51
and public listings, you know, like official
1:10:53
listings, the thirty nine
1:10:55
companies who are doing this are saying, we
1:10:58
don't think it's as bad as everybody else does. Okay. As
1:11:04
a consequence, I think of the fact that
1:11:06
we've been talking about passwords a lot
1:11:09
in the last actually, all year so so
1:11:11
far. All
1:11:13
all of the interesting questions that I
1:11:16
ended up finding in my mail bag were
1:11:18
about that. I have four. Simon
1:12:20
Locke tweeted, he said, dear Steve, what
1:12:23
OTP Auth... I'm sorry, what
1:12:25
OTP app? I already gave away the
1:12:27
answer. OTP app.
1:11:29
Can you recommend? Or what do you
1:11:31
use? Mostly, I think, for
1:11:33
iOS, but if it also does
1:11:36
Android, that would be nice. Cheers, and thank
1:11:38
you for a lot of great hours listening to security
1:11:40
now. Okay. So the one I've chosen
1:11:42
after poking around with them a bit is
1:11:44
the iOS app OTP auth.
1:11:47
For those who have settled upon
1:11:50
something else. The fact that
1:11:52
you have settled upon anything and
1:11:54
are therefore using one time
1:11:56
passcodes is far better news
1:11:59
than which one you've settled on.
1:12:01
I'm not saying that it matters much
1:12:04
at all. So, you know, I'm
1:12:06
in no way suggesting that OTP
1:12:08
Auth, my choice, is superior
1:12:11
to XYZ Auth. It's
1:12:13
just the one I like. Its interface
1:12:15
is clean. It synchronizes among
1:12:18
all of my iDevices
1:12:20
through iCloud. I can unlock
1:12:22
it with my face or touch. It
1:12:25
pastes the code to the clipboard, which
1:12:27
makes transcribing it simpler.
1:12:30
And I like the fact that it has a customizable
1:12:33
widget that allows me to have a
1:12:35
subset of the passcodes I
1:12:37
most use appear on the iPhone's
1:12:39
notification center for even
1:12:41
easier access. It's
1:12:44
definitely iOS only, so it won't
1:12:46
do the cross platform deal over to
1:12:48
Android. Oh, and it also
1:12:50
allows encrypted backup
1:12:53
to a documented file format.
1:12:56
It's published by some German guy and
1:12:58
he feels
1:12:59
German. I'm impressed with the app's author.
1:13:01
That's who you want to document a format, to
1:13:03
be honest. Right. It
1:13:05
is going to be like
1:13:06
this. That's right.
1:13:09
No. It's like a no-nonsense solution.
1:13:11
It's beautiful. OTP Auth.
1:13:13
I'll have to check it out. OTP Auth.
1:13:15
I really like it. Via
1:13:18
DM, I received, hi, Steve.
1:13:21
I've been following your podcast for more
1:13:23
than two years, and I love it. Even though
1:13:25
I'm not a cybersecurity or even an IT
1:13:27
professional, I've learned a lot. I think
1:13:29
he's maybe an ophthalmologist. Anyway,
1:13:32
based on his Twitter DM. His question is
1:13:34
that I have a question regarding your favorite
1:13:37
two factor app OTP auth.
1:13:39
Would you be able to explain how
1:13:41
does the syncing via cloud
1:13:43
work for it? I'm syncing
1:13:46
it via iCloud, but
1:13:48
don't necessarily see a file there.
1:13:51
If theoretically my iCloud
1:13:53
was compromised, would someone
1:13:55
be able to get hold of my OTP auth
1:13:58
tokens and get access to all my
1:14:00
two factor authentication codes, thanks
1:14:02
in advance. Okay. So
1:14:05
app data stored
1:14:07
and linked through iCloud is
1:14:09
not like iCloud Drive
1:14:12
with, you know, desktop documents,
1:14:15
downloads, etcetera, folders. iCloud
1:14:18
Drive is an app that deliberately
1:14:20
exposes those shared resources.
1:14:23
By comparison, app data
1:14:26
is registered by the app
1:14:28
and is never seen by the user. You
1:14:30
only get to see like how much an app
1:14:32
is using of your iCloud space if
1:14:34
you go and analyze the way memory
1:14:37
is consumed. Essentially,
1:14:40
apps are able to use iCloud as
1:14:43
their own secure synchronization
1:14:45
service which is private
1:14:48
within that app. And
1:14:50
Apple does not have the keys
1:14:52
to that app data. They only exist
1:14:54
in the user's devices. So I'd
1:14:57
say that it's as unlikely as
1:14:59
possible for iCloud app
1:15:01
data to be compromised. But
1:15:04
if you were really worried about it, you can flip
1:15:07
that switch off. Because as you said,
1:15:09
and I agreed, Leo, this German
1:15:11
guy, he said, well, maybe they don't
1:15:13
want iCloud sync? Fine. Turn it off.
1:15:16
And I'll bet you dollars to donuts that
1:15:18
he deletes it from the cloud before, you
1:15:20
know, as part of that. Mark
1:15:23
Jones tweeted, Steve, you
1:15:25
continually reinforce time
1:15:27
based authentication and
1:15:29
discredit the now exceedingly common
1:15:32
SMS message as a second
1:15:34
factor, amen. You've
1:15:37
never touched on an option that
1:15:39
I'm seeing more and more frequently.
1:15:42
I now have services asking
1:15:44
me to validate via their
1:15:46
app on my mobile device. Google
1:15:49
just asked me to check my Google app
1:15:52
on my phone before letting me
1:15:54
log on a Windows machine. That
1:15:57
is after I've set up Google Authenticator
1:16:00
as my second factor. Apple does
1:16:02
it too. I've never seen an analysis
1:16:05
of the security of this new model.
1:16:07
What are your thoughts? Okay.
1:16:11
If a giant company like
1:16:13
Google or Apple has
1:16:15
the luxury of requiring you
1:16:17
to run their app on another device
1:16:20
and to respond to its authentication prompts,
1:16:23
then I think that's nearly as
1:16:25
secure as a
1:16:28
time varying passcode. And
1:16:30
it certainly beats the crap out of SMS
1:16:32
because everything does. I
1:16:35
say that it's nearly as secure because,
1:16:38
really, the only way to improve
1:16:41
upon our current six digit
1:16:43
standard would be to increase the number
1:16:45
of digits, and that's not necessary since
1:16:48
this, you know, the right answer changes
1:16:51
every thirty seconds. The
1:16:53
seductive beauty of the
1:16:55
time varying code which
1:16:57
only requires that both
1:16:59
ends agree on the time
1:17:01
of day and date, is
1:17:03
that nothing is
1:17:05
sent to your authentication device.
1:17:08
The system is open loop. The
1:17:11
authenticator can be offline,
1:17:13
and without any radio. Like, remember
1:17:16
those original LCD footballs that
1:17:18
we had back before smartphones, when
1:17:20
we first went OTP, you know, when
1:17:22
this notion of a six digit
1:17:25
variant code first appeared. That
1:17:28
time variant code, which is driven by a
1:17:31
shared secret cryptographic key,
1:17:34
is really the perfect
1:17:36
solution. The one
1:17:38
downside with the vendor's authentication
1:17:41
app, oh, except I
1:17:43
should give myself a caveat there. I
1:17:45
didn't think of it last night. And that
1:17:48
is interception. We
1:17:50
are seeing that second
1:17:52
factor authentication of this kind
1:17:55
is being intercepted because the
1:17:57
channel back to the server is
1:17:59
through the web browser. So if you're not
1:18:01
actually where you think you are and,
1:18:04
you know, you could be at a spoof
1:18:06
site. The spoof site is
1:18:08
just asked for a two factor code; it
1:18:11
forwards that request to you on
1:18:13
your browser, you go to your
1:18:15
app, give it the six digit code, the
1:18:17
spoof site gets it and logs in,
1:18:19
and is doing this behind your back.
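The six-digit code being relayed in that scenario is ordinary TOTP, RFC 6238: both ends run an HMAC over a shared secret and the current thirty-second time step, which is why nothing ever has to be sent to the authenticator. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both ends derive the same code from the shared secret and the clock.
# With the RFC 6238 test secret, at T=59 seconds the code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Note the code itself is fine; the relay attack above works by tricking the human into handing a valid code to the wrong website.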
1:18:22
So that, you know, that is a problem
1:18:24
with our six digit time varying
1:18:26
codes, which the
1:18:29
Apple and Google and whomever
1:18:33
standalone authentication app doesn't
1:18:35
have because they are talking to their
1:18:37
app on your phone which they've
1:18:39
established a relationship with and
1:18:41
when your phone lights up, then
1:18:45
you know it's from them. Except one
1:18:47
downside with the vendor's authentication app,
1:18:50
which can push notification
1:18:52
requests, is notification fatigue
1:18:55
which we've talked about before. Attackers
1:18:58
are refining this now to
1:19:00
a science. Timing their
1:19:02
spoofed authentication requests for
1:19:05
the time of day when the user
1:19:07
would be expected to be logging into
1:19:09
their remote services. Or
1:19:11
sometimes just using more brute force approaches
1:19:14
fatiguing the user by prompting
1:19:16
the user over and over and over and over until
1:19:18
they just give up and accept the
1:19:21
authentication request and allow the bad
1:19:23
guys in. So, yes,
1:19:27
specific vendor closed loop
1:19:29
authentication beats SMS,
1:19:31
as I said, because everything does.
1:19:34
And as long as you are giving
1:19:36
your six digit code to the proper
1:19:38
site, the site you think you're at, and
1:19:40
not some
1:19:43
spoofed phishing site, then nothing
1:19:46
beats the open-loopness of
1:19:48
a one time passcode. And
1:19:52
finally, Dan Stevens. He
1:19:55
tweeted, hi, Steve. In the last SecurityNow
1:19:57
episode 908, you and Leo discussed
1:19:59
extensively the rules for
1:20:02
creating secure passwords in a way that they
1:20:04
can be reconstructed from
1:20:06
memory. How complicated!
1:20:09
What if you forget what the rules are?
1:20:13
Maybe you've said this before, but
1:20:15
my advice would be use
1:20:17
a password manager with a completely
1:20:19
random master password of
1:20:21
good length. And write it on a slip of
1:20:23
paper and keep it somewhere accessible and safe.
1:20:26
Refer to the slip of paper whenever you
1:20:28
log in to the password manager and
1:20:30
eventually For most people, the
1:20:33
random password will stick in muscle
1:20:35
memory. At which point, you can
1:20:37
destroy the slip of paper for extra
1:20:39
security. Is this not
1:20:41
a whole lot simpler? Not
1:20:43
simpler, but definitely better. I
1:20:46
I agree completely. Yeah.
1:20:48
But there
1:20:50
are places where a password manager
1:20:53
cannot reach. When I'm logging
1:20:55
into my servers, or
1:20:57
even into my Windows desktop, I
1:21:00
don't have access to a password manager.
1:21:02
It's true that I could open the manager on
1:21:04
my phone and carefully transcribe
1:21:07
a long and complex password. But
1:21:09
the threat model for local
1:21:11
login to my desktop or remote
1:21:13
login to a network service
1:21:16
that no one at any other IP
1:21:18
than mine can even see
1:21:20
is different from logging into random
1:21:23
Internet websites. So
1:21:25
Leo uses an approach that he likes,
1:21:28
and I had the phonetic made up word
1:21:30
approach that I like. The
1:21:32
important thing to appreciate I think
1:21:35
is that there is no one right
1:21:37
answer, nor a best answer.
1:21:40
Anyone who's been listening to this podcast
1:21:42
will have been exposed by now to
1:21:44
the fundamental theory of password
1:21:47
cracking and password entropy.
1:21:50
And we've tossed around many
1:21:52
different systems and schemes for
1:21:54
creating passwords. So the
1:21:56
right answer is any
1:21:58
answer. The key is
1:22:00
that you've given this some thought and
1:22:03
will arrive at an answer. And
1:22:05
you will hopefully have arrived at a system that
1:22:07
creates strong passwords that are also
1:22:10
workable. For you,
1:22:12
depending on who you are and what
1:22:14
your goals are.
1:22:16
And I think we are closing this
1:22:18
topic for now, you know, but his
1:22:20
way is definitely better. I mean, a truly random
1:22:23
password. The problem is, I, you
1:22:25
know, I can't be getting out
1:22:27
a slip of paper every single time.
1:22:29
I I log in to my password manager all
1:22:31
the time. I mean, it's just part of the deal.
1:22:34
And I'm gonna get to keep this in my wallet.
1:22:37
And I can do that because I hardly
1:22:39
ever log in to my password manager. Oh,
1:22:41
all the time. Mhmm. Constantly.
1:22:45
For a variety of reasons. I mean, I'm using it
1:22:47
in a lot of different systems. Oh.
1:22:49
Your password manager, you're using Bitwarden.
1:22:51
Right? It doesn't time out.
1:22:54
I've set it to time out. So if I'm not
1:22:56
using it after a period of time, it times
1:22:58
out. As it should. So, you know,
1:23:00
Yeah. And so I know I'm in
1:23:02
a locked environment. No one else has
1:23:04
access to my machine. Right. And so
1:23:07
the machine itself has a very
1:23:09
strong authentication
1:23:11
system. We're just protecting, you know,
1:23:13
its access. They can't get past the machine. Right.
1:23:15
Blah blah. Yeah. I could probably do that with my machines.
1:23:18
The mobile devices use
1:23:20
biometrics. So, right, I don't
1:23:22
often have to enter it. But I still, from time to
1:23:24
time, will have to enter it. Yeah. It's just
1:23:26
not, to me, it's not practical carrying a slip of
1:23:28
paper around with my master
1:23:30
password. I don't think it's
1:23:31
secure either by the way. And I often have my
1:23:33
wallet at the at the other side of the house.
1:23:36
Right? So -- Yeah. -- you know, So, you know,
1:23:38
look, I have a long password.
1:23:41
Maybe, you know, thirty some characters
1:23:44
of completely random stuff would eventually be
1:23:46
memorized. But in the meantime,
1:23:49
it's a pain in the butt. I feel like I've
1:23:51
come up with a system that generates as
1:23:54
close to a random password as you can get,
1:23:56
I mean, it's not
1:23:58
truly random because it's based on a
1:24:00
phrase, but but that's pretty random.
1:24:02
You know? Yeah. I'm not too worried about it.
1:24:05
Yep. Again, I think to each their
1:24:07
own, you know, the important
1:24:09
thing is think about this. Well, certainly,
1:24:11
everybody listening to this podcast is,
1:24:13
you know, not only is tired of thinking about
1:24:16
it, they're tired of hearing about it. So we're done
1:24:18
now. Enough. Now I'm
1:24:20
trying to figure out how I can get my
1:24:22
secret keys out of
1:24:24
Authy so I can move them over. To
1:24:26
OTP Auth, which I like, by the way. I
1:24:29
just
1:24:29
downloaded it. I've been playing with it a little bit.
1:24:31
It's good. I think you're right. I think it's a
1:24:33
nice one. I
1:24:34
like it. The problem with Authy, the reason
1:24:36
I like Authy is because it backs up my
1:24:38
secret keys to the Authy server
1:24:40
so I can put it on multiple phones. I don't wanna
1:24:42
you know, it used to be you'd have to reset up Google
1:24:44
Authenticator from scratch every time.
1:24:47
But your solution is
1:24:49
a perfect intermediate. OTP
1:24:52
Auth lets you back it up to a file in some
1:24:54
secure place that's encrypted. And then I could
1:24:56
download it and decrypt it, and then
1:24:58
I import it, and I'd be set. So I think
1:25:00
I prefer this to trusting
1:25:02
Twilio with it. So,
1:25:04
you know, I think I'll probably if I can figure
1:25:06
out how to get those OTP seeds out.
1:25:09
I think there are
1:25:10
ways, but we shall see. Then almost... Let's
1:25:12
take our last break. Yes. And then we're
1:25:14
gonna talk about how ESXI
1:25:17
fell. Security now
1:25:19
is brought to you by Thinkst
1:25:22
Canary. Let us talk first
1:25:24
though about what to do if
1:25:26
your security falls. And,
1:25:29
you know, unfortunately, often, you
1:25:32
don't know it. On average, companies go
1:25:34
about ninety one days before they realize they've
1:25:36
been breached. We've been breached. That's
1:25:38
three months. The bad guys have to wander
1:25:40
around. Exfiltrate what they want.
1:25:43
Find all your weaknesses. Mock
1:25:45
your CEO and then trigger
1:25:47
the ransomware. See, if you had
1:25:49
this little thing, you wouldn't have to worry about
1:25:51
this. This is
1:25:54
the Thinkst Canary. Like
1:25:56
a canary in a gold mine? They even have a little
1:25:58
canary logo on it. Thinkst
1:26:00
Canary is a honeypot. Not
1:26:02
just any honeypot, but the best darn honeypot
1:26:05
anywhere in the world. This little
1:26:07
device looks like it's about the size of a portable
1:26:09
hard drive, you know, a little USB
1:26:11
hard drive. Three
1:26:13
minutes to set up, and you've got
1:26:15
on your network visible
1:26:18
to all. It could even be in your
1:26:20
directory, your Active Directory,
1:26:23
a device that doesn't
1:26:26
look vulnerable to the bad guys, but looks
1:26:28
very valuable. For instance, this is
1:26:30
set up as a Synology NAS. And I
1:26:33
mean, Thinkst does it right, this canary. The
1:26:35
Synology NAS has a MAC address that would be
1:26:38
appropriate for a Synology. You've just got a Synology
1:26:40
MAC address. When you try to, when
1:26:42
you hit it and you log in, you're gonna get an
1:26:44
absolutely authentic looking DSM
1:26:46
login. It's just indistinguishable
1:26:49
from the real thing except it's not. It's a honeypot.
1:26:52
And the minute a bad guy touches it,
1:26:54
you get an alert. You find
1:26:56
out. You know. That is
1:26:59
awesome. No ongoing overhead.
1:27:02
Nearly zero false positives. This
1:27:04
canary is not gonna squawk unless
1:27:06
somebody actually tries to get into
1:27:08
it. You'll be able to detect attackers
1:27:10
the minute they start snooping around.
1:27:13
It's no wonder why Thinkst Canary hardware,
1:27:16
and they also have VMs and cloud-based canaries,
1:27:18
are deployed and loved on all seven continents.
1:27:21
Go to canary dot tools slash love.
1:27:23
You can see some of that love spread around.
1:27:26
When you get into a network, and
1:27:29
by the way, the Canary guys know this because
1:27:31
they have for the last couple of decades taught
1:27:34
companies, governments, militaries,
1:27:37
how to attack computers. They're
1:27:39
hackers. So they know exactly what
1:27:41
they would do if they were to get into your network.
1:27:44
They start looking around for juicy content.
1:27:47
They browse active directory for file
1:27:49
servers and look for file shares,
1:27:51
looking for documents. They try
1:27:53
default passwords against network devices,
1:27:56
and web services they scan for open
1:27:58
services across the network. When
1:28:01
they encounter one of these, the
1:28:03
services on offer are designed to
1:28:06
shall we say solicit further investigation.
1:28:09
They're juicy. At which
1:28:11
point the actors have betrayed themselves, because
1:28:14
your canary notifies you of the incident. It's
1:28:16
not just the hardware here. As you heard, there's there's
1:28:18
VMs, there's cloud based, and every canary
1:28:20
can make canary tokens. Like,
1:28:22
you actually can get these from the Canary site as well,
1:28:24
but I like it because they come from an internal
1:28:27
IP address. So I create
1:28:29
these files, these Canary tokens, PDFs, doc
1:28:31
files, spreadsheets. I give them
1:28:33
provocative names and I scatter them around.
1:28:35
They're like little trip wires on our network.
1:28:38
If a bad guy says, oh, what's this? Employee
1:28:41
payroll information dot XLS
1:28:43
file? And tries to open it?
1:28:46
Well, then the alarm goes. Hey.
1:28:49
Hey. Hey. Hey. Leo. Leo. Somebody
1:28:52
has hit that file. You
1:28:55
can be notified via email, by SMS;
1:28:58
they support Slack, webhooks,
1:29:01
syslog. If you still use syslog,
1:29:04
you know, and sometimes the old stuff's the best
1:29:06
stuff. Right? You can get it any
1:29:08
way you like it. It will
1:29:10
notify you there is somebody who has
1:29:12
hit this file at this location or
1:29:14
this hardware at this location. They can be
1:29:17
a Linux server. They can look like a Windows
1:29:19
server. It can be lit up like a Christmas
1:29:21
tree, every service turned on, or
1:29:23
just some judiciously juicy services.
1:29:26
It's so
1:29:26
fun. You can make one a router,
1:29:29
make one a SCADA device,
1:29:32
When you go into the interface, you'll see there's
1:29:34
all sorts of, you know, ways you can configure
1:29:36
this. There are hardware based
1:29:38
birds like the one I'm holding in my hand here.
1:29:41
There's virtual. There's cloud-based birds. You
1:29:44
can configure and deploy canaries throughout your entire
1:29:46
network. Again, no overhead, no false
1:29:48
positives, just alerts when it really
1:29:51
matters when intruders are
1:29:54
actually present. Even
1:29:57
customers with hundreds of canaries, and that's
1:29:59
not unusual, by the way, big banks casinos,
1:30:02
places like that, receive just a handful
1:30:04
of events every year. I
1:30:06
hear from this canary. I've heard from it once,
1:30:09
and it was a good reason I heard. There
1:30:11
was a device that we put on our network that was scanning
1:30:13
all our ports and scanning all our devices. When
1:30:17
you get that incident and you go in,
1:30:19
I know you'll get the information you need.
1:30:22
Look at the canary dot tools slash love. For
1:30:24
instance, a principal security engineer of
1:30:27
a Fortune fifty company says,
1:30:29
quote, Canary has helped
1:30:31
us detect and mitigate several incidents
1:30:34
that could have turned into catastrophes. You
1:30:36
don't wanna be a headline on this show,
1:30:38
folks. He
1:30:40
said an alert fired by their cloned
1:30:42
site token allowed us to identify
1:30:45
and force a takedown of several
1:30:47
doppelganger domains that were
1:30:49
purchased by bad actors for the purpose
1:30:51
of launching phishing attacks against our employees
1:30:53
and customers, yikes. He
1:30:57
said, I cannot recommend this product enough.
1:30:59
You don't know what you don't know.
1:31:02
But canary helps you know
1:31:04
what you need to know when it matters.
1:31:08
That's a catchy slogan. You don't
1:31:10
know what you don't know, but canary helps
1:31:12
you know what you need to know when it
1:31:14
matters. Couldn't have said it better myself.
1:31:17
You may have heard about the CircleCI
1:31:20
compromise recently. Most
1:31:23
users found out about the incident directly
1:31:25
from their Thinkst Canary. How
1:31:28
about that? Canaries
1:31:30
work and they continually prove
1:31:32
it. Canary dot
1:31:34
tools slash twit, and I got a
1:31:36
good deal for you if you use the upper code, TWiT. And
1:31:39
how did I how did you hear about this box? You get ten
1:31:41
percent off the price for life. Now
1:31:44
the price of course is gonna vary depending on how many canaries
1:31:46
you need. You can scatter them around, a couple, a handful.
1:31:49
I won't say how many we have. We have a
1:31:51
handful. As I said, some people
1:31:53
have many, many more. But
1:31:55
just as an example, pricing example, because
1:31:57
I like to be clear and you know you know what you're
1:31:59
gonna get. Let's say you want five of these.
1:32:02
That's seventy five hundred bucks a year.
1:32:04
You get the canaries, You get your
1:32:06
own hosted console where you set the canaries
1:32:09
up and you check on how they're doing. You get
1:32:11
upgrades. You get support.
1:32:13
You get maintenance for that whole year, one
1:32:15
price. And ten percent off
1:32:17
if you use the offer code TWiT. If
1:32:20
you sit on the canary, somebody steps on it,
1:32:22
because it's a little guy, you know,
1:32:25
if you pour your coffee into it, don't worry, they'll
1:32:27
send another one right away. Even
1:32:30
better, we know you're gonna love it, but
1:32:32
if you didn't, if for any reason you
1:32:34
were not happy, you can return your
1:32:36
canaries with their two-month
1:32:39
money-back guarantee for a full refund. Two
1:32:41
months. There's no risk.
1:32:43
There's a lot of risk if you don't have them.
1:32:46
There's no risk to trying them out. In
1:32:49
all the years that Canary has offered
1:32:51
a money-back refund guarantee,
1:32:54
You wanna know how many times people said, I don't
1:32:56
like this. I want my money back. You know, how many times
1:32:58
in in all those years? Zero.
1:33:01
They
1:33:04
could make it a six month
1:33:06
guarantee. Zero because nobody
1:33:08
gets one of these and says, oh, I don't like it. This is
1:33:10
the best thing you'll ever get. It's
1:33:13
it's... nope. It's hysterical.
1:33:16
I couldn't believe it when they told me that. I went, what?
1:33:18
No. None. Zero. Nope. Canary
1:33:21
dot tools, CANARY,
1:33:23
dot tools slash TWiT. Don't
1:33:26
forget the twit in the offer code box. You
1:33:28
know, how'd you hear about this box for ten percent off?
1:33:30
Forever. This
1:33:32
is just a must have everybody. Everybody
1:33:35
needs a little help from
1:33:37
your canary. Put it in your
1:33:39
coal mine. Canary
1:33:42
dot tools slash TWiT.
1:33:44
Can you believe that? Nobody's ever said.
1:33:46
Yeah. I want my money back. In
1:33:49
fact, most
1:33:49
likely, what happens is they get five and they
1:33:51
go, you know, can we get five more Can we
1:33:53
get twenty more? We
1:33:55
think we need them all over the place. Canary
1:33:57
dot tools slash TWiT. Alright,
1:34:00
Steve.
1:34:01
Let's get to this VMware
1:34:03
exploit here.
1:34:04
Yeah. Today's sad story
1:34:08
involves VMware's ESXi.
1:34:12
ESXi
1:34:14
is VMware's hypervisor technology
1:34:17
that allows organizations to
1:34:19
host several virtualized computers
1:34:21
running multiple operating systems on a
1:34:23
single physical server. The
1:34:26
solution has grown very popular among cloud
1:34:28
hosting infrastructure providers because
1:34:30
--
1:34:31
Yep. -- it's one of the good ones. If
1:34:35
by any chance, you are two years
1:34:37
behind in
1:34:39
patching a publicly exposed
1:34:42
instance of ESXi, please,
1:34:47
we've told you about canaries, stop
1:34:49
listening to this podcast, Right now,
1:34:53
go patch. If you're
1:34:55
using a cloud hosting provider
1:34:57
instance, you should immediately perform
1:35:00
a proactive version check. In fact,
1:35:02
you could use GRC's shields
1:35:04
up service to make sure
1:35:06
that your port 427
1:35:09
is closed to the public. And
1:35:12
if you wanna watch what is
1:35:14
sure to become a honeypot feeding
1:35:16
frenzy, place an instance of
1:35:19
the OpenSLP service on
1:35:21
port 427. Stand
1:35:23
back and get ready.
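Short of using ShieldsUP, a quick self-check of whether anything answers on TCP port 427 can be sketched like this. The hostname is a placeholder, and note that SLP also listens on UDP 427, which this does not test:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connect to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "esxi.example.com" is a placeholder; to see what the public internet
# sees, run the check against your host from OUTSIDE your own network.
if tcp_port_open("esxi.example.com", 427):
    print("port 427 (SLP) is reachable -- disable the service or firewall it")
else:
    print("port 427 appears closed or filtered")
```

A closed or filtered port 427 doesn't patch the bug, but it removes the entry point this ransomware campaign is using.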
1:35:26
What's going on is that over
1:35:28
this past weekend, just
1:35:30
two days ago, a
1:35:33
new ransomware strain, being
1:35:35
tracked as ESXiArgs,
1:35:38
and we'll explain the name in a minute, swept
1:35:41
through and encrypted several
1:35:44
thousand unpatched
1:35:47
VMware ESXi
1:35:49
servers. And here's the heartbreaking
1:35:52
bit. The entry point
1:35:54
to all of these systems was an unpatched
1:35:57
vulnerability more than
1:35:59
two years old. Well
1:36:02
known now, long
1:36:04
since having been identified, being
1:36:07
tracked as
1:36:09
CVE-2021-21974.
1:36:11
For which, as
1:36:13
we'll see, There's also a publicly
1:36:16
available proof of concept, which
1:36:18
made it easy for the bad guys
1:36:20
to hack these VMware
1:36:23
servers. Okay. So we'll get back to
1:36:25
this weekend's attack in a minute. Let's
1:36:28
first get some perspective on all this by
1:36:30
turning back the clock through the fall
1:36:32
of twenty twenty. Back
1:36:34
on March second, twenty twenty
1:36:36
one, Lucas Leong,
1:36:40
a researcher with Trend Micro's
1:36:43
Zero Day Initiative authored
1:36:45
a blog posting titled,
1:36:47
Pre-Auth Remote Code Execution
1:36:50
in VMware ESXi.
1:36:54
And this was in March. Once
1:36:56
he was finally able to talk about
1:36:59
this publicly, which was about
1:37:01
six months after he first
1:37:03
informed VMware of
1:37:05
what he had found. So
1:37:07
in his posting, March second,
1:37:10
Lucas wrote last fall,
1:37:13
I reported two critical
1:37:15
rated pre authentication, remote
1:37:18
code execution vulnerabilities in the
1:37:20
VMware ESXi platform.
1:37:24
Both of them reside within the same component.
1:37:26
The service location protocol, SLP
1:37:30
service. In October, VMware
1:37:33
released a patch to address
1:37:35
one of the vulnerabilities, but it
1:37:37
was incomplete and could be
1:37:39
bypassed. VMware released
1:37:41
a second patch in November, completely
1:37:44
addressing the use-after-free
1:37:48
portion of these bugs. The
1:37:50
use after free vulnerability was
1:37:52
assigned
1:37:55
CVE-2020-3992. After
1:37:57
that, VMware released
1:37:59
a third patch in February, completely
1:38:03
addressing the heap overflow portion
1:38:06
of these bugs. The
1:38:08
heap overflow was assigned
1:38:10
CVE twenty twenty one twenty
1:38:12
one nine seventy four. That's the
1:38:15
one that is the trouble. This
1:38:17
blog, he says, takes a
1:38:19
look at both bugs and how
1:38:21
the heap overflow could be
1:38:23
used for code execution. Here's
1:38:25
a quick video demonstrating the
1:38:28
exploit in action. Okay? So that
1:38:30
was what he posted March second twenty
1:38:32
twenty one nearly two years ago.
1:38:35
And then, his blog post proceeds
1:38:38
to demonstrate and provide descriptions,
1:38:41
details, and pseudocode of
1:38:43
the critical portions of the homegrown
1:38:46
open SLP server that
1:38:49
VMware had running in their
1:38:51
ESXi. While
1:38:54
continuing to be responsible, Lucas
1:38:56
disclosed all of the juicy details
1:38:59
a month after the trouble
1:39:01
was finally patched. So they finally
1:39:03
patched it in February twenty twenty one. VMware
1:39:06
did. Lucas waited a month
1:39:08
and then you know, he did his blog posting,
1:39:11
didn't do a proof of concept publicly,
1:39:13
but, you know, did reveal
1:39:15
what he'd found. When
1:39:19
Lucas is describing the heap overflow
1:39:21
bug in question, this is, you know,
1:39:23
the one ending in
1:39:25
21974, he notes. He
1:39:28
says, like the previous bug,
1:39:30
this bug exists only in
1:39:32
VMware's implementation of
1:39:35
SLP. As I noted,
1:39:38
the balance of his posting provides
1:39:40
pseudocode
1:39:43
of VMware's code and walks
1:39:45
the reader step by step through a theoretical exploitation
1:39:48
process. Lucas implemented it
1:39:50
as shown in the video but
1:39:52
being responsible, he deliberately
1:39:54
stopped short of providing a working proof
1:39:56
of concept. At the end of his step
1:39:58
by step explainer, he notes,
1:40:01
Quote, if everything goes fine,
1:40:04
you can now execute arbitrary
1:40:07
code with root permission on
1:40:09
the target ESXi system.
1:40:12
He says, in ESXi 7 a
1:40:15
new feature called daemon
1:40:18
sandboxing was prepared
1:40:20
for SLP. It uses
1:40:23
an AppArmor-like sandbox to
1:40:25
isolate the SLP daemon.
1:40:28
However, I find that this is
1:40:30
disabled by default in my environment.
1:40:33
And as this week's news demonstrates
1:40:35
all too clearly, Lucas was not alone
1:40:37
in finding that sandboxing was
1:40:39
not present or enabled. He
1:40:42
concludes with: VMware ESXi
1:40:45
is a popular infrastructure for
1:40:48
cloud service providers and many others.
1:40:51
Because of its popularity, these bugs
1:40:53
may be exploited in the wild at some
1:40:55
point. To defend against
1:40:57
this vulnerability, you can either apply
1:40:59
the relevant patches or implement
1:41:01
the workaround. You should consider
1:41:04
applying both to ensure your
1:41:06
systems are adequately protected. Additionally,
1:41:09
VMware now recommends disabling
1:41:12
the open SLP service in ESXI
1:41:15
if it is not used. So
1:41:17
yes. Adding insult
1:41:19
to injury, we also have
1:41:21
the old security bugaboo of
1:41:24
a service which turns out
1:41:26
to be readily exploitable, which
1:41:28
is running by default,
1:41:32
even if there is no need for it
1:41:34
in any given deployment. Yet,
1:41:36
there it is. Not even a backdoor.
1:41:39
This is a front door. Now
1:41:42
being a responsible researcher, as I said,
1:41:44
Lucas' job was now done. He
1:41:46
found a problem, privately and responsibly
1:41:49
notified its publisher, and in this
1:41:51
case discovered that it hadn't been fixed
1:41:53
once or twice, but finally
1:41:55
the third attempted patch worked.
1:41:57
So Lucas doubtless moved on to
1:41:59
examine and improve the security of other
1:42:01
software, which would benefit from his scrutiny.
1:42:04
But of course, other people
1:42:06
have other interests. Nearly
1:42:09
three months after Lucas' posting on
1:42:12
May twenty four, twenty twenty one,
1:42:15
a hacker by the name of Johnny Yu
1:42:18
extended Lucas' work, essentially
1:42:21
pushing it across the finish line. Johnny
1:42:23
wrote, During a recent
1:42:25
engagement, I discovered a machine
1:42:28
that's running VMware ESXi
1:42:30
6.7.0. Upon
1:42:33
inspecting any known vulnerabilities
1:42:36
associated with this version of the software, I
1:42:38
identified it may be vulnerable
1:42:41
to the ESXi OpenSLP
1:42:43
heap overflow, CVE-2021-21974.
1:42:49
Through Googling, I found a
1:42:51
blog post by Lucas Leong,
1:42:54
of Trend Micro's Zero Day Initiative, the
1:42:57
security researcher who found this
1:42:59
bug. Lucas wrote
1:43:01
a brief overview on how to exploit
1:43:03
the vulnerability, but shared
1:43:06
no reference to a proof of concept.
1:43:09
Since I couldn't find any existing proof
1:43:11
of concept on the Internet, I thought
1:43:13
it would be neat to develop an exploit
1:43:15
based on Lucas' approach. Before
1:43:18
proceeding, I highly encourage
1:43:20
fellow readers to review Lucas'
1:43:22
blog to get an overview of the bug
1:43:24
and exploitation strategy from the
1:43:27
Discoverer's perspective. So
1:43:30
here we have a textbook example of
1:43:32
the way we get from "something
1:43:34
doesn't look right here" to
1:43:37
"here's how to exploit this if you ever
1:43:39
encounter a server with it unpatched." The
1:43:42
two year old vulnerability allows
1:43:45
threat actors to execute remote
1:43:48
commands on any unpatched
1:43:50
ESXi server through VMware's
1:43:53
own implementation of the open
1:43:55
SLP service on port four twenty
1:43:57
seven. What's open SLP?
1:44:00
The project has its own website, which
1:44:02
describes this as, quote,
1:44:04
service location protocol, is
1:44:06
an Internet engineering task force,
1:44:09
you know, IETF standards track
1:44:11
protocol that provides a framework
1:44:13
to allow networking applications to discover
1:44:15
the existence, location, and configuration
1:44:18
of network services in enterprise
1:44:21
networks. The open SLP
1:44:23
project is an effort to develop an open source
1:44:25
implementation of the IETF service
1:44:28
location protocol suitable for
1:44:30
commercial and non commercial application. While
1:44:32
other service advertising and location
1:44:35
methods have been invented and even
1:44:37
widely consumed, no other
1:44:39
system thus far has provided a
1:44:41
feature set as complete and as
1:44:43
important to mission critical enterprise
1:44:46
applications as SLP. So
1:44:48
I've never looked at it closely. I don't
1:44:50
know about it. It looks
1:44:53
like, well, for some reason, VMware decided
1:44:55
they wanted to add it. They'd
1:44:57
apparently rolled their own and it had
1:44:59
some problems. So,
1:45:03
you know, not only is it
1:45:05
often unused and unneeded, but it's
1:45:07
running by default. So until
1:45:11
and unless patched, it offers
1:45:13
a way in for criminals. You know,
1:45:15
how many criminals so far? Is
1:45:18
everybody sitting down? More than
1:45:20
thirty-two hundred
1:45:23
individual
1:45:28
VMware ESXi servers were
1:45:30
hacked over the weekend. What?
1:45:32
Over the weekend? Yes.
1:45:37
The first reports came in on
1:45:39
Friday and then they
1:45:41
increased. Thirty
1:45:43
two hundred. Okay.
1:45:47
This is the ESXiArgs ransomware
1:45:50
campaign. France is the most affected
1:45:52
country, because they have
1:45:54
a hosting provider who unfortunately seems
1:45:57
to really like to have
1:45:59
old versions of ESXi
1:46:01
for their customers. France
1:46:03
followed by the US, Germany,
1:46:05
Canada, and the UK in declining
1:46:08
numbers. And we have
1:46:10
the ransom note. The homepage of
1:46:15
the web server that ESXi
1:46:18
publishes will say,
1:46:20
after the attack, "How to Restore
1:46:23
Your Files" in what looks
1:46:25
like a heading, you know, an h1 in
1:46:27
HTML. Security alert.
1:46:29
Three exclamation points. We hacked
1:46:32
your company successfully. All
1:46:34
files had been stolen and encrypted
1:46:37
by us. If you want to restore
1:46:39
files or avoid file leaks, please
1:46:42
send 2.034413
1:46:46
bitcoins to the
1:46:48
wallet, and then a Bitcoin address.
1:46:51
If money is received, encryption
1:46:53
key will be available on Tox
1:46:56
ID. And then they provide a
1:46:58
public key we'll talk about in a second. And
1:47:00
then, attention, three exclamation
1:47:02
points, send money within
1:47:04
three days. Otherwise, we
1:47:06
will expose some data and
1:47:09
raise the price. Don't try
1:47:11
to decrypt important files. It may
1:47:13
damage your files. Don't trust
1:47:15
who can decrypt. They are liars.
1:47:18
No one can decrypt without key file.
1:47:20
If you don't send bitcoins, we will
1:47:22
notify your customers of the data
1:47:24
breach by email and text
1:47:26
message, and sell your
1:47:29
data to your opponents or
1:47:31
criminals, or data may be
1:47:33
released. Note: SSH
1:47:36
is turned on. Firewall is
1:47:39
disabled. So
1:47:41
it's not a note that you want to
1:47:44
see coming from your
1:47:46
server, and more than thirty
1:47:48
two hundred VMware
1:47:51
servers are now or
1:47:53
were broadcasting that note. That
1:47:56
2.034413
1:47:59
bitcoins is
1:48:02
you know, Bitcoin value fluctuates.
1:48:04
Right? It looks like at the time this
1:48:06
happened, it was about fifty thousand dollars.
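A quick sanity check on that figure; the fifty-thousand-dollar value is approximate, so the implied exchange rate below is too:

```python
# Quick sanity check on the ransom math: the note demands a fixed
# 2.034413 BTC, and the reported dollar value was roughly $50,000.
demand_btc = 2.034413
approx_usd = 50_000

# Implied BTC price at the time of the attack (early February 2023).
implied_price = approx_usd / demand_btc
print(f"Implied BTC price: ${implied_price:,.0f}")
```

That comes out to roughly $24,600 per bitcoin, which is in the neighborhood of where bitcoin was trading in early February 2023, so the numbers hang together.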
1:48:09
So they're asking for about fifty thousand dollars
1:48:11
per instance. The logic must
1:48:13
be that since it was a collection
1:48:16
of hosted servers running
1:48:18
inside the VMware hypervisor
1:48:20
that was taken down, not
1:48:23
an entire enterprise, this
1:48:25
isn't worthy of hundreds of thousands
1:48:27
of dollars in ransom payment. And
1:48:30
since the attackers have left more than
1:48:32
thirty-two hundred of these
1:48:34
ransomware notes, they presumably
1:48:37
expect to receive many smaller payments
1:48:39
rather than one big score. In
1:48:41
the US, cybersecurity officials
1:48:43
at CISA have confirmed that they're investigating
1:48:46
the ESXiArgs campaign. A
1:48:49
CISA spokesperson was reported saying
1:48:51
that, quote, CISA is working with our
1:48:53
public and private sector partners to assess
1:48:56
the impacts of these reported incidents and
1:48:58
providing assistance where needed. Any
1:49:00
organization experiencing a cybersecurity
1:49:03
incident should immediately report it to
1:49:05
CISA or the FBI. Now
1:49:08
the standing advice, of course, is always
1:49:11
do not pay. And in this instance,
1:49:13
that seems a little extra warranted because
1:49:16
it turns out that the Bitcoin wallet
1:49:18
addresses appearing in the ransom demands
1:49:21
are not one hundred percent individualized.
1:49:24
Wallet reuse has been
1:49:26
detected. But still, there
1:49:29
are a great many of them. Since the
1:49:31
ransom note is left behind on
1:49:33
a public facing web server, and
1:49:35
it always follows the same pattern.
1:49:38
Researchers have been scanning the net
1:49:40
for infected machines. That's how we
1:49:42
have a count, and compiling
1:49:44
lists of the Bitcoin wallet
1:49:46
addresses appearing in the ransom demands.
1:49:49
I have a link in the show notes to a GitHub
1:49:51
page that's maintaining a growing
1:49:53
list of detected addresses. And
1:49:56
I think there were like seven hundred some last
1:49:58
time I saw, but it wasn't super current.
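That kind of scan is easy to reproduce in miniature. Here's a sketch of the extraction half; the regex covers legacy base58 Bitcoin addresses, and the sample note text and the address in it are made up for illustration, not taken from an actual ransom page:

```python
import re

# The ESXiArgs ransom note replaces the ESXi host's index page, so
# researchers can fetch each suspect host's homepage and pull out the
# Bitcoin address with a regex. Legacy (non-bech32) BTC addresses are
# base58 strings of 26-35 characters starting with 1 or 3.
BTC_ADDRESS = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def extract_wallets(page_text: str) -> list[str]:
    """Return the candidate Bitcoin addresses found in a ransom note."""
    return BTC_ADDRESS.findall(page_text)

# Illustrative note fragment; the address below is fabricated.
sample = ("We hacked your company successfully! Send 2.034413 bitcoins "
          "to the wallet 1FakeWa11etAddressForDemo999xyz within 3 days.")
print(extract_wallets(sample))  # ['1FakeWa11etAddressForDemo999xyz']
```

Sorting and de-duplicating the addresses collected this way is exactly how the wallet reuse Steve mentions was spotted.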
1:50:01
And somebody did do a sort by
1:50:04
the address and was seeing
1:50:07
doubling of the use.
1:50:09
So it looks like the the bad guys didn't
1:50:11
wanna create an individual Bitcoin
1:50:14
wallet for every single one of these
1:50:16
thirty-two hundred. I mean, you know, there's
1:50:19
only so much time. They're
1:50:21
so busy, you know, infecting
1:50:23
and taking over all these VMware
1:50:25
ESXi
1:50:26
servers.
1:50:26
And they've shared the wallet. Then how
1:50:28
do they know you paid? Precisely.
1:50:33
There is a way. Although, I don't know how
1:50:35
unique... well, there is a way,
1:50:38
I'll explain in a second. The ransom note
1:50:40
refers to a Tox ID.
1:50:42
Leo, this kinda comes back also to our conversation
1:50:45
at the beginning about the EU's surveillance
1:50:49
intentions. The
1:50:51
Tox ID is shown
1:50:54
in the demand and provides a very
1:50:56
long hex string. Tox
1:50:58
is an interesting open
1:51:01
source end to end encrypted
1:51:04
peer to peer instant
1:51:06
messaging system that uses
1:51:08
no centralized servers, so
1:51:11
it boasts that it cannot be
1:51:13
shut down. I have not examined
1:51:15
it closely so I can't say whether or not it
1:51:17
could be blocked, but it's a perfect example
1:51:20
of the trouble that the EU or
1:51:22
any other bureaucracy is gonna
1:51:25
have when they attempt to tighten the
1:51:27
screws on the legal and
1:51:29
illegal use of encrypted communications.
1:51:32
As we've always said, the math
1:51:34
has already escaped. There are an
1:51:36
infinite number of ways to communicate with
1:51:39
unbreakable encryption. It's
1:51:41
true that stomping on the
1:51:43
mass market solutions will catch
1:51:45
those who are unaware but history
1:51:47
also shows that awareness follows
1:51:50
very quickly. Anyway, a
1:51:52
Tox ID is
1:51:54
used to identify peers
1:51:56
on the network, and the system is
1:51:59
simplicity itself. The
1:52:01
Tox ID is simply the
1:52:03
256-bit, that is, 32-byte,
1:52:05
static public
1:52:08
key of the other peer
1:52:10
on the network with which
1:52:12
you wish to communicate. This
1:52:15
means that a packet of communications
1:52:18
can be encrypted with a random
1:52:20
nonce. That nonce can
1:52:23
then be encrypted using the recipient's
1:52:26
Tox ID, that is, the recipient's
1:52:28
Tox ID public key. And
1:52:31
it can then be sent on its way.
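The flow Steve describes is the classic hybrid envelope pattern. Here's a toy sketch of just that pattern; note this is an illustration only: real Tox uses NaCl's Curve25519 crypto_box, while this stand-in derives its keystream from an assumed shared secret with SHA-256 and must never be used for actual secrets.

```python
import hashlib
import os

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data against SHA-256(key || nonce || counter).
    # Stand-in for the real XSalsa20 step; for illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal(shared_key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    # Envelope pattern: every message gets a fresh random nonce, and the
    # payload is encrypted under (key, nonce). In real Tox the key half
    # comes from the recipient's public key (their Tox ID) via Curve25519.
    nonce = os.urandom(24)
    return nonce, _keystream_xor(shared_key, nonce, plaintext)

def open_envelope(shared_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # Only a peer holding the matching key material can rebuild the
    # keystream and recover the payload.
    return _keystream_xor(shared_key, nonce, ciphertext)

key = os.urandom(32)
nonce, boxed = seal(key, b"I just paid. Send the decryption key.")
assert open_envelope(key, nonce, boxed) == b"I just paid. Send the decryption key."
```

The design point is that the nonce travels in the clear while the key material never does, which is why anyone can send to a published Tox ID but only its holder can read the reply channel.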
1:52:33
Only the party with the matching
1:52:36
private key will be able to decrypt
1:52:38
the nonce and then use that
1:52:40
decrypted nonce to decrypt the message
1:52:43
payload. So a
1:52:45
victim sends, hey creeps,
1:52:47
I just paid you your fifty thousand
1:52:50
dollars in Bitcoin. It went
1:52:52
to the following wallet at this time
1:52:54
of day. Please send me the decryption
1:52:57
instructions and destroy our unencrypted
1:52:59
virtual machines that you stole. And
1:53:02
then, of course, they kneel down to
1:53:04
pray. Because, you
1:53:06
know, who knows whether they're ever gonna see (Oh,
1:53:08
we know.) the decryption key that they think
1:53:10
they bought. You know, bad guys are honorable.
1:53:13
That sounds right. So
1:53:17
by far the most impacted are
1:53:19
the customers of hosting provider OVHcloud,
1:53:22
based in France. While
1:53:25
it's tempting to blame them for the misery
1:53:27
that their customers are suffering, it appears
1:53:30
that all
1:53:38
the OVH
1:53:42
cloud service is providing are
1:53:44
bare metal servers, onto
1:53:46
which the VMware ESXi
1:53:49
hypervisor is installed. It's
1:53:51
difficult to understand why such
1:53:53
an outsized proportion, I
1:53:56
think it's like forty-four percent, of all
1:53:58
of the compromises is this one
1:54:00
provider. So it's hard to understand
1:54:02
why such an outsized portion of
1:54:05
impacted ESXi servers are within
1:54:07
OVH's cloud. It
1:54:09
might be that OVH offers
1:54:12
initial setup services and
1:54:14
that, you know, over the course of many
1:54:16
years, they set up their ESXi
1:54:18
servers on behalf of their customers, which
1:54:20
were never then patched or upgraded. And
1:54:23
who knows how recently; maybe even
1:54:26
OVH didn't bother updating beyond
1:54:28
the 6.5 and 6.7
1:54:31
servers that have the problem. I don't
1:54:33
have any experience with the ESXi
1:54:35
upgrade process, but I did note
1:54:37
that VMware's page describing
1:54:40
the process of upgrading ESXi
1:54:44
was last updated yesterday.
1:54:47
So it appears that there's a sudden demand
1:54:50
for information about how to get
1:54:52
away from the old and buggy version
1:54:54
sixes and the early version sevens.
1:54:58
Patches to an existing system appear to
1:55:00
be far more easily applied, and that
1:55:02
would have solved the problem two years ago.
1:55:04
But many thousands of
1:55:06
ESXi admins never
1:55:09
bothered. In a statement
1:55:11
to TechCrunch, a VMware
1:55:14
spokesperson said the company
1:55:16
was aware of reports (you think?)
1:55:18
that a ransomware variant dubbed
1:55:21
ESXiArgs, quote,
1:55:24
appears (this is the spokesperson), quote,
1:55:26
appears to be leveraging the
1:55:29
vulnerability identified as
1:55:31
CVE-2021-21974, and said that patches
1:55:36
for the vulnerability, quote, were
1:55:38
made available to customers two
1:55:40
years ago in VMware's security
1:55:43
advisory of February twenty third
1:55:45
twenty twenty one. She goes on
1:55:47
to add that, quote, security hygiene
1:55:50
is a key component of
1:55:52
preventing ransomware attacks. And
1:55:55
organizations who are running versions
1:55:57
of ESXi impacted by
1:55:59
CVE-2021-21974 and have
1:56:04
not yet applied the patch should
1:56:06
take action as directed in
1:56:08
the advisory, unquote. Okay.
1:56:12
So as we know, mistakes
1:56:15
happen. This is all complicated stuff,
1:56:17
which we haven't yet figured out how to create
1:56:19
securely. But as much as
1:56:21
I have infinite understanding for
1:56:24
mistakes, I'm unforgiving about
1:56:26
deliberate policy decisions. Someone,
1:56:29
somewhere, made the policy
1:56:31
decision at VMware to
1:56:34
have this homegrown open SLP
1:56:36
server that apparently few people
1:56:39
actually need running by default,
1:56:41
opening port four twenty
1:56:43
seven, then listening for and accepting
1:56:46
incoming unsolicited connections from
1:56:48
the public Internet. And
1:56:51
all that, as
1:56:55
I said, while the service was
1:56:57
typically unneeded, unwanted, and
1:56:59
unused. Minimizing a
1:57:02
system's attack surface should
1:57:04
be taught and probably is during
1:57:06
cybersecurity 101, yet
1:57:08
that basic lesson was ignored here
1:57:11
with catastrophic results. Okay.
1:57:14
However, the good news is it appears this policy
1:57:16
was changed for the better several
1:57:19
years ago, though only after
1:57:22
all of the servers being attacked had
1:57:24
been deployed. In a blog posting
1:57:27
yesterday, VMware's
1:57:29
Edward Hawkins, whose title is
1:57:31
high-profile product incident
1:57:33
response manager (and, yes,
1:57:36
Edward, this would qualify as a
1:57:38
high-profile product incident),
1:57:41
wrote: We wanted to address
1:57:43
the recently reported ESXiArgs
1:57:46
ransomware attacks as well as
1:57:48
provide some guidance on actions
1:57:50
concerned customers should take
1:57:52
to protect themselves. VMware
1:57:55
has not found evidence that
1:57:57
suggests an unknown vulnerability,
1:58:00
a zero day, is being
1:58:02
used to propagate the ransomware used
1:58:04
in these recent attacks. Most
1:58:06
reports state that end of
1:58:08
general support, which they call
1:58:11
EOGS, and or
1:58:13
significantly out of date products
1:58:16
are being targeted with known vulnerabilities,
1:58:19
which were previously addressed and disclosed
1:58:21
in VMware security advisories.
1:58:24
Those are VMSAs. You
1:58:26
can sign up for email and
1:58:28
RSS alerts when an advisory is published
1:58:31
or significantly modified on our
1:58:33
main VMSA page. With
1:58:35
this in mind, he finishes, we are
1:58:37
advising to upgrade to the
1:58:39
latest available supported releases of
1:58:42
vSphere components to address
1:58:44
currently known vulnerabilities. In addition,
1:58:47
VMware has recommended disabling
1:58:49
the open SLP service in
1:58:52
ESXi. In 2021,
1:58:54
ESXi 7.0 U2c
1:59:00
and ESXi 8.0 GA began
1:59:06
shipping with the service disabled
1:59:09
by default. What
1:59:13
is that about horses having left the barn?
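For anyone still running an older build who can't upgrade immediately, VMware's published workaround boils down to stopping the SLP daemon and blocking its firewall ruleset. The commands below follow my recollection of VMware's knowledge base guidance for this vulnerability; treat the exact names as an assumption and verify against the current advisory before running them:

```shell
# Run in the ESXi host shell. These follow VMware's published workaround
# for CVE-2021-21974; confirm against the current VMware advisory first.

# Stop the SLP daemon
/etc/init.d/slpd stop

# Disable the CIM/SLP firewall ruleset so port 427 is no longer exposed
esxcli network firewall ruleset set -r CIMSLP -e 0

# Keep slpd from starting again on reboot
chkconfig slpd off
```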
1:59:16
But still, this was clearly the
1:59:18
correct policy change. In
1:59:20
OVH's first posting last
1:59:22
Friday the third, they
1:59:25
they observed. They said, The attack
1:59:28
is primarily targeting servers
1:59:30
inversion before seven point
1:59:33
0U3I apparently
1:59:36
through the open SLP port four
1:59:38
twenty seven. Right? So
1:59:41
the moment VMware changed their
1:59:43
policy, turned off that unneeded
1:59:45
service, and closed that port,
1:59:49
their systems
1:59:51
were no longer vulnerable. Now
1:59:54
there's some confusion about
1:59:56
what files are encrypted. The
1:59:58
encryption code has been found
2:00:01
now and analyzed. So
2:00:03
we know that it targets all files
2:00:05
with the extensions .vmdk, which is
2:00:10
the mother lode, as well as
2:00:12
.vmx, .vmxf, .vmsd, .vmsn, .vmss,
2:00:21
.nvram, and
2:00:23
.vmem. We know that the encryption
2:00:26
appears to use a variant of
2:00:28
the cipher used by the Babuk
2:00:31
ransomware, whose source code was
2:00:33
leaked and became public, thus allowing,
2:00:35
you know, offshoots to be created,
2:00:37
and this appears to be one. And
2:00:39
we know that the encryption was done
2:00:41
right. There is no easy decryption
2:00:43
path without obtaining the key.
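Given that fixed extension list, a defender can at least inventory what an ESXiArgs-style encryptor would touch. A sketch (the extension list follows the public reporting on this campaign, and the datastore path in any real use would be the host's own; everything else here is illustrative):

```python
from pathlib import Path

# File extensions the ESXiArgs encryptor reportedly targets. Each
# encrypted file gains a sibling ".args" metadata file, e.g.
# server.vmdk -> server.vmdk.args.
TARGETED = {".vmdk", ".vmx", ".vmxf", ".vmsd", ".vmsn", ".vmss", ".nvram", ".vmem"}

def inventory(datastore: Path) -> tuple[list[Path], list[Path]]:
    """Return (files an encryptor would target, .args files already present)."""
    at_risk, hit = [], []
    for p in datastore.rglob("*"):
        if p.suffix in TARGETED:
            at_risk.append(p)
        elif p.suffix == ".args":
            hit.append(p)  # indicator that this datastore was already hit
    return at_risk, hit
```

Finding any `.args` files is the loudest indicator of compromise here; the at-risk list is just what you'd want backed up off-host.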
2:00:46
In that regard, the ransomware note
2:00:48
was correct. The ransomware obtained
2:00:53
its name, ESXiArgs,
2:00:56
because for every file that it encrypts
2:00:56
and it doesn't need to do many, because this is a
2:00:58
virtual machine. Right? It just needs to encrypt
2:01:00
the container. For every
2:01:02
file that it encrypts with those extensions that
2:01:04
I mentioned, it leaves behind, beside the
2:01:06
encrypted file, a .args file,
2:01:10
which is, you know, a file containing
2:01:12
the specific per-file encryption data
2:01:15
that is needed to direct the file's
2:01:17
eventual restoration. There
2:01:19
was some initial news that the
2:01:21
big master virtual machine image,
2:01:24
the big dot VMDK file,
2:01:26
was not being encrypted, which would
2:01:28
have allowed for the reconstitution of
2:01:31
the system without paying the ransom. All
2:01:33
of the other little pointer files could have been
2:01:35
fixed apparently, but, you
2:01:38
know, everything we're seeing
2:01:40
suggests that maybe that was a one off
2:01:42
or a low probability
2:01:44
incident. In
2:01:46
another bit of good news, it may be
2:01:48
that the claim of exfiltration and
2:01:50
subsequent public exposure is
2:01:53
an empty threat. One victim,
2:01:55
posting on BleepingComputer's forum about
2:01:57
their own post attack forensic analysis
2:01:59
wrote, our investigation
2:02:02
has determined that data has
2:02:05
not been exfiltrated. In
2:02:07
our case, the attacked
2:02:09
machine had over five hundred
2:02:12
gigabytes of data, but
2:02:14
typical daily usage of
2:02:16
only two megabits. We
2:02:19
reviewed traffic stats for the last ninety
2:02:21
days and found no evidence
2:02:24
of outbound data transfer.
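That forensic check, comparing outbound volume against a normal baseline, can be sketched simply. The threshold and units below are illustrative, not from the victim's actual tooling:

```python
# Exfiltrating hundreds of gigabytes leaves a spike in outbound traffic.
# Given daily outbound byte counts, flag any day far above the baseline.
def exfil_suspects(daily_bytes: list[int], multiplier: float = 10.0) -> list[int]:
    """Return indices of days whose outbound volume exceeds
    `multiplier` times the median day: a crude exfiltration flag."""
    ordered = sorted(daily_bytes)
    median = ordered[len(ordered) // 2]
    return [i for i, b in enumerate(daily_bytes) if b > median * multiplier]

# 90 quiet days, with one huge spike on day 42.
normal = [250_000] * 90
assert exfil_suspects(normal) == []
spiked = normal.copy()
spiked[42] = 500_000_000_000
assert exfil_suspects(spiked) == [42]
```

Using the median rather than the mean keeps one enormous outlier from dragging the baseline up and hiding itself.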
2:02:27
Of course, that's not definitive for everyone,
2:02:29
but, you know, it's another interesting
2:02:31
data point. Okay. Well, all this
2:02:33
said, I was left with
2:02:35
one other thought. Why
2:02:39
were the bad guys allowed
2:02:42
to find and exploit this?
2:02:46
This problem has been waiting
2:02:48
for discovery for two years.
2:02:51
While VMware knew that
2:02:54
they had a serious, remotely
2:02:56
exploitable, remote code execution
2:02:59
vulnerability. We
2:03:01
know they knew that this was a
2:03:03
critical remote code execution vulnerability
2:03:06
affecting all of their ESXi
2:03:08
servers at the time. ZDI's
2:03:11
Lucas would certainly have shared
2:03:13
his own private proof of concept
2:03:15
exploitation demo with them, though
2:03:17
he'd never released it publicly. And
2:03:20
as we know, they proactively changed
2:03:22
their policy to no longer have
2:03:25
their open SLP service running and
2:03:27
exposed by default. So there's
2:03:29
proof of awareness. Big,
2:03:32
slow, lumbering, bureaucratic
2:03:35
national governments are now proactively
2:03:38
scanning their own nation's networks, checking
2:03:40
the version of the systems that are publicly
2:03:42
exposed. Why isn't
2:03:45
a leading high-tech Silicon
2:03:47
Valley superstar like VMware
2:03:50
who produces highly sophisticated public
2:03:52
facing Internet servers
2:03:55
proactively scanning their
2:03:58
own customers to protect
2:04:00
them from the potentially
2:04:02
catastrophic consequences of
2:04:04
using the software they publish and
2:04:06
sell? I was unimpressed by
2:04:09
VMware's spokesperson blaming
2:04:11
their customers for not patching
2:04:13
when VMware is entirely able
2:04:16
to know who has patched what and
2:04:18
when. VMware is certainly
2:04:20
capable of scanning the Internet looking
2:04:22
for and checking the security of their own server
2:04:25
technology. One of this podcast's
2:04:28
ongoing questions and explorations
2:04:31
is about the post sales
2:04:34
responsibility of massively
2:04:37
profitable private enterprises whose
2:04:41
license agreements state that
2:04:44
they're gonna take your money and
2:04:46
plenty of it to support their growth.
2:04:49
But what you get in return
2:04:51
is whatever they feel like
2:04:53
providing. And they're not gonna
2:04:56
be in any way responsible for
2:04:58
what might happen to you afterward as
2:05:01
a result of your use of their
2:05:03
products for which you paid
2:05:05
good money, regardless of
2:05:07
what happens. Can you imagine
2:05:09
the chaos that would ensue if
2:05:12
automobile makers were able
2:05:14
to sell their multi ton vehicles
2:05:16
under these terms? Or how about
2:05:18
Boeing? Sure. Buy one
2:05:21
of our big new shiny passenger jets.
2:05:24
We had a bunch of very
2:05:26
enthusiastic summer interns design
2:05:29
the avionics for it, and they
2:05:31
mostly seem to work now. Cyber
2:05:35
threats are real and
2:05:37
growing, but the software industry's
2:05:40
perverse and unique
2:05:43
utter lack of accountability for
2:05:45
its own failings removes
2:05:48
the only incentive for improvement
2:05:50
that's been shown to work. VMware
2:05:54
never bothered to protect their own customers
2:05:56
because it's been established that it's
2:05:58
their customers' fault for
2:06:00
not proactively patching the
2:06:03
buggy software that VMware sold them
2:06:05
in the first place. That famous
2:06:07
definition of insanity is continuing
2:06:10
to do the same thing and expecting a different
2:06:12
outcome. Well, things are
2:06:14
gonna keep getting worse unless
2:06:16
we make them get better. So
2:06:19
far, there's not even a hint
2:06:21
of anything like that happening. I'll
2:06:29
finish on a happier note.
2:06:33
We have the ChatGPT
2:06:36
astonishing reply of
2:06:39
the week. Courtesy
2:06:41
of one of our listeners. And
2:06:46
this is really becoming interesting. Somebody
2:06:49
said to ChatGPT: Please
2:06:53
analyze and provide
2:06:55
a description of the function
2:06:57
of code that follows this
2:06:59
statement.
2:07:01
Okay? So this is a person who,
2:07:03
like, I don't wanna say
2:07:05
they're lazy; maybe they're gonna
2:07:07
turn out to be smart.
2:07:10
And we see here
2:07:13
in this quoted question the
2:07:15
code. So it starts out with a script
2:07:18
source, and it's got
2:07:20
a URL to Google APIs:
2:07:22
ajax, libs, jquery, three
2:07:24
point six one, jquery, you know, min
2:07:27
dot js, so it's minified jQuery.
2:07:29
So it loads that. Then the
2:07:31
actual script in question, which
2:07:34
you can see there's a couple of arguments
2:07:36
in the clear, but most
2:07:38
of it is gibberish. It's
2:07:41
all been scrambled and encrypted,
2:07:43
you know, minified, and
2:07:46
obscured so nobody can
2:07:48
see what's in there. And you see little bits.
2:07:50
There's a return statement.
2:07:52
There's a parseInt
2:07:54
and another one and a and a few. So
2:07:57
anyway, it's
2:07:58
gobbledygook. Right? Astonishingly,
2:08:02
this guy says, please analyze and provide
2:08:05
a description of the function of code that follows
2:08:07
the statement. ChatGPT
2:08:09
says, and
2:08:12
I kid you not: The code
2:08:14
appears to be obfuscated JavaScript
2:08:17
code that makes an Ajax request
2:08:19
to
2:08:22
the URL, and then
2:08:24
it gives the URL, which it has
2:08:26
deobfuscated. The
2:08:28
payload of the request contains
2:08:30
an email address that is passed
2:08:32
as a parameter. The response
2:08:35
of the request is written to the document.
2:08:38
The purpose of the code is unclear, but
2:08:40
it may contain malicious content, such
2:08:43
as code that
2:08:45
performs unauthorized actions on
2:08:47
the client side or collects and
2:08:49
exfiltrates sensitive information. I,
2:08:53
it says, ChatGPT speaking
2:08:55
in the first person, I
2:08:57
would advise against running this
2:08:59
code as it may pose a security
2:09:02
risk to your system and data. Leo,
2:09:07
I don't know. I think it's smart. I
2:09:10
am astonished. Once
2:09:12
again, I am
2:09:15
astonished. And
2:09:17
I do see something happening. I've
2:09:19
seen some tweets which are
2:09:21
a little concerning. Because
2:09:24
people are starting to ask
2:09:27
ChatGPT for help. I mean, they're like,
2:09:30
one of the things that needs
2:09:32
to be reinforced about this is
2:09:34
it is often wrong. Yeah.
2:09:37
It is not... Maybe
2:09:39
it's like asking Uncle Benny who's,
2:09:42
you know, got a little bit of the, you
2:09:45
know, we're not sure about him. This
2:09:48
thing, you know, it
2:09:51
always sounds authoritative.
2:09:54
And so it's like it's selling its
2:09:56
own answers. But sometimes
2:09:58
it's just like way off. So,
2:10:02
you know, we should, you know, just
2:10:04
remind people, yes, you know,
2:10:06
you can use it, as Rob did,
2:10:09
to create a template for some
2:10:11
code he would never have written if he
2:10:13
had to do it himself. He had to go in and
2:10:15
fix it, though. You know, it was broken in
2:10:17
a bunch of places. And and I'm not letting
2:10:20
it get near SpinRite. But
2:10:22
for what it's worth, it's
2:10:25
worth something. And, boy, I do think
2:10:27
it's found its home in search engines,
2:10:30
Leo. The idea that this thing
2:10:32
could, you know... I
2:10:34
mean, it is a search engine, essentially. But
2:10:36
but stick that on the front end and we
2:10:39
might really see search take on a whole new
2:10:41
form. Yeah.
2:10:42
Microsoft announced today they're gonna use it with
2:10:44
Bing in their edge browser
2:10:46
and Google's got something they're gonna announce, I think,
2:10:48
tomorrow. So we shall
2:10:51
see. It's exploding right
2:10:53
now.
2:10:54
It is just astonishing. I know.
2:10:56
As I as I said last week, we're on the brink
2:10:58
of something. I don't know what. Nobody
2:11:00
knows what. I think this is, you know, still
2:11:03
early days, but But
2:11:05
we need to be careful not to, you know,
2:11:07
think that it actually has right answers to