The Internet Dodged a Bullet - Wyze Breach, Patch Tuesday, KeyTrap

Released Wednesday, 21st February 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:00

It's time for security now. Steve Gibson is

0:02

here coming up. We're going to

0:04

talk about the webcams and accidentally put

0:06

everybody's video in everybody else's house.

0:09

Patch Tuesday is here. Well,

0:12

it was last week. Steve has some notes

0:14

on that. Why the Flipper

0:16

Zero is being banned in Canada and

0:19

a nightmare scenario with DNSSEC

0:21

that could have brought the whole

0:24

internet to its knees. Steve

0:27

explains next on security now.

0:31

Podcasts you love from

0:34

people you trust. This

0:37

is Twit. This

0:42

is security now with Steve Gibson, episode

0:44

962 recorded Tuesday, February 20th, 2024.

0:50

The internet dodged a

0:52

bullet. This episode is

0:54

brought to you by Panoptica.

0:57

Panoptica, Cisco's cloud application security

0:59

solution provides end to end

1:01

lifecycle protection for cloud native

1:03

application environments. It

1:05

empowers organizations to safeguard their

1:08

APIs, serverless functions, containers, and

1:11

Kubernetes environments. Panoptica ensures

1:13

comprehensive cloud security, compliance,

1:15

and monitoring at scale,

1:17

offering deep visibility, contextual

1:20

risk assessments, and actionable

1:22

remediation insights for all your

1:24

cloud assets. Powered by

1:27

graph-based technology, Panoptica's attack

1:29

path engine prioritizes and

1:31

offers dynamic remediation for

1:33

vulnerable attack vectors, helping

1:35

security teams quickly identify

1:37

and remediate potential risks

1:39

across cloud infrastructures. A

1:42

unified cloud native security

1:44

platform minimizes gaps from

1:46

multiple solutions, providing centralized

1:48

management, and reducing non-critical

1:50

vulnerabilities from fragmented systems. Panoptica

1:53

utilizes advanced attack path

1:55

analysis, root cause analysis,

1:57

and dynamic remediation techniques.

2:00

to reveal potential risks from an

2:02

attacker's viewpoint. This approach identifies new

2:05

and known risks, emphasizing critical attack

2:07

paths and their potential

2:09

impact. This insight is unique

2:11

and difficult to glean from

2:14

other sources of security telemetry

2:16

such as network firewalls. Get

2:18

more information on Panoptica's website

2:20

Panoptica.app. More details on

2:23

Panoptica's website Panoptica.app. It's

2:28

time for security now the show you

2:30

wait for all week long I know

2:32

it I know it one of the

2:34

nine best security shows in the world.

2:36

What number are we on that list

2:44

by the way number one I

2:46

hope? No no we

2:48

had to we went away I think

2:50

maybe seven but I don't it wasn't

2:52

ranked in like order of goodness it

2:54

might have been alphabetical I think it

2:56

was alphabetical all right well then we

2:59

would come in like yeah Applebee's security

3:01

show which

3:04

is a wonderful wonderful show. So

3:08

oh boy no one

3:11

is gonna believe this literally because

3:14

well you will once I'm done but this

3:18

has got to be the most underreported

3:20

event that I can remember

3:24

the and this sounds

3:26

like hyperbole and you know you know me

3:28

so it could well be but

3:33

the whole internet was

3:35

at risk for the last couple

3:37

months and a group

3:39

of researchers silently

3:43

fixed it. What? Because

3:46

if it had been discovered

3:49

the internet could have been shut

3:51

down. It turns out there

3:53

was a problem that has been

3:56

in the DNS specification for 24

3:58

years. It's

4:02

deliberate. So it wasn't

4:04

a bug, but it was

4:06

a feature that a group of German

4:08

researchers figured out how to exploit.

4:10

I'm stepping on my whole

4:13

storyline here because I'm

4:17

so excited by this. It's like, wow. Okay,

4:19

so this is episode 962 for February 20th,

4:25

and I revealed the Internet dodged

4:27

a bullet. Not

4:29

the worst bullet though, right? Didn't Dan Kaminsky

4:31

save the Internet back in 2008? No. Dan

4:34

is a

4:36

great publicist. Oh, was a great

4:42

publicist. Yes, the

4:45

issue of DNS queries

4:47

not having sufficient entropy was

4:50

important. I wrote the,

4:53

what the hell is it called?

4:55

I forgot now... the DNS Spoofability Test.

4:57

A lot of

5:01

work in order to allow people

5:03

to measure the entropy of the

5:07

DNS servers that are issuing queries

5:09

on their behalf. This

5:11

blows that away. I'm doing it

5:13

again. Anyway, so we're going to

5:18

answer some questions. It is

5:20

something. Leo, be careful not to look at the picture

5:22

of the week because this is a great

5:24

one. So we're going to answer

5:26

some questions. What's the worst mistake that

5:29

the provider of, and I'm

5:31

not had too much coffee,

5:33

of remotely accessible residential webcams

5:36

could possibly make? When

5:38

Lisa said, we don't want any cameras in

5:40

our house,

5:44

what was she worried would

5:46

happen? Oh, yeah. What

5:48

surprises did last week's

5:50

Patch Tuesday bring? Why

5:53

would any website put an upper

5:55

limit on password length? And

5:57

for that matter, what's up with the "no use of special

6:00

characters" business? Will

6:02

Canada's ban on importing the

6:04

Flipper Zero hacking gadgets reduce

6:07

their car theft? Yeah. Exactly

6:11

why didn't the internet build in

6:13

security from the start? Like what

6:15

was wrong with them? How

6:18

could they miss that? Doesn't Facebook's

6:20

notice of a previous

6:22

password being used

6:24

leak information? Why isn't

6:28

TOTP just another password that's

6:30

unknown to an attacker? Can

6:33

exposing SNMP be

6:36

dangerous? Why doesn't

6:38

email's general lack of encryption

6:40

and other security make

6:43

email-only login quite

6:46

insecure? And finally,

6:48

what major cataclysm

6:51

did the internet just successfully

6:53

dodge and is it even

6:56

possible to have a minor

6:58

cataclysm? All

7:01

this and more from the

7:03

one of the best 17 cyber

7:05

security podcasts in the world. That's

7:08

a different one. Mine was nine. We were in the

7:10

top nine. Oh, well, SANS is who says we're in

7:12

the top 17. So, well, but where? And

7:15

are they alphabetical? They're in random order, but we're

7:17

on it, which is the most important thing. I

7:22

mean, the SANS Institute is pretty credible. SANS is

7:24

good. Yeah, but I'd rather be in

7:27

the top nine. So today we'll be

7:29

taking a deep

7:34

dive, Leo,

7:37

after we examine a

7:39

potential solution to

7:41

global warming and energy production.

7:45

No, this is serious as

7:47

shown in our terrific picture

7:49

of the week. I can't

7:52

wait. Some things are just

7:54

so obvious in retrospect. Oh,

7:57

wow. This is a podcast for

7:59

the age. Thank God we got it in

8:01

before 999. Oh my goodness.

8:03

Otherwise, you know, we'd have to

8:05

wait until four digits. Well

8:10

we will continue with, what is this, episode?

8:12

I have, now I've taken it down. 962,

8:15

maybe I've counted them. Yes, one by one. In

8:18

just a moment. But first, a word from

8:21

our sponsor, Kolide. We love

8:23

Kolide. What do you

8:25

call endpoint security that works perfectly? Oh,

8:28

it's a great product. It works perfectly,

8:31

but makes users miserable. Well,

8:33

that's a guaranteed failure, right? Because

8:35

the user is going to work around

8:37

it. The old approach to endpoint security,

8:40

lock down employee devices, roll out changes

8:42

through forced restarts, but that

8:44

just doesn't work. The

8:47

old approach is terrible

8:50

all around. IT is miserable. They've

8:52

got a mountain of support tickets. Employees

8:55

are miserable. They start using their own personal devices.

8:57

Now you're really going to be in trouble just

8:59

to get the work done. And the best, the

9:01

worst thing is the guy or gal who's writing

9:03

the check, the executives, they

9:05

opt out the first time they're late for a

9:08

meeting. It's like, I thought I'd use this. Look,

9:11

it's pretty clear. And if you're in

9:13

the business, you know this. You can't

9:15

have a successful security implementation unless

9:17

you get buy-in from the end users.

9:21

That's where Kolide is great. They don't just buy

9:23

into it. They help you. They're part of your

9:25

team. Kolide's

9:27

user-first device trust solution notifies users as

9:29

soon as it detects an issue on

9:32

their device. You

9:34

know, not with a big klaxon or anything, but

9:36

just in a nice way. And it says,

9:38

here's how you can fix this without bothering

9:40

IT. Here's how you can make your device

9:42

more secure. And by

9:44

the way, users love it. They're on the team. They're

9:47

learning. They're doing the stuff that makes your company

9:49

succeed. And you'll

9:51

love it because untrusted devices are

9:53

blocked from authenticating. The users

9:56

don't have to stay blocked. They just fix it and they're

9:58

in. So you get a little carrot-and-stick thing

10:00

going on. Kolide is designed for

10:02

companies with Okta. It works on

10:05

Mac OS, Windows, Linux, mobile devices,

10:07

BYOD, doesn't require MDM. If

10:10

you have Okta and you got that

10:12

half of the authentication job done but

10:14

you want a device trust solution that does the

10:16

other half and respects your team: Kolide.

10:19

That's the solution. kolide.com/security

10:21

now. Go there, you get

10:23

a demo, no hard sell, just explains how it

10:25

works and I think you'll agree it's a great

10:27

idea. kolide.com

10:31

slash security now. We thank them

10:34

so much for their support. Now

10:36

let's see if Steve has

10:39

overhyped his picture of the week.

10:42

What is this going to do for us? It's going

10:44

to save the world. Solution

10:48

to global warming and energy production.

10:55

Well of course you put light

10:57

on the solar panels and infinite

10:59

electricity. That's exactly right

11:01

Leo. So what we

11:03

have for those who are not looking at the video

11:06

is a picture of a

11:08

rooftop with some solar panels mounted on

11:10

it as many are these days. What

11:13

makes this one different is that

11:15

it's surrounded by some high-intensity

11:17

lights pointing down at the

11:20

solar panels because you

11:22

know why not? They

11:24

you know... that's right, generate day and night. Perfect.

11:27

And my caption here I said when

11:29

you're nine years old you wonder why

11:32

no one ever thought of this before.

11:35

Adults are so clueless. I bet you even

11:37

knew better when you were nine. Well

11:40

you know it was interesting because this put

11:42

me in mind of

11:44

the quest for perpetual motion

11:46

machines. Remember those back in

11:48

our youth? I mean and

11:50

like even da Vinci was a

11:53

little bit obsessed with these things. There was

11:55

one really cool design where you had an

11:58

inner... you had a

12:01

wheel with marbles

12:03

riding tracks where

12:05

the marbles could roll in

12:08

and out and the

12:10

tracks were canted

12:13

so that

12:17

on one side of the wheel the marbles

12:20

rolled to the outside. Therefore

12:22

their weight was further away from the

12:24

center so pulling down

12:26

harder. And on the

12:28

other side the way the fixed tracks were

12:31

oriented the marbles would roll into the

12:37

center into the hub so their weight

12:39

was pulled up. There it is you

12:41

just found the perpetual motion machine exactly.

12:44

And of course it stopped turning. And

12:47

I mean there were again I was

12:49

you know five and interested in

12:51

physics already because I was wiring things up

12:54

with batteries and light bulbs and stuff when

12:56

I was four. So I

12:58

was you know I spent some length of time

13:00

puzzling that out. The good news is now as

13:02

an adult I don't give it a second thought.

13:06

You say I believe Newton's

13:08

law of the conservation of energy. You just

13:10

believe that's to be true. Well

13:13

the problem we have with our picture of

13:15

the week is that

13:19

lights are not 100 percent

13:22

efficient in converting electricity into

13:25

light. For example they get

13:27

warm so you have some

13:29

energy lost in heat and

13:32

the physics of the conversion are also not

13:34

completely efficient. Solar panels are no more than

13:36

five or ten percent efficient. And

13:38

there you go. So you know the idea

13:40

would be you hook the output of the

13:43

solar panel to the input of the lights

13:45

and then you know when the sun

13:48

goes down this just keeps going. Somebody

13:51

has a good point though. Maybe

13:53

those lights are hooked up to the

13:55

neighbor's electricity. And

14:00

the only thing I could think when I

14:02

was trying to find a rationale was that

14:05

they might turn the lights on at

14:07

night because for some wacky reason whatever

14:11

they're powering from the solar panels

14:14

needs to be on all the

14:16

time or maybe they turn it

14:18

on on cloudy days because again

14:20

for the same reason. So it's sort of

14:22

a sun substitute because of

14:26

dumbness because it is dumb.

14:28

As you said solar panels are way inefficient

14:30

so you're going to get much less juice

14:32

out of the solar panels than you put

14:34

into the array of lights which are busy

14:36

lighting them. We're in a golden

14:39

era for scammers. You're just going to see endless

14:42

variations of this on YouTube and on

14:44

Twitter and using water to

14:46

power your car and this stuff just never dies.

14:48

It never dies. Wait, does that work? I've got

14:50

to try that. No. No.

14:53

Okay. No. Okay.

14:56

The following story, Leo, I'm reminded of

14:59

your wife Lisa wisely, so

15:01

to speak, because this story is about

15:03

Wyze, forbidding

15:05

cameras of any kind

15:08

inside your home.

15:10

9to5Mac's headline read,

15:12

Wyze camera breach let 13,000

15:14

customers view other people's homes.

15:20

Oh boy. Wyze hardware, Wyze security failure

15:23

lets 13,000 customers see into other users'

15:25

homes. GeekWire,

15:29

Wyze security cam incident that exposed images

15:32

to other users impacts more than 13,000

15:35

customers. And even our good old BleepingComputer:

15:37

Wyze camera glitch gave 13,000 users a

15:39

peek into other homes. Now

15:44

one of our listeners, a user

15:46

of Wyze monitoring cameras was kind

15:48

enough to share the entire email

15:51

he received from Wyze, but Bleeping

15:53

Computer's coverage included all of those

15:55

salient details and added a bit

15:57

more color, as you might expect.

16:00

Here is what Bleeping just

16:02

wrote yesterday. They said: Wyze

16:05

shared more details on

16:08

a security incident that impacted thousands

16:10

of users on Friday and

16:12

said that at least 13,000 customers

16:15

could get a peek into other

16:17

users' homes. The

16:19

company blames a third-party caching

16:22

client library recently added

16:24

to its systems, which

16:26

had problems dealing with a large

16:28

number of cameras that came online

16:30

all at once after

16:33

a widespread Friday outage. Multiple

16:36

customers had been reporting seeing

16:39

other users' video

16:42

feeds under the

16:44

events tab in the

16:47

app since Friday, with

16:49

some even advising other customers to

16:51

turn off the cameras until these

16:54

ongoing issues are fixed. Wyze

16:56

wrote, quote, The outage

16:58

originated from our partner AWS

17:01

and took down Wise devices for

17:04

several hours early Friday morning. If

17:07

you tried to view live cameras

17:09

or events during that time, you

17:11

likely weren't able to. We're

17:13

very sorry for the frustration and confusion

17:16

this caused. As we worked

17:18

to bring cameras back online, we

17:20

experienced a security

17:22

issue. Some users

17:25

reported seeing the wrong

17:27

thumbnails and event videos.

17:38

We immediately removed access to the

17:41

events tab and started that investigation.

17:45

We bravely did that. Wyze

17:47

says this happened because of

17:49

the sudden increased demand, and

17:53

I'll get to my skepticism on that in a minute, and

17:56

led to the mixing of

17:58

device IDs and

18:01

user ID mappings. You

18:03

don't ever want that to happen with your camera

18:05

system, causing the erroneous

18:07

connection of certain data with

18:10

incorrect user accounts. As

18:12

a result, customers could see

18:15

other people's video feed thumbnails

18:17

and even video footage

18:19

after tapping the camera thumbnails in

18:21

the Wyze app's Events tab. In

18:25

emails sent to affected

18:27

users, Wyze confessed, quote,

18:29

we can now confirm that as cameras

18:32

were coming back online, about 13,000 Wyze

18:34

users received

18:38

thumbnails from cameras that were not

18:41

their own and 1,504 users

18:43

tapped on them. We've

18:50

identified your Wyze account

18:52

as one that was

18:54

affected. This means that

18:56

thumbnails from your events

18:59

were visible in another Wyze

19:01

user's account and that

19:03

a thumbnail was tapped. Oh,

19:06

that is a confession. Somebody was

19:08

looking at your video, baby. Most

19:11

taps enlarged the thumbnail, but in

19:13

some cases it could have caused

19:15

an event video to

19:17

be viewed. Let me zoom in on

19:19

that one. What is that in the

19:21

corner over there? Yeah, that's right. Wyze

19:25

has yet to share the exact

19:27

number of users who had their

19:29

video surveillance feeds exposed in the

19:31

incident. The company has now

19:33

added an extra layer of verification. Oh,

19:36

you betcha. For users who

19:38

want to access video content via

19:40

the events tab to ensure that

19:42

this issue will not happen in

19:44

the future. Is that really your

19:46

video you're about to look at?

19:49

Additionally, it adjusted systems

19:51

to avoid caching during

19:54

user device relationship checks

19:57

until it can switch to a

19:59

new client library, get rid

20:01

of that old cache, capable of

20:04

working correctly, which would be convenient,

20:06

during extreme events," they had in

20:08

quotes, like the Friday outage. Okay,

20:11

now, I like

20:14

Wyze, and their cameras are

20:17

adorable, little, beautifully designed cubes.

20:19

You know, they look like

20:21

something Apple would produce. But

20:24

at this stage, in the evolution of

20:26

our understanding of how

20:28

to do security on the Internet,

20:31

I cannot imagine streaming

20:34

to the cloud any

20:37

content from cameras that are looking

20:39

into the interior of my home.

20:41

You know, maybe the backyard, but

20:44

even then, you know, who knows? You

20:47

know, the cloud and real-time images

20:49

of the contents of our homes

20:51

do not mix. I

20:54

understand that for most users, the

20:56

convenience of being able to log

20:59

in to a Wyze camera monitoring

21:01

website to remotely view one's residential

21:04

video cameras is difficult to pass

21:06

up, you know, seductive, if

21:08

you weren't listening to this podcast. And

21:11

the whole thing operates with almost

21:13

no setup, you know, and Wyze's

21:15

response to this incident appears to

21:17

be everything one could want. I

21:19

mean, they've really been honest,

21:22

which could not have been easy. The

21:24

letter that our listener shared with me,

21:27

unlike the letter that Bleeping Computer quoted,

21:30

said that his account had not

21:32

been affected. So, you know,

21:34

Wyze was on the ball, and

21:36

they clearly got logging working because

21:39

they knew that 1,504 people

21:42

did, you know, click on a

21:45

thumbnail, like, what is that? That doesn't

21:47

look like my living room. Like, oh,

21:50

it's not my living room. Wow, who's

21:52

that? Anyway,

21:54

I should add, however, that

21:56

I'm a bit skeptical about the

21:59

idea that simply

22:01

overloading a caching

22:03

system could cause

22:05

it to get its pointers scrambled and

22:08

its wires crossed. Thus

22:10

causing it to get its

22:12

users confused. If it really

22:14

did happen that way it was a crappy cache.
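For what it's worth, here's a minimal, purely hypothetical Python sketch (invented names, not Wyze's actual code) of the bug class Steve is describing: a cache whose entries can collide under load and get served without verifying whose data they are.

```python
# Hypothetical illustration only -- not Wyze's code. A fixed-size cache
# whose slots can collide will serve the wrong user's data unless each
# hit is verified against the requested identity.

class ThumbnailCache:
    def __init__(self, slots=1024):
        self.slots = slots
        self.table = {}                  # slot -> (device_id, thumbnail)

    def get(self, user_id, device_id, fetch):
        slot = hash((user_id, device_id)) % self.slots
        entry = self.table.get(slot)
        if entry is None:
            entry = (device_id, fetch(device_id))
            self.table[slot] = entry
        # BUG: on a slot collision this hands back whatever device landed
        # there first -- another user's thumbnail -- with no ownership check.
        return entry[1]
        # FIX: confirm entry[0] == device_id (and that user_id owns that
        # device) before serving, and refetch on any mismatch.
```

A flood of cameras reconnecting all at once maximizes exactly those collisions, which at least rhymes with Wyze's "sudden increased demand" explanation.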

22:18

I'm glad they're thinking about like revisiting

22:20

this whole thing because there was a

22:22

problem somewhere. Anyway I

22:26

do like the cameras. I'm just not gonna

22:29

let them be exposed to the outside world. That

22:31

makes no sense. Okay.

22:35

By the way, Jason on MacBreak Weekly

22:37

recommended a new Logitech camera that's

22:39

HomeKit enabled which means it doesn't

22:42

send video outside your house. You

22:45

know... or I guess it does, probably, via Apple's encrypted iCloud, something like that. And again, done right, probably. Right, of

22:54

course. Yeah. So HomeKit and security

22:57

and Apple that seems like a

22:59

good choice. It's too

23:01

bad because the Wyze stuff is very inexpensive.

23:03

I've been recommending it for years and using

23:05

it too. Oh I think

23:07

in fact when you order they send you

23:09

money. Yeah practically. Yeah they are. And they're

23:12

just beautiful little things.

23:15

Yeah. Yeah it's too

23:17

bad. Okay we haven't checked in on

23:20

a Microsoft patch Tuesday for a while. Last

23:23

week's update was notable for

23:26

its quiet inclusion of a

23:29

mitigation for what the

23:31

industry's DNS server vendors

23:34

have described as quote the

23:37

worst attack on DNS

23:39

ever discovered unquote.

23:43

It seems to have slipped

23:46

under much of the rest of the

23:48

industry's radar but not

23:50

this podcast. Since it could

23:52

have been used to

23:54

bring down the entire internet it

23:57

is today's topic which we will of

23:59

course be getting back to later. But

24:01

first we're going to take a look at Patch

24:04

Tuesday and then answer some questions and have

24:06

some interesting deep dives. The

24:10

DNS attack didn't happen,

24:13

so the Internet is still here. Thus,

24:16

Microsoft doesn't get off the hook for

24:18

fixing another round of problems with its

24:21

products. Last week's patches

24:23

included fixes for a pair of

24:25

zero days that were being actively

24:27

exploited even though they were not

24:29

the worst from a CVSS rating

24:32

standpoint. That goes to

24:34

a different pair which earned 9.8

24:36

CVSS ratings. Overall, last

24:42

Tuesday Microsoft released patches to repair a total

24:44

of 73 flaws across its product lineup.

24:50

Of those 73, 5 are

24:52

critical, 65 are

24:54

important, and the last three have moderate

24:57

severity. Fixed separately were

24:59

24 flaws in

25:01

Edge which had been repaired in

25:03

the intervening month between

25:05

now and last month's updates. The

25:09

two zero days are a

25:12

Windows SmartScreen security bypass

25:14

carrying a CVSS of

25:16

only 7.6 and

25:19

an Internet shortcut files security

25:22

feature bypass with a CVSS

25:24

of 8.1. The

25:26

first one of those two, the

25:28

Windows SmartScreen security feature bypass, allows

25:31

malicious actors to inject their

25:33

own code into SmartScreen

25:35

to gain code execution,

25:37

which could then lead to data

25:40

exposure. The lack of system

25:42

availability could also happen, or

25:44

both. Now, the reason it was

25:46

only rated at 7.6 is that

25:49

for the attack to work, the

25:51

bad guys needed to send the

25:53

targeted user a malicious file and

25:56

convince them to open it. So, it wasn't

25:58

like, you know, just receive a packet

26:00

and it's the end of the world, which

26:03

actually is what happened with this DNS business

26:05

we'll get to. The

26:08

other zero-day permitted

26:10

unauthenticated attackers to bypass

26:13

displayed security checks by

26:16

sending a specially crafted file to

26:18

a targeted user. But

26:20

once again, the user would somehow need

26:22

to be induced to take the action

26:24

of clicking on the link that they'd

26:26

received. So not

26:29

good. It was being actively exploited

26:31

in the field. Both of these were zero

26:34

days being used. Still,

26:37

this one rated an 8.1 and it's

26:39

fixed as of last Tuesday. The

26:42

five flaws deemed critical in

26:44

order of increasing criticality were

26:47

a Windows Hyper-V denial

26:49

of service vulnerability that got itself

26:51

a score of 6.5. So critical,

26:53

but not a high CVSS score.

26:57

The Windows Pragmatic General

27:00

Multicast, which is PGM,

27:03

Remote Code Execution Vulnerability, scored a

27:05

7.5. Microsoft

27:08

Dynamics Business Central slash

27:10

NAV Information Disclosure vulnerability

27:13

came in with an 8.0.

27:15

And finally, the two biggies

27:17

both getting a 9.8. To few people's surprise,

27:23

we have another Microsoft Exchange Server

27:25

Elevation of Privilege Vulnerability and

27:28

a Microsoft Outlook

27:31

Remote Code Execution Vulnerability.

27:33

Both of those 9.8, both of those easy

27:36

to do, both of those now

27:39

resolved as of last week. A

27:42

senior staff engineer at Tenable, the

27:44

security firm, said in

27:46

a statement that this was very

27:49

likely to be exploited and that

27:51

exploiting the vulnerability could result in

27:53

the disclosure of a targeted user's

27:55

NTLM, you know, NT LAN Manager, version

27:58

2 hash, which could be

28:00

relayed back to a vulnerable exchange

28:02

server in an NTLM

28:04

relay or pass the hash attack

28:07

to allow the attacker to authenticate

28:09

as the targeted user. So it

28:11

was a way of getting into

28:14

Microsoft Exchange Server through this 9.8,

28:17

basically a

28:19

backdoor vulnerability. And

28:22

believe it or not, last Tuesday's updates

28:24

also fixed 15. I

28:28

had to like, what? Really? Yes,

28:31

15 remote code execution flaws

28:34

in Microsoft's WDAC

28:37

OLE DB provider for SQL

28:40

Server that an attacker

28:42

could exploit by tricking

28:44

an authenticated user into

28:46

attempting to connect to

28:48

a malicious SQL Server

28:50

via Windows OLE DB. So

28:54

15 remote code execution flaws. It must

28:56

be that someone found one and said,

28:59

wow, let's just keep looking. And

29:02

the more they looked, the more they found. So

29:04

Microsoft fixed all of those. And

29:07

rounding off the patch batch, as I

29:09

mentioned, is a mitigation,

29:12

not a total fix for a 24-year-old fundamental

29:15

design flaw, not

29:17

an implementation flaw, a design flaw

29:20

in the DNSSEC spec,

29:24

which had they known of it, bad

29:27

guys could have used to

29:29

exhaust the CPU resources of

29:31

DNS servers to lock them

29:34

up for up to 16

29:36

hours after receiving

29:38

just a single DNS

29:40

query packet.
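To get a feel for the scale of that, here's the back-of-envelope arithmetic in Python. The figures are illustrative assumptions in the spirit of KeyTrap (many DNSKEYs sharing one key tag, many RRSIGs in one response, a validator trying every combination), not the researchers' exact parameters.

```python
# Illustrative KeyTrap arithmetic: a validator that tries every DNSKEY
# against every RRSIG performs keys * sigs expensive verifications for
# a single crafted response. Figures are rough assumptions, not the
# paper's exact numbers.

keys = 582          # DNSKEYs crafted to share the same key tag
sigs = 340          # RRSIGs packed into the same response
verifications = keys * sigs
ms_per_verify = 1.0                      # assume ~1 ms per validation

minutes = verifications * ms_per_verify / 1000 / 60
print(f"{verifications:,} verifications ~= {minutes:.0f} minutes of CPU")
# One packet ties up minutes of CPU; stacking such responses inside the
# resolution of a single query is how one query could pin a resolver
# for hours.
```

We'll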

29:43

be talking about this bullet that the

29:45

internet dodged as

29:47

we wrap up today's podcast. But

29:49

first, Ben wrote,

29:51

hey, Steve, love the show. I

29:54

hated the security courses I took

29:56

in school, but you make it way

29:58

more interesting. I haven't missed

30:00

a podcast from you in three years. My question

30:02

was, I was recently

30:04

going through my password vault

30:06

and changing duplicate passwords. I

30:09

encountered a lot of sites

30:11

with length and symbol restrictions

30:13

on passwords, for example, no

30:15

passwords longer than 20 characters

30:17

or disallowing certain symbols. My

30:20

understanding of passwords is that they

30:22

all get hashed to a standard

30:25

length regardless of the input. So

30:27

it can't be for storage

30:30

space reasons. Why the

30:32

restrictions? Even if I

30:34

input an 8-character password, the hash will end

30:36

up being 128 bits or whatever. I

30:40

would love some insight because it makes no sense to

30:42

me. Thanks, Yuri. So that's

30:44

his real name. Okay, so

30:47

Yuri, you're not alone. When

30:49

you look closely at it, none

30:51

of this really makes any sense to

30:53

anyone. I think the

30:55

biggest problem we have today is that

30:58

there are no standards, no

31:00

oversight, and no regulation. And

31:05

everything that is going on behind

31:07

the scenes is opaque to the

31:09

user. We only

31:11

know in detail, for

31:14

example, what LastPass

31:16

was doing and the entire

31:18

architecture of it because it

31:20

really mattered to us. And

31:23

back then, we drilled down and got questions

31:25

from the guy that wrote that stuff. And

31:28

so we really understood what

31:31

the algorithms were. But

31:33

on any random given website, we

31:35

have no idea. There's just no visibility. You

31:38

can also kind of feel how antiquated

31:40

that is when you say the words, the guy

31:42

who wrote it. Besides

31:48

Yuri's software, Spinrite and

31:51

all of that, nothing's

31:54

written by one person. That's

31:56

nuts. Right. You're

31:59

right. Yeah, you're right. The fact

32:01

that Joe Siegrist wrote LastPass 30

32:03

years ago by himself is amazing. Yeah.

32:07

So everything is opaque to

32:09

the user. We

32:12

hope that passwords are stored on

32:14

a site's server in a

32:17

hashed fashion. But we don't

32:19

know that. And

32:21

assuming that they are hashed, we

32:24

don't know whether they're being hashed on the

32:26

client side in the user's browser or

32:28

on the server side after they

32:30

arrive in the clear over

32:33

an encrypted connection. That much we do

32:35

know because we can see that the

32:38

connection is encrypted. A

32:40

very, very long time ago

32:42

back at the dawn of

32:45

internet identity authentication, someone

32:47

might have been asked by

32:49

a customer support person over

32:52

the phone to prove

32:54

their identity by

32:56

reading their password aloud.

32:59

I'm sure that happened to a few of us.

33:01

I'm sure I was asked to do it like

33:04

at the very beginning of all this. The

33:06

person at the other end of the line would

33:09

be looking at it on their

33:11

screen, which they had pulled up for

33:13

our account, to see

33:15

whether what we read to them over

33:17

the phone matched what they had on

33:19

record. That's the way it used to be.

33:21

Of course, as we know,

33:23

this could only be done if

33:26

passwords were stored as they

33:28

actually were given to

33:31

the server in the clear, as indeed

33:34

they originally were. And in

33:36

that case, you

33:39

can see why the use

33:41

of special characters with names

33:43

like circumflex, tilde, pound,

33:47

ampersand, back apostrophe, and

33:49

left curly brace might

33:52

be difficult for some people

33:54

at each end of the

33:56

conversation to understand. Tilde, what's

33:58

that? Circumflex? What's a

34:00

circumflex? What? So

34:03

restricting the character set to a

34:05

smaller, common set of characters made

34:07

sense back then. And

34:10

we also know that in

34:12

those very early days, a

34:14

web server and a web

34:16

presence was often just a

34:18

fancy graphical front end to

34:20

an existing monstrous, old school

34:22

mainframe computer up on

34:24

an elevated floor with lots of

34:27

air conditioning. And

34:29

that computer's old account

34:32

password policies predated

34:34

the internet. So even

34:36

after a web presence was placed

34:38

in front of the old mainframe,

34:41

its ancient password policies were

34:43

still being pushed through to

34:45

the user. Today,

34:49

we all hope, all, that

34:51

none of that remains. But

34:53

if we've learned anything on this podcast,

34:55

it's to never discount the power and

34:57

the strength of inertia. Even

35:00

if there's no longer any reason to

35:02

impose password restrictions of any kind,

35:05

well, other than minimum length, that

35:07

would be good. Because

35:10

the password A is not a good one.

35:12

Restrictions may still be in place

35:15

today simply because they once were.

35:18

And hopefully, all consumers have learned the

35:20

lesson to never disclose a

35:22

password to anyone at any time

35:24

for any reason. We

35:26

see reminders from companies which are

35:28

working to minimize their own fraud,

35:31

explicitly stating that none of their

35:33

employees will ever ask for a

35:35

customer's password under any

35:38

circumstances. And we're hoping that's because

35:40

they don't have the passwords under

35:42

any circumstances. They were hashed a

35:44

long time ago, and they're just

35:46

being stored. The

35:49

gold standard for password

35:51

processing is for JavaScript

35:53

or WebAssembly running on

35:55

the client, that is the user's

35:58

browser, to perform the initial password

36:01

qualification and then its hashing.

36:05

Some minimum length should be

36:08

enforced. All characters

36:10

should be allowed because why

36:13

not? And requirements

36:15

of needing different character families,

36:18

upper and lower case, numbers

36:21

and special symbols also

36:23

make sense. It will

36:25

protect people from using 123456 or password as their

36:27

password. And

36:32

those minimal standards should be clearly

36:34

posted whenever a new password is

36:36

being provided. It's really annoying

36:39

when you're asked to create an account

36:41

or change your password and then after

36:44

you do it comes up and

36:46

says, oh we're sorry that's too long. Or

36:49

oh you didn't put a special character in. It's

36:51

like why didn't you tell me first? Why?

36:55

Ideally there should be a list

36:57

of password requirements with

36:59

a checkbox appearing in front

37:01

of each requirement dynamically

37:04

checked as its stated

37:06

requirement is met. And

37:09

the submit button should be

37:11

grayed out and disabled until

37:13

all requirements have checkboxes in

37:16

front of them. Thus those requirements

37:18

have been met and a

37:20

password strength meter would be another nice

37:22

touch.
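The validation half of that UI is easy to sketch. Here's a minimal Python illustration of the idea: each stated requirement is checked independently, so a front end can tick its checkbox the moment it's met and keep submit disabled until all of them pass. The specific rules are invented for the example, not any site's actual policy.

```python
# Minimal sketch of the dynamic requirements checklist. Each rule maps
# to one checkbox; submit stays disabled until every rule is met.
import string

def password_requirements(pw):
    return {
        "at least 12 characters": len(pw) >= 12,
        "an uppercase letter": any(c.isupper() for c in pw),
        "a lowercase letter": any(c.islower() for c in pw),
        "a digit": any(c.isdigit() for c in pw),
        "a special character": any(c in string.punctuation for c in pw),
    }

def may_submit(pw):
    return all(password_requirements(pw).values())

print(password_requirements("password"))   # nearly every box unticked
print(may_submit("Tr0ub4dor&3!xyz"))       # True: all requirements met
```

Once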

37:25

someone arrives at the server,

37:27

once something that has been submitted

37:29

from the user arrives at the server,

37:32

then high power systems on

37:34

the server side can hash

37:36

the living daylights out of

37:38

whatever arrives before it's finally

37:40

stored. But

37:42

since we also now live

37:44

in an era where mass

37:46

data storage has become incredibly

37:49

inexpensive and where there's

37:51

very good reason to believe

37:53

that major world powers are

37:56

already recording pretty much

37:58

everything on the Internet. internet

38:00

transactions in the hope of

38:02

eventually being able to decrypt

38:05

today's quantum unsafe

38:07

communications once the use of

38:10

quantum computers becomes practical, there

38:13

is a strong case to be made

38:15

for having the user's client hash

38:17

the qualifying password before it

38:19

ever leaves their machine to

38:22

traverse the internet. Once upon

38:24

a time we coined the

38:26

term PIE, P-I-E, Pre-Internet Encryption.

38:28

So this is like that.

38:31

This would be I guess

38:34

pre-internet hashing. But

38:36

you know JavaScript or preferably WebAssembly

38:38

should be used to hash the

38:41

user's qualifying input.
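As a concrete sketch of that idea, here's roughly what the two halves could look like in Python. The parameters, salts, and function names are illustrative assumptions, not any particular site's scheme.

```python
# Illustrative sketch: the client stretches the password before it ever
# leaves the machine, and the server hashes whatever arrives yet again
# before storing it. PBKDF2 stands in for whatever KDF you'd choose.
import hashlib, os

def client_prehash(password, site, username):
    # Salting with the site and username keeps one password from
    # producing the same hash on two different sites.
    salt = f"{site}:{username}".encode()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def server_store(prehash):
    server_salt = os.urandom(16)
    record = hashlib.pbkdf2_hmac("sha256", prehash, server_salt, 600_000)
    return server_salt, record   # persist these; the cleartext never arrived

sent = client_prehash("correct horse battery staple", "example.com", "yuri")
salt, record = server_store(sent)
print(record.hex())
```

Much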

38:43

of that gold standard that I

38:45

just described is user facing and

38:48

its presence or absence is obvious

38:50

to the user. You

38:52

know unfortunately we rarely see that

38:54

going on today. It

38:58

would be necessary to reverse

39:00

engineer individual web

39:02

applications if we

39:04

wished to learn exactly how each

39:07

one operates. Since avoiding

39:09

the embarrassment of breaches and disclosures

39:11

is in each company's best interest

39:13

and since none of the things

39:15

I've described is at all difficult

39:18

to deploy today we can hope

39:20

that the need to modernize

39:22

the user's experience while improving

39:25

their security will gradually overcome

39:27

the inertia that will always

39:29

be present to some degree.

39:32

But you know so we'll always

39:34

be dragging forward some of

39:36

the past but at some point you

39:38

know, everything should be catching up.

39:43

I like it. Filipe

39:46

Mafra said hello Steve. First

39:49

of all I'd like to thank you and Leo for the

39:51

great show. I'd like to bring

39:53

you something very interesting that recently happened on

39:55

this side of the border. The

39:58

Canadian Industry Minister François

40:01

Philippe Champagne proudly

40:03

tweeted on February 8

40:05

that they are banning

40:07

the importation, sale

40:09

and use of hacking devices

40:12

such as flipper zero that

40:14

are being widely used for auto theft.

40:18

This is an attempt to respond

40:20

to the huge increase in auto

40:22

thefts here in Canada. Even

40:25

if I believe it is good that the government is

40:27

trying to address this issue, I found

40:30

myself thinking that rather

40:33

than blocking the

40:35

usage of such devices, it

40:38

would be better if the industry

40:40

was required to make things right

40:42

by design. This pretty

40:44

much aligns with last week's Security Now episode

40:47

960 regarding security

40:49

on PLCs, Programmable Logic Controllers,

40:51

as we see no commitment

40:53

from those industries to make

40:55

their products safe by design.

40:59

Anyways, I wanted to share this

41:01

with you and get your perspectives.

41:03

Thank you again for the show.

41:05

Looking forward to 999 and beyond.

41:07

Best regards, Philippe. In

41:11

past years, we have

41:13

spent some time looking closely

41:15

at the automotive remote key

41:18

unlock problem. What

41:20

we learned is that it is

41:22

actually a very

41:24

difficult problem to solve fully

41:26

and that the

41:28

degree of success that has

41:31

been obtained by automakers varies

41:33

widely. Some systems are

41:35

lame and others are just about as good

41:37

as can be. We

41:40

have seen even very

41:42

cleverly designed systems, like as

41:44

good as they could be,

41:47

fall to ingenious

41:49

attacks. Remember

41:52

the one where a system was

41:54

based on a forward rolling code

41:57

that was created by a counter in

41:59

the key fob being

42:01

encrypted under a secret

42:04

key and the result of that

42:06

encryption was transmitted to the car.

42:09

This would create a completely

42:12

unpredictable sequence of

42:14

codes. Every

42:16

time the unlock button was pressed, the

42:19

counter would advance and the

42:21

next code would be generated and transmitted.

42:24

And no code ever received

42:27

by the auto would be

42:30

honored a second time. So

42:33

anyone snooping and sniffing

42:35

the radio could

42:37

only obtain a code that had

42:39

just been used and would

42:42

thus no longer be useful again.

42:45

So what did the super clever

42:47

hackers do? They created

42:49

an active attack. When

42:53

the user pressed the unlock button, the

42:56

active attack device would

42:58

itself receive the code

43:00

while simultaneously emitting a

43:02

jamming burst to

43:04

prevent the automobile from receiving

43:06

it. So the car

43:09

would not unlock. Since that

43:11

happens when you are too far away from the car

43:13

and it is not that uncommon, the

43:15

user would just shrug and press the

43:17

unlock button again. This

43:20

time the active attacking device would

43:22

receive the second code, emit

43:25

another jamming burst to prevent the

43:27

car from receiving the second code,

43:30

then itself send

43:32

the first code it had received

43:35

to unlock the car. So

43:38

the user would have to press the button

43:40

twice but they just figured the first one

43:42

didn't make it, the second one unlocked the

43:44

car.
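Here's a toy Python simulation of that two-press jam-and-replay trick. The rolling code is just an HMAC over a counter, a stand-in for the dedicated ciphers real fobs use, so treat it as an illustration of the protocol weakness rather than of any actual fob.

```python
# Toy model of one-way rolling codes and the jam-and-replay attack.
# HMAC-over-counter stands in for the fob's real cipher.
import hmac, hashlib

SECRET = b"fob secret"

def rolling_code(counter):
    return hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()

class Car:
    def __init__(self, look_ahead=16):
        self.look_ahead = look_ahead
        self.spent = set()
    def try_unlock(self, code):
        # Accept any not-yet-seen code within a small look-ahead window.
        for n in range(self.look_ahead):
            if rolling_code(n) == code and code not in self.spent:
                self.spent.add(code)
                return True
        return False

car = Car()
code1 = rolling_code(0)   # press 1: attacker jams the car, records code1
code2 = rolling_code(1)   # press 2: attacker jams again, records code2...
print(car.try_unlock(code1))  # ...and replays code1 -> owner gets in (True)
print(car.try_unlock(code2))  # attacker spends the unused code2 later (True)
print(car.try_unlock(code2))  # a spent code is refused on replay (False)
```

By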

43:49

implementing this bit of subterfuge, the

43:51

attacker is now in

43:53

possession of a code that

43:55

the key fob has issued,

43:58

thus it will be valid,

44:00

but the car has never seen

44:02

it. And it's the

44:04

next key in the sequence from the

44:06

last code that the car did receive.

44:09

It is diabolically brilliant, and

44:12

I think it provides some

44:14

sense for what automakers are

44:17

up against. From

44:19

a theoretical security standpoint,

44:21

the problem is that all

44:23

of the communication is one-way

44:26

key fob to auto.

44:29

The key fob is issuing

44:31

a one-way assertion instead

44:34

of a response to a

44:36

challenge. What's needed

44:38

to create a fully secure system

44:41

would be for the key fob's unlock

44:43

button to send a

44:45

challenge request to the car. Upon

44:48

receiving that request, the car

44:50

transmits a challenge in the

44:53

form of an unpredictable value

44:55

resulting from encrypting a counter. Again,

44:58

the counter is monotonic, upward

45:01

counting 128 bits, and it will never repeat

45:05

during the lifetime of the universe, let

45:07

alone the lifetime of the car or

45:09

its owner. So

45:11

upon receiving that unique

45:14

challenge code sent by

45:17

the car, the key

45:19

fob encrypts that 128-bit

45:22

challenge with its own

45:24

secret key and sends

45:27

the result back to the car. The

45:29

car, knowing the secret kept

45:31

by its key fobs, performs

45:34

the same encryption on

45:37

the code it sent and

45:39

verifies that what the

45:41

key fob has sent it was

45:44

correct.
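In code, the whole scheme is only a few lines. This Python sketch uses HMAC as an illustrative stand-in for the encrypt-the-counter step being described; the shape of the exchange, not the primitive, is the point.

```python
# Minimal challenge/response sketch: the car challenges with a
# never-repeating counter value; the fob proves possession of the
# shared key by MACing it. Recorded exchanges are useless later.
import hmac, hashlib, itertools

SHARED_KEY = b"per-car secret provisioned into its fobs"

class Car:
    def __init__(self):
        self.counter = itertools.count()     # monotonic, never reused
        self.last_challenge = None
    def challenge(self):
        self.last_challenge = next(self.counter).to_bytes(16, "big")
        return self.last_challenge
    def verify(self, response):
        expected = hmac.new(SHARED_KEY, self.last_challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class Fob:
    def respond(self, challenge):
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

car, fob = Car(), Fob()
c1 = car.challenge()
r1 = fob.respond(c1)
print(car.verify(r1))    # True: fresh challenge, correct response
car.challenge()          # the next attempt gets a brand-new challenge...
print(car.verify(r1))    # ...so replaying the captured response fails
```

Now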

45:46

I cannot see any way

45:49

for that system to be defeated.

45:52

The car will never send the

45:54

same challenge and the

45:56

key will never return the same

45:58

response. No amount

46:00

of recording that challenge and

46:02

response dialogue will inform an

46:04

attacker of the proper responses

46:07

to future challenges. If

46:10

some attacker device blocks the reception,

46:12

the car will send a different

46:14

challenge, the key will respond with

46:16

a different reply, and once

46:19

that reply is used to unlock

46:21

the car, the car will no

46:23

longer accept it again. So

46:26

the only problem with this system is

46:29

that now both endpoints

46:32

need to contain transceivers

46:35

capable of receiving and

46:37

transmitting. Previously, the

46:40

fob only had to transmit and the car

46:42

only had to receive. So

46:44

transceivers add some additional cost,

46:46

though not much in production

46:48

since both already contain radios

46:50

anyway. And what this does

46:52

mean is that a simple

46:55

software upgrade to the

46:57

existing hardware installed base will

47:00

not and cannot solve

47:03

this problem. I doubt

47:05

it is possible to create a

47:08

secure one-way system that is safe

47:10

against an active attacker while still

47:12

reliably unlocking the vehicle without unduly

47:15

burdening its user. The

47:17

system I have just described is not

47:19

rocket science, it is what any crypto-savvy

47:21

engineer would design. And since

47:24

this problem is also now well understood,

47:26

I would be surprised

47:28

if next generation systems which

47:30

fix this in this way

47:33

once and for all were not

47:35

already on and off the drawing

47:37

board and headed into production. But

47:41

that doesn't solve the problem which exists

47:43

and which will continue to exist for

47:45

all of today's automobiles. So

47:49

now let's come back to Philippe's point

47:51

about Canada's decision to ban the importation

47:53

of devices such as the Flipper Zero.

47:57

We know that doesn't solve the problem. Will it reduce the severity of the problem? Yeah, probably, somewhat. Kits will spring up to allow people to build their own. Canada is a big place. There's nothing to prevent someone from firing up manufacturing and creating home-grown Flipper Zeros or the like; it's an open source device. I mean, the design is all there. What we keep seeing, however, is that low-hanging fruit is the fruit that gets picked and eaten, and many people won't take the time or trouble to exert themselves to climb a tree to obtain the higher-hanging fruit. So I would

48:47

argue that making car theft even

48:49

somewhat more difficult will likely be

48:52

a good thing. And the Flipper Zero is at best a hacker's gadget. It's not as if it has huge non-hacker applications. It's a lot of fun. It is a lot of fun, and I was gonna use it on my car, and then Russell said don't, because you could actually lock yourself out of the car: the car's security features will see that you're doing it and will prevent you from using your regular fob after that. So I declined. But I was able to get into the office; I was able to clone our key fob and use it. Oh yeah. I did a little bit of brushing up on it yesterday. It is a very cool device. And I know Father Robert, who is now in Italy, you gave it a good home; I think it's really going to get a lot of use. And actually, you know, from a hardware hacker standpoint, all of the little GPIO pins along the top are really cool. It is a very, very great device. And I think you could duplicate it with an Arduino or any of a variety of things, but as we've seen, packaging counts. Like, remember that picture of that TPM buster that we talked about last week, where it has that row of pogo pins on one side? It just looked adorable. It's like, wow, that's very cool. So let's take a break, and then what are we talking about next? We've got... oh, we're going to talk about why the internet didn't start off having security in the first place. Oh yeah. I interviewed, I guess it was Vint Cerf, one of the fathers of the internet, back in the day, and I asked him, you know, why didn't you think of putting crypto in? And he said, we did. No one knew... we now know how people use the internet, but at the time... I'm very curious to hear what you have to say about this. But first, a word from our sponsor, Vanta. From

50:50

dozens of spreadsheets to fragmented tools and manual security reviews. Sound familiar? Managing the requirements for modern compliance and security programs is just that: a lot to manage. Vanta, V-A-N-T-A, is the leading trust management platform that helps you centralize your efforts to establish trust and enable growth across your organization. G2 loves Vanta, year after year. Here's a very typical review from a Chief Technology Officer, quote: there is no doubt about the effect Vanta has on building trust with our customers. As we work more with Vanta, we can provide more information to our current and potential customers about how committed we are to information security, and Vanta is at the heart of it. He was very happy with Vanta. Automate up to ninety percent of compliance, strengthen your security posture, streamline security reviews, and reduce third-party risk, all with Vanta. And speaking of risk, Vanta is offering you a free risk assessment. Very easy to do, no pressure. Go to vanta.com/securitynow. Generate a gap assessment of your security and compliance posture. This is information I know nobody wants to know, but you need to know. Discover your shadow IT, and understand the key actions to de-risk your organization. It's all at vanta.com/securitynow. Your free risk assessment awaits: V-A-N-T-A, vanta.com/securitynow. I love their slogan: compliance that doesn't suck too much. All right. Will M. Scott tweets: Steve, I'm

52:39

wondering about your thoughts. The

52:41

cyber security community seems to

52:43

bemoan the lack of security

52:45

baked into the original internet

52:47

design, and ceaselessly encourages designers

52:50

of new technologies to bake

52:52

in security from the get

52:54

go. Well, we certainly agree with

52:56

the second half of that.

52:58

Several books I'm reading for

53:00

a cyber and information warfare

53:02

class suggest that government regulation

53:04

to require. Security is the answer

53:07

and should have been placed on

53:09

the internet. in the initial design.

53:12

However, I suspect it's security.

53:14

had been a mandate. On day

53:16

one, the robust cyber community we

53:19

have the day would not exist.

53:21

I see the internet as more

53:23

of a wicked problem where solutions

53:25

and problems emerged together but cannot

53:27

be solved up front. Your thoughts.

53:30

Thank you for your continued service to the

53:32

community. Okay, I

53:35

suppose that my first thought is

53:38

that those writing such things

53:40

may be too young to

53:42

recall the way the world

53:44

was when the Internet was being

53:46

created. I'm not too

53:48

young. I was there, looking

53:51

up at the IMP, the

53:53

Interface Message Processor, a

53:55

big, imposing white cabinet

53:57

sitting at Stanford's AI lab

54:00

in 1972 as

54:02

the thing known at the time as the ARPANET

54:05

was first coming to life.

54:09

The problems these authors are lamenting,

54:11

not being designed in from the start,

54:14

didn't exist before the

54:16

Internet was created. It's

54:19

the success of the Internet and

54:21

its applications that created the problems

54:23

and thus the needs we have

54:26

today. So

54:28

back then we didn't

54:30

really even have crypto yet. It's

54:33

nice to say, well, that should have been

54:35

designed in from the start, but

54:37

it wasn't until four years later in

54:40

1976 that Whit Diffie,

54:42

Marty Hellman and Ralph

54:44

Merkel invented public key

54:47

crypto. And

54:50

a huge gulf exists

54:52

between writing the

54:54

first academic paper to

54:57

outline a curious and

54:59

possibly useful theoretical operation

55:02

in cryptography and

55:04

designing any robust

55:06

implementation into network

55:08

protocols. No one knew

55:10

how to do that back then and we're

55:13

still fumbling around, finding mistakes

55:15

in TLS decades later. And

55:19

one other thing these authors may have

55:21

missed in their wishful timeline is

55:23

that the applications of

55:25

cryptography were classified by

55:28

the U.S. federal government as

55:30

munition. In

55:33

1991, 19

55:36

years after the first

55:38

imps were interconnected, Phil

55:41

Zimmerman, PGP's author, had

55:44

trouble with the legality

55:46

of distributing PGP over

55:48

the Internet. Today,

55:51

no one would argue that

55:53

security should always be designed in

55:56

from the start. And

55:58

we're still not even doing that very well. We're

56:00

still exposing unsecurable web interfaces

56:03

to the public-facing Internet, and

56:05

that's not the Internet's fault.

56:08

So anyone who suggests that the Internet

56:11

should have been designed with security built

56:13

in from the start would

56:15

have to be unaware of the way

56:18

the world was back

56:20

when the Internet was actually first

56:22

being built. Some

56:24

of us were there. Yeah,

56:27

you know, Vint Cerf did say the one

56:29

thing he would have added had they been

56:31

able to was encryption. Yeah.

56:33

That's exactly right. Yeah, but they just couldn't

56:35

at the time. No, I mean, it didn't

56:38

exist. It literally, like, you know, sure. I

56:41

mean, we knew about the Enigma machine.

56:43

Right. But, you know, you're not going

56:45

to have gears and wires on your

56:47

computer. You don't have time to do

56:49

that. Yeah. You know, like,

56:51

what? No. So, I mean,

56:53

so, you know, we were playing around with these ideas.

56:55

And Leo, remember also when HTTPS

56:58

happened, when Navigator

57:03

came out and SSL was

57:05

created, there was

57:07

a 40-bit limit on any

57:09

cipher that would leave the U.S.

57:12

Right. So, you know, we

57:14

had 128 bits, you know, as

57:16

long as it didn't try to make a connection

57:18

outside the U.S. I mean, it was a mess

57:20

back then. It was. It was.

57:23

Yeah. So, Jonathan Haddock said,

57:25

Hi, Steve. Thanks for the

57:28

podcast. Having been listening from very

57:30

early on. I was just listening

57:32

to Episode 961 last week and

57:34

the section about Facebook telling users

57:36

they entered an old password. A

57:39

downside of this practice is that

57:41

an attacker knows the incorrect password

57:44

was one that the person has

57:46

used. With the

57:48

prevalence of password reuse, the

57:51

attacker is then in possession of

57:53

a confirmed previous password that

57:56

could still work for

57:58

that user elsewhere. A

58:01

middle ground would be to simply

58:03

say when the password was last

58:05

changed. That way the user

58:08

experience is maintained, but the

58:10

previous validity of the password

58:12

is not exposed. Thanks again,

58:15

Jonathan. And I think

58:17

he makes a very good point. It's

58:19

not as useful to tell

58:22

a user that their password

58:24

was recently changed, but

58:27

the fact is that it could be

58:30

useful to an attacker. That

58:35

is, just telling the user

58:37

your password was changed last

58:40

Tuesday, well, okay, that's definitely

58:42

more useful. But

58:45

the difference that Jonathan is pointing

58:47

out is that if a bad

58:49

guy is guessing passwords and

58:52

is told by Facebook, well, close,

58:55

you got close, this is

58:57

the password from last week. Well, now

59:00

the attacker knows he can't get

59:02

into Facebook with that password, but

59:04

he may be able to get

59:06

into something else this user logs

59:09

into with that password if they

59:11

were reusing that password elsewhere. So

59:14

it's weird because something was bothering me

59:16

a little bit when we talked about

59:18

this last week. Like, why

59:20

isn't this a problem?

59:22

It turns out there is some

59:25

information leakage from Facebook telling their

59:27

users that. Probably

59:29

it's worth doing still,

59:31

but I'm glad that Jonathan brought this up. Yeah.

59:35

And while we're sharing brilliant observations from our listeners, here's one from Sean Milowcheck. He said, on the topic

59:48

of passwordless email-only login, I

59:50

think the largest benefit to

59:52

the companies is that this, that is, using email links for login, which we talked about last week. He

1:00:02

said the largest benefit to the companies is

1:00:05

that this practically eliminates

1:00:08

password sharing. Oh,

1:00:11

you're right. Isn't that cool?

1:00:13

Yes. He says it's

1:00:15

one thing to give someone your password

1:00:17

for a news or a streaming site.

1:00:20

It's quite another to give them

1:00:22

access to your email and

1:00:25

probably the ability to reset your

1:00:27

banking password among other things. So

1:00:30

Sean, brilliant point. Thank you.

1:00:33

Lars Exeler said, Hi Steve, love

1:00:36

the show and just became a

1:00:38

twit, a club twit

1:00:40

member. He said,

1:00:42

I'm wondering about TOTP

1:00:45

as a second factor. So

1:00:48

a password is, a password

1:00:51

is the something you know factor. A

1:00:54

TOTP is the something you

1:00:56

own factor. My

1:00:58

question is, isn't the

1:01:00

TOTP just based on a

1:01:03

lengthy secret in the

1:01:05

QR code? So in

1:01:07

my mind, it's just like a

1:01:09

second password with the

1:01:12

convenience of presenting itself as six

1:01:14

digits because it's combined with the

1:01:16

actual time. What am

1:01:18

I missing here? Regards from

1:01:21

Germany, Lars. Okay. A

1:01:24

password, right, is

1:01:26

something you know. But

1:01:28

the way to think of the

1:01:31

TOTP, which is based upon a

1:01:33

secret key, is that

1:01:35

it's transient and

1:01:38

time limited proof of

1:01:40

something else you

1:01:42

own or know, but transient

1:01:45

and time limited proof. And

1:01:49

as we know, a password can be

1:01:51

used over and over. But

1:01:53

in any properly designed TOTP

1:01:55

based authentication, the six digit

1:01:57

token not only

1:02:00

expires every 30

1:02:02

seconds, but each such

1:02:04

token is invalidated after

1:02:06

its first use. This

1:02:09

means that even an immediate

1:02:12

replay attack, which still fits

1:02:14

inside that 60-second expiration window,

1:02:17

will be denied. It's

1:02:20

those two distinctions that make

1:02:22

the TOTP a very powerful

1:02:25

and different form of

1:02:27

second factor. Yeah,

1:02:30

if you were sharing the secret every

1:02:32

time, then it would just be another

1:02:34

password. But you're not. Right. That's

1:02:37

not being sent back and forth. You are

1:02:39

proving that you know the secret without

1:02:42

sharing it, and that's the key. One step

1:02:44

better. A little bit better. Yes.
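Since we're on the subject, here's a minimal sketch of the standard TOTP computation (RFC 6238 layered over RFC 4226's HOTP); the function name and defaults are mine for illustration, and note that the single-use property Steve describes is enforced server-side, not by this math:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # The shared secret delivered in the QR code, base32-encoded
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor: the count of 30-second periods since the Unix epoch
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at a MAC-derived offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

So the six digits are transient proof of the secret: they roll over every period, and a properly designed server additionally remembers and rejects any token it has already accepted.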

1:02:48

Gavin Leno wrote,

1:02:51

and this is a long one, but this needs to

1:02:53

be shared. Dear Steve, I

1:02:56

first came across GRC in the early 2000s

1:02:59

for recovering hard disks and your ShieldsUP scanner, just as I was at the start of a career in IT and

1:03:08

IT security. About

1:03:10

a year ago, I rediscovered you

1:03:12

on the Security Now podcast and

1:03:14

also discovered the wider TWIT network.

1:03:17

The Security Now podcast over the last year

1:03:20

is now the highlight of my week and

1:03:22

even goes so far as to have a

1:03:24

diary note in my calendar so I

1:03:27

can try to listen to the show's creation live at what is late night where I live.

1:03:34

Gavin hails from Guernsey. He

1:03:38

said, this week I discovered

1:03:40

an issue with routers provided

1:03:42

by one of our local

1:03:44

ISPs, and I thought if

1:03:46

there was ever a reason to drop you

1:03:48

a message worthy of your time, this was

1:03:51

it. And

1:03:53

I would add even worthy

1:03:55

of our listeners' time. He said,

1:03:57

over my 30 years of work, I have been in the

1:03:59

industry for over a decade. or so in IT.

1:04:02

I, as I'm sure we all have, gathered

1:04:04

a number of people who we look after

1:04:06

for home user family

1:04:09

and friends IT support. I

1:04:12

noticed over the last week or so, some of my

1:04:14

400 or so

1:04:17

private clients were

1:04:19

reporting strange issues. They

1:04:22

reported to be getting warnings

1:04:24

when browsing that sites were

1:04:26

dangerous and were getting

1:04:28

a lot of 404s from streaming

1:04:30

services which weren't working and

1:04:32

connections to email services via IMAP

1:04:35

secured with SSL were not working.

1:04:38

Connection and certificate errors and such.

1:04:42

When I sat back and thought about it, all

1:04:44

the users were on the

1:04:47

same ISP and were

1:04:49

all using the ISP provided

1:04:51

router. He said side

1:04:53

note, I usually recommend replacing

1:04:55

the ISP's router with something

1:04:58

more substantial but when it's

1:05:00

working fine or doesn't need anything better,

1:05:02

you know like a single elderly home

1:05:04

user with an iPad and nothing else,

1:05:07

it's hard to justify the spend. So

1:05:10

he says I got access to one

1:05:13

of the users routers and

1:05:15

after some investigation I found

1:05:17

that the DNS servers listed

1:05:20

just didn't look right. A traceroute showed the server was in the

1:05:25

Netherlands and we're in Guernsey,

1:05:28

Germany. The listed owner

1:05:30

of the IP had nothing

1:05:33

to do with the local ISP. Reverting

1:05:36

the DNS servers to the

1:05:39

ISP's addresses resolved all

1:05:41

the issues and he

1:05:43

says later I reset them to quad 9.

1:05:46

And here it is. I also

1:05:49

found that the ISP

1:05:52

had left SNMP

1:05:55

enabled on the WAN

1:05:57

interface with no

1:06:00

restriction and

1:06:02

the username and password to gain access

1:06:05

to the router were both admin.

1:06:11

That's for their convenience, not yours.

1:06:13

Oh boy. That's exactly right. He

1:06:15

said, I have rotated through all of my

1:06:18

other customers that have the same router and

1:06:21

am locking them down, checking

1:06:23

the DNS servers, disabling SNMP

1:06:26

and resetting the password.

1:06:29

I even go so far as replacing

1:06:31

the routers in their entirety because

1:06:34

they may have also been compromised

1:06:36

in some other yet undiscovered way.

1:06:38

Yes, I would say that he's

1:06:40

been listening to this podcast. He

1:06:43

said, I wrote to the ISP

1:06:45

yesterday to explain the

1:06:47

issue and received a call back

1:06:49

today. It did

1:06:51

take me some time to

1:06:54

explain that any suitably

1:06:56

skilled bad actor can

1:06:59

connect to the routers via

1:07:02

SNMP, which

1:07:05

is the Simple Network Management Protocol.

1:07:08

And boy, is it dangerous with

1:07:10

the default credentials admin

1:07:13

admin and

1:07:16

reconfigure

1:07:19

the router settings. The

1:07:22

engineer I was speaking to

1:07:24

had trouble comprehending how

1:07:27

this was possible without

1:07:29

the web management enabled on

1:07:31

the WAN side. But

1:07:34

he eventually understood we

1:07:37

contacted another customer

1:07:39

while we were talking and

1:07:43

he got to see the

1:07:45

issue firsthand and

1:07:47

is taking this up with their

1:07:49

security team internally. Good. Lord,

1:07:52

I hope so. He said, my

1:07:54

understanding of their reason for this configuration, and here it is, Leo, get this:

1:08:02

believe it or not, it was on

1:08:04

purpose. They want to

1:08:06

be able to remotely connect

1:08:08

to a customer's router

1:08:10

to assist in fault

1:08:12

finding. Yep. And

1:08:15

ha, they won't have to look very

1:08:17

far. Yep. They have a script that

1:08:20

connects to the routers

1:08:23

via SNMP and

1:08:25

enables web management. Yep. On the

1:08:27

WAN interface. There you have it

1:08:29

right there. Jesus.

1:08:34

For a period of time and

1:08:36

then later it removes the

1:08:39

web-based access again. They

1:08:42

didn't consider that the

1:08:45

open SNMP management interface, with default credentials or an easy-to-guess community string, could be exploited in this way. We never thought

1:09:00

of that. Yeah. Wow.

1:09:02

What? Yeah, we're

1:09:04

accessing them remotely. But that's us.

1:09:07

You mean the bad guys can too? Wow.

1:09:11

He says, the engineer did seem quite disgruntled, get this, disgruntled that I often replace the

1:09:20

ISP-provided router. But

1:09:23

as, yes, as I explained to

1:09:25

him, if it has a vulnerability

1:09:27

on it, I'm going to replace

1:09:29

it without any consideration for their

1:09:31

convenience of remote

1:09:33

diagnosis. He

1:09:36

said, thank you, Steve. I hope this may

1:09:38

get a mention on the show and how,

1:09:41

but regardless, I really do

1:09:43

look forward every week to your

1:09:45

deep dives into detail behind the

1:09:47

week's security related news. Warm regards,

1:09:49

Gavin from Guernsey. So wow,

1:09:51

Gavin, thank you for sharing that. Apparently

1:09:54

this ISP believes that SNMP

1:09:57

stands for simply not my

1:09:59

problem. But

1:10:02

they would be wrong. They

1:10:04

would be wrong. And this

1:10:07

deliberate unprotected SNMP exposure with

1:10:09

default credentials is breathtaking. You

1:10:12

know, it is a horrific breach of

1:10:14

security. SNMP, for

1:10:16

those who don't know, is a

1:10:18

funky, non-user friendly,

1:10:21

UDP-based protocol. It was originally present in very old, original networking gear, and is still present in contemporary devices; it allows for both the remote, over-the-wire monitoring and control of that gear.

1:10:43

Unfortunately, it also comes from an

1:10:45

era before security was being given

1:10:48

the due that it deserves.
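To appreciate just how low the bar is, here's roughly what probing such a router looks like with the classic pysnmp library; the address is a placeholder, the 'admin' community mirrors what Gavin found, and I'm assuming pysnmp's long-standing hlapi interface (a sketch, not the attacker's actual tooling):

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# One SNMP GET against the WAN-exposed service, using the default community.
error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('admin', mpModel=1),          # SNMPv2c, default credentials
    UdpTransportTarget(('203.0.113.1', 161)),   # placeholder WAN address
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if not error_indication and not error_status:
    for name, value in var_binds:
        print(name, '=', value)   # device model, firmware version, and so on
```

A corresponding SET, with the same credentials, is what flips settings like the DNS servers or the web management interface. On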

1:10:51

the one hand, it's a good

1:10:53

thing that this ISP has the

1:10:55

wisdom to keep their customers' web

1:10:57

management interfaces disabled by default, but

1:10:59

that good intention is completely undone

1:11:02

by their persistent exposure

1:11:04

of the router's SNMP

1:11:06

service, which I'm

1:11:08

still shaking my head over. So

1:11:12

one remaining mystery is what

1:11:15

was going on with the

1:11:17

DNS rerouting. Given

1:11:19

that most of today's DNS is

1:11:21

still unencrypted in the clear

1:11:25

UDP, this would

1:11:27

be part of the requirement

1:11:29

for site spoofing, to be

1:11:31

able to spoof DNS. But

1:11:34

with browsers now having become

1:11:37

insistent upon the use of

1:11:39

HTTPS, it's no longer

1:11:41

possible to spoof a site over

1:11:43

HTTP. So someone

1:11:45

would need to be able to create web

1:11:48

server certificates for the sites they wish to

1:11:50

spoof. The fact that

1:11:52

it's so widespread across an ISP's

1:11:54

many customers tells us that it's

1:11:56

probably not a targeted

1:11:58

attack, which means it's somehow

1:12:01

about making money or

1:12:04

intercepting a lot of something. Having

1:12:06

thought about this further though, Leo,

1:12:08

he mentioned lots of email problems

1:12:10

when people are using secure IMAP.

1:12:13

Well, not all email is

1:12:16

secure. And so

1:12:20

you could spoof email MX

1:12:22

records in DNS in order

1:12:25

to route email somewhere else,

1:12:28

in which case clients connecting over a TLS connection would have a problem if the DNS was spoofed.
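As a quick illustration of how soft that layer still is, an MX lookup is an ordinary, typically unauthenticated UDP query; with dnspython (assuming its 2.x resolver API) it's just:

```python
import dns.resolver  # dnspython

# A plain MX lookup: unencrypted UDP port 53 unless DoH/DoT is configured,
# so a rogue resolver handed out by a hijacked router can answer however it likes.
for mx in dns.resolver.resolve("example.com", "MX"):
    print(mx.preference, mx.exchange)
```

Nothing about that answer is authenticated unless DNSSEC is validated end to end, which is exactly why swapped-out DNS servers on those routers were such a powerful position for an attacker.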

1:12:39

So, basically it's really

1:12:41

only HTTP in the

1:12:43

form of HTTPS where

1:12:45

we've really got our security buttoned

1:12:48

down tight with

1:12:50

web server certificates. Other things

1:12:52

that use DNS, and there are lots of other

1:12:54

things that use DNS, they're still

1:12:56

not running as securely as the

1:12:58

web is. So that

1:13:00

could be an issue. Zef

1:13:03

Jelen said, Steve, listening to SN961

1:13:06

and thinking about the discussion on

1:13:08

password list logins and using

1:13:10

email as a single

1:13:12

factor, is my

1:13:14

recollection correct that

1:13:16

email is generally transmitted in

1:13:18

the clear? Can't

1:13:22

some unscrupulous actor sniff

1:13:24

that communication? I'm

1:13:26

especially concerned if the email contains

1:13:28

a link to the site. This

1:13:31

provides all the necessary info to

1:13:33

anyone who can view the email.

1:13:36

A six digit code without any reference

1:13:38

to the site where it should be

1:13:40

used would be more secure

1:13:43

as it is both out of band but

1:13:46

also obfuscated from the site.

1:13:49

Of course, this is all based on

1:13:51

my possibly antiquated recollection of the lack

1:13:53

of privacy in email. Okay,

1:13:56

now the

1:13:59

email... transit encryption

1:14:02

numbers vary. I

1:14:05

saw one recent statistic that said that

1:14:07

only about half of email is

1:14:10

encrypted in transit using

1:14:12

TLS, but Google's most

1:14:14

recent stats for Gmail state

1:14:17

that 98 to

1:14:19

99 percent of

1:14:22

their email in transit, both incoming and outgoing, is encrypted

1:14:27

and it's a good feeling to have that. GRC's

1:14:31

server fully supports all of

1:14:33

the transit encryption standards so

1:14:36

it will transact using TLS

1:14:39

with the remote end whenever it's

1:14:41

available and it's a good

1:14:43

feeling knowing that when Sue and Greg

1:14:45

and I exchange anything through email since

1:14:48

our email clients are all

1:14:51

connecting directly to GRC's

1:14:53

email server everything is

1:14:55

always encrypted end-to-end

1:14:58

but it's not encrypted at rest.

1:15:01

Zef's point about the possible lack

1:15:04

of email encryption in transit is

1:15:06

still a good one since email

1:15:08

security is definitely lagging behind

1:15:11

web security now

1:15:13

that virtually everything has switched over

1:15:15

to HTTPS. On

1:15:19

the web we would never even

1:15:21

consider logging into a

1:15:24

site that was not encrypted and

1:15:26

authenticated using HTTPS. For one thing

1:15:28

we would wonder why the site

1:15:30

was doing that and like alarm

1:15:32

bells would be going off and

1:15:35

even back when almost everything

1:15:38

was still using HTTP websites

1:15:41

would switch their connections over to

1:15:43

HTTPS just long enough to send

1:15:45

the login page and receive the

1:15:47

users credentials in an encrypted channel

1:15:50

and then drop the user back

1:15:52

to HTTP. So

1:15:55

by comparison to the

1:15:57

standards set by today's web (logging in with HTTPS), email

1:16:04

security is sorely

1:16:06

lacking. If we revisit

1:16:08

the question then of

1:16:10

using email only to log

1:16:13

into a site where that

1:16:15

email carries what is effectively

1:16:17

a login link or

1:16:19

even a six-digit authentication

1:16:21

token, what we've done

1:16:24

is reduced the login security

1:16:27

of the site to that

1:16:29

of email, which today we

1:16:32

would have to consider to be

1:16:34

good but not great and

1:16:36

certainly far from perfect. Another

1:16:39

difference worth highlighting is that a

1:16:42

web browser's connection to the website

1:16:44

it's logging into is

1:16:46

strictly and directly end-to-end

1:16:48

encrypted. The browser

1:16:52

has a domain name and it

1:16:54

must receive an identity asserting certificate

1:16:56

that is valid for that domain

1:16:59

but this has never been true

1:17:01

for email. Although modern email does

1:17:03

tend to be point-to-point with

1:17:06

the server at the sender's domain

1:17:08

directly connecting to the server at

1:17:10

the recipient's domain, that's not always, nor necessarily, true. Email

1:17:15

has always been a store and forward

1:17:19

transport. Depending upon

1:17:21

the configuration of the sender and

1:17:23

receivers domains email might

1:17:25

make several hops from start to

1:17:28

finish and since the

1:17:30

body of the email doesn't contain

1:17:32

any of its own encryption, that

1:17:34

email at rest is subject to

1:17:37

eavesdropping. Last

1:17:39

week I coined the term login

1:17:42

accelerator to more

1:17:44

clearly delimit that

1:17:47

value added by a password

1:17:50

and the whole point of the

1:17:52

email loop authentication model is the

1:17:55

elimination of the something you need

1:17:57

to know factor. Yet

1:17:59

without the something you need

1:18:01

to know factor, the security of

1:18:04

email loop authentication can be

1:18:06

no better than the security

1:18:08

of email, which

1:18:10

we've agreed falls short of

1:18:12

the standards set by the

1:18:14

Web's direct end-to-end

1:18:17

certificate identity verification

1:18:19

with encryption. As

1:18:23

a result of this analysis, we might

1:18:25

be inclined to discount the notion of

1:18:28

email loop authentication as

1:18:30

being inferior to

1:18:32

Web-based authentication with all of

1:18:34

its fancy certificates and encryption,

1:18:37

except for one thing. The

1:18:40

ubiquitous, I forgot my

1:18:43

password, get out of

1:18:45

jail free link is

1:18:47

still the party spoiler. It

1:18:49

is everywhere and its presence

1:18:52

immediately reduces all of

1:18:55

our much-valued Web security

1:18:58

to the security of email, imperfect

1:19:01

as it may be. Even

1:19:04

without email loop authentication, if a

1:19:06

bad guy can arrange to gain

1:19:08

access to a user's email, they

1:19:11

can go to any site where

1:19:13

the user maintains an account, click

1:19:16

the I forgot my password link,

1:19:19

receive the password recovery link by

1:19:21

intercepting their email and do

1:19:23

exactly the same thing as if

1:19:25

that site was only using email

1:19:28

loop authentication. What

1:19:31

this finally means is

1:19:33

that all of our

1:19:35

identity authentication login, either

1:19:38

using all of the fancy Web technology

1:19:41

or just a simple email loop, is

1:19:44

not actually any

1:19:46

more secure than

1:19:49

email. And

1:19:52

that simple email loop authentication

1:19:54

without any password is

1:19:56

just as secure as Web

1:19:58

authentication, so long as web authentication includes an 'I forgot my password' email loop bypass. So

1:20:08

the notion of a password being

1:20:10

nothing more than a login accelerator

1:20:12

holds, and the world should

1:20:14

be working to increase the

1:20:17

security of email since

1:20:19

it winds up being the linchpin

1:20:21

for everything. So

1:20:25

really interesting thought experiment

1:20:27

there.
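For what it's worth, here's a minimal sketch of what an email-loop login token can look like: an HMAC-signed, expiring, single-use blob mailed to the user as a link or code. All the names and the 15-minute TTL are my own illustration, not any particular site's implementation:

```python
import base64, hashlib, hmac, secrets, time

SERVER_KEY = secrets.token_bytes(32)   # illustrative server-side secret
USED_TOKENS: set[str] = set()          # a real site would persist this

def make_login_token(email: str, ttl: int = 900) -> str:
    payload = f"{email}|{int(time.time()) + ttl}".encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + sig).decode()

def redeem_login_token(token: str) -> str | None:
    raw = base64.urlsafe_b64decode(token)
    payload, sig = raw[:-32], raw[-32:]    # SHA-256 MAC is the last 32 bytes
    if not hmac.compare_digest(
            sig, hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()):
        return None
    email, _, expires = payload.decode().partition("|")
    if time.time() < int(expires) and token not in USED_TOKENS:
        USED_TOKENS.add(token)             # single use, like a TOTP token
        return email
    return None
```

The token itself can be perfectly sound; the weakness being described is that it travels over email, whose transit and at-rest protections are the weaker link. Two last quickies.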

1:20:30

Alex Niehaus, a long

1:20:32

time friend of the podcast and early

1:20:34

advertiser, I think our

1:20:37

first advertiser of the podcast. He

1:20:40

said regarding 961 last

1:20:43

week's episode and overlay networks, pfSense has a killer Tailscale

1:20:48

integration. With

1:20:51

this running, you can set

1:20:53

up pfSense to be

1:20:55

an exit node, combining safe

1:20:58

local access and VPN-like move

1:21:00

my apparent IP address. That's

1:21:03

useful for services that are tied to your

1:21:05

location. Second, some cable

1:21:08

providers' apps require UPnP, which

1:21:10

you can, at your

1:21:13

own risk, enable in pfSense. So

1:21:17

Alex, thank you for that. Given that

1:21:19

I'm myself a pfSense shop, as

1:21:21

Alex knows, he and

1:21:23

I sometimes talk about pfSense, and

1:21:26

Tailscale integrates well with pfSense's FreeBSD Unix, Tailscale will certainly be a contender

1:21:34

for a roaming overlay network. So thank

1:21:37

you, Alex. And

1:21:39

finally, just a reminder from

1:21:41

a listener, he said, hi, Steve.

1:21:44

Given your discussion of throwaway

1:21:46

email addresses during last Security

1:21:49

Now podcast episode, I'd

1:21:51

like to bring your attention

1:21:53

to DuckDuckGo's email protection service.

1:21:56

This free service is designed to

1:21:58

safeguard users' email privacy

1:22:01

by removing hidden trackers and

1:22:03

enabling the creation of unlimited,

1:22:06

unique private email addresses on

1:22:08

the fly. This

1:22:11

feature allows users to maintain a

1:22:13

distinct email address for each website

1:22:15

they interact with. What

1:22:18

makes this even more compelling is

1:22:21

DuckDuckGo email protection's

1:22:24

integration with Bitwarden.

1:22:27

Bitwarden has included

1:22:30

DuckDuckGo among its

1:22:32

supported services alongside other

1:22:34

email forwarding services. This integration

1:22:37

allows Bitwarden users to easily

1:22:39

generate unique email addresses for

1:22:41

each website they interact with.

1:22:44

Take a look at, and he provided the links

1:22:46

which I've got in the show notes. So I

1:22:50

thank our listener very much for that. And

1:22:54

Leo, let's

1:22:56

take a rest a minute, tell

1:22:59

our listeners about our sponsor, and

1:23:01

then oh boy, have I got

1:23:03

a fun deep dive for everybody.

1:23:06

Something just happened when nobody was

1:23:08

looking. Wow. Well, one

1:23:10

thing I'd like to do, we've got a couple of

1:23:13

things I'd like to take care of. One is, you've

1:23:15

mentioned ClubTwit a couple of times now. I probably should

1:23:17

give people more information about that.

1:23:21

You know, more and more I look at what's going on on

1:23:24

the Internet in terms of supposedly tech

1:23:26

news sources or tech review sources. And

1:23:29

more and more I think, you know, we're kind of

1:23:31

old fashioned here at Twit. Nobody

1:23:34

would deny that Steve and I are old-timers. We don't

1:23:40

generate link bait. We

1:23:42

don't make stuff up. We don't do

1:23:45

fake reviews. We buy

1:23:47

the products we review. You

1:23:50

know, really kind of old school, but

1:23:53

a little bit out of step with the way things are today. Now

1:23:56

I hope, and I think that you probably like what

1:23:58

we do. You're listening to shows after all. But

1:24:01

we're going to need your support

1:24:03

because frankly, I'm not an influencer

1:24:05

with a million subscribers on YouTube. Steve

1:24:08

should be, but he's not either. We

1:24:12

don't do thumbnails of us going, oh my God,

1:24:14

we don't over-hype stuff. We

1:24:17

do our jobs and I think we do a

1:24:19

very good job, but advertisers want the other one.

1:24:21

So it's getting harder and

1:24:23

harder to get advertising dollars. In the past we've been able to support ourselves entirely that way; now we need to rely on you a little bit, and I think it's kind of a better

1:24:32

way to do it because we know we're

1:24:34

giving you what you want when you give us that

1:24:37

seven bucks a month as a member of Club Twit.

1:24:39

What do you get? Ad-free versions of all the shows, this show included. We

1:24:44

are now putting out audio of all

1:24:46

of the shows we used to have behind the

1:24:48

Club Twit paywall, but you will get video of

1:24:50

all those shows if you're a Club Twit member.

1:24:52

You get access to the Discord, lots of additional

1:24:55

content we don't put out, like Stacey's Book Club, the before and after shows, things like that. And

1:25:02

most importantly, you get the good feeling to

1:25:05

know that you're supporting the kind of technology

1:25:07

journalism that I think we need

1:25:09

more than ever. So

1:25:11

if you agree with me, please visit

1:25:13

twit.tv slash Club Twit. You

1:25:15

can buy this show individually. If you say, I

1:25:17

only listen to Security Now, that's $2.99 a month.

1:25:21

There's a link there as well. So,

1:25:23

we're up to 11,292 paid members. That

1:25:32

means about 300 people joined this week.

1:25:34

Thank you to those. And

1:25:37

let's get another 300 next week. It

1:25:40

makes a big difference. And we figure we really probably need to

1:25:42

get to 35,000 members to be absolutely

1:25:44

sustainable in

1:25:48

the long run. So we're only about a third of the way

1:25:50

there. So please make

1:25:53

up the difference, if you will, at twit.tv slash

1:25:55

Club Twit. Good friends, they've been with us as long as they've been around; when they first started they came to us immediately, and now as part of ACI Learning, IT

1:26:26

Pro has really expanded what it can do

1:26:28

providing even more support for IT

1:26:31

individuals and teams.

1:26:34

ACI Learning covers all your needs, not just IT but cybersecurity and audit as well. You'll

1:26:42

get a personal account manager who'll help you

1:26:44

make sure there is no redundancy in your training; you're not wasting your time or money, or theirs. Your

1:26:51

account manager will work with you to ensure

1:26:53

your team only focuses on the skills that

1:26:55

matter to you and your organization. Leave the unnecessary, costly training behind, which by the way also keeps your employees happy; that team, they don't want to see videos about stuff they already know. They want

1:27:06

to learn new things; they love learning new things. And ACI Learning

1:27:10

has kept all the fun, all the personality of IT Pro TV, all the deep, passionate love for this stuff that IT Pro TV is famous for, while amplifying their robust solutions for all your training needs. Let your team be entertained as well as

1:27:25

informed with short-form content

1:27:27

and now, thanks to ACI Learning, over 7,200 hours of absolutely up-to-date content

1:27:36

covering the latest certs, the latest exams, the latest information, the stuff you need to know. Visit go.acilearning.com/twit. If

1:27:45

you've got a team, fill out the form; the discounts are based on the size of your team, and they go as high as 65% off. Fill out the form, find out how much you can save on an IT Pro Enterprise Solution

1:27:57

plan when you visit go.acilearning.com/twit. And as always we

1:28:02

thank ACI Learning for their support

1:28:05

of security now. Okay

1:28:08

this I've been waiting all day for Steve

1:28:10

what the heck. Okay.

1:28:12

Are we in trouble? No.

1:28:14

Oh good. The bullet was dodged but

1:28:18

the story is really interesting. The

1:28:21

vulnerability has been codenamed key trap.

1:28:24

And if ever there was

1:28:26

a need for responsible disclosure of

1:28:28

a problem this was that

1:28:31

time. Fortunately responsible

1:28:33

disclosure is what it

1:28:35

received. What

1:28:37

the German researchers who discovered

1:28:39

this last year realized was

1:28:43

that an aspect of the design

1:28:45

of DNS, specifically the

1:28:47

design of the secure capabilities

1:28:49

of DNS known as DNSsec

1:28:53

could be used against any

1:28:56

DNSsec capable DNS

1:28:58

resolver to bring that

1:29:00

resolver to its knees. The

1:29:03

receipt of a single UDP

1:29:06

DNS query packet

1:29:09

could effectively take the DNS

1:29:11

resolver offline for as long as 16 hours, pinning

1:29:16

its processor at a hundred percent and

1:29:19

effectively denying anyone else that

1:29:21

servers DNS services. It would

1:29:25

have therefore been possible to

1:29:27

spray the internet with single-packet DNS queries to

1:29:32

effectively shut down all DNS

1:29:34

services across the entire globe.
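To make the asymmetry concrete, the trigger really is just an ordinary-looking query; sketched with dnspython (my choice of tooling for illustration, the paper doesn't prescribe one), it's a few lines, with the crafted zone doing all the damage:

```python
import dns.message, dns.query  # dnspython

# One small UDP query with the DNSSEC OK bit set. If the queried name lives
# in a maliciously constructed zone, validating the response is what pins
# the resolver's CPU, not anything expensive on the attacker's side.
query = dns.message.make_query("attacker-controlled.example", "A",
                               want_dnssec=True)
response = dns.query.udp(query, "192.0.2.53", timeout=5)  # placeholder resolver
print(response.rcode())
```

The domain and resolver address are placeholders; the point is the cost ratio between sending that one packet and validating its answer.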

1:29:38

Servers could be rebooted once

1:29:40

it was noticed that they had effectively hung but

1:29:43

if they again received another innocuous

1:29:45

looking DNS query packet which is

1:29:47

their job after all they

1:29:50

would have gone down again. Eventually

1:29:52

of course the world would have

1:29:54

discovered what was bringing down all

1:29:56

of its DNS servers but

1:29:59

the outage could have been protracted,

1:30:01

and the damage to the world's

1:30:03

economies could have been horrendous. So

1:30:05

now you know why this podcast is titled,

1:30:08

The Internet Dodged a Bullet. We

1:30:12

should never underestimate how utterly

1:30:14

dependent the world has become

1:30:16

on the functioning of the

1:30:18

Internet. I mean, it's like,

1:30:21

what don't we do with the

1:30:23

Internet now? Like all of our

1:30:25

entertainment, all of our news? Do

1:30:27

I have an FM radio? I'm not sure. Okay.

1:30:32

The detailed research paper describing

1:30:34

this was just publicly released

1:30:36

yesterday, though the problem, as

1:30:38

I said, has been known

1:30:40

since late last year. Here's

1:30:44

how the paper's abstract describes

1:30:46

what these German researchers discovered

1:30:48

to their horror.

1:30:52

They wrote, availability

1:30:55

is a major concern in the

1:30:57

design of DNSSEC. To

1:31:00

ensure availability, DNSSEC

1:31:02

follows Postel's law,

1:31:05

which reads, Be liberal in

1:31:07

what you accept and conservative

1:31:09

in what you send. Hence,

1:31:12

name servers should send

1:31:15

not just one matching key

1:31:17

for a record set, but

1:31:19

all the relevant cryptographic material.

1:31:22

In other words, all the

1:31:24

keys for all the ciphers that

1:31:27

they support and all

1:31:29

the corresponding signatures. This

1:31:32

ensures that validation will

1:31:34

succeed, and hence

1:31:37

availability, even if

1:31:39

some of the DNSSEC keys

1:31:41

are misconfigured, incorrect

1:31:43

or correspond to unsupported

1:31:46

ciphers will be maintained.

1:31:49

They write, We show that

1:31:52

this design of DNSSEC

1:31:56

is flawed. By

1:31:58

exploiting vulnerable recommendations in

1:32:02

the DNSSEC standards, we

1:32:05

develop a new class of

1:32:07

DNSSEC-based algorithmic

1:32:10

complexity attacks on DNS

1:32:13

which we dub KeyTrap

1:32:15

attacks. All popular

1:32:19

DNS implementations and

1:32:21

services are vulnerable.

1:32:26

With just a single DNS packet, the KeyTrap attacks

1:32:28

lead to a 2 million times spike

1:32:30

in CPU

1:32:40

instruction count in

1:32:42

vulnerable DNS servers. Remember

1:32:45

that is, all DNS servers, stalling, some for as long as 16

1:32:51

hours. This

1:32:53

devastating effect prompted major

1:32:55

DNS vendors to refer

1:32:58

to KeyTrap as

1:33:00

the worst attack on

1:33:02

DNS ever discovered. Exploiting

1:33:05

KeyTrap, an attacker

1:33:07

could effectively disable internet

1:33:09

access in any system

1:33:12

utilizing a DNSSEC

1:33:14

validating resolver. We

1:33:17

disclosed KeyTrap to vendors

1:33:19

and operators on November 2,

1:33:21

2023. We

1:33:25

confidentially reported the vulnerabilities

1:33:27

to a closed group

1:33:29

of DNS experts, operators

1:33:32

and developers from the industry.

1:33:36

Since then, we have been working

1:33:38

with all major vendors to mitigate

1:33:40

KeyTrap, repeatedly discovering

1:33:43

and assisting in closing

1:33:46

weaknesses in proposed

1:33:48

patches. Following

1:33:50

our disclosure, an umbrella CVE

1:33:53

was assigned. Okay,

1:33:55

so believe it or not,

1:33:59

all that just actually

1:34:01

happened, as they say, behind

1:34:04

closed doors. Google's

1:34:06

public DNS and

1:34:08

Cloudflare were both vulnerable,

1:34:11

as was the very popular,

1:34:13

the most popular and widely

1:34:15

deployed, Bind 9 DNS implementation,

1:34:17

and it was the one,

1:34:20

we'll see why later, that

1:34:22

could be stalled for as

1:34:24

long as 16 hours after

1:34:26

receiving one packet. So

1:34:28

what happened? As

1:34:31

the researchers wrote, months

1:34:33

before all this came to

1:34:35

light publicly, all major implementations

1:34:37

of DNS had already been

1:34:39

working on quietly updating because

1:34:42

had this gotten out, it

1:34:44

could have been used to bring the

1:34:46

internet down. With

1:34:49

Microsoft's release of patches, which

1:34:51

included mitigations for this, last

1:34:54

week, and after waiting

1:34:57

a week for them to be deployed,

1:34:59

the problem is largely resolved. If

1:35:02

you'll pardon the pun, I

1:35:04

say largely because

1:35:06

this is not a bug in an

1:35:08

implementation, and apparently, even

1:35:10

now, as we'll see, some

1:35:12

DNS servers will still have

1:35:14

their processors pinned, but

1:35:17

they will still at least be

1:35:19

able to answer other DNS query

1:35:21

functions. It sounds

1:35:23

as though thread or process priorities

1:35:26

have been changed to prevent the

1:35:28

starvation of competing queries, and we'll

1:35:30

actually look in a minute at

1:35:32

the strategies that have been deployed.

1:35:36

Characterizing this as

1:35:38

a big mess would not

1:35:40

be an exaggeration. KeyTrap

1:35:43

exploits a fundamental flaw in

1:35:45

the design of DNSSEC, which

1:35:47

makes it possible to

1:35:50

deliberately create a set

1:35:52

of legal, but

1:35:55

ultimately malicious, DNSSEC

1:35:57

response records, which

1:36:00

the receiving DNS server will

1:36:03

be quite hard pressed to

1:36:05

untangle. Once the

1:36:07

researchers had realized that they were

1:36:10

onto something big, they began exploring

1:36:12

all of the many various ways

1:36:14

DNS servers could be stressed. So

1:36:17

they created a number of different attack

1:36:20

scenarios. I want

1:36:22

to avoid getting too far into the

1:36:24

weeds of the design and operation of

1:36:26

DNS Sec, but at the

1:36:28

same time I suspect that this podcast's

1:36:30

audience will appreciate seeing a bit more

1:36:32

of the detail so that the

1:36:34

nature of the problem can be better appreciated.

1:36:37

The problem is rooted in DNS

1:36:39

Sec's provisions for the

1:36:42

resolution of key

1:36:44

tag collisions, the

1:36:47

handling of multiple keys when

1:36:49

they're present, and

1:36:51

multiple signatures when they're

1:36:53

offered. I'm going

1:36:55

to quote three paragraphs

1:36:58

from their research paper, but

1:37:01

just sort of let it wash over you so that

1:37:03

you'll get some sense for what's going

1:37:06

on without worrying about understanding it in

1:37:08

any detail. You will not be

1:37:10

tested on your understanding of this.

1:37:13

Okay, so they explain. We

1:37:16

find that the flaws in the

1:37:18

DNS Sec specification are rooted in

1:37:20

the interaction of a

1:37:22

number of recommendations that in

1:37:25

combination can be exploited

1:37:27

as a powerful attack vector.

1:37:30

Okay, so first, key

1:37:32

tag collisions. They

1:37:35

write, DNS Sec

1:37:38

allows for multiple cryptographic

1:37:40

keys in a given

1:37:42

DNS zone. Zone

1:37:45

is the technical term for a

1:37:47

domain, essentially. So when they say

1:37:49

zone, they mean a DNS domain.

1:37:53

For example, they say during

1:37:55

key rollover or for multi-algorithm support, meaning you

1:38:03

might need multiple cryptographic keys.

1:38:05

If you're retiring one

1:38:08

key, you want to bring the new key

1:38:10

online before the old key

1:38:13

goes away. So for a while, you've

1:38:15

got two or more keys. Or

1:38:18

multi-algorithm support might need different

1:38:20

keys for different algorithms if

1:38:23

you want to be more

1:38:25

comprehensive. So they said, consequently,

1:38:27

when validating DNSsec, DNS

1:38:30

resolvers are required

1:38:32

to identify a suitable cryptographic

1:38:34

key to use for signature

1:38:36

verification. Because the zone is

1:38:38

signed, so you want to

1:38:40

verify the signature of the

1:38:42

zone to verify it's not

1:38:44

been changed. That's what DNSsec

1:38:46

is all about, is preventing

1:38:50

any kind of spoofing. So

1:38:53

they said, DNSsec uses key

1:38:55

tag values to differentiate

1:38:57

between the keys, even

1:39:00

if they are of

1:39:02

the same zone and use the same

1:39:04

cryptographic algorithm. So they just could be

1:39:06

redundant keys for some reason. They

1:39:09

said the triple of zone,

1:39:11

algorithm, and key tag is

1:39:14

added to each respective

1:39:16

signature to ensure efficiency

1:39:18

in key signature matching.

1:39:21

Again, don't worry about the

1:39:23

details. When validating a signature,

1:39:25

resolvers check the signature header

1:39:27

and select the key with

1:39:29

the matching triple for validation.

1:39:32

However, the triple is

1:39:35

not necessarily unique. That's

1:39:38

the problem. Multiple different

1:39:40

DNS keys can

1:39:43

have an identical triple.

1:39:46

That is to say, an identical tag. This

1:39:48

can be explained by the calculation of

1:39:51

the values in the triple. The

1:39:53

algorithm identifier results directly

1:39:55

from the cipher used

1:39:57

to create the signature

1:40:00

and is identical for all keys generated

1:40:02

with a given algorithm. DNSSEC

1:40:05

mandates all keys used for validating

1:40:07

signatures in a zone to be

1:40:10

identified by the zone name. Consequently,

1:40:13

all DNSSEC keys that may

1:40:15

be considered for validation trivially

1:40:17

share the same name. Since

1:40:20

the collisions in algorithm ID

1:40:22

and key name pairs are

1:40:24

common, the key

1:40:26

tag is calculated with

1:40:29

a pseudo-random arithmetic function

1:40:31

over the key bits to

1:40:33

provide a means to distinguish

1:40:35

same algorithm, same name keys.

1:40:38

Again, just let this glaze

1:40:40

over. Using

1:40:42

an arithmetic function instead of

1:40:44

a manually chosen identifier eases distributed key

1:40:49

management for multiple parties in

1:40:51

the same DNS zone. Instead

1:40:54

of coordinating key tags to

1:40:56

ensure their uniqueness, the

1:40:58

key tag is automatically calculated.

1:41:01

However, here

1:41:03

it comes, the space of

1:41:06

potential tags is limited by the 16

1:41:08

bits in the key tag

1:41:12

field. Key tag

1:41:14

collisions, while unlikely, can

1:41:16

thus naturally occur in

1:41:18

DNSSEC. This is explicitly

1:41:21

stated in RFC 4034, emphasizing

1:41:26

that key tags are

1:41:28

not unique identifiers.

1:41:32

As we show, colliding key

1:41:34

tags can be exploited to

1:41:37

cause a resolver not to be

1:41:39

able to uniquely identify a suitable

1:41:42

key efficiently, but to

1:41:44

have to perform validations with

1:41:47

all the available keys,

1:41:50

inflicting computational effort

1:41:53

during signature validation.

1:41:56

Okay, now just to interrupt this for a second. Cryptographic

1:42:01

keys are

1:42:03

identified by tags and

1:42:06

those tags are automatically assigned

1:42:08

to those keys. Work

1:42:12

on DNSsec began

1:42:14

way back in the 1990s

1:42:16

when the Internet's

1:42:19

designers were still

1:42:21

counting bits and

1:42:23

were assigning only as many

1:42:25

bits to any given

1:42:28

field as would

1:42:30

conceivably be necessary. Consequently,

1:42:34

the tags being

1:42:36

assigned to these keys were

1:42:38

and are still today only 16 bits

1:42:42

long. Since these

1:42:45

very economical tags only

1:42:47

have 16 bits thus

1:42:50

one of 64k possible

1:42:52

values, tag collisions, while unlikely (you know, one out of 65,536), can occur, and we've got the birthday paradox, which makes collisions happen more often than you'd

1:43:06

expect. If DNSsec

1:43:08

were being designed today, huh,

1:43:11

tags would be the output of

1:43:14

a collision free cryptographic hash function

1:43:16

and there would be no provision

1:43:18

for resolving tag collisions because there

1:43:21

would be none. The

1:43:23

paragraph I just read said that

1:43:26

the key tag is calculated

1:43:28

with a pseudo random arithmetic

1:43:30

function. In other words, something

1:43:33

simple from the 90s that

1:43:36

scrambles and mixes the bits around but

1:43:38

doesn't do much more. I

1:43:41

might get a salad spinner. Right,

1:43:43

exactly. It's still salad, it's just

1:43:45

better now.
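For the curious, the arithmetic in question is the little checksum-style function from RFC 4034 Appendix B, sketched here in Python over the raw DNSKEY RDATA bytes (my illustration of the spec, not any server's code):

```python
def dnssec_key_tag(rdata: bytes) -> int:
    # RFC 4034 Appendix B: a 16-bit checksum over the DNSKEY RDATA.
    # It scrambles bits, but it is nothing like a cryptographic hash.
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF
```

With only 65,536 possible tags, the birthday bound says roughly 300 random keys already give better-than-even odds of an accidental collision, and an attacker doesn't rely on accident at all: they just keep generating keys until the tags match. Consequently,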

1:43:48

servers need to

1:43:51

consider that key tags are

1:43:53

not unique and what so

1:43:55

what the attack is is it deliberately

1:43:57

makes all the key tags identical, forcing the server to

1:44:02

check them all. Oh brilliant.

1:44:05

Yes, yes. How many key

1:44:07

tags can you put in there? We're

1:44:10

getting there. We're

1:44:12

getting there, but you make them all the

1:44:14

same and the server can't use it to

1:44:16

select the key. It's got to try them

1:44:18

all. Right. So the first

1:44:21

attack is key tag collisions. Literally,

1:44:24

they all collide. Okay,

1:44:27

on to the next problem. Multiple keys. The

1:44:30

DNS specification mandates that a

1:44:32

resolver must try all

1:44:35

colliding keys until

1:44:37

it finds a key that

1:44:40

successfully validates the signature or

1:44:43

all keys have been tried. Of course.

1:44:46

Right. What could possibly go wrong? That's right. The

1:44:50

requirement is meant to ensure availability,

1:44:53

meaning DNS-Sec will try as hard

1:44:55

as it can to find a

1:44:57

successful signature. Even

1:44:59

if colliding keys occur such that

1:45:01

some keys may result in failed

1:45:04

validation, the

1:45:06

resolver must try validating with

1:45:08

all the keys until a

1:45:10

key is found that results

1:45:12

in a successful validation. This

1:45:16

ensures that the signed

1:45:18

record remains valid and

1:45:20

the corresponding resource therefore remains

1:45:23

available. However, what

1:45:25

they call eager, they say,

1:45:27

however, this eager validation can

1:45:31

lead to heavy computational effort

1:45:33

for the validating resolver since

1:45:35

the number of validations grows

1:45:38

linearly with the number of

1:45:40

colliding keys. So

1:45:42

for example, if a signature

1:45:44

has 10 colliding keys with

1:45:47

all with identical algorithm

1:45:49

identifiers, the

1:45:51

resolver must conduct 10

1:45:53

signature validations before concluding

1:45:56

that the signature is invalid. While colliding keys are

1:46:01

rare in real-world operation. We

1:46:04

show that records created to deliberately

1:46:07

contain multiple colliding keys,

1:46:10

meaning all the keys

1:46:12

are colliding, can

1:46:16

be efficiently crafted by

1:46:18

an adversary imposing

1:46:20

heavy computation upon

1:46:23

a victim resolver. Okay

1:46:25

and the third and final problem

1:46:28

is multiple signatures. The

1:46:31

philosophy, they said, of trying

1:46:34

all the cryptographic material available

1:46:36

to ensure that the validation

1:46:39

succeeds also applies to

1:46:41

the validation of signatures. Creating

1:46:45

multiple signatures for a given

1:46:47

DNS record can happen, for

1:46:49

example, during a key rollover.

1:46:52

The DNS server adds a

1:46:54

signature with the new key

1:46:57

while retaining the old signature

1:46:59

to ensure that some signature

1:47:01

remains valid for all resolvers

1:47:03

until the new key has

1:47:05

been propagated. Thus, parallel

1:47:08

to the case of colliding

1:47:10

keys, the RFCs

1:47:12

specify that in the case

1:47:15

of multiple signatures on

1:47:17

the same record, a resolver should

1:47:19

try all the

1:47:21

signatures it received until

1:47:24

it finds a valid signature or until

1:47:27

it's used up all

1:47:29

the signatures. Okay, so

1:47:32

we have the essential design

1:47:34

features which were put

1:47:36

into the DNS specification in

1:47:39

a sane way, I mean

1:47:41

all this makes sense, with

1:47:43

the purpose of never failing to

1:47:46

find a valid key

1:47:49

and signature for a zone

1:47:51

record. Their term

1:47:53

for this, of course, is eager

1:47:55

validation. They write, we

1:47:58

combine these requirements

1:48:02

for the eager validation of

1:48:04

signatures and of keys along

1:48:06

with the colliding key tags

1:48:10

to develop powerful

1:48:12

DNSSEC-based algorithmic complexity

1:48:14

attacks on validating

1:48:16

DNS resolvers. Our

1:48:18

attacks allow a low

1:48:21

resource adversary to fully

1:48:24

DoS a DNS resolver

1:48:26

for up to 16 hours

1:48:31

with a single DNS

1:48:33

request. One

1:48:38

request and the server goes down for 16

1:48:40

hours. They

1:48:43

wrote, members from the 31-participant task force of

1:48:48

major operators, vendors and

1:48:50

developers of DNS and

1:48:52

DNSSEC, to which we

1:48:54

disclosed our research dubbed

1:48:56

our attack the most devastating

1:48:59

vulnerability ever found in DNSSEC.

1:49:03

Now the researchers devised a

1:49:05

total of four different

1:49:08

server-side resource exhaustion

1:49:10

attacks. And

1:49:13

I have to say Leo, I was a little

1:49:15

bit tempted to title today's

1:49:17

podcast after three of them.

1:49:20

Had I done so, today's podcast

1:49:23

would have been titled SigJam, LockCram, and HashTrap. I

1:49:29

would have done that. I know. That's

1:49:31

good. SigJam, LockCram, and HashTrap. That's my

1:49:36

law firm. And

1:49:40

while I certainly acknowledge that would have been

1:49:42

fun, I really didn't want to pass

1:49:44

up, you know, I

1:49:47

didn't want to lose sight of the fact

1:49:49

that the entire global internet really

1:49:51

did just dodge a bullet. And

1:49:55

we don't know which

1:49:57

foreign or domestic cyber

1:49:59

intelligence services may

1:50:02

today be silently saying,

1:50:04

Darn it! They

1:50:06

found it! That was

1:50:09

one we were keeping in our back

1:50:11

pocket for a rainy day while

1:50:13

keeping all of our foreign

1:50:16

competitor DNS server targeting

1:50:19

packages updated. Because

1:50:22

this would have made one a hell of a weapon.

1:50:26

Okay, so what are the

1:50:29

four different attacks? SigJam

1:50:32

utilizes an attack with

1:50:34

one key and many

1:50:37

signatures. They write, the

1:50:39

RFC advises that a resolver should

1:50:41

try all signatures until a signature

1:50:44

is found that can be validated

1:50:46

with the DNS key. This

1:50:49

can be exploited to construct an attack, we are

1:50:51

going to answer your question about how many, using

1:50:54

many signatures that all point to

1:50:56

the same DNSSEC key. Using

1:50:59

the most impactful algorithm, meaning the

1:51:02

most time consuming to verify, an

1:51:05

attacker can fit 340 signatures

1:51:07

into a single DNS response, thereby causing 340

1:51:09

expensive cryptographic signature

1:51:20

validation operations in

1:51:22

the resolver until the

1:51:25

resolution finally fails by

1:51:28

returning serve fail response

1:51:30

to the client. That shouldn't take 16 hours. It

1:51:34

gets better. Oh, there's more. That's

1:51:37

linear, we're going to go quadratic in a

1:51:39

minute. Oh, good.

1:51:42

The SigJam attack is thus

1:51:44

constructed by leading the resolver

1:51:46

to validate many invalid signatures

1:51:48

on a DNS record using

1:51:50

one DNS key. Okay,

1:51:54

the lock cram attack

1:51:56

does the reverse using

1:51:59

many keys. and a

1:52:01

signal signature. They write,

1:52:03

Following the design of SIGGEM, we

1:52:05

develop an attack vector we dub

1:52:08

lock cram. It exploits

1:52:10

the fact that resolvers

1:52:12

are mandated to

1:52:14

try all keys available

1:52:17

for a signature until

1:52:20

one validates or all have

1:52:22

been tried. The LockCram

1:52:24

attack is thus constructed by

1:52:26

leading a resolver to validate

1:52:29

one signature over a

1:52:31

DNS record having many keys.

1:52:34

For this, the attacker places

1:52:36

multiple DNS keys in the

1:52:38

zone which are all referenced

1:52:41

by signature records having the same

1:52:43

triple name algorithm key

1:52:45

tag. This is not

1:52:47

trivial as resolvers can deduplicate identical

1:52:49

DNS key records and their key

1:52:52

tags need to be equal. A

1:52:55

resolver that tries to authenticate a DNS

1:52:57

record from the zone attempts

1:52:59

to validate its signature. To

1:53:01

achieve that, the resolver identifies

1:53:03

all the DNS keys for

1:53:05

validating the signature which if

1:53:07

correctly constructed conform to the

1:53:09

same key tag. An

1:53:12

RFC compliant resolver must

1:53:14

try all the keys

1:53:17

referred to by the invalid

1:53:19

signature before concluding the signature

1:53:22

is invalid for all

1:53:25

keys leading to numerous

1:53:27

expensive public key cryptography

1:53:29

operations in the resolver.

1:53:33

And the next attack, the

1:53:36

KeySigTrap, combines

1:53:38

the two previous attacks

1:53:40

by using multiple keys

1:53:43

and multiple signatures. They

1:53:46

say the KeySigTrap attack combines the

1:53:50

many signatures of SigJam

1:53:53

with the many colliding

1:53:55

DNS keys of LockCram, creating an attack that leads to

1:54:02

a quadratic increase in

1:54:05

the number of validations compared to

1:54:07

the previous two attacks. The

1:54:18

attacker creates a zone with

1:54:20

many colliding keys and many

1:54:23

signatures matching those keys. When

1:54:25

the attacker now triggers resolution of

1:54:28

the DNS record with the many

1:54:30

signatures, the resolver

1:54:32

will first try the

1:54:34

first key to validate all

1:54:36

the signatures. After all

1:54:38

the signatures have been tried, the resolver will

1:54:41

move to the next key and again

1:54:43

attempt validation of all the

1:54:45

signatures. This continues

1:54:48

until all pairs of keys

1:54:50

and signatures have been tried.

1:54:53

Only after attempting validation

1:54:56

on all possible combinations

1:54:58

does the resolver conclude that

1:55:01

the record cannot be validated

1:55:04

and returns a serve fail to

1:55:06

the client. Okay, now

1:55:09

elsewhere in their report when

1:55:11

going into more detail about this

1:55:14

KeySigTrap, they explain that they

1:55:16

were able to set up a

1:55:18

zone file containing 582 colliding DNSSEC keys and the same 340 signatures that we saw in SigJam. Since

1:55:33

the poor DNS resolver that receives

1:55:35

this response will be forced to

1:55:37

test every key against every signature, that's 582 times 340, or 197,880,

1:55:41

expensive and slow public key cryptographic

1:55:56

signature tests.
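The mandated behavior boils down to a nested loop over every key and every signature; a toy sketch (with verify standing in for a real public-key signature check) makes the blowup obvious:

```python
def eager_validate(keys, sigs, verify):
    # RFC-mandated eagerness: try every key against every signature before
    # giving up, so a crafted response with k keys and s signatures costs
    # k * s public-key operations: 582 * 340 = 197,880 in this case.
    for key in keys:
        for sig in sigs:
            if verify(key, sig):
                return True   # any success ends the search
    return False              # only now does the resolver answer ServFail
```

Since the attacker's signatures are all deliberately invalid, the loop always runs to completion.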

1:55:59

that was caused by the by sending

1:56:01

that DNS server a single

1:56:04

DNS query

1:56:07

packet for

1:56:10

that domain. Now,

1:56:12

interestingly, the researchers discovered that

1:56:14

not all DNS servers were

1:56:18

as badly affected. For

1:56:20

some reason, several took the request

1:56:22

much harder. Upon investigating,

1:56:25

they discovered why. For

1:56:27

example, the unbound DNS

1:56:29

server is DoSed approximately six times longer than

1:56:34

some of the other resolvers.

1:56:37

The reason is the

1:56:40

default re-query behavior of

1:56:42

unbound. In its

1:56:44

default configuration, unbound

1:56:46

attempts to re-query the

1:56:48

name server that gave

1:56:50

it this malicious zone

1:56:53

five times after

1:56:56

failed validation of all

1:56:58

signatures. Therefore, unbound

1:57:01

validates all attacker signatures

1:57:04

six times before returning

1:57:06

a serve fail to the client.

1:57:09

Essentially, unbound is being penalized

1:57:11

for being a good citizen.

1:57:14

Disabling default re-queries brings

1:57:16

unbound back to parity

1:57:18

with the other servers.

1:57:21

But Bind, the

1:57:26

most used server on the internet, Bind,

1:57:28

in fact, that's where unbound got its

1:57:30

name, right? It's a play on Bind.

1:57:33

Bind was the worst. That's

1:57:36

the one with the 16-hour DoS from

1:57:38

a single packet, and they explained why.

1:57:42

Following the cause for this observation, we

1:57:45

identified an inefficiency in

1:57:48

the code, triggered by

1:57:50

a large number of colliding DNSSEC keys. The routine responsible for identifying the next DNSSEC key to try against a signature does

1:58:03

not implement an efficient

1:58:06

algorithm to select the next

1:58:08

key from the remaining keys.

1:58:11

Instead it reparses

1:58:15

all keys again until

1:58:17

it finds a key that has

1:58:19

not been tried yet. This

1:58:22

algorithm does not lead to inefficiencies

1:58:24

in normal operation where there might

1:58:27

be a small number of colliding

1:58:29

keys, but when many keys collide

1:58:31

the resolver spends a large amount

1:58:34

of time parsing and reparsing the

1:58:36

keys to select the next key

1:58:38

which extends the total duration of the DoS to 16 hours.
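In other words, Bind's key-selection step was itself quadratic; the difference is roughly this (my paraphrase of the described behavior in Python, not Bind's actual C code):

```python
def next_untried_key(keys, tried):
    # As described: rescan the entire key list on every selection, O(n) per
    # call and O(n^2) over a full pass when n colliding keys must be tried.
    for key in keys:
        if key not in tried:
            return key
    return None
```

Stack that reparsing on top of the cryptographic work itself and you get the 16-hour figure. So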

1:58:48

what all of this should make

1:58:50

clear is that these potential problems

1:58:52

arise due to DNSSEC's deliberate, eager-to-validate design. The

1:58:59

servers are trying as hard as

1:59:02

they can to find a valid

1:59:04

key and signature match. DNS

1:59:08

servers really want to

1:59:10

attempt all variations which

1:59:12

is exactly, unfortunately, what

1:59:15

gets them into trouble. The

1:59:17

only solution will be something heuristic.

1:59:20

We've talked about heuristics in the past. They

1:59:23

can be thought of as a rule of thumb

1:59:25

and they usually appear when exact

1:59:27

solutions are not available which

1:59:30

certainly is the case here. As

1:59:32

we might expect, the researchers have

1:59:35

an extensive section of their paper

1:59:37

devoted to what to do

1:59:39

about this mess and they

1:59:41

worked very closely for months with

1:59:43

all the major DNS system maintainers

1:59:45

to best answer that question. I'll

1:59:48

skip most of the blow-by-blow, but here's

1:59:52

a bit that gives you a sense and a feeling for

1:59:56

it. Under the subhead,

1:59:58

limiting all validation. They

2:00:00

wrote, The first working

2:00:03

patch capable of protecting

2:00:05

against all variants of our

2:00:07

attack was implemented by

2:00:10

Akamai. In addition

2:00:12

to limiting key collisions to 4

2:00:15

and limiting cryptographic failures to

2:00:17

16, the

2:00:19

patch also limits total validations

2:00:22

in any request to 8.
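
Here is a hedged sketch of what a patch shaped like Akamai's might look like, using the three limits just quoted. The structure and names are illustrative, not Akamai's actual implementation.

```python
# Illustrative sketch of an Akamai-style mitigation, NOT Akamai's
# actual code: hard caps on colliding keys, cryptographic failures,
# and total validations per request, after which the resolver quits.

MAX_COLLIDING_KEYS = 4    # no legitimate zone collides more than this
MAX_CRYPTO_FAILURES = 16  # a valid zone fails far less often than this
MAX_VALIDATIONS = 8       # total signature validations per request

def crypto_verify(key, sig):
    return False          # attacker-crafted material never verifies

def validate_with_limits(keys, signatures):
    keys = keys[:MAX_COLLIDING_KEYS]  # ignore absurd collision counts
    failures = 0
    validations = 0
    for sig in signatures:
        for key in keys:
            validations += 1
            if validations > MAX_VALIDATIONS:
                return "SERVFAIL"     # time's up; stop working so hard
            if crypto_verify(key, sig):
                return "OK"
            failures += 1
            if failures > MAX_CRYPTO_FAILURES:
                return "SERVFAIL"
    return "SERVFAIL"

# even the full 582-key, 340-signature attack now does bounded work
print(validate_with_limits(list(range(582)), list(range(340))))
```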

2:00:25

In other words, Akamai

2:00:27

basically just

2:00:30

patched their DNS to

2:00:32

say, okay, it's

2:00:35

unreasonable for anyone

2:00:37

to expect our server to work

2:00:39

this hard. There

2:00:41

are not really going to be that

2:00:44

many key collisions in

2:00:46

a DNSSEC zone. There

2:00:49

aren't, you know, it's just not going to happen in real

2:00:51

life. And we're

2:00:54

not going to constantly be failing. So

2:00:57

let's just stop after we've

2:00:59

got 4 key collisions. We're just

2:01:01

going to say, sorry, time's up. You know, we're

2:01:03

not going to go any further. And

2:01:05

let's just stop after we've had

2:01:07

16 cryptographic failures, no matter what

2:01:09

kind and what nature, that's all

2:01:12

we're going to do because it's

2:01:14

unreasonable for any valid DNSSEC

2:01:16

zone to have more. And

2:01:19

we're also going to cap the

2:01:21

total limit of validation requests to

2:01:23

8. So

2:01:26

then they wrote, evaluating the efficiency

2:01:28

of the patch, we find

2:01:30

the patched resolver does

2:01:32

not lose any benign

2:01:34

request, meaning DoS is

2:01:37

avoided, even under

2:01:39

attack with greater than 10

2:01:41

attacking requests per second. In

2:01:43

other words, it doesn't

2:01:46

fix the problem, it just

2:01:49

mitigates it, it gets it under control. The

2:01:52

load on the resolver does not

2:01:54

increase to problematic levels under any

2:01:56

type of attack at

2:01:58

10 requests, 10 malicious, per

2:02:01

second, and the resolver does not

2:02:03

lose any benign traffic. It

2:02:05

thus appears that the patch successfully

2:02:08

protects against all variations of

2:02:10

KeyTrap attacks. Nevertheless,

2:02:12

they said, although these patches

2:02:14

prevent packet loss, they

2:02:17

still do not fully mitigate

2:02:19

the increase in CPU instruction

2:02:21

load during the attack. You

2:02:23

know, it's still an attack. The

2:02:25

reason that the mitigations do not

2:02:27

fully prevent the effects of the

2:02:29

KeyTrap attacks is rooted in

2:02:31

the design philosophy of DNSSEC. Notice,

2:02:34

however, that we are still closely

2:02:36

working with the developers on testing

2:02:38

the patches and their performance during

2:02:41

attack and normal operation. Still

2:02:43

as in today, still. Okay,

2:02:46

so the disclosure

2:02:48

timeline, you know,

2:02:50

the responsible disclosure timeline for

2:02:52

this is extra interesting, since

2:02:55

it provides a good sense for

2:02:57

the participants and the nature of

2:02:59

their efforts and interactions over time.

2:03:02

So in the following, they

2:03:04

wrote, we described the

2:03:07

timeline of disclosure to indicate

2:03:09

how the vulnerability was reported

2:03:11

and how we worked with

2:03:13

the experts from industry to

2:03:15

find solutions for the problems

2:03:18

we discovered. Okay, so

2:03:20

November 2nd, that is,

2:03:23

of 2023. November 2nd, 2023, the

2:03:26

initial disclosure to key

2:03:28

figures in the DNS

2:03:30

community. They said, both

2:03:32

confirm that

2:03:36

KeyTrap is a severe

2:03:38

vulnerability requiring a group of

2:03:40

experts from industry to handle.

2:03:43

Five days go by, now we're at

2:03:45

November 7th. Confidential disclosure

2:03:47

to representatives of the largest

2:03:50

DNS deployments and resolver implementations,

2:03:52

including Quad9, Google Public DNS,

2:03:54

BIND9, and others: Unbound,

2:04:01

PowerDNS, Knot, and

2:04:04

Akamai. The group

2:04:06

of experts agreed that this is

2:04:08

a severe vulnerability that has the

2:04:10

potential to cut off internet access

2:04:13

to large parts of the internet

2:04:15

in case of malicious exploitation. A

2:04:18

confidential chat group is

2:04:20

established with stakeholders from

2:04:22

the DNS community, including

2:04:24

developers, deployment specialists, and

2:04:26

the authors. That is,

2:04:29

the researchers here. The

2:04:32

group is continuously expanded with

2:04:34

additional experts from the industry

2:04:36

to ensure every relevant

2:04:38

party is included in the disclosure.

2:04:42

Potential mitigations are discussed within the

2:04:44

group. Two

2:04:46

days later, November 9, we

2:04:49

share KeyTrap zone

2:04:51

files to enable developers

2:04:53

to reproduce the attacks

2:04:55

locally, facilitating the development

2:04:57

of mitigations. November

2:05:00

13, after four days, Akamai

2:05:03

presents the first potential mitigation

2:05:05

of KeyTrap by limiting

2:05:07

the total number of validation

2:05:09

failures to 32. That

2:05:13

doesn't work. November

2:05:15

23, 10 days later, Unbound

2:05:18

presents its first patch,

2:05:20

limiting cryptographic failures to

2:05:22

a maximum of 16

2:05:24

without limiting collisions. Also

2:05:27

a non-starter. Next

2:05:30

day, BIND9 presents the

2:05:32

first iteration of a patch

2:05:34

that forbids any validation failures.

2:05:38

Now we jump to December 8, so a

2:05:40

couple weeks go by. The

2:05:43

CVE is assigned to the

2:05:45

KeyTrap attacks, although nothing

2:05:47

is disclosed. Just an umbrella CVE

2:05:50

to encompass them all. Now

2:05:54

we move to January 2, 2024,

2:05:56

beginning of the year. After

2:06:00

discussions with the developers, we

2:06:03

find some have problems recreating

2:06:05

the attack in a local

2:06:07

setup. We thus provide them

2:06:09

an updated environment with

2:06:11

a DNS server to ease

2:06:14

local setup and further facilitate

2:06:16

testing of patches. In

2:06:18

other words, the researchers actually

2:06:21

put this live on

2:06:23

the internet so that

2:06:25

the DNS developers could

2:06:27

DoS their own servers. March

2:06:32

1st, a month goes by or two months

2:06:34

go by. No, no,

2:06:37

I'm sorry, it's the next day. The

2:06:39

dates are in an odd,

2:06:41

well, you know, day, month and year format. So

2:06:45

January 3rd, BIND9 presents

2:06:48

the second iteration of a

2:06:50

patch limiting validation failures. Same

2:06:54

day, they said our newly implemented DS

2:06:57

hashing attack proves

2:07:00

successful against all

2:07:02

mitigations which do

2:07:04

not limit key collisions, including

2:07:07

BIND9 and Unbound, and is

2:07:09

disclosed to the group. So

2:07:12

whoops, patch again, everybody.

2:07:15

January 16th, our

2:07:17

ANY-type attack circumvents

2:07:19

the protection from limiting

2:07:21

colliding keys and limiting

2:07:23

cryptographic failures. And

2:07:26

on January 24th, the first

2:07:29

working patch is presented by

2:07:31

Akamai. Other resolvers

2:07:33

are implementing derivatives of

2:07:36

the countermeasures to protect against the

2:07:38

attacks. And so now

2:07:41

we get to the report's, the

2:07:43

German discoverers of this, final

2:07:46

conclusions. They write: Our

2:07:48

work revealed a fundamental

2:07:51

design problem with DNS

2:07:53

and DNSSEC: strictly applying

2:07:55

Postel's Law to the

2:07:57

design of DNSSEC introduced a

2:08:00

major and devastating vulnerability

2:08:02

in virtually all DNS

2:08:05

implementations. With just one

2:08:07

maliciously crafted DNS packet,

2:08:09

an attacker could stall

2:08:12

almost any resolver, for example the

2:08:14

most popular one, BIND9, for

2:08:17

as long as 16 hours. The impact

2:08:20

of KeyTrap is far-reaching. DNS

2:08:23

evolved into a fundamental system

2:08:25

in the internet that underlies

2:08:27

a wide range of

2:08:30

applications and facilitates new and

2:08:32

emerging technologies. Measurements

2:08:35

by APNIC show that in December

2:08:37

of 2023, 31.47%

2:08:42

of web clients worldwide

2:08:45

were using DNSSEC-validating

2:08:47

resolvers. Therefore, our

2:08:50

KeyTrap attacks have effects not

2:08:52

only on DNS itself but also

2:08:54

on any applications using it. An

2:08:58

unavailability of DNS may

2:09:00

not only prevent access

2:09:02

to content but risks

2:09:05

also disabling other security

2:09:07

mechanisms like anti-spam defenses,

2:09:09

public key infrastructure or

2:09:11

even inter-domain routing security

2:09:13

like RPKI or ROVER.

2:09:16

Since the initial disclosure of the

2:09:19

vulnerabilities, we've been working with all

2:09:21

major vendors on mitigating the problems

2:09:23

in their implementations. But it seems

2:09:26

that completely preventing the attacks

2:09:29

will require fundamentally

2:09:31

reconsidering the underlying

2:09:34

design philosophy of

2:09:36

DNSSEC, in other words

2:09:39

to revise the DNS

2:09:41

standards. So

2:09:44

as we can see, as I

2:09:46

titled this podcast, the internet really did

2:09:49

dodge a bullet. And

2:09:51

I got to say it's also terrific

2:09:53

to see the technical

2:09:55

and operational level of

2:09:59

all of this. At that

2:10:01

level, we have the ability to

2:10:03

quickly gather and work together to

2:10:05

resolve any serious trouble that may

2:10:07

arise. Fortunately, that

2:10:09

doesn't happen very often, but

2:10:12

it just did. It's

2:10:14

like, whew, exactly. Everybody

2:10:17

is breathing a deep sigh. It was never

2:10:19

exploited by anybody. Do we know? No.

2:10:22

No. As far as we know, had

2:10:24

it been, it would have been discovered. You would know. Yeah,

2:10:27

yeah. Basically, they reverse

2:10:30

engineered this from the

2:10:32

spec, kind of going, well, what

2:10:35

if we were to do this? And what

2:10:37

if we were to do that? And

2:10:40

when the servers went offline, they thought,

2:10:42

whoops. Yeah.

2:10:46

Yeah. What if we did that? Oh,

2:10:48

hello. Yeah. And how

2:10:51

can we make it worse? And how can we make it worse?

2:10:53

And how can we make it worse? Well,

2:10:55

good research. And let's

2:10:58

hope everybody fixes theirs. It's

2:11:00

ironic because there was a huge push to get

2:11:02

everybody to move to DNSSEC for

2:11:04

a while. We talked about it a lot. Yep.

2:11:08

Oh, well. Yep. Steve,

2:11:10

another fabulous show in the can,

2:11:13

as they say. Steve

2:11:15

Gibson is at grc.com. You didn't hear

2:11:17

anything about Spinrite in today's episode. No.

2:11:20

I actually think we found a bug

2:11:22

in FreeDOS, a little obscure bug. I

2:11:25

did get going on the documentation. I

2:11:27

have the source and I've already customized

2:11:30

the FreeDOS kernel to run

2:11:32

with drives of any type. So

2:11:35

this evening, I will finally be able to, this

2:11:37

just happened on Sunday evening before I had to

2:11:39

start working on the podcast. So I get back

2:11:41

to it tonight. Everything's going

2:11:43

great. I've got a good head, I got a

2:11:45

good start on the documentation. Here's the chance that

2:11:48

you have to get

2:11:50

6.1 the minute it comes out: buy 6.0

2:11:52

now at grc.com. SpinRite's the world's

2:11:55

best mass storage maintenance and recovery

2:11:57

utility. If you have mass storage,

2:11:59

you need SpinRite. And

2:12:02

you can actually get 6.1 right now if

2:12:04

you want to be a beta tester. It's

2:12:06

available as well. All that's at grc.com along

2:12:08

with a copy of this show. Steve has

2:12:10

the usual 64 kilobit audio, kind

2:12:13

of the standard version, but there is also

2:12:15

on his site alone 16 kilobit

2:12:18

audio if you don't have a lot of bandwidth and

2:12:21

really well done transcripts by Elaine

2:12:23

Farris. All of that

2:12:25

is at grc.com along with SpinRite, Shields

2:12:29

Up, ValiDrive,

2:12:32

the DNS Benchmark. Still

2:12:37

a big download right? A lot of people

2:12:39

download that. Number one utility

2:12:41

we've ever produced. I

2:12:43

think it's 8 million downloads. Wow! Well

2:12:46

I have it. That's for sure. I use it all the time. grc.com.

2:12:49

Now if you want a video of the

2:12:52

show we have that along with the 64

2:12:54

kilobit audio at our

2:12:56

website twit.tv slash SN.

2:12:59

There's also a YouTube channel devoted to security

2:13:01

now which makes it easy to share little

2:13:03

clips. And of course the best thing to do

2:13:05

is get a podcast client and subscribe

2:13:07

that way you'll get it automatically the minute

2:13:10

it is available. Steve

2:13:14

and I get together to do this show

2:13:16

right after MacBreak Weekly. That's Tuesdays, supposedly

2:13:19

1:30 Pacific, although usually it's

2:13:21

sometime between 1:30 and 2 p.m. Pacific. That's

2:13:25

4:30 to 5 p.m. Eastern, 21:30 UTC. If

2:13:28

you want to tune in and watch us as we do it live

2:13:34

we stream live on youtube.com/twit and

2:13:36

of course in our Club TWiT Discord.

2:13:39

After the fact though you can get the show here on

2:13:43

the website or Steve's website or

2:13:45

subscribe and listen at

2:13:47

your leisure. Steve,

2:13:49

thank you so much. Have a wonderful week

2:13:51

and we'll see you next time. Thank you

2:13:53

my friend. Yeah we got one more week

2:13:55

of where does February go? Wow. Yeah.

