Why do we think anything is secure, with Steve Weis

Released Thursday, 29th June 2023

Episode Transcript

0:12

Hello, welcome to Security Cryptography Whatever.

0:15

I'm Deirdre.

0:16

I'm David.

0:17

and we have a very special guest today, uh, Steve Weis.

0:21

How are you?

0:22

Very good. Thank you.

0:23

Awesome. Um, we invited Steve on

0:24

today because there was, uh,

0:27

a little bit of discussion

0:27

prompted by an eprint that

0:31

went on the internet, that

0:31

we're not gonna describe

0:33

and we're not gonna link to

0:33

because it's hella broken,

0:36

but it was an interesting

0:36

claim in their abstract or

0:40

the beginning or whatever,

0:40

that their cryptosystem

0:44

proved that P does not equal

0:44

NP, because their crypto

0:49

algorithm was correct, but it

0:49

turns out it was not correct.

0:53

But even if it was correct, it

0:53

would be interesting to claim

0:56

that your cryptosystem proved

0:56

that P does not equal NP.

1:00

And this led to an interesting

1:00

conversation in some of

1:04

our little cryptography

1:04

nerd circles about like,

1:07

computational complexity,

1:07

hardness of cryptographic

1:10

problems, what the hell

1:10

do we base our assumptions

1:15

and understanding of the

1:15

security and hardness of

1:19

cryptography that we design

1:19

in general, uh, reduction—

1:23

like, all these things. And we realized that we'd

1:25

never really talked about these things kind of in a

1:27

little box, and that there

1:30

was sufficient discussion

1:30

and questioning in a

1:33

well-informed cryptographic

1:33

circle that we wanted to

1:36

talk to somebody about it. And Steve did a PhD

1:37

in cryptography,

1:39

and so did David. And so now I've invited

1:41

them on to ask them

1:44

questions about, just to

1:44

just put on the record

1:47

some of these questions. Can you tell us a little

1:48

bit about yourself, Steve?

1:51

Yeah, my name is Steve

1:51

Weis and I'm, uh, currently

1:53

a software engineer at a

1:53

company called Databricks.

1:55

And yeah, I did do a PhD in

1:55

cryptography, but I would

1:58

probably describe myself as a

1:58

lapsed cryptographer, but, uh,

2:01

I have not been practicing, in

2:01

a while, but I can read papers

2:05

and parse through them and understand what's going on.

2:07

Cool. Thanks. And David worked

2:09

on TLS, right?

2:12

I did remarkably

2:12

little math during my

2:14

PhD, so we're gonna lean

2:14

on Steve a lot here.

2:19

Okay, so Steve, can

2:19

you give us like an intro to

2:23

like, why the kind of broad

2:23

statement that, "I have a

2:27

cryptosystem, therefore P

2:27

does not equal NP", just

2:32

kind of jumps off the page

2:32

as like, whoa, there, what

2:36

are you talking about?

2:37

Well, the, the irony of

2:37

this paper is that the scheme

2:39

is completely just broken.

2:42

It's, you know, you look at

2:42

it, it's not even semantically

2:45

secure, the way we would

2:45

describe it; but in all

2:48

that mess, they do make a

2:48

statement that if we could

2:50

show that a cryptosystem

2:50

exists, that would prove

2:54

that P does not equal NP. Because if P were equal

2:55

to NP, you would not be

2:58

able to have cryptography

2:58

as we, we know it.

3:02

You would not be able to

3:02

have cryptography based on,

3:05

you know, hard problems

3:05

and uh, you know, the

3:07

assumptions that you need to

3:07

have symmetric cryptography.

3:10

That's interesting

3:10

because one, we have

3:12

cryptosystems, right, that are

3:12

not considered hella broken.

3:16

That's why we rely on them. So ignoring this paper,

3:17

say we just have regular

3:21

schmegular elliptic curve Diffie-Hellman, and ignore the presence

3:22

of quantum computers,

3:25

that blah, blah, blah. So, first for a general

3:26

audience, what is P?

3:31

What is NP and why do

3:31

they not equal each other?

3:36

Great question, and um,

3:36

you know, I think that this,

3:39

this is, uh, probably gonna

3:39

not do justice to, for like

3:43

a general audience right now,

3:43

but P, you know, very broadly

3:46

is the set of problems that

3:46

we know are computable, with

3:49

polynomial time computers

3:49

and they can use randomness,

3:52

there's some different

3:52

variations about what they

3:54

are, but P is basically what we know how to compute. NP is the set of things

3:57

that we know how to

4:00

check in polynomial time.

4:02

So I might have a problem

4:02

where I can't find the

4:06

answer, but I can validate

4:06

an answer in polynomial time.

4:10

So this leads to a big

4:10

question of whether

4:14

these are the same. And this is, the

4:15

field is called computational complexity.

4:19

And these are just two of

4:19

many, many different, what

4:21

are called complexity classes. So we have P, NP,

4:23

BPP, BQP, PSPACE,

4:27

goes on and on and on. Um, you can dig into this and

4:28

it can kind of be fascinating.

4:32

And they all have different

4:32

assumptions that are

4:34

basically like, you know, given a machine that can do this, and then, you

4:36

know, can you decide this?

4:40

And, uh, often people

4:40

call these languages,

4:42

like, you know, they're,

4:42

their language is in

4:45

these complexity classes. So big, long, outstanding

4:47

problem in computer science

4:50

is whether P is equal to NP,

4:50

and, you know, a classical

4:54

NP-complete problem,

4:54

meaning that

4:58

you could reduce other

4:58

problems in the space to it,

5:01

is, um, called 3SAT.

5:04

That's satisfiability with

5:04

these, these terms that

5:07

have three terms in it. And the idea is basically

5:08

you can write a circuit, using these 3SAT terms, uh,

5:10

a general circuit, and I

5:15

can plug in an input, and

5:15

the problem you're trying

5:18

to solve is whether it

5:18

has a satisfiable input.

5:21

And if you can do that,

5:21

then you can actually kind

5:24

of do a search to find the

5:24

input that satisfies it.

5:26

So, I can set up something,

5:26

say, "I don't know the

5:28

answer, here's the problem,

5:28

can I find an answer?"

5:31

And it'll actually generate an answer. So, you know, kind of

5:33

really roughly in, in

5:36

the cryptography world, like that answer could be the secret key.

5:40

I could set up a SAT problem

5:40

that says, you know, find

5:42

me a value that, or is this

5:42

satisfiable that there's a

5:45

value that would decrypt this

5:45

thing to this known plaintext.

5:49

And you can crank that

5:49

through and, and do basically

5:52

search and find the solution,

5:52

which would be the key that

5:54

would satisfy that problem. And that's when, you know,

5:55

that's why when I say that, like if P equaled NP,

5:57

cryptography wouldn't exist,

6:01

like we would be in this

6:01

world where, not just that,

6:04

but all sorts of other things

6:04

would be, you know, implied.

6:07

Like you could, if you

6:07

could write a checkable

6:10

statement, you can find a solution for it. So it's kind of like

6:12

certain parts of math

6:15

are now like automated. You would, if you can write

6:16

down a problem, you can solve it, um, with a lot of caveats.

6:20

So, I mean, don't, don't quote me on that.
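A minimal sketch of that asymmetry in Python, with a made-up three-variable formula: checking a candidate assignment is one fast pass, while the only generic way we know to find one is searching all 2^n assignments.

```python
from itertools import product

# A 3SAT formula as clauses of signed variable indices:
# (1, -2, 3) means (x1 OR NOT x2 OR x3). Instance is made up.
CLAUSES = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]

def check(assignment, clauses):
    # The "NP" side: verifying a candidate answer is quick.
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def search(n, clauses):
    # The open "P" side: generically, finding an answer means
    # brute-forcing all 2**n assignments.
    for bits in product([False, True], repeat=n):
        if check(bits, clauses):
            return bits
    return None

print(search(3, CLAUSES))
```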

6:22

Mm-hmm. So why don't we do,

6:23

why don't we deploy

6:26

cryptosystems that are

6:26

reduced to the 3SAT problem?

6:29

If it sounds, so, this

6:29

NP-complete problem,

6:33

blah, blah, blah. Sounds pretty neat

6:34

and secure, right?

6:37

I mean, I think this is an interesting thing is that, when it comes to

6:38

public key crypto, we have

6:42

it reduced to problems that,

6:42

are not NP-Complete, are

6:45

factoring, um, finding

6:48

a discrete log problem. You know, tomorrow somebody

6:49

might be the next, you know,

6:52

Andrew Wiles and come out

6:52

their basement and say,

6:55

I have a new factoring

6:55

algorithm and it's poly time.

6:57

And we'd be like, all right, throw out RSA. But it doesn't collapse like

6:59

the polynomial hierarchy

7:02

or prove that P equals NP. And we know that these

7:04

problems that we mentioned

7:06

are solvable in a different

7:06

computation model, like we

7:09

mentioned, you know, put aside

7:09

quantum in the beginning, but

7:11

we know how to solve these if

7:11

we have a quantum computer,

7:14

which we don't have one that

7:14

can actually run on these yet.

7:17

So it's a great question.

7:19

And the reason these problems

7:19

were chosen is it's easy to

7:24

find hard or what we think

7:24

are hard instances of them,

7:27

uh, in like an average case. So we can basically

7:28

sample from a bunch of

7:31

different, you know, RSA

7:31

modulus and like they're

7:35

mostly hard to, to factor.

7:37

Got it. So to take one step back,

7:38

we just said that we

7:42

base cryptography systems

7:42

or cryptosystems that

7:45

reduce to these hard

7:45

mathematical problems.

7:49

We just mentioned factoring. We just mentioned discrete

7:51

log, which is elliptic

7:53

curves and the prior

7:53

generation before elliptic

7:56

curves, and like say you

7:56

based a cryptosystem on 3SAT,

7:59

you would reduce to 3SAT. There's more to a cryptosystem

8:01

than just, "factor", right?

8:06

So when we say we reduce to

8:06

one of these computational

8:09

problems, what do you,

8:09

what do we mean by that?

8:11

That's a great question. It's probably like a term

8:12

that not everyone is familiar

8:14

with, and the idea is that,

8:14

let's say I have RSA, and

8:18

I have a black box that

8:18

can, basically solve RSA.

8:22

It doesn't necessarily need

8:22

to spit out the factors, but

8:25

you know, we can talk about

8:25

what this means in a minute,

8:27

but you know, let's say we

8:27

have a black box that can

8:30

distinguish RSA encryptions

8:30

or that can give you some

8:33

advantage in guessing,

8:33

you know, the content

8:35

of, of an RSA ciphertext.

8:38

If we have that, we can use

8:38

that black box, which we may

8:41

not even know what's inside

8:41

of it, we can use that to

8:43

factor numbers because we

8:43

would just put those numbers

8:46

as an RSA modulus and then

8:46

run it through this thing.

8:49

And then we could basically

8:49

iterate on this and,

8:51

and get out a factor. It doesn't necessarily work

8:52

the other way around, that if,

8:55

um, we had a black box that

8:55

could factor, we could, well

8:59

it does in this case, but not,

8:59

it doesn't necessarily always

9:01

work the other way around

9:01

with, with this problem.

9:04

I might have this backwards actually, so.
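Steve hedges on the direction here, and the RSA case has subtleties, but the cleanest textbook instance of this kind of reduction is Rabin's: any black box that returns square roots modulo N can be driven to factor N. A sketch, where `sqrt_oracle` is the hypothetical black box:

```python
import math
import random

def factor_with_sqrt_oracle(N, sqrt_oracle):
    # Rabin's reduction: the oracle returns *some* square root of
    # x mod N; since it can't know which root we squared, each try
    # yields a nontrivial factor with probability about 1/2.
    while True:
        r = random.randrange(2, N)
        g = math.gcd(r, N)
        if g > 1:
            return g  # lucky draw: r already shares a factor with N
        s = sqrt_oracle(pow(r, 2, N))
        if s % N not in (r, N - r):
            return math.gcd(r - s, N)  # roots from different pairs
```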

9:08

So that kind of

9:08

putting a little black box

9:11

around an oracle, that says

9:11

I can distinguish between—

9:15

the thing that we call in the

9:15

security proof game model of

9:21

trying to model, the thing

9:21

you're trying to build, a

9:24

cryptosystem that has specific

9:24

properties, say encryption

9:28

and decryption, and you're

9:28

trying to construct a mind

9:32

game, a theoretical game, and

9:32

what you think an adversary

9:37

can do and how it interacts

9:37

with your cryptosystem, your

9:40

encryption, decryption scheme,

9:40

or whatever, and, when we

9:45

say that we have some, like

9:45

a little black box that says

9:48

I can distinguish between

9:48

different, you know, RSA

9:51

encrypted or, or factor based

9:51

encrypted ciphertexts, we're

9:54

saying that in that game,

9:54

the adversary or anyone else,

9:58

has a way to distinguish

9:58

between these things and

10:01

use that when interacting

10:01

with encryption and

10:04

decryption, however it likes.

10:07

And often what we do when

10:07

we construct these sort of

10:10

security games is we kind

10:10

of do the, I forget the name

10:14

of it, but the, the, the,

10:14

the reverse, saying that,

10:17

say you can factor, say you

10:17

can have a, a black box that

10:21

distinguishes, uh, between

10:21

these different factoring

10:25

then it

10:25

means that the adversary

10:28

can determine which one is

10:28

encrypted by you and which

10:32

one is encrypted by David,

10:32

or, you know, whatever.

10:34

It breaks encryption. But then, you've shown

10:36

basically by showing that it

10:40

must rely on being able to

10:40

distinguish, you must rely on

10:45

being able to break factoring. You're saying that the

10:47

whole cryptosystem and

10:49

encryption and decryption

10:49

relies on en encrypt,

10:52

uh, factoring being hard. If the adversary cannot

10:54

factor, then your encryption,

10:57

decryption cryptosystem is

10:57

secure, at least in this

11:00

very limited game space. So that's kind of the big

11:03

context of the security

11:07

of encryption/decryption

11:07

reduces to the factoring

11:09

problem or whatever. It's because we have done

11:11

all this work to construct

11:15

this hypothetical and tried

11:15

to model all the pieces of

11:18

how this works and shown a

11:18

security proof, literally kind

11:22

of a, a math proof like you

11:22

might do in geometry class,

11:24

at least the way I did in

11:24

geometry class many years ago.

11:28

That says, If this thing

11:28

is true, then everything

11:32

else about my proof will

11:32

hold and the inverse

11:35

will, will also hold. Does that, did I

11:37

get that right?

11:38

I think that, you know,

11:38

we should probably talk about

11:41

when we say these games too. Cause like when, when kind

11:42

of in the public, you think

11:45

about breaking a cryptosystem, you think about, okay, I can just decrypt and

11:47

get the ciphertext out.

11:49

And these proofs give

11:49

the adversary like a

11:54

lot of power by making

11:54

it as easy as possible.

11:56

So, you know, an example one

11:56

is indistinguishability of

11:59

a chosen plaintext attack. I take two plaintexts,

12:01

encrypt them both, and

12:04

then I flip a coin and I

12:04

give you one of the two.

12:07

And if you could even get any

12:07

advantage in telling me which

12:10

one of the two, you've broken

12:10

the cryptosystem, you're

12:12

not actually decrypting it. It's like, okay, this

12:13

one looks like an encryption of zero.

12:16

You know, I have 0.001

12:16

advantage, I win.
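That game, sketched as code: `encrypt` stands in for any hypothetical scheme, and the adversary only needs `choose_messages` and `guess`. The scheme counts as broken if any efficient adversary gets a non-negligible advantage over a coin flip.

```python
import secrets

def ind_cpa_round(encrypt, key, adversary):
    # Challenger: take two adversary-chosen plaintexts, flip a coin,
    # encrypt one, and ask which it was.
    m0, m1 = adversary.choose_messages()
    b = secrets.randbits(1)
    return adversary.guess(encrypt(key, (m0, m1)[b])) == b

def advantage(encrypt, keygen, adversary, trials=100_000):
    # Distance from coin-flipping: "I have 0.001 advantage, I win."
    wins = sum(ind_cpa_round(encrypt, keygen(), adversary)
               for _ in range(trials))
    return abs(wins / trials - 0.5)
```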

12:18

So we basically make the

12:18

game as easy as possible.

12:21

And we want to give the

12:21

adversary as much power

12:23

as possible, and in that

12:23

case, you know, if there's

12:26

a black box that can do this

12:26

with, let's say, you know,

12:28

RSA, you could use that

12:28

same black box to factor.

12:32

I could take an arbitrary

12:32

number and make it into an

12:36

instance of an RSA modulus and

12:36

then plug it into this thing.

12:40

That's the idea, is that if somebody, you know, we don't know how it works.

12:43

There's just some algorithm that can solve this. We don't know what's going

12:45

on inside of it, but it runs in polynomial time.

12:48

I can use that thing to

12:48

solve this problem that a

12:50

bunch of smart people haven't figured out how to solve. So that, that's really

12:53

what it comes down to. And that's kind of like,

12:55

the best we have, and that's

12:57

in the public key world. In the symmetric world, we

12:59

don't even really have that.

13:02

Like, symmetric world is just,

13:02

we get a bunch of Belgians

13:05

to work on something and

13:05

then they put it up on the

13:08

internet for a few years,

13:08

and then in five years we

13:11

declare, all right, no one's

13:11

broken this, let's pick the

13:13

fastest scheme from, the

13:13

Belgians or whoever, whoever's

13:17

winning it that year.

13:18

Yeah. That's not the reason that

13:19

I don't, I'm not super into

13:22

symmetric, but it's also

13:22

like a, like this feels a

13:25

bit woozy and wibbly, whereas

13:25

asymmetric crypto, I, I

13:29

kind of like the structure

13:29

more and I kind of like

13:31

the, the provability notions

13:31

more, but that's just me.

13:35

Yeah, and, in asymmetric, you're basically saying like, if I have a black

13:38

box that can do anything from

13:42

like distinguished ciphertext

13:42

to fully decrypt things,

13:46

then therefore I could use

13:46

it to also quickly solve this

13:51

problem that we think is hard.

13:53

Therefore, because we think

13:53

this problem is hard, this

13:56

black box must not exist.

13:58

Yeah, and like I said,

13:58

like we usually try to make

14:00

these games like the— as easy

14:00

as possible, not like, recover

14:03

the entire plaintext and give

14:03

me exactly what's encrypted.

14:06

It's like, okay, there's

14:06

two distributions of output.

14:08

Like, can you tell the

14:08

difference between these

14:10

distributions, and can you

14:10

do it probabilistically?

14:13

So you don't have to win every time, you just need to win a little better

14:14

than flipping a coin.

14:17

And if you could do

14:17

that, we consider the,

14:19

the cryptosystem broken. So like, you know, the, the

14:20

idea is that you make the game

14:24

easy, you give the attacker

14:24

as much power as you can,

14:27

and then that determines your level of security. And there's different

14:29

flavors, like you might give the attacker the ability to

14:31

decrypt from an oracle except

14:34

for the challenge thing. So that's, uh, you know,

14:35

adaptive chosen ciphertext,

14:38

like that's a stronger

14:38

security model than, you know,

14:40

the first one we talked about,

14:40

which is, is chosen plaintext.

14:42

So this comes up a lot in

14:42

like the zero knowledge proof

14:45

world, where there's all

14:45

sorts of different powers

14:48

that an attacker could have. They could be, in a model

14:50

where concurrency is allowed,

14:52

where they can reset, the

14:52

parties in this thing, and

14:56

that one is, is strictly like

14:56

they're basically showing that

14:59

between a simulation and a

14:59

real transcript, the attacker

15:03

can't tell them apart. And so in that world, the

15:04

attacker might have the

15:07

ability to reset the real

15:07

transcript midway through.

15:10

And like if they can do that

15:10

a bunch of times and then, you

15:12

know, they get a real sample

15:12

of that, can they distinguish

15:15

that from something you can simulate yourselves?

15:17

Mm-hmm. We talked about kind of,

15:18

security games, and this is

15:21

like a way to do, security

15:21

proofs to try to model

15:26

both the capabilities of an

15:26

adversary, the environment

15:29

that it's operating in,

15:29

and then the, the scheme

15:32

that you're trying to

15:32

establish some sort of

15:34

security property about. There's the standard model.

15:39

I still don't really know what the standard model is, and this is not the,

15:41

the physics standard model.

15:43

This is just sort of like

15:43

the barest bones standard

15:46

model for security proofs.

15:48

There's also the

15:48

random oracle model.

15:51

Um, and it's, it's funny

15:51

because I see it so many

15:57

places in security proofs,

15:57

they're just sort of like, ah,

15:59

if random oracles exist, and

15:59

we can model them like so, and

16:02

then all of these things are

16:02

secure, and then at the same

16:04

time someone will chime in

16:04

and be like, it is absurd that

16:08

we've decided that having the

16:08

random oracle model is fine.

16:13

Can you, can you talk a

16:13

little bit about that?

16:15

Yeah, so I mean,

16:15

first off, standard model,

16:17

in my mind, I think it's just

16:17

these math problem assumptions.

16:21

discrete log. If you're just assuming

16:22

discrete log and that it's hard to distinguish or you

16:24

know, it's, uh, you know,

16:26

computational Diffie-Hellman and like these things are kinda like these well

16:27

assumed, um, assumptions.

16:31

And then, you know, when it

16:31

comes to random oracle model,

16:34

the idea here is that when

16:34

you're doing these protocols,

16:36

you want to basically have,

16:36

you know, something that

16:40

you give it an input and

16:40

deterministically returns a

16:43

specific output, and you kind

16:43

of assume that, and that you

16:48

can build different protocols

16:48

off of, and in practice you

16:51

would drop in, uh, you know, a

16:51

hash function, HMAC, whatever,

16:55

you can implement that. So this is, you know, great,

16:56

it makes proofs simpler

16:59

because you can kind of

16:59

say, you know, you have this

17:01

oracle you call it, and it

17:01

gives us back a random thing.
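The two halves of that, in sketch form: the proof's oracle is lazily-sampled true randomness, and deployment swaps in a concrete hash. The swap is exactly the step a random-oracle proof doesn't cover.

```python
import hashlib
import secrets

_table: dict[bytes, bytes] = {}

def ideal_oracle(query: bytes) -> bytes:
    # The proof's object: a fresh random answer per new query,
    # consistent on repeats, with no structure to exploit.
    if query not in _table:
        _table[query] = secrets.token_bytes(32)
    return _table[query]

def instantiated_oracle(query: bytes) -> bytes:
    # What ships: a fixed, public function like SHA-256. Nothing in
    # a random-oracle-model proof guarantees this substitution is safe.
    return hashlib.sha256(query).digest()
```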

17:03

It's not predictable. Great. So I guess in the late

17:05

nineties, early two thousands,

17:08

um, there was a result, I

17:08

wanna say it's Ran Canetti,

17:11

but I might be wrong. They basically showed a

17:12

construction that is secure in

17:15

the random oracle model, which

17:15

was not secure, because it was

17:18

constructed in a way so that,

17:18

um, when you implemented it,

17:21

it actually let you break the

17:21

protocol and it was a really

17:24

artificial construction. So, the reason why people

17:25

are still using it today,

17:29

you know, kind of without

17:29

a lot of, justification is

17:33

that it's hard, it's hard to

17:33

actually come up with that

17:35

artificial construction. Like you had to actually

17:36

really like think and come up

17:39

with a really tricky way to do

17:39

this artificial construction.

17:42

So you know, the result of

17:42

that was like, yes, in the

17:45

random oracle model, just cuz you prove it there, it doesn't actually prove

17:46

it secure, but it kind of

17:49

hand— gets hand-waved away as like, okay, well, people are using it naturally, which is

17:51

like, you know, you're not

17:54

constructing it on there. Um, that paper, is

17:55

kind of over my head.

17:59

I remember that I was like

17:59

early in grad school when that

18:01

came out and I tried to read

18:01

it and I still am like hand

18:04

waving what, how it works.

18:06

But, um, yeah, that was

18:06

a really interesting one

18:09

because it really, you

18:09

know, showed that this

18:11

thing that seemed like a really natural assumption. You're like, yeah, you

18:13

know, so we drop in an HMAC or hash

18:15

function and, it'll just

18:19

work and it, it doesn't.

18:20

Mm-hmm. So one thing that I, kind

18:21

of blew my mind early in

18:27

my cryptography journey,

18:27

uh, was the fact that the,

18:32

so many things that we

18:32

reduce these problems to,

18:36

these cryptosystems to, so

18:36

we reduce to, factoring,

18:41

discrete log or you know, some

18:41

of these new theoretically

18:44

post quantum assumptions. We do all these

18:45

security proofs. We say we're in the random

18:47

oracle model, the standard model, or you know, whatever

18:49

we do, we do all this, you

18:52

know, scaffolding to prove all

18:52

these, like We give so many

18:57

powers to the adversary and

18:57

we're very conservative about

18:59

the security property we're

18:59

trying to prove, and we make

19:01

all these long proofs and we

19:01

make these implementations

19:04

and they're both very fast and

19:04

like, you know, they live for

19:06

years and they seem really,

19:06

really bulletproof, if you

19:09

do all these things well. But at the end of the day,

19:12

it reduces to a problem

19:16

like discrete log or

19:16

factoring or whatever, and

19:19

we don't have a proof that

19:19

it is in NP, or whatever.

19:25

It's not one of these

19:25

NP-complete problems and

19:28

we don't have any sort of

19:28

proof or solid foundation

19:32

that it is a secure problem. It's just, if it is a

19:34

hard problem, then our

19:39

cryptosystems are secure,

19:39

but like, it's all based

19:41

on a, we think it is.

19:44

We do not have results

19:44

that show it is, and it

19:49

hasn't broken yet, so,

19:49

let's keep going with it?

19:54

Yeah, pretty much. Like

19:56

we know it's

19:56

not NP-complete, right?

19:58

Actually, like we know that it's

19:59

so, yeah.

20:00

um, cause we

20:00

have algorithms for it.

20:02

They're just not,

20:02

they're just still bad.

20:05

It's just not officially

20:05

like NP-complete.

20:08

Right. And like even that was

20:09

like, so even if it's not

20:13

NP-complete, fine, whatever. We don't have for RSA, for

20:14

factoring, for discrete log,

20:19

we don't have, any result

20:19

that says it is at least

20:24

this hard to solve these

20:24

problems without a trap door

20:28

solution, without a key,

20:28

or knowing, already knowing

20:30

the answer or whatever, and

20:30

like, what the hell, man?

20:36

Yeah, I mean it's

20:36

like, like I said, this all

20:38

comes down to is smart people

20:38

have worked on this for a

20:40

long time and we know the

20:40

best that we can come up with

20:43

and that's how hard it is. And like, you know, we, we

20:44

have this concrete security

20:47

where, you basically plug

20:47

in the parameters to the

20:50

best known algorithm we

20:50

have, and that gives you

20:52

your level of security. And like if somebody comes

20:53

out with a better algorithm

20:55

tomorrow, you have to just rerun the numbers with the new algorithm.

20:58

And that's the new level of security and that that's happened.

21:01

That's happened, you know,

21:01

with some of the pairing-based

21:03

signatures where, oh, there's

21:03

new attack, this one's out.

21:07

Get a new one. We gotta bump the, the key length. And that's what we have.

21:12

Lattices have a little

21:12

bit stronger, stronger

21:16

basis because they reduce

21:16

to a problem that is,

21:18

is known to be NP-hard. And, it's, you know, it's

21:20

not known to be solvable by,

21:26

by these quantum computers, and that's

21:26

why a lot of the, uh,

21:28

post quantum crypto is all

21:28

based on lattices or, or

21:31

similar problems that are, are

21:31

outside that, that gray area.

21:35

So yeah, we do have, we

21:35

do have stronger, stronger

21:37

assumptions of what we could build things on.

21:40

Yeah, I would, I

21:40

would counter that lattices

21:42

are attractive independent

21:42

of that result, which is

21:46

that they are very efficient

21:46

to compute because of all

21:49

this linear algebra that you

21:49

have, you, you just have to

21:52

do, and that sort of thing. You can do a lot of

21:53

things with them.

21:56

Um, fully homomorphic,

21:56

blah, blah, blah, blah.

21:58

And then having the icing on

21:58

the cake, which is you don't

22:02

have it for any of these other

22:02

problems that we reduce to

22:05

that are, we've been building

22:05

real cryptosystems on, is

22:08

this worst case, um, result

22:08

for shortest vector problem?

22:13

Is that the one? And then, okay.

22:16

So yeah, that was the

22:16

big leap was reducing that

22:18

hard, shortest vector problem,

22:18

which we, you know, have, have

22:22

a worst case NP-hardness for,

22:22

and you can reduce that to a

22:26

average case hardness problem. And then that average case is

22:27

one that we can, can sample

22:31

lots of instances of and,

22:31

and basically use to build,

22:33

cryptosystems out, out of.

22:35

Is that learning with errors?

22:37

Uh, learning with errors is one of them. Yep.
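For flavor, a toy learning-with-errors sampler (parameters invented for illustration and far too small to mean anything): recovering the secret from many (a, b) pairs is the average-case problem these schemes lean on, and without the noise term it would be plain linear algebra.

```python
import random

Q, N = 3329, 8  # toy modulus and dimension, not real parameters

def lwe_sample(secret):
    # One sample: random vector a, and b = <a, secret> + small noise mod Q.
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-2, 2)  # the "errors" in learning with errors
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % Q
    return a, b

secret = [random.randrange(Q) for _ in range(N)]
samples = [lwe_sample(secret) for _ in range(20)]
```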

22:38

Cool. Yeah. That's wild.

22:41

So can you explain a little

22:41

bit, uh, what it means that

22:45

the worst case instance

22:45

of a problem is NP hard

22:50

versus the average case?

22:53

Yeah. So, you know, the, there was

22:53

this result in '96 by Ajtai,

22:58

which was the, kinda the one that kicked this off. And, uh, one of the first

22:59

lattice-based cryptosystems

23:02

was based on this, and that

23:02

was a reduction of what's

23:04

called the short integer

23:04

solution, um, to this, uh,

23:10

this shortest vector problem. Shortest vector problem,

23:12

meaning that if you could

23:14

solve this average case, hard,

23:14

short integer solution, you

23:19

could solve this, worst case,

23:19

hard shortest vector problem,

23:22

which is known to be NP-hard. if I had this, you know,

23:24

the, the examples, if I had

23:26

a black box, I could solve

23:26

this short integer solution

23:29

problem, um, I could craft

23:29

an input that would

23:34

be a hard

23:34

instance of the shortest

23:36

vector problem and solve it with that black box. That's my understanding.
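Stated informally, the shape of the Ajtai-style result being described is:

\[
\text{solve SIS on random (average-case) instances}
\;\Longrightarrow\;
\text{approximate SVP within } \mathrm{poly}(n) \text{ factors on every lattice (worst case)}
\]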

23:40

And we basically don't have anything like that for, discrete log.

23:44

I don't know about factoring,

23:44

but basically it's, it's just

23:47

a, a smooth distribution. We don't have any of

23:48

these like reductions.

23:51

Yeah, I can't take like one of these hard shortest vector problems

23:53

and map it into an, uh,

23:55

a factoring problem. Like, I, I don't think that,

23:56

I think that's what's missing there is we can't solve,

23:58

you know, lattice problems

24:01

with factoring, can't solve,

24:01

discrete log with factoring.

24:05

yeah. Okay. Cool.

24:08

We've gone pretty deep into

24:08

the com, into the complexity

24:11

hole and, uh, these worst

24:11

case, average case solutions,

24:15

so kind of unwinding

24:15

the stack a little bit.

24:18

Alright, we know about

24:18

reductions to hard problems.

24:20

What's a hard problem? Cryptography that we

24:22

actually deploy isn't

24:24

actually built on, like

24:24

NP-hard stuff or NP-complete

24:27

stuff, for the most part. So, going back to kind of the

24:30

top, why is it kind of funny

24:36

that if someone says, ah,

24:36

my cryptosystem is secure,

24:39

therefore P does not equal NP.

24:42

I mean, if you

24:42

could prove it was secure

24:45

in our definition, secure,

24:45

then you'd be proving

24:47

that P is not equal NP. Like we don't know if

24:48

cryptography exists. That's where we're at today.

24:51

Like we don't, we all

24:51

might just be making

24:53

this up and tomorrow we're all out of a job.

24:56

I mean it kind of sounds that way, and yet, the next step is

24:57

that we consider, we pick

25:03

our security parameters and we consider things to be secure because people

25:05

keep trying to break them.

25:08

And they haven't yet, or

25:08

they try to attack the

25:11

hard problem underneath

25:11

and they haven't yet.

25:14

There are different instances

25:14

of, of cryptosystems that

25:17

are different but reduce

25:17

in their security proof

25:20

to the same hard problem.

25:22

And one might be broken,

25:22

and one might not.

25:25

Um, for different reasons.

25:27

in reality, like

25:27

almost everyone suspects and

25:31

believes that there's evidence

25:31

that P is not equal NP.

25:34

Like nobody is really

25:34

expecting P to be NP.

25:37

Like, it could happen, but

25:37

you know, almost everyone

25:40

is kind of assuming,

25:40

yeah, P doesn't equal NP.

25:43

I think some people

25:43

expect it to be like

25:45

undecidable as well, that

25:45

like it's, despite the

25:47

fact that it sounds like a very well formed question, that it's not actually

25:49

a well formed question.

25:52

Yeah. And there may be like

25:52

refinements and these things

25:55

get very, you know, there's

25:55

a lot of different, like

25:57

little tweaked complexity

25:57

classes and you know, tons

26:01

of different ways you can formulate this because it's, you're just basically making

26:03

up like, a language and

26:06

saying, okay, this is the class that it belongs to. So you can put all, you know,

26:08

constant depth circuits,

26:11

you can do logarithmic

26:11

depth things and that's

26:15

its own like, class. I think one interesting thing

26:17

to call out is that, there

26:19

may be a world that exists

26:19

where we have like symmetric

26:22

encryption, but not today's

26:22

public key encryption.

26:26

Um, This kind of comes up

26:26

in some of the, the, you

26:30

know, papers talking about

26:30

this, and people call it

26:32

minicrypt, is the, is the

26:32

world where we would just

26:35

have one way functions. And the cool part is that if

26:36

we have one way functions,

26:39

um, then we can build a lot

26:39

of stuff on top of that.

26:42

Mm-hmm.

26:44

the, you know, the

26:44

world where basically like

26:47

RSA and discrete log get

26:47

solvable and are shown to

26:52

be in P or, you know, some

26:52

approximation of that.

26:57

There may still be cryptography in that world. Like we can have things

26:59

based on one way functions,

27:01

and that's kind of where

27:01

we have good evidence that

27:04

we're at least in that,

27:04

that minicrypt world.

27:07

Um, but you know, this

27:07

is all best guess.

27:13

Okay. A lot of what we've just

27:13

talked about is very, besides

27:17

the quantum computers, it's

27:17

very proofs and complexity

27:21

classes and theoretical

27:21

and, you know, average case,

27:25

worst case, blah, blah, blah. How about, 10 years from

27:27

now and I have a much

27:31

more powerful computer,

27:31

just, just presume that

27:33

Moore's Law continues. Just pretend.

27:35

It's not, but just

27:35

pretend it just continues.

27:38

Um, and I just have a way more

27:38

powerful computer and I decide

27:42

to try to factor RSA-2048

27:42

or a 4096 or, you know, an

27:48

elliptic curve group of 256

27:48

bits or, you know, whatever.

27:52

I bet you my same algorithm,

27:52

I'm implemented in Python,

27:57

say like dumb, super

27:57

dumb python or whatever.

28:00

It's gonna run faster on

28:00

like, 10 year, you know,

28:03

better computer against

28:03

the same cryptosystem and

28:07

parameter set than it does

28:07

today: how does that relate

28:11

to polynomial time and non

28:11

polynomial time attacks

28:15

against my cryptosystem? Because one of them is

28:17

going faster than the other

28:20

and nothing changed except

28:20

my computer got better.

28:23

Yeah, I mean, we can look at it this way. So, uh, let's take

28:27

RSA, for example, there's

28:27

this team that's based outta

28:30

France and they run this

28:30

algorithm called CADO-NFS.

28:34

Um,

28:35

Mm-hmm. Number—

28:36

We'll look it up. Um, but anyway, they, they've

28:37

been setting the records for factoring and that

28:39

that's the implementation,

28:41

that's not the team name. And it's a number field sieve.

28:43

That's basically the, the

28:43

fastest implementation.

28:46

And every couple of years

28:46

they'll spin up like thousands

28:49

of servers and crank them

28:49

and find like the next,

28:54

RSA challenge that has been

28:54

sitting out there for a while.

28:57

And you know, if you

28:57

think about how computers

28:59

get faster, let's say they, you know, double in speed every year.

29:02

It may not be happening, but

29:02

let's approximate Moore's law.

29:05

So you think about that, that

29:05

basically burns off like a

29:07

bit of your key every year,

29:07

and the algorithm may get

29:10

better, their implementation

29:10

may get better as well.

29:12

So you might be burning off

29:12

like, more bits, each year

29:16

and um, you know, you kinda

29:16

look at their track record

29:19

and they're, they're making like, you know, steady progress of, of bumping up

29:20

higher and higher in what

29:24

they're able to factor. It's been a couple years

29:25

now, but, they have

29:27

their records online. But yeah, there's, there's

29:28

progress going on there

29:30

and, I think to get back to

29:30

your original point is like,

29:33

let's say we don't have any

29:33

big discovery of like a poly

29:37

time algorithm to factor RSA.

29:40

or that we don't, you know,

29:40

have some complexity result

29:42

that we're just getting

29:42

better, you know, best

29:45

effort of what we have today. You know, just because it may

29:47

be exponential doesn't mean

29:50

that it's like, impossible.

29:53

So I think with RSA

29:53

especially, like,

29:56

It's not inconceivable

29:56

that RSA-1024 will be,

29:59

Mm-hmm.

30:00

factored, you

30:00

know, within my career.

30:03

Like, I, I wouldn't be surprised if somebody does that.

30:05

And I feel like today we're at

30:05

a point where you can throw a

30:07

lot of money at it and do it. So, yeah.

30:10

These things are, are constantly getting kind of chipped away by, the

30:12

state of the art of actual

30:15

hardware and implementations

30:15

and also just people

30:18

discovering better algorithms. Like, they might find

30:19

faster ways to do this.

30:22

And uh, you know, it could

30:22

always very well end up being

30:25

that we never have like a

30:25

poly time way to do it, but

30:27

we maybe have like something

30:27

that is, you know, exponential

30:31

that is growing very slowly.
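The "burning off bits" arithmetic as a sketch: if compute doubles yearly, an attack costing 2^k operations today costs the equivalent of 2^(k - t) in today's machine-time after t years, with algorithmic progress stacking on top.

```python
def effective_bits(key_bits, years, bits_per_year=1.0):
    # One doubling of compute per year shaves one bit off the effective
    # cost; better algorithms or implementations raise bits_per_year.
    return key_bits - years * bits_per_year

for years in (0, 10, 20, 30):
    print(years, effective_bits(128, years))  # 128, 118, 108, 98
```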

30:33

So, in that world,

30:33

kind of the real world with

30:37

actually implemented attacks

30:37

on real hardware and you know,

30:42

the variables are changing. I have access to either a much

30:43

more powerful single computer.

30:47

I have access to

30:47

lots of compute.

30:49

I farm out my algorithm across

30:49

lots of compute that I buy

30:52

from Jeff Bezos or whomever.

30:55

in that world, are we still,

30:55

do we still quote care about

31:00

polynomial time parameterized

31:00

by a security parameter,

31:05

or is it more just I can

31:05

implement, an algorithm

31:09

and then execute it with

31:09

against this parameter set.

31:13

And it is getting faster, but

31:13

like the inputs are not just

31:16

algorithm security parameter. It's that with hardware, with

31:18

scale up, with, you know, the

31:23

latest and greatest, you know,

31:23

TSMC microarchitecture,

31:27

whatever, like in the concrete

31:27

world, do we care anymore

31:33

about complexity classes?

31:35

I mean, I think in the concrete world, we will take the best known attacks

31:37

and plug the parameters

31:40

in there, and that kind of tells us how secure it is. I think that's, that's

31:42

kinda the best we have for a lot of these cases.
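On the factoring side, "plug the parameters into the best known attack" means evaluating the number field sieve's heuristic L-notation cost. A sketch that drops the o(1) term, so the output is a ballpark, not a rating:

```python
import math

def gnfs_cost_bits(modulus_bits):
    # Heuristic GNFS cost exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3)),
    # with the o(1) term ignored, expressed as base-2 "security bits".
    ln_n = modulus_bits * math.log(2)
    cost = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return cost / math.log(2)

print(round(gnfs_cost_bits(1024)))  # ~87
print(round(gnfs_cost_bits(2048)))  # ~117, same ballpark as the
                                    # commonly cited ~112 bits
```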

31:45

Um, I haven't, surprisingly,

31:45

I haven't seen that kind of

31:48

like analysis of symmetric

31:48

primitives, like, you

31:52

know, I don't know what

31:52

the best attacks are other

31:56

than just brute forcing

31:56

and maybe there's some,

31:58

some smarter things to do. But, um, at least in the,

31:59

you know, factoring side

32:03

and, and discrete log side,

32:03

this is to me like the way

32:07

that you would kind of like

32:07

back out your, uh, your,

32:10

your security parameters.

32:11

Mm-hmm.

32:12

Cuz there were some improvements in discrete log about 10

32:14

years ago or something.

32:17

I'm kind of like refreshing my memory on it, but it was the, uh, had a funny name.

32:21

I'm, I'm forgetting it right now.

32:22

in terms of the

32:22

concrete world, we talked

32:26

about proofs and games and

32:26

models and standard model,

32:30

random oracle model, and

32:30

there's a bunch of other, uh,

32:33

you know, security game models

32:33

that we didn't even mention.

32:36

And how cryptography

32:36

might not exist, but we

32:39

know in one case that

32:39

it does in fact exist.

32:42

It's just really stupid. And can you explain

32:43

what that is?

32:45

Yeah. So I mean, one, one time pads,

32:45

you know, they will exist.

32:49

Like they're

32:49

information-theoretically

32:51

secure. And that will exist even if P

32:52

equals NP and you just have to

32:57

ship a key to somebody that's

32:57

as long as your plaintext.

33:00

And so, um, you can also

33:00

think of like code books

33:03

kinda like this too. Like we may pre agree

33:04

that like, you know, two, two lights.

33:08

It means one something and one light means another something. And like observing

33:10

that's not gonna let you break it unless you know

33:12

what's in the code book. So that, that's basically

33:13

the world we'd be in where

33:16

like everybody has code

33:16

books and they're sending

33:19

couriers around and you know,

33:19

we're basically like Johnny

33:22

Mnemonic world or something.

33:24

And can you

33:24

explain information-theoretically

33:27

secure? Because that's not a thing

33:28

we have for basically anything that we have

33:30

actually deployed.

33:33

Yeah, I mean basically

33:33

it's that the ciphertext

33:37

could conceivably be any

33:37

value, like it could be

33:40

encryption of any value. And there's no, like, I'm

33:41

probably completely butchering

33:46

this, but like, my stale

33:46

understanding of it is that

33:50

any possible key and

33:50

plaintext combination could

33:54

produce that result, which is basically, you know, you're XOR'ing things together.

33:56

So like, um, there's,

33:56

there's no way to know that

34:03

you've actually broken it

34:03

unless there's other kind of

34:05

like auxiliary info there.

34:08

So if you're like, yeah, I've

34:08

completely butchered that.
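That's the right intuition, though: for a one-time pad ciphertext, every same-length plaintext is explained by exactly one key, so the ciphertext alone rules nothing out. A sketch:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = xor(key, msg)

# Perfect secrecy: any candidate plaintext of the same length is
# consistent with ct under exactly one key, so an eavesdropper
# learns nothing about which message was sent.
for candidate in (b"attack at dawn", b"retreat now!!!"):
    k = xor(ct, candidate)       # the unique key "explaining" it
    assert xor(k, candidate) == ct
```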

34:11

That's fine. No, no, that's good. So why don't we use this,

34:13

this very cool,

34:16

information-theoretically

34:16

secure cryptosystem

34:20

in our real world. This sounds really neat.

34:23

I mean, every three

34:23

months somebody out there

34:26

puts out something saying

34:26

they're doing one-time pads

34:29

and um, yeah, it comes down

34:29

to like, you have to ship

34:33

your key material around. And so it's like chicken

34:34

and egg is like you've

34:36

never met somebody before. They're over on the

34:37

other side of the world. How do you get, a disk drive

34:38

full of bits to them and like

34:43

literally you'd be taking

34:43

physical material around cuz

34:48

you can't send it to 'em. I mean, let's say this is

34:49

the world where P equals

34:51

NP. Like, I want to communicate

34:51

with somebody, I can't do

34:54

any sort of identity over

34:54

the internet because, you

34:57

know, signatures don't exist. Um, there's no way for

34:59

me to prove who I am over

35:01

the internet, except by

35:01

physical, properties.

35:04

So something I possess,

35:04

something you possess.

35:07

And, you know, we're in this

35:07

world, like, there's not even

35:10

any like symmetric encryption. Like we, we would just

35:11

basically be able to do one

35:14

time pads and have code books.

35:17

So I would have to send you

35:17

some binder and then you go

35:19

to a page, and like I send you

35:19

a picture of like a frog

35:21

and you, oh, that's a frog. Like what is it? Um, and that is obviously

35:23

subject to like, you

35:27

know, analysis too. Like if I keep sending you

35:28

pictures of frogs every day, somebody's gonna

35:30

figure out what that means. So

35:33

Or you say good morning at the beginning of every message or whatever.

35:37

I mean, this is

35:37

basically like all we would

35:41

have in, in that, that model.

35:42

So basically, it

35:42

is very hard to do well in

35:50

practice, not because the, uh,

35:50

fundamental properties of the

35:54

cryptosystem are, unattractive

35:54

or even inefficient.

35:59

It's the humans doing the

35:59

cryptography that

36:03

have the problem here.

36:06

You have to send

36:06

someone a secret key that

36:08

is as long as the plaintext. And if you can secretly get

36:10

someone a secret key as long

36:12

as the plaintext, you may

36:12

as well just send them the

36:15

plaintext, but secretly, like,

36:17

I was gonna say something about like, well, what if you have a seed

36:18

and then you have a counter

36:20

and then you hash it and

36:20

then you use, you know,

36:22

stretching or whatever. But like we, we leapt into the

36:23

world where we don't even have

36:26

those things, so nevermind.

36:28

So.

36:29

you're just describing

36:29

a symmetric cipher, a

36:31

I know, I know. And then you have segments

36:33

of your message and then you

36:36

encrypt those and then you

36:36

chain them together in some

36:39

sort of block mode and it's

36:39

like a chain block mode.

36:43

Something like that. Oh, where have I heard

36:44

about this before? Uh, so cool.
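The construction being teased, sketched out: hash a seed plus a counter to stretch it into a keystream. That's a (crude) stream cipher rather than a one-time pad, and its security now rests on a computational assumption about the hash, which is why it's off the table in the P = NP thought experiment.

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    # Stretch a short seed by hashing it with an incrementing counter.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def stream_encrypt(seed: bytes, msg: bytes) -> bytes:
    # XOR with the stretched keystream; decryption is the same call.
    return bytes(m ^ k for m, k in zip(msg, keystream(seed, len(msg))))
```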

36:51

So yeah, sorry, one time pad.

36:55

It doesn't work in practice,

36:57

I mean, it, it's funny how many times you've seen people have shady

36:58

companies or cryptosystems.

37:03

They're like, this is a one time pad. And it's like basically

37:04

generating some stream

37:06

of things and it's like, yeah, it's like a crappy stream cipher.

37:10

But um, over and over again,

37:10

like this has come up.

37:14

Um, it's, you know, gone

37:14

down a little bit, but for

37:16

years as you like, see people

37:16

saying, oh, this is based

37:18

on one time pads and like,

37:18

same thing over and over.

37:22

Hmm, my theory

37:22

is that it is a, it is

37:27

a attractive, easy to

37:27

understand, basic security

37:31

thing and because they

37:31

understand it and it comes

37:35

with information-theoretically

37:35

secure,

37:37

it sounds really great. And then they're, because

37:38

they understand that part

37:41

and it sounds really great, they try to build on it and it's like, we are, literally

37:43

75 years past that, like

37:49

we're way, way past that. Sorry, that you're, you're

37:51

just getting into the

37:54

crypto game, that's great. But please don't

37:56

ship it, please.

37:59

I mean, if you think

37:59

about the properties you

38:01

need, you have to like,

38:01

have some channel with the

38:03

person ahead of time, and

38:03

you can only use it once.

38:06

So it's, it's

38:06

basically you're a spy.

38:09

You give them the book of codes and they go out in the field and you, you know,

38:10

rip off a page every day.

38:13

And that, that's

38:13

the one-time pad.

38:15

Like that's, that's literally why it's called a one-time pad.

38:17

So, um,

38:18

Mm-hmm.

38:19

you know, that's what spies did. That's, and they reused

38:21

them and they found the

38:23

pads in the garbage. And, that that's the

38:25

world we'd be in if,

38:27

if P happened to be NP.

38:28

Mm-hmm.

38:29

It is kind of funny that like, you know, Claude Shannon came up with all the,

38:31

the one time pad, theoretical

38:35

information security stuff,

38:35

but that was just like,

38:38

yeah, so clearly it's not

38:38

possible besides this.

38:40

I've solved it, I'm done, and just like moved on, didn't go into the rest

38:42

of the cryptography. It was just like, well,

38:44

this is as good as it gets.

38:48

My work here is done. Bye-bye.

38:52

Thanks, Claude. No, no really.

38:54

So it would've been interesting if he kept going.

38:57

He was too busy

38:57

inventing digital logic

38:59

as a master's thesis.

39:01

Oh my gosh. Yeah. One thing that, uh, I didn't

39:02

touch on that's part of

39:05

like the concrete world is

39:05

that we don't— how about

39:10

things like side channels?

39:13

Are there ways to include

39:13

that in our security

39:18

proof or security game,

39:18

or our model in some way?

39:22

And what about things

39:22

like, fault injection,

39:26

things that kind of

39:26

start intruding on the

39:30

model of like a, a trusted

39:30

computing base of some sort.

39:35

Are there ways to model that

39:35

in the way we reason about

39:38

our cryptosystems and design

39:38

them and show security?

39:41

Or is that just sort of

39:41

like a, don't look under

39:43

the carpet sort of thing?

39:46

Yeah. The answer to that is both. It's been historically

39:47

don't look under the carpet.

39:49

And yes, there are ways to model it. And you know, the, the

39:51

idea here is that when we

39:54

set up these, you know,

39:54

simulations or these

39:56

game-based proofs, you're

39:56

basically saying exactly

40:00

what your adversary can do. You say what its runtime

40:01

can be, what oracles it

40:04

has, um, you know, what

40:04

it can do with the system.

40:07

Like, query it, reset it. And side channels are

40:09

one of those powers.

40:12

And so if the proof doesn't

40:12

capture that, hey, you can

40:15

get, you know, random bits of

40:15

memory sampled, you know, um,

40:19

or that you can, you know,

40:19

cause an error and cause this

40:22

thing to abort midway through. And then like what does

40:24

that do to the protocol? So, there was a pretty,

40:26

you know, extensive

40:29

effort to, work on this,

40:29

and it's called leakage

40:32

resilient cryptography. And they, they basically are

40:34

trying to, capture the idea

40:37

that there'll be some sort

40:37

of leakage to an attacker.

40:39

And you know, I think the

40:39

challenge there is that side

40:42

channels are really hard to

40:42

model because they're almost,

40:46

by definition like surprising. Like, okay, so you define

40:47

some leakage channel that's

40:50

like, bits of memory. Well, okay, what about timing?

40:53

So then how do you model

40:53

that for an attacker?

40:55

So I think it can be done.

40:59

I think it's gonna be hard

40:59

to really list and, and model

41:03

all the different types of

41:03

side channels an attacker

41:06

could have in the real world. But yeah, there's been

41:08

attempts to do this and,

41:11

surprisingly you don't

41:11

see that as kind of like

41:14

a standard security model. I think it just becomes

41:15

like, really complicated to

41:18

prove things in it, that,

41:18

okay, this is secure and

41:22

you might leak, you know, a fifth of your, or, you know,

41:23

a fraction of your memory.

41:26

You have to probably do things

41:26

to make it secure, that would

41:29

make it like impractical

41:29

in a, in a lot of practice.

41:32

So, you know, I think in practice people are like, all right, you don't, you're

41:34

not, you have an isolated

41:36

compute environment. Like that's our assumption.

41:38

Yeah. Like there are standard

41:39

sort of like, yeah, buy-in

41:42

assumptions that we do

41:42

in the real world that we

41:46

can just sort of be like

41:46

assuming, you're air gaped,

41:49

assuming that you know,

41:49

whatever, it's not leaking

41:52

like crazy or someone doesn't

41:52

have a fucking microphone

41:54

up against your computer.

41:56

Assuming that all

41:56

inputs take the same amount

41:58

of time to process, like

41:58

works pretty well, cuz that's

42:01

then something that you

42:01

can kind of, I don't wanna

42:04

say like enforce in code

42:04

easily, but like something

42:08

that if you bang your head against the wall, you can get most computers to do.
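That timing assumption in miniature: a naive early-exit comparison leaks how many leading bytes of a secret matched, while the standard library's hmac.compare_digest is the "all inputs take the same time" version.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: runtime grows with the number of matching leading
    # bytes, which an attacker can measure and exploit byte by byte.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Stdlib comparison intended to run in time independent of contents.
    return hmac.compare_digest(a, b)
```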

42:12

I mean, another thing that comes up too is just composition of these things.

42:15

Like you might have two components that are each proven secure in

42:16

isolation, and then you

42:19

stick 'em together and all of a sudden it breaks. So it's like, you know, MAC

42:20

before encrypt, encrypt before

42:24

MAC, like, yeah, the order

42:24

matters and you know, the MAC

42:27

is secure, the encryption is

42:27

secure, but if you do them

42:29

in the wrong order, all of a

42:29

sudden the, the scheme is not.
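The order dependence in sketch form, with `encrypt`/`decrypt` standing in for any hypothetical IND-CPA-secure cipher: Encrypt-then-MAC authenticates the ciphertext, so tampered input is rejected before any decryption logic touches it.

```python
import hashlib
import hmac

TAG_LEN = 32

def encrypt_then_mac(enc_key, mac_key, encrypt, msg):
    # The generically safe order: the MAC covers the ciphertext itself.
    ct = encrypt(enc_key, msg)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def verify_then_decrypt(enc_key, mac_key, decrypt, blob):
    ct, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC: reject before decrypting")
    return decrypt(enc_key, ct)
```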

42:32

So that's the other thing

42:32

too, that I think is

42:34

complicated here, is that,

42:34

you know, you might have,

42:37

building blocks are all great and you just don't know the magic incantation

42:39

to put 'em together and

42:42

you, you do it wrong.

42:43

The Universal

42:43

Composability model!

42:47

That is another one that I like, tried to read and I was like, what is this?

42:51

Yeah. Uh, shout out to Ran Canetti.

42:56

Yeah, it's two for two for Ran Canetti. Like,

42:59

We're vaguely

42:59

laughing cuz universal,

43:02

universal composability is

43:02

like this other way that would

43:05

theoretically be really nice

43:05

to literally compose schemes

43:10

together and have a way to

43:10

show that their properties

43:14

either like, compose together

43:14

nicely or you know, whatever.

43:17

Yeah, but it's a, it's a very

43:17

different kind of way to prove

43:22

things than say, game-based

43:22

security, random oracle model,

43:27

standard model sort of thing. They're, they're kind

43:28

of, they're kind of

43:30

looking at each other, but never touching. And so if you, if you write

43:33

proofs in like, quote, the

43:37

regular way, or at least

43:37

what's now the regular

43:39

way, you don't really write

43:39

proofs in the universal

43:42

composability model. They're kind of, they're

43:44

kind of on, on other sides of the dance floor.

43:47

But it would be really nice. Universal composability

43:49

sounds really great, and

43:53

then you get into the proofs

43:53

and you're just like, ah,

43:56

Yeah, this is, this is

43:56

one of the ones where like,

43:58

I remember reading it and

43:58

like not really understanding

44:00

it and then kind of put it

44:00

on a shelf and like, I know

44:03

the name and I can refer

44:03

to it, but I can't do it.

44:06

I don't really know how it works under the hood, but I think there's been

44:08

some work that has like revised it and is trying to

44:10

like, you know, adapt it.

44:13

I saw something in the last couple years that was like, you know, UC

44:15

Revisited or something.

44:18

Cool. I am a fan of the newer

44:19

state separating proofs

44:23

kind of approach. It's very in the game-based

44:24

thing, and you, you know, do

44:26

transitions of games, but

44:26

instead of kind of everything

44:29

being in the, environment or

44:29

whatever, if you're doing,

44:32

you know, proofs of, you

44:32

know, uh, knowledge in the

44:35

simulation or whatever,

44:35

you can like segment these

44:39

packages and stuff like that. It's, it's very nice if you

44:40

are like a programmer first or

44:44

an a software engineer first,

44:44

uh, versus a person who's

44:48

making proofs to model all

44:48

this crap, blah, blah, blah.

44:51

But it's brand new. Alright, so we

44:52

talked about a lot.

44:56

I think we've covered a lot of ground here.

44:58

All right. Does P equal NP?

45:01

No? Everything would break?

45:03

You think?

45:03

Definite no for

45:03

me, but, uh, yeah, one

45:06

of those ones I'd like to be proven wrong, but definitely no for me.

45:09

Yeah, I hope not. It would, it would be, I don't

45:11

wanna use the one time pad.

45:15

How about that? I hope P does not equal NP,

45:16

cuz I don't wanna have to

45:19

manage, code books and worry

45:19

about the one time pad.

45:24

I'm just gonna retire

45:24

if it is like I'm done,

45:26

I'm done with computers.

45:28

Yeah, everything

45:28

will be broken.

45:30

Okay. Thank you very much

45:31

for joining us.

45:33

Thank you all.

45:33

All right. Bye.

45:41

Security Cryptography

45:41

Whatever is a side project

45:44

from Deirdre Connolly, Thomas

45:44

Ptacek and David Adrian.

45:47

Our editor is Netty Smith. You can find the podcast

45:48

on Twitter @scwpod

45:52

and the hosts on Twitter

45:52

@durumcrustulum, @tqbf,

45:55

and @davidcadrian. You can buy merchandise

45:57

at merch dot

46:00

securitycryptographywhatever

46:00

dot com.

46:02

Thank you for listening.
