#139 — Alan Hájek on puzzles and paradoxes in probability and expected value

Released Friday, 28th October 2022

Episode Transcript


0:00

Hi, listeners. This is The 80,000 Hours Podcast,

0:02

where we have unusually in-depth conversations about the world's

0:04

most pressing problems and what you can do to solve them,

0:06

And whether if I were at least as handsome as Ryan

0:08

Gosling, I'd be exactly as handsome as Ryan Gosling.

0:10

I'm Rob Wiblin, Head of Research at

0:12

80,000 Hours. Many people have asked

0:14

me to interview Alan Hájek over the years, and this year

0:16

I finally got to do it when I went home for Effective

0:19

Altruism Global Australia back in July.

0:21

Alan is one of the most prominent philosophers

0:23

focused on resolving puzzles in probability and expected

0:25

value issues that, it turns out crop up

0:27

shockingly often when you try doing research

0:29

to figure out how to do the most good. We

0:31

did the first hour on stage at the conference, then another

0:34

few hours of recording back in Alan's office.

0:36

The first hour is very accessible, but I won't lie.

0:38

The later part of the conversation is among the

0:40

more advanced content that we've had on the show, so far.

0:43

To help you out, I've recorded a number of cut

0:45

ins to quickly define terms like epicycles

0:47

or Dutch books, which are familiar to

0:49

philosophers but not so familiar to the rest of

0:51

us. Katie Moore has also gone

0:53

through the transcript and added a lot of links that will

0:55

take you to articles that elaborate on concepts

0:58

and arguments as they come up, if you don't mind reading

1:00

interviews rather than listening to them. People

1:02

familiar with these topics should find this interview

1:04

super informative, while I hope those who aren't so

1:06

familiar will learn a lot even if they don't follow

1:08

every single point that's being made. Some of

1:10

the things we cover are simple tricks for doing philosophy

1:13

well, why frequentism is misguided, what

1:15

probability fundamentally is, problems

1:17

with using expected value and whether we should use it

1:19

anyway, a fundamental problem with Pascal's

1:22

wager, and why the dominant theory of counterfactuals

1:24

in philosophy is unworkable. One

1:26

quick notice is that 80,000 Hours is

1:28

currently hiring a recruiter to help

1:30

grow our team with applications for that role closing

1:32

soon on November the second. I'll

1:35

say little more about that in the outro, but if

1:37

you'd like to know more about that recruiter role, you

1:39

can get a full job description on our job board

1:41

at 80000hours.org/jobs.

1:43

All right. Without further ado, I bring you Alan Hájek.

1:48

Today, I'm speaking with Alan Hájek.

1:50

Alan is a professor of philosophy at the Australian

1:52

National University. Years ago, he

1:54

did his PhD at Princeton where he won

1:56

a fellowship for displaying in the judgment of the university

1:59

the highest scholarly excellence of all

2:01

graduate students across all of their disciplines.

2:03

These days, he has a broad ranging interest across

2:05

epistemology, philosophy of language and

2:07

philosophical methodology. His work

2:10

is of great interest to me and the effective

2:12

altruism community because he is one of the world's

2:14

top experts on the philosophy of probability,

2:16

bayesianism, decision theory, expected value

2:18

and counterfactuals. Some of his

2:21

more memorably titled papers include

2:23

fifteen arguments against frequentism, waging

2:25

war on Pascal's wager, vexing

2:27

expectations, most counterfactuals are

2:29

false, and a follow-up paper titled most counterfactuals

2:31

are still false. He's currently working on

2:34

a book on counterfactuals titled Would Work.

2:36

This is our first ever live recording of the show.

2:38

So for once, I should also welcome all of you

2:40

in the audience as well. And before we button

2:43

down for the recording, let's have a round of applause

2:45

for Alan.

2:50

Thank you. Great to be here, Rob. Thanks so much

2:52

for having me on the show. Okay.

2:55

I hope we're gonna get to talk about whether

2:57

Bayesianism is the final big evolution

2:59

in probability, and why you think objective

3:01

consequentialism is even wronger than many people

3:03

can even imagine. But first, you started

3:05

out studying maths and statistics, which

3:07

are kinda practical fields in which you could have

3:09

been gainfully employed doing something of obvious

3:11

value. Yes. What made you switch into philosophy

3:13

after you finished your undergrad degree? Yeah. I

3:16

was studying, as you say, math and statistics,

3:18

and I went to lots of courses on

3:20

probability. And my professors

3:23

would write equations on the board, p of

3:25

this equals p of that.

3:27

And I was wondering, what is this p? What does

3:30

probability mean, and I asked one of them,

3:32

what does probability mean? And he

3:34

looked at me like I needed medication. And

3:37

he said, well, that's a philosophical question.

3:40

And I began a master's in

3:42

statistics, and I got as far as photocopying

3:45

the article that my supervisor wanted

3:47

me to read and I sat down,

3:50

started reading it and I went, and

3:53

I don't know if you've seen pulp fiction, but there's

3:55

a moment where Jules the hitman

3:58

has a moment of clarity. Right. I

4:00

had a moment of clarity. I realized I didn't

4:02

want to go on with statistics -- Yeah. -- but

4:04

what to do instead? I traveled the world

4:06

hoping to find myself on the road.

4:09

And I did — it happened in Western Ontario,

4:12

and a friend of mine who'd

4:14

had his moment of clarity was studying

4:16

philosophy there. He showed me

4:18

the courses that he was studying he

4:20

was looking at. And I

4:22

thought, wow. how cool is that? And

4:24

you could hear the penny drop several provinces

4:27

away. That was my second moment of

4:29

clarity. Yeah. That's what I wanna do,

4:31

philosophy. What what was it that bug I mean,

4:33

obviously, most statisticians kinda don't care

4:35

about these, deeper questions about what probability

4:37

is given that they can just, like, do operations on it.

4:39

And what what like, why couldn't you let it go?

4:42

Yeah. In a way, I was

4:44

less interested in the the sort of practical

4:46

payoffs. I I was just wondering,

4:49

where does probability fit into my

4:51

conceptual scheme, you know -- Yeah. --

4:53

how does it relate to other things? What

4:55

does it really mean? When I say something

4:57

like the probability of rain

4:59

tomorrow is point three. What

5:01

have I said? I understand the stuff about rain,

5:04

but the probability stuff, I didn't understand.

5:06

And here I am, I'm still asking that question. What

5:08

does p mean? Yeah. Okay. Well, we'll

5:10

get to what p means later on.

5:12

But, like, what what are your main focuses at the

5:14

moment? I guess, so you're working on counterfactuals

5:16

primarily. Right? Yeah. That's right. And the

5:18

book, Would Work. Thanks for the advertising.

5:21

And as always, I'm working on

5:23

probability. I still ask what

5:25

is p. And that's

5:27

related to decision theory, something that I

5:29

think quite a lot about. That got me into

5:31

things like Pascal's wager, still

5:34

thinking about that. We'll probably

5:36

talk about my heuristics and

5:38

some of that comes up as well.

5:40

Yeah. Makes sense. So we're gonna

5:42

get into the details of probability and counterfactuals

5:44

later on. But first, I wanted to dive

5:46

into this, yeah, other passion of yours, which is

5:48

philosophical methodology. Mhmm.

5:50

Yeah. How did you first get into that topic?

5:52

Yeah. It began when I was a graduate

5:54

student at Princeton. I was

5:56

surrounded by these really good philosophers.

5:59

I wanted one day to be a good philosopher

6:01

or as good as I could be. And I

6:03

noticed there were these recurring patterns

6:05

of thought, sometimes in their work,

6:07

sometimes in conversations, sometimes

6:10

in Q and A. For example,

6:12

I would hear a really good question at

6:14

Q and A. Three weeks later, the same

6:16

person would ask a similar question.

6:18

I thought it worked last time. It

6:20

worked again. There's a recurring

6:22

pattern here. Yeah. I'll internalize

6:24

this. So I started to make a list of these

6:26

recurring patterns of thought, philosophical techniques

6:29

that seem to be fertile. And now

6:31

my list is hundreds long. Why

6:33

isn't this just really an obvious thing. You would think you would

6:35

kind of start of philosophy PhD just before,

6:37

like, here's all the tools in the toolkit.

6:39

Just like go and do this. Yeah. Yeah. It's

6:41

funny. I think that these are strangely

6:43

underexplored. We

6:46

teach our students logic, for example,

6:48

that's certainly one tool in the toolbox. Yeah.

6:50

I think we have all these other techniques and

6:52

we don't nearly discuss them

6:54

enough, think about them. And then I went on

6:56

to teaching at Caltech. And

6:59

I had these very smart students for ten

7:01

weeks. How do I convey to them how

7:03

philosophy is done? Of course, I had

7:05

them read the classics —

7:07

Descartes and Hume and so on. But

7:09

along the way, I'd occasionally drop in these

7:11

philosophical heuristics, these

7:13

techniques,

7:14

partly

7:15

just to show them how philosophy is done, but

7:17

also just because it helps you

7:19

do philosophy. And, yeah,

7:21

philosophy is hard. Yeah. I think we we

7:23

could use all the help that we can get,

7:26

especially when you've just written

7:28

some philosophical paper,

7:30

some view of your own. It's curiously

7:32

hard to be your own critic, and

7:34

other people are lining up to

7:36

point out the problems with your view. Mhmm.

7:38

But these heuristics, I think, help

7:40

guide your mind to finding the problems before

7:43

others do -- gleefully -- before they point

7:45

them out. Yeah. That's right. Yeah. In

7:47

your papers on this topic, well —

7:49

you sound extremely defensive in

7:51

a sense that you're, like, worried

7:53

that other philosophers are gonna judge you for daring

7:55

to write down a bunch of heuristics. Yeah. Yeah. Do

7:57

you wanna explain what what of what's going on with

7:59

that? Yes. It's as if I'm

8:01

like the magician who gives away the

8:03

the tricks of the trade.

8:05

And I find this a very strange

8:08

attitude. I mean, think of some other

8:10

area where there are heuristics like

8:12

chess. Mhmm. there's

8:14

no problem with the chess book actually giving

8:16

you some advice: castle early

8:19

and often avoid isolated pawns

8:21

and so on; there are various

8:23

fertile heuristics. But I

8:25

don't know, somehow in philosophy, some

8:27

people view it with a bit of suspicion

8:29

that this isn't really depth

8:32

this isn't really getting to the

8:34

profundity of philosophy. Mhmm.

8:36

And I'm not saying you should just

8:38

follow these heuristics mindlessly

8:41

just like when you're playing chess, you shouldn't

8:43

just mindlessly play

8:45

the things. Yeah. But I

8:48

think they help I think they get you

8:50

closer to your targets. And

8:53

I think they actually help

8:55

creativity too, that somehow, as

8:57

I say, philosophy is hard. So

8:59

These are just techniques to breaking down

9:01

a hard problem into easier sub

9:03

problems, and then you can make progress

9:06

on those. Okay. So, yeah, let's hear one

9:08

of these heuristics. What's what's one that really stands out

9:10

as as a special use one? Yeah. I I

9:12

like the one I call check

9:14

extreme cases. Okay? You're in

9:16

some domain. Extreme cases

9:18

are things like the first case or

9:20

the last case or the biggest or

9:22

the smallest or the

9:24

best or the worst or the smelliest

9:26

to or what have you. Yeah. Okay?

9:29

Now you've got this huge

9:31

search space. Someone gives a big

9:33

philosophical thesis. Suppose

9:35

you want to test it. You

9:37

know, stress test it. Are there counter

9:39

examples? Hard

9:40

problem. Somewhere

9:41

in the search space find

9:44

trouble. Find counter examples. Easier

9:46

subproblem. go to

9:48

the corner cases, go to the

9:50

extreme cases. Mhmm. And often the trouble

9:52

lurks there, if it lurks

9:54

anywhere, and it's a smaller

9:57

search space. So that's the technique. I could

9:59

give you some examples. That looks like Yeah. Good

10:01

example or two. All right. Grandiose

10:03

philosophical thesis: every

10:06

event has a cause. Okay? Yeah.

10:08

Okay. And at first, you

10:10

might think, gee, I don't know. Is that

10:12

true or false? Is it it's kind of hard to

10:14

to tell? Alright.

10:15

Hard problem come up with

10:17

a counter example to every event has

10:19

a cause. Easier subproblem

10:22

consider extreme cases of

10:24

events. For example, the first

10:27

event. Call it the big bang.

10:29

Yeah. The big bang didn't have

10:31

a cause counter example, or

10:33

philosophers sometimes say that you should only

10:35

believe in entities that have

10:37

causal efficacy. they have

10:39

some oomph. Yeah.

10:41

Like, that's maybe a reason to be suspicious

10:43

of numbers. Maybe numbers don't exist,

10:45

because they don't cause

10:47

anything. And then Lewis has

10:49

us imagine, well, what about the

10:52

entity which is the whole of history

10:54

There's causation within it, but the whole of history itself

10:56

doesn't cause anything. Right. So according to this

10:58

principle, you shouldn't believe in

11:00

the whole of history. Okay. So

11:02

there the heuristic is doing, I suppose,

11:04

negative work. It's -- Yeah. -- destructive

11:07

shooting down some position. But

11:09

I think it could also be constructive.

11:12

Right. Maybe just worth explaining a little bit

11:14

of, I guess, one of your theories of what philosophy

11:16

is. Yeah. Yeah. You want it? Oh, well,

11:18

I think you're thinking of that a lot

11:20

of philosophy is the demolition of common

11:22

sense, followed by damage control.

11:25

Yeah. I love that I love that question. Yeah.

11:27

And philosophy often comes up with

11:29

some radical claim like,

11:31

you know, we don't know anything.

11:33

Mhmm. And but then

11:35

we try to soften the blow a bit and we we

11:37

find some way. Maybe we know a little

11:39

bit. We know a little bit or we have to

11:41

understand knowledge the right way.

11:44

Anyway, so far the heuristics

11:46

have been this extreme cases heuristic

11:48

was somewhat negative. It

11:50

was pointing out a counter example to

11:52

some big thesis. I think

11:54

it could also be constructive.

11:56

I guess maybe long

11:58

termism could be thought of in this

12:00

way. Maybe

12:02

the thing that comes naturally really to us is to

12:04

focus on the short term consequences

12:06

of what we do and we think that's what

12:08

matters. Then you push that out a

12:10

bit And then that extreme case

12:12

would be, well, gosh, our actions

12:14

have consequences until the

12:16

end of time, you know, for the rest of history. So

12:19

maybe we should be more focused on

12:21

that. Yeah. And that's now the beginning of a

12:23

more positive movement.

12:26

Yeah. I guess, so a

12:28

simple question there might be like, for how long

12:30

should we consider the consequences of, like, what should be the

12:32

scope of our moral consideration? And here, say, well,

12:34

let's consider the extreme possibility. We should consider

12:36

all space and all time forever. That's

12:38

right. And then a related heuristic, so

12:40

I started with check extreme

12:43

cases. And sometimes you might just check near

12:45

extreme cases. So you back off a bit

12:47

and they're a little bit more plausible.

12:49

So maybe we don't need to look until the

12:51

end of time, but still look far

12:54

ahead and that

12:56

is still at some odds with

12:58

initial common sense. Yes, I

13:00

guess, people might often come back and say, sure, the

13:02

extreme situation doesn't work. Like, lots of things

13:04

don't work in extremes. Like, it's it's more sensible

13:06

to focus on the on the middle cases. Yeah.

13:08

And so this isn't, like, actually, such a powerful

13:11

objection. What do you what do you think of that? I think for

13:13

that very reason that this is a fertile

13:15

heuristic because we spend our lives

13:17

mostly living among the normal cases.

13:19

So extreme cases don't come

13:21

so naturally to us, even though they may

13:23

well be trouble for

13:25

some philosophical position. In fact, maybe

13:27

especially because they're extreme.

13:29

They're more trouble than

13:31

the middle cases. Yeah. I guess it I guess it

13:33

depends on whether the claim is like a more

13:35

pragmatic one about like how you ought to do things every day

13:37

or whether you're trying to claim that I've discovered

13:39

some, like, transcendent fundamental truth, and you'll

13:41

be like, well, it doesn't work in this, like, one case.

13:43

That's it. You you said it. You you were claiming that this

13:45

was like something that should cover everything, and now

13:47

it doesn't. and philosophers often do that. They have

13:49

these grandiose claims. Every

13:51

event has a cause or what have you. Yeah.

13:53

And this is a good way of stress testing

13:55

such claims. Okay. Yeah. What's what's

13:57

another heuristic? Yeah. I

13:59

like to focus on the word 'the',

14:03

and I say see the word 'the'

14:05

in neon lights because

14:07

it typically comes with a presupposition.

14:09

'The x' typically

14:11

presupposes there's exactly one x.

14:14

There are two ways that could go

14:16

wrong. There could be multiple x's

14:18

or there could be none at

14:20

all. Right? So an example

14:22

of that. do the right thing.

14:25

Just rolls

14:27

off the tongue. Yeah. Rolls off the tongue.

14:30

Alright. The right thing.

14:33

Now that sounds like there's exactly one right

14:36

thing to do. Well,

14:38

two ways we could challenge that. There could

14:40

be multiple right things.

14:42

Yeah. And maybe it's okay if you do any

14:44

one of them, but still that we're challenging

14:46

the presupposition that there's exactly one.

14:48

This comes up by the way. I don't know if we'll get to

14:50

talk about Pascal's wager at

14:52

some point, but it turns out that there are

14:54

many ways to follow Pascal's

14:57

advice. Going in the other

14:59

direction, maybe there's no

15:01

right thing. Think of a moral

15:03

dilemma like Sophie's choice

15:05

you know, there's no right thing to do

15:07

or Sartre's case of the

15:09

student who's torn between

15:11

fighting in the war or

15:13

staying home with his mother. Yeah.

15:15

What's the right thing to do in this moral

15:17

dilemma? It's not clear that there is one.

15:19

Or at least if you if you start saying that

15:21

there is one thing, then you're making a claim, maybe,

15:23

without even realizing that you're making a claim, you feel

15:25

like you've slipped in an assumption -- Yeah. -- by

15:27

using 'the'. That's right. Yeah. Any

15:29

other examples of that one? Of a

15:31

'the'? Or, like, people just assuming that there's one

15:34

answer. Yeah. Yes. I

15:36

I think

15:37

Yes, maximizing expected utility. We'll

15:39

probably talk about expected

15:41

utility later on and there is

15:43

– here we're being told do

15:46

the thing that maximizes expected

15:48

utility. If you hear it that way, then there are

15:50

problems on each side. There could be ties.

15:52

for expected utility. Yeah. Or there could be

15:55

nothing that maximizes it. Things just

15:57

get better and better without

15:59

end. It's slightly I guess it's not the same,

16:01

but it reminds me of this saying that in

16:03

philosophy, you can have, like, two two problems.

16:05

One is have, like, no answer to a question. Yeah.

16:07

And the other is to have like so many explanations.

16:09

You can't tell which one's the right one. That's right.

16:11

And this corresponds to the two

16:13

ways. And again, it's breaking down, so

16:15

to speak, harder problem into easier sub

16:17

problems. So you look on each

16:19

side, are there too many of these

16:21

things or are there not enough?

16:23

Yeah. Okay. Yeah. What's what's another

16:25

one? Yep. philosophers love to

16:27

talk about what's possible. They

16:29

love to say x is possible.

16:31

Yeah. I guess a classic case of that like,

16:33

the the zombie thing If I'm

16:35

a zombie — yeah. That's right. Are zombies

16:38

possible? Okay. But that is, like, beings that would act

16:40

like people but have no conscious experience. That's

16:42

exactly right. Yeah. And

16:44

philosophers love to distinguish different

16:46

senses of possibility, logical,

16:49

metaphysical, nomic — what's

16:51

consistent with the laws of nature,

16:53

doxastic — what's consistent with

16:55

one's beliefs, epistemic and so

16:57

on; deontic. And

16:59

there are various techniques for

17:02

generating possibilities or

17:04

for for arguing that something

17:06

called an x is possible.

17:08

Yeah. And when you think about

17:10

it, there are two moving parts to that.

17:12

There's x and is

17:14

possible.

17:15

So focus on is possible first.

17:18

One kind of technique says,

17:21

look at some other property of

17:23

x. and it follows from x having this other property

17:25

that x is possible. Let

17:27

let's just do a few of those. Yeah. x

17:31

is actual Okay. So

17:33

suppose x is actual. So for a lot of

17:35

the modalities I just talked about, the

17:37

possibilities, that's a good way

17:39

to establish that x

17:41

is possible Hi, listeners. Rob here

17:43

with a definition to help you out.

17:45

Epistemic refers to

17:47

epistemology, which is the sub

17:49

field of philosophy that studies

17:51

the nature of knowledge, how do we know things, what

17:53

things do we know? Then we talked

17:55

about actual. Actual here refers

17:57

to things actually existing. So

17:59

actual is kind of a

17:59

term philosophers use to mean something that's real in

18:02

this world. Okay. Back

18:03

to the show. Well, you gave a

18:04

good one. x is conceivable. That's

18:07

what David Chalmers appeals to in

18:09

the zombie argument -- Yeah. -- x is

18:12

conceivable so it's possible so that it's

18:14

possible to imagine it. And therefore -- Yeah. --

18:16

you, like, start thing from that. Yeah. right.

18:18

Yeah. Yeah. Yeah. What so what what

18:20

are some examples of cases where people where philosophers make

18:22

this argument, I guess, we've got the zombie one. But

18:24

-- Yes. -- it seems like

18:26

like, it's maybe a slightly dubious style

18:29

of argument where you're saying, well, something can be imagined.

18:31

Therefore, like, I'm gonna conclude things from that.

18:33

Yeah. Yeah. That's right. And of course,

18:35

philosophers love thought experiments. And

18:37

I guess soon we'll be talking about some pretty

18:40

recherché thought experiments. And even

18:42

if these things aren't actual,

18:44

like the Saint Petersburg game or something, still

18:46

we might say, look, we can conceive

18:48

of this. In that sense, it's possible and

18:50

we should take it seriously and

18:53

test our theories against it. And

18:55

that was, by the way, one just one way

18:57

to show that x is possible. Okay. Yeah. And

18:59

remember, x is the other moving

19:01

part. How about we look

19:03

at some other object, let's

19:05

call it, y, some other

19:08

thing. And y is

19:10

possible? We could all agree on

19:12

that. And X is appropriately related

19:15

to y, and we

19:17

conclude that x must be possible too. So they're

19:19

related in what way?

19:21

So here are some ways. If y

19:24

is possible and y

19:26

entails x. Okay.

19:28

Then it seems pretty safe that x is

19:30

possible too. Yeah. If they're

19:32

compatible, then it seems that x is

19:34

possible too. We just use the term entail

19:36

or entailment. Entailment in

19:38

philosophy is a kind of a fancy

19:40

way of saying that something necessarily

19:42

implies something else So

19:44

if a entails b, then

19:46

if a is true, then b is also

19:48

true. Okay. Back

19:49

to the show. Here's one

19:50

of my favorite ones. almost

19:54

x is possible. So let

19:56

y be almost x.

19:58

It's very similar to x in

20:00

some appropriate sense. Yeah.

20:02

And then you say, well, the small

20:04

difference between y,

20:06

which is almost x and x

20:08

won't make a difference. to possibility. An

20:10

example of that, according

20:12

to behaviorism, it's not

20:14

possible to have a

20:16

mental life and no

20:18

behavioral

20:19

manifestation of it whatsoever. Reply?

20:22

Yes, it is

20:23

possible because it's possible to

20:26

have almost no

20:28

behavioral manifestation of your mental life.

20:30

Think of Stephen Hawking, sadly

20:32

no longer with us. And

20:35

towards the end, I guess he had just very

20:38

minimal behavior movement of his

20:40

finger, not much more. Obviously,

20:42

he had a rich mental

20:44

life. And now imagine just just

20:46

that finger movement stopping, so he loses that

20:48

last bit of behavior. Yeah. It's

20:50

not like the lights suddenly go out

20:52

for Stephen Hawking. So it is possible.

20:54

Yeah. This is reminding me of another line of argument that

20:56

some people might have heard, which is, you know, some

20:59

people say, oh, you couldn't create a mechanical

21:01

mind, or you couldn't make one out of machines. And you

21:03

say, well, let's take a human brain and let's just

21:05

replace one neuron with a mechanism that that

21:07

does what just what that neuron does. It'll be like, are

21:09

you not a person now? Are you not conscious? And I'll

21:11

be like, No. No. No. No. Really? So they're like,

21:13

what if we place another one? And then the other

21:15

person is either just forced to say, like, you're

21:17

becoming gradually, gradually less conscious,

21:19

or to say that there's some sharp cut

21:21

off, which also seems implausible. Excellent.

21:23

So this is another of the heuristics

21:25

of this kind. Let's call it extrapolation.

21:27

Mhmm. So you have a sequence of things and

21:29

you think that each of them is possible.

21:31

Well, now let's just go to the

21:34

next step. That should be possible too. Or

21:36

interpolation. Mhmm. Start with two

21:38

things that you know are possible, maybe

21:40

because they're actual. And now

21:42

interpolate on some whatever gradeable

21:45

scale, Hume's missing shade

21:47

of blue is like this. Okay?

21:49

Take two shades of

21:51

blue that are actual, therefore

21:54

possible. And now imagine a

21:56

missing shade of blue. It's not

21:58

actual, but it's

22:00

somehow between those two on some natural

22:02

scale. Yeah. Well, that seems to be possible

22:04

too. Okay. Yeah. I mean,

22:06

couldn't someone respond that it's not possible,

22:08

because they're actually too close — there's no shade in between?

22:10

They could say that. And you got to be careful

22:12

with your scales too, and

22:15

sometimes extrapolating or

22:17

interpolating will give you wrong

22:19

answers. Like think

22:21

of the sequence one over n,

22:23

you know, a half, a

22:25

third, a quarter, etcetera. Is it

22:28

possible for that sequence

22:30

to reach zero Well,

22:32

no. It gets as close as you like. So

22:35

almost reaching zero is possible, but

22:37

it never actually hits

22:40

zero. That's not possible for that sequence.

22:42

Well, what's an example of the thing we kind of repeat

22:44

the thing with the the sequence again and again and again. I

22:46

guess that's an example where we're kind of extrapolating. Are

22:48

there examples where you take some principle and then you

22:50

just like keep operating it and then you get some absurd conclusion?

22:53

How about maybe the

22:55

lottery paradox is I guess, you should think

22:57

about -- Yeah. -- or even better maybe the Preface

23:00

Paradox. Oh, I don't know that one. Yeah. Let's do

23:02

the Preface Paradox. This

23:04

puts pressure on the idea that

23:07

Belief is closed under

23:10

conjunction. Okay? You've just

23:12

written

23:12

a long book

23:13

and then you write the following

23:16

preface. I

23:16

believe everything

23:17

I say in this book, but

23:20

I'm confident that there's a

23:22

mistake somewhere in the book.

23:24

Yeah. Okay. So

23:27

I I did my best

23:29

to get sentence number one

23:31

right. I believe sentence number one.

23:33

I believe sentence number two. I

23:36

believe the final sentence of the book,

23:39

but I don't believe the conjunction of

23:41

the book on the contrary. I know that

23:43

in long books like this there's

23:45

always a mistake somewhere. Okay. So

23:47

this puts pressure on the idea that belief, even

23:49

rational belief is closed under

23:52

conjunction. Yeah. A different approach that

23:54

sometimes I apply is, like, trying to subdivide things,

23:56

like, super closely. Yeah. So so

23:58

every so often people say to me that it's

24:01

not possible to put a probability on, like,

24:03

things that have never happened before. So it's, like, fine

24:05

to say that the probability of a coin

24:07

flipping heads is fifty percent.

24:09

But if you say something like the probability of a

24:11

nuclear war in twenty twenty seven is

24:13

one point five percent, then this is just a bunch

24:15

of silliness and like because this never happened before.

24:17

then, of course, you can say, well, if you define

24:19

all events like sufficiently closely, then none of them

24:21

have ever happened before. Like, all events are unprecedented

24:24

once you, like, specify exactly what they are. That

24:27

that's right. If you specify the events too

24:29

precisely, then they're all unique. If

24:31

you specify them too

24:33

loosely, then, of course,

24:35

anything can be described as

24:37

something happens. Yeah. Okay. We have many

24:39

instances of that. Alright? And

24:41

now you want to find the right level of

24:43

grain that describes this

24:46

event in an informative way and

24:48

such that it's susceptible to

24:50

some probability. Now that I think about it, I mean, that

24:52

that potentially creates trouble for the, like, coin

24:54

being fifty percent. because you say, well, like,

24:56

each coin flip is unique in its own way. I mean,

24:58

like, exactly like whether in any

25:01

specific case, like, it depends on how they flipped it. And

25:03

so if you define and and if you like specify that, then

25:05

it's a hundred percent and zero percent. Yeah.

25:07

Maybe we'll soon be talking about frequentism,

25:09

but this is related to a famous problem

25:12

for the frequentist account of probability, the

25:14

problem of the single case. Right.

25:16

At some level of specification, every event

25:18

only happens once And then in

25:20

some events, that is quite a natural thing to

25:23

do. Like, this year's

25:25

AFL grand final — that's

25:27

Australian rules football for the non Australian.

25:30

Okay. That seems to be a unique

25:33

event. And there may

25:35

be an issue about how we assign probabilities

25:37

to it. But it seems as

25:39

far as frequencies go, it's a

25:41

one off thing. It's either one or zero for

25:43

say, my team winning. Yeah.

25:46

Yeah. Alright. Yeah. Are there any other any other

25:48

heuristics that people should keep in mind if they wanna kinda show off

25:50

with their friends in conversation before we

25:52

move on? Yeah. I

25:53

like to think about

25:56

putting

25:56

contrastive stress on different

25:59

words in a claim and that that

26:01

makes you realize there's a there's a

26:03

contrast class this as opposed to

26:05

that. Let's take for

26:07

example, smoking a pack

26:09

a day causes lung

26:11

cancer. That's that seems

26:13

reasonable when you just say it like

26:15

that. Well, smoking as

26:17

opposed to other things you could do

26:19

with a cigarette like stroking it. Okay?

26:22

Yes. It seems like the smoking is relevant.

26:25

Yeah.

26:25

Smoking a pack

26:26

a day. Smoking one pack

26:29

a day Now as opposed to

26:31

none, yes, that seems

26:33

to cause lung cancer.

26:35

But what

26:35

about smoking one pack

26:38

a day as

26:38

opposed to three packs a

26:41

day. Now it's not so clear that

26:43

it's the one pack a day

26:45

that causes the

26:47

lung cancer as opposed to

26:49

three. And this now makes you

26:51

think that explanation must

26:53

be a contrastive matter.

26:55

Yeah. Isn't that and yeah. This thing

26:57

is implying counterfactuals, or like

27:00

an alternative. They're saying one as opposed to zero,

27:02

but we don't we don't say that most of the time. And

27:04

so we could eventually end up having a counter

27:06

factual that you haven't really, like, properly thought through?

27:08

Yeah. Yeah. And, yes, this

27:10

makes you be honest. What am I

27:12

contrasting this thing with

27:14

maybe that was implicit and not

27:16

obvious. And this this technique where

27:18

you, I mentally, italicize different

27:20

words in a claim, and

27:22

and say, okay. In each case, this as opposed

27:24

to what. And then I realized, oh, okay.

27:26

I was making a presupposition or

27:28

something like that. Yeah. Okay. So

27:31

I suppose, like, all of these heuristics can

27:33

potentially provide kinda stress cases

27:35

or, like, I guess, considerations against

27:37

conclusions that people are putting forward. Yeah. In

27:39

general, do you think that there's very often, like,

27:41

decisive arguments against the position? Or or is it more

27:43

often that all of these weaknesses kind of

27:45

accumulate? And then you end up with kind of view that,

27:47

like, with this many little problems, there's too many problems

27:49

here for this to be quite the right theory.

27:51

Yeah. This thing, like, most most theories you can throw

27:53

something at them and be like, it doesn't feel quite right in this

27:55

case. Yeah. Lewis famously

27:57

said, philosophical positions are

27:59

rarely refuted. Gettier

28:01

may have done it — that's the justified

28:03

true belief analysis of knowledge — Gödel

28:05

might have done it, but it's it's about

28:07

it. It's rare. Hey, listeners. We just

28:09

mentioned Gettier problems, which we

28:11

don't really need to understand in order to follow

28:13

the rest of the conversation, but I thought I'd add

28:16

in a little bit about that just because it's

28:18

quite fun. Since the beginning of philosophy, there's

28:20

been this question: what is

28:22

knowledge? And one of the most

28:24

common answers or one of the most popular

28:26

answers into the twentieth century going back to the

28:28

ancient Greek was that knowledge

28:30

is justified true belief. So you have

28:32

to believe something. It's gotta be true and you

28:34

have to have good reason to think that

28:36

it's true. In nineteen sixty three, Edmund Gettier

28:38

published this paper that basically

28:40

to almost everyone's satisfaction kind of

28:42

demolished that definition of knowledge with a

28:44

really strong counter example. And the

28:46

counterexample is this. Imagine that I

28:48

walk into my colleague's

28:50

office say, and I see

28:52

them sitting at their desk. and I

28:54

formed the belief that my colleague, Neil,

28:56

say, he's in his

28:58

office. It'd be very natural to say I have a

29:00

justified true belief there because I've seen

29:02

Neil sitting at his desk. But

29:04

imagine that unbeknownst to

29:06

me, Neil has had an

29:08

amazingly realistic wax sculpture of

29:10

himself that he has placed in

29:12

his desk chair in his place, and I wasn't

29:14

looking very carefully. So I hadn't

29:16

realized that it wasn't actually Neil sitting at

29:18

the desk. But

29:19

also unbeknownst

29:20

to me, Neil is in his office. He's

29:22

hiding under the desk as part of a prank

29:24

involving this wax sculpture that he's

29:26

produced of himself.

29:27

Now, I believe that Neil was in his

29:30

room. He is

29:30

in his room. He's hiding under the desk. And I

29:32

had good reason to think that Neil was in his

29:34

office because I saw him there.

29:36

But

29:36

can I really be said to have known

29:39

that? Given that, what I saw

29:41

was the wax sculpture, so it's

29:43

only by pure coincidence that my belief

29:45

is true. There's not actually a real

29:47

connection between the justification that

29:49

I had for thinking that Neil was in the office and

29:51

the fact that he is. It's just luck that

29:53

he also happens to be hiding under his desk.

29:55

So this is a case where it seems like one can have

29:57

a justified true belief, but it doesn't

29:59

really

29:59

feel like knowledge because the

30:02

truth of the claim is not connected to the

30:04

justification that I have in my mind.

30:06

That's a

30:06

Gettier case or a Gettier problem. It shows

30:08

up all over the place where one can think

30:10

the right thing, but for the wrong reason, Another

30:12

amazing thing is that Edmund Gettier published this paper,

30:15

'Is Justified True Belief Knowledge?', in

30:17

nineteen sixty three, but

30:19

that's basically the only thing that he's known for doing. I I don't know

30:21

that he made any other notable contributions

30:23

to philosophy other than one of the most

30:25

famous arguments made in philosophy across the

30:27

entire twentieth century. So I

30:29

guess he's kind of the one-hit wonder, the Macarena of his field,

30:32

so to speak. Alright. That's Gettier

30:34

problems. Let's go back to the

30:36

interview. I think there are some

30:38

killer problems for various positions

30:40

we might talk about some later.

30:42

Yeah. But yes, often it's more

30:44

a matter of accumulation. There are some

30:47

smaller problems, but they start adding up.

30:49

And overall, the

30:51

position doesn't

30:51

seem tenable. It doesn't seem to

30:54

best capture the data points. Yeah.

30:56

The people sometimes give

30:56

up on them because it feels like, okay. You you could

30:58

like patch the first three things kind of, but then just like

31:00

more and more cracks start appearing and you're, like, trying

31:02

to patch it up everywhere. You're, like, it starts to feel

31:05

like more complicated. Just patch it. Yes. Epi

31:07

cycles. Right? Yeah. It feels like more complicated to patch the

31:09

theory than just to come up with a new one.

31:11

That's right. Some might say that that

31:13

happened to the justified true

31:15

belief account of knowledge that

31:17

that can't be quite right. It seems because of

31:19

Gettier and so on. And then we add an

31:21

Epicycle and another one and another one.

31:23

And if it goes too far, it starts to

31:25

feel like a degenerating research

31:28

program. It's like, yeah, the explanation's so long that it's lost

31:30

its original intuitive appeal. It's no longer

31:32

simple either. That's right. And even if

31:34

it turned out, alright, I

31:36

can't come up with any counter examples to

31:38

this tremendously long complicated

31:41

analysis, but do I really feel

31:43

it? Illuminate. Suppose that really

31:45

is knowledge is this

31:47

thing with many Epi cycles. Yeah.

31:49

Do I feel like I understand it better? Yeah.

31:51

Yeah. Not really. Hey, listeners —

31:53

Rob here with with another quick aside.

31:55

This one's

31:56

a fun one. We just use the term EpiCycles,

31:58

which

31:59

philosophers throw around quite a bit, but I think I

32:02

hadn't heard until I started talking to people

32:04

who'd done philosophy PhDs. The idea

32:05

of epicycles is basically you're adding

32:08

lots more complexity to a

32:10

theory in order to salvage it

32:12

from cases in which it doesn't match in reality or it

32:14

doesn't match your intuition. But really

32:16

what you want to do is throw out the theory

32:18

altogether and and then start again because it's the

32:20

wrong approach. It goes back to

32:22

how the ancients used to

32:24

predict the the the motions of planets

32:26

when they thought that everything was

32:28

going around the Earth one way or another.

32:31

Obviously, the planets weren't going around the

32:33

Earth fundamentally. So how

32:35

did they manage to explain the apparent

32:37

motions of of the planets and then them coming

32:39

closer and then and then going further

32:41

away. Basically, they modeled the planets

32:43

traveling around the earth, but they were

32:45

also traveling in little circles

32:47

around the circle that they were traveling

32:49

around the Earth on. They were traveling at the same time on

32:51

two different circles. Then this allowed them

32:53

to match up the predictions of that model with their

32:56

observations of where the planets were. So the main

32:58

big circle that they were traveling on around the

33:00

Earth was called the deferent, or deferent —

33:02

I'm not sure how it's pronounced. And

33:04

the little circles that they were traveling on around

33:06

that circle were called EpiCycles. I

33:08

think it I think it comes from Greek meaning,

33:10

yeah, circle moving on another circle.

33:13

anyway, so you can see how this has come to mean adding

33:15

complexity to a model in order to

33:17

salvage it when really it's it's the wrong model

33:20

to start with. What they should have done is realize that the

33:22

planets were moving in ellipses around the

33:24

sun rather than adding more and more circles

33:26

to try to match things up. Interestingly

33:28

though, by adding in enough Epicycle's like

33:30

this, they were able to make their predictions of

33:32

the planet's motions match almost exactly

33:34

their observations, because they

33:36

just had enough degrees of freedom in the model to allow

33:38

them to place the planets in in any particular place

33:41

at any particular point in

33:43

time. That's what can happen when you add a

33:45

lot of complexity to a model even if it's mistaken.

33:47

Okay. That's epicycles. Back to the show.

33:49

Okay.

33:49

Well, what what are some common moves that

33:52

philosophers make trying to debunk other

33:54

philosophers' ideas that you think are kind of overrated?

33:56

It's like not as strong an objection as as people

33:58

make out. Well, a whole

34:00

movement in philosophy that I now distance

34:02

myself from. I think this speaks to your

34:04

question. Philosophers love to

34:06

appeal to other worlds, and

34:08

in particular, similar

34:10

worlds. This may come up when we

34:12

talk about counterfactuals. The

34:14

standard analysis of counterfactuals

34:16

appeals to

34:18

most similar possible worlds, but not just

34:20

counterfactuals. Knowledge, we often

34:22

talk about so called safety and

34:26

sensitivity, And that's a a matter

34:28

of what's going on in nearby worlds where you believe something or where the

34:30

thing you believe is is true or not true.

34:34

And I I used to

34:36

just talk this way too. I used to love all this

34:38

talk of similar worlds

34:40

and and other worlds in general. And

34:42

now I've jumped off

34:44

the bus and I think one should not be

34:46

casting everything in terms of

34:48

other worlds and similarity relations.

34:52

I prefer to do it in terms

34:54

of probability and stay in the actual world, probabilities anchored

34:56

here and it will give

35:00

my preferred accounts of things. It doesn't quite speak to your question, but I think

35:02

it's close. Okay? Yeah. Any

35:04

yeah. Any other advice for budding young philosophers in the

35:06

audience, before we push on to

35:09

concrete questions? A high value for

35:11

me in philosophy is clarity. I

35:14

really care about clarity. And by

35:16

the way, these heuristics, I think,

35:18

help in that mission as

35:20

well, trying to just really get clear on what a

35:22

position is and what possible

35:24

defects are.

35:26

And as I think Searle said, if you can't say it clearly, you don't understand

35:28

what you're saying. Yeah. And I

35:30

would begin with that. Yes. I'm

35:32

not sure I a hundred percent agree.

35:36

So philosophers do tend to just be absolutely obsessed with, like,

35:38

making sure that the thing applies to every case.

35:40

And, like, when you read philosophy papers, you really do

35:42

get this impression that it's like a long series

35:45

of like paragraphs added to, like, like, every objection that some

35:47

philosopher might raise, so there's, like, nothing that can be

35:49

said in response. So it's in, like, one sense that's

35:51

extremely clear. In another sense, it feels like maybe you're,

35:53

like, losing the forest for the trees

35:55

sometimes. Yeah. Like, in considering all of these, like, small objections —

35:58

like, clever, small objections — rather than, like, the

35:59

core issue of whether the

36:02

thesis is sound. Yeah. Maybe being too

36:04

pedantic. Yeah. Yeah. yeah, come on. You know

36:06

roughly what I meant. But yeah. To some

36:08

degree, like, philosophy is a kind of professional pedantry.

36:10

That's kind of — the whole thing is to find the exceptions.

36:12

That's right. And I'm sure

36:14

you know the way that philosophy is often written.

36:16

It's often in this very defensive

36:18

style because you're preparing yourself

36:20

for the most uncharitable objection because -- Yeah. -- you didn't

36:22

quite nail the point, you know,

36:24

given your wording, there's some counter

36:28

example. And, yeah, this should

36:30

be tempered with the principle of charity that you should try to give a charitable

36:32

interpretation of what's being said

36:34

and and look for more

36:37

interesting

36:37

more profound deeper objections.

36:40

Yeah. Yeah. It is the most challenging part

36:42

of, I think, interviewing philosophers is is that they

36:44

really wanna be right. And just like,

36:46

can you, for the love of God, say something wrong? Say something wrong, but in fewer

36:48

words, please. Yeah. Right.

36:50

And, yeah, you you ask a question and, well,

36:52

it's not quite right the way you said it.

36:54

Yeah. But Yeah.

36:56

If if at least the the spirit of it's

36:58

conveyed that that is often good

37:00

enough. Yeah. I I suppose yeah. If you if

37:02

you get I guess if you get too sloppy in that direction, then you can't like do

37:04

philosophy so much anymore. But then you just become like

37:06

ordinary people who accept common sense and like

37:08

aren't looking for, like, the little ways that —

37:11

when you, like, investigate more deeply — it does actually demolish common

37:13

sense when you think about it properly. Yeah.

37:16

Okay. Let's push on to a topic that is

37:18

dear to all of our hearts: probability. Yeah. But

37:20

kind of the core question I

37:22

really wanna ask here is whether

37:24

Bayesianism, as it's often practiced by people in

37:26

the audience, whether it's true in

37:28

some sense. Yeah. But before I

37:30

ask that, we should probably wind back a step. At

37:32

a high level, kinda what are the different

37:34

theories of what probability what

37:36

probability is? Like, what yeah. What what ideas do people

37:38

have about it? Yeah. This is what my

37:40

professor should have said when I asked

37:42

the question, what is

37:44

p? And he thought I

37:46

needed medication. this would be the

37:48

beginning of a reply. Let's

37:50

distinguish three main kinds of

37:52

probability. The first, let let's

37:54

call it objective chance. Out

37:57

there in the world, there's probability and it's mind-independent.

37:59

It doesn't care what

38:00

we think. So I think maybe of

38:04

radioactive decay. The

38:06

radium atom has a decay law. It's got

38:08

a half life of sixteen hundred years,

38:10

which is a probabilistic fact. It

38:13

doesn't care what anyone thinks. It's just

38:16

out there in the world. It's like mass and

38:18

charge. It's some property

38:20

out there. That's the first.

38:22

Yeah. Can I make sense? Yep. Yeah. Second one,

38:24

let's call it Oh, I guess, under that one, do we have

38:26

kind of quantum mechanics issues? It's like the

38:28

probability of an electron being here rather than

38:30

there. Is that that in the same

38:32

category? Yeah. And that's a big question in the interpretation

38:34

of quantum mechanics whether there really

38:37

are objective probabilities or

38:40

is in fact everything deterministic and really the

38:42

chances are all ones and zeros -- Okay.

38:44

-- on one view. Wow. But

38:47

Let's get that. Okay. Second

38:50

interpretation, let's call it

38:52

subjective

38:53

probability and That's

38:55

now more about rational agents, people like

38:57

us. Call it Credence,

39:00

if you

39:02

like. and it's degree of confidence. We

39:04

sometimes we outright believe

39:07

things, but it's often

39:10

our degrees of confidence are more nuanced than that.

39:12

We believe some things more than others as we

39:14

might say. We have degrees of belief, and that

39:16

that would be the second interpretation. But

39:20

it's not just anything goes — it seems these are rational degrees

39:22

of belief, and it'll

39:23

turn out, according to Bayesianism,

39:25

they're constrained by

39:28

probability

39:28

theory. So would there be other, like, the correct credence?

39:30

Well -- Yeah. -- excellent. This this this

39:32

will get us to a big issue in

39:36

Bayesianism. At

39:37

one extreme, you've got, I guess, a

39:39

very permissive radical subjectivism,

39:42

where the only

39:44

constraint is

39:46

the probability calculus. They'll obey

39:48

the Kolmogorov axioms, something like

39:50

this. Like, everything's got to sum to one

39:53

or the probability of something happening is one. That

39:55

that's right. probabilities are non negative. Yeah. They have a top

39:57

value of one they add in a certain

39:59

way. Yeah. Okay. Now that

40:02

seems very seems very permissive.

40:04

You know, I can assign

40:07

probability even one to,

40:10

you

40:10

know, a meteor quantum

40:14

tunneling through this lecture theater

40:16

before we're finished and going out the other side --

40:18

Yeah. -- as long as I give probability zero to it

40:20

not doing so and so

40:22

on. And probability, very high probabilities to the

40:24

sun not rising tomorrow and

40:26

so on. And it

40:28

doesn't violate

40:30

any fundamental axioms of probability.

40:32

That's right. Yeah. But it seems somehow

40:34

too unconstrained. Right. Now at

40:36

the other end of the spectrum, This

40:39

is the so called uniqueness thesis

40:41

that your

40:41

evidence uniquely

40:42

constrains you and there's

40:44

exactly one credence you should have, and

40:48

in fact there is — let's call it an ur-prior —

40:50

the correct

40:52

prior, before you saw anything: the probability

40:54

before any evidence. You should

40:58

start there and just update it on your evidence as it comes

41:00

in. Now that seems too extreme

41:02

perhaps in the other

41:04

direction that given my

41:06

evidence, there's a sharp probability

41:08

I should give to it

41:10

raining tomorrow. And -- Yeah. --

41:12

if I said seventy one point two rather than

41:14

seventy one point one, I'd be wrong? --

41:16

Yes. Yeah. -- that's right.

41:18

And you'd think that different people could

41:20

disagree to some extent,

41:22

but not according to this thesis, not if

41:24

they're rational. And then you've got all the

41:26

positions in between in this

41:28

spectrum, and we could then

41:30

have discussions

41:32

about just how much credences are constrained.

41:34

Actually, this nicely brings us to the third

41:36

main interpretation. Let's call

41:40

it evidential probability. And

41:41

the thought is that

41:42

evidence does put

41:44

some constraints

41:46

on

41:47

on hypotheses or propositions

41:50

degrees of support. Let's

41:51

call them. Okay? And probabilities

41:53

measure these degrees

41:56

of support. So for example, maybe given your

41:58

evidence, it's very

41:58

probable that the

41:59

sun will rise tomorrow. Yeah.

42:02

And then we might say the evidential

42:04

probability of

42:06

Sunrise given your evidence is

42:07

high. So I haven't quite tracked that. Yeah. How is this

42:10

different than the previous

42:12

ones? Yeah. So the previous

42:14

one, we started with just subjective

42:16

probability. Yeah. And notice

42:18

how we

42:18

sort of morphed into the third one,

42:20

But first, we we said we had the

42:21

permissive radical subjectivism that

42:24

said anything goes — it's up to

42:26

you. Yeah. Your choice. Believe whatever you

42:28

like. Yeah.

42:30

Your credence just run with

42:32

it and then update it appropriately. And that definitely

42:35

was not evidential probability.

42:38

And then probably when we get

42:40

to the other extreme, the

42:42

uniqueness thesis, then it does

42:43

seem that that would be some something very

42:46

much like evidential

42:47

probability that would would

42:49

constrain you. And then there are the positions

42:51

in between. So I I don't wanna make

42:53

a sharp division between

42:55

subjective

42:55

and evidential probability. Really,

42:58

it's a spectrum and this

43:00

corresponds to what you

43:01

might call subjective Bayesianism

43:03

or more objective Bayesianism

43:05

and degrees of that as we move

43:07

down the spectrum? Yeah. Okay. So

43:09

so the spectrum between, like, the

43:11

subjective Bayesianism and this

43:13

kind of evidential view. So, like, where you can

43:15

believe anything versus where you have to believe, like, one specific thing. Both of them feel

43:18

pretty terrible in their

43:20

own way. And you're saying, like,

43:22

maybe the most plausible thing or the most intuitive thing.

43:24

It's gonna be somewhere in the middle where it's like, you could there's

43:26

a range of stuff that you can believe that's not stupid.

43:30

Yeah. Yes. But you can, like, go too far. And it's also not just, like, one specific thing, where you're,

43:32

like you're an idiot if you don't believe the exact

43:34

truth. Yeah. I guess, whenever you have a spectrum like

43:36

that, it's gonna feel a little

43:38

bit arbitrary. a point you draw on there.

43:40

So, like, how would we even begin to know, like, what what is the correct point along that spectrum? Yeah. Well,

43:42

we start adding some constraints

43:44

to to the radical

43:47

subjectivism. Maybe for

43:48

example, the principle of indifference. You might

43:51

think that's a kind of

43:54

evidential probability. Can you explain that? Yes. Okay. When your

43:56

evidence bears symmetrically

43:58

on a range of

43:59

possibilities, not favoring

44:02

any one over the others, then

44:04

you should give equal probability to those cases. It's most

44:07

plausible in gambling cases

44:09

like a toss of a coin: heads,

44:12

tails. My evidence doesn't favor heads over tails.

44:15

It seems I should give half

44:17

half to those possibilities. And

44:20

then we can complicate that and refine

44:23

it to avoid

44:23

some problem, and then say, well,

44:25

either you have no

44:26

evidence So you should

44:29

give equal probability to the cases or you do have

44:31

evidence that

44:31

bears symmetrically on them and you

44:33

should give equal

44:34

probabilities. Now a

44:36

big discussion could be had. There there

44:39

are meant to be various serious problems. There are

44:41

serious problems with the principle of indifference

44:43

as I just stated it. and

44:45

various people think it's bankrupt

44:48

because of these problems.
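
A minimal sketch (an illustration of mine, not something from the episode) of the principle of indifference as just stated: when the evidence bears symmetrically on n mutually exclusive possibilities, give each of them probability 1/n. The function name and the example possibilities are assumed for illustration.

```python
# Illustrative sketch of the principle of indifference: with evidence bearing
# symmetrically on n mutually exclusive possibilities, give each probability 1/n.
from typing import Dict, List

def indifference_probabilities(possibilities: List[str]) -> Dict[str, float]:
    """Assign equal probability to each of the listed possibilities."""
    n = len(possibilities)
    return {p: 1.0 / n for p in possibilities}

if __name__ == "__main__":
    print(indifference_probabilities(["heads", "tails"]))               # 0.5 each
    print(indifference_probabilities(["door 1", "door 2", "door 3"]))   # 1/3 each
```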

44:49

Yeah. But it's funny how

44:52

people often reach

44:53

for it intuitively, even

44:56

when they've disavowed it a minute

44:58

ago, for example, in the Monty Hall

45:00

problem, may maybe you know that. Yeah.

45:02

You know? behind one of

45:04

three doors, there's some prize, what probability

45:06

should you give to

45:09

it being behind a particular

45:11

door? And then the problem continues. Now

45:13

no one

45:14

says, oh, I give probability

45:17

one seventeenth to behind

45:20

door number one because I'm just unconstrained,

45:22

and that's what I feel like. And

45:24

And not one seventeenth — you feel the

45:27

pull of the principle of indifference.

45:29

I should give a third, even if you've just a

45:31

moment ago, disavowed the

45:32

principle of indifference. So it seems like

45:34

in some restricted cases, at least,

45:38

Yeah. It

45:38

has some pull. But I suppose we've been talking here

45:40

about, like, what you should believe, and this is, like, smuggling

45:42

in this idea that there is, like, anything that you

45:45

should believe. We've, like, practically brought

45:47

in, like, ethics — like, some ethical considerations — almost into, like, what

45:49

you should believe. And and and maybe that'd be too much

45:51

to get into here. This we

45:54

may be getting into ethics later

45:56

too, which is not my area

45:58

of specialization, but I'm always happy

46:00

to try. Okay. With that kind of

46:02

scene setting about what probability might be out of

46:04

the way. Real quick, I wanna ask, you know, me and

46:06

my friends, when we're like hanging out, when

46:08

we're chatting, watching the news or talking about events in the world. You know, I might say something

46:10

like I think that's like an eighty percent chance that Boris

46:12

Johnson is gonna resign in the next week or

46:14

two. And someone else would say no,

46:16

I think seventy percent, and then we, like, go

46:18

and check the news, and, like, there's movement toward him resigning, and we're, like, okay, I think it's, like, eighty five percent now.

46:20

Yeah. How are you doing? How

46:22

are you doing right now? Yeah. Okay.

46:26

But are we doing the right thing? Is is this like the correct thing to do?

46:28

Or are we just playing some game that's like fun

46:30

for us and is like not really any better

46:32

than any other approach? or or at

46:35

least like not like uniquely privileged as the correct approach. Yeah. So I am

46:37

a big fan of Bayesianism. I guess

46:39

I

46:39

say on Mondays,

46:42

Wednesdays, Fridays, I

46:43

call myself a Bayesian; Tuesdays,

46:46

Thursdays, Saturdays, I'm not so

46:48

sure. Today's Saturday. Today's

46:50

Saturday. Okay. Maybe I'm not

46:52

so sure. But look, I think there's something right about

46:54

this. We have these

46:56

degrees of confidence. I think

46:58

we have to allow this. It's not just

46:59

all or nothing belief

47:02

It does seem there are better and worse ways to

47:04

manage these degrees of

47:06

confidence. There are various good reasons

47:09

to think that they should obey probability theory. We may talk

47:12

about that Dutch books, for

47:14

example. There are ways

47:14

that they should update, so called

47:18

conditionalization. And we can then have disputes about how much

47:20

constraints there should be on

47:22

the priors that you have we

47:25

we had a bit of that discussion before. Yeah. But it it

47:28

seems like a very good model and it's

47:30

very fertile too. Lots of

47:32

good things are closely

47:34

related to this. Maybe soon we'll talk about decision

47:36

theory, another thing that I'm fond

47:38

of, and probabilities figure

47:40

centrally in that. And so I

47:41

think it's a good good place to

47:44

start. Hey,

47:44

listeners. One more definition. The

47:46

term

47:46

Dutch book is from

47:49

gambling. It's a set of odds or bets that when created

47:51

by a bookmaker, ensure that the bookmaker is

47:53

going to make a profit. Now, how is

47:55

that relevant to decision procedures and

47:58

beliefs and and all of that kind of

48:00

thing. So

48:00

one weakness that set

48:02

of ideas about credences or

48:05

decision making could have is

48:07

that someone could be presented with a series of bets that they could be

48:09

offered to make or a series of trades that they

48:11

could be offered to make. that

48:14

would cause them to just constantly get worse off and worse

48:16

off. Basically, they have less and less money until

48:18

they're bankrupt. How could that happen?

48:20

So

48:21

normally, in philosophy, a standard thing is

48:24

that with preferences say, is that you would

48:26

assume transitivity. So if you think a

48:28

is better than b, and you think b is

48:30

better than c, then you should also

48:32

think that a is better than c.

48:34

Now you might

48:34

want to say, I don't want to buy this

48:36

idea of transitivity. I want to think that

48:38

I could believe that a is better

48:40

than b that b is better than c,

48:42

but also that c is better than a. So

48:45

you have kind of a circle here where you'd be

48:47

willing to give up c in exchange for b.

48:49

you'd give up b in exchange for a, but you'd also

48:51

give up a in exchange for c. But one

48:53

of the reasons why giving up this

48:55

principle of transitivity isn't very

48:57

desirable is that if you have circular preferences like

48:59

that, then it's possible for someone to just

49:01

keep trading with you. So if you think b

49:03

is better than c, then you'd be willing to

49:06

give up c and potentially pay

49:08

a bit of extra money in order to get

49:10

B. But you can see that someone could just

49:12

keep trading you, b for c, a

49:14

for b. c for a

49:16

and so on and so on and so on in a

49:18

circle until you were left with no money. So

49:20

you see, you might end up with c again, but then you've

49:22

lost all the other things that you traded as we were

49:24

going through this circle. where each time you thought things were getting better, but then you just end

49:26

up back where you started. So when

49:28

philosophers say that a set of ideas would allow

49:30

you to be

49:32

Dutch booked, they're saying that they're inconsistent, such that someone could

49:34

in principle offer you a series of exchanges

49:36

that you would all accept because you

49:38

think that they are good

49:41

because individually, they look good, but then

49:43

collectively, they would clearly just leave you

49:45

worse off, potentially just bankrupt. Okay.
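
To make the money-pump point concrete, here is a rough sketch (mine, not from the episode) of an agent with exactly the cyclic preferences just described, who pays a small fee for each "upgrade" and goes broke lap by lap. The fee and lap counts are arbitrary illustrative numbers.

```python
# A minimal money-pump sketch: an agent who prefers b to c, a to b, and c to a,
# and will pay a small fee for each locally attractive swap, loses money every lap.

def run_money_pump(fee_per_trade: float, laps: int) -> float:
    """Return total cash change for an agent with cyclic preferences a > b > c > a."""
    holding = "c"
    cash = 0.0
    # If you hold the key, you'll pay the fee to swap to the value.
    preferred_swap = {"c": "b", "b": "a", "a": "c"}
    for _ in range(laps * 3):          # three trades per full lap around the cycle
        holding = preferred_swap[holding]
        cash -= fee_per_trade          # each swap looks like an improvement locally
    return cash

if __name__ == "__main__":
    # After 10 laps the agent holds c again, exactly where it started,
    # but is 30 * fee poorer -- a guaranteed loss, i.e. a Dutch book.
    print(run_money_pump(fee_per_trade=1.0, laps=10))   # -30.0
```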

49:47

Back

49:47

to the show. Yeah. Go on.

49:55

But for some of

49:57

the week, you're not into Bayesianism -- Yeah. -- or not into, like, this approach. Yeah. Okay. Let's

49:59

do that one first. Like, what what are the reservations

50:01

that you have? Yeah. We

50:04

start with some

50:06

axiomatization of probability, and

50:08

Kolmogorov is the

50:10

standard. Okay? And

50:14

he axiomatized unconditional probability first, and

50:16

then conditional probability is defined in

50:18

terms of unconditional probabilities. The

50:20

probability of a given b

50:22

is a

50:24

ratio: the probability

50:25

of A and B over the probability of B,

50:26

assuming that the probability of

50:28

B is positive.
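
For readers who want the ratio formula Alan is describing written out, this is the standard textbook statement, rendered here for reference rather than quoted from the episode:

```latex
\[
P(A \mid B) \;=\; \frac{P(A \wedge B)}{P(B)}, \qquad \text{provided } P(B) > 0 .
\]
```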

50:31

Actually, I think it should be the other way around. I think conditional

50:34

probability is the basic notion,

50:36

the fundamental notion. I think

50:38

there are problems

50:40

with Kolmogorov's axiomatization,

50:42

as I've just put it, he did have

50:44

further subtleties, which I can't get into

50:48

here. But two kinds of problem. Remember I had this

50:50

proviso provided that the

50:52

bottom line of that ratio is positive. You

50:54

can't divide by zero. But here's

50:56

now another problematic

50:58

feature of the standard theory with

51:01

real valued probabilities. Yeah. It

51:03

seems that probability zero

51:05

events can happen

51:08

Now

51:08

that's not intuitive. We normally think if something

51:10

can happen, it

51:11

must have positive probability. Yeah. Or

51:13

what what's an

51:16

example? Yeah. Okay? Think, for example, of the

51:18

radium atom

51:19

decaying exactly

51:21

at noon tomorrow,

51:24

say, what's the probability

51:26

of that? On the standard theory

51:28

where it's real valued probabilities,

51:30

in this case, an exponential

51:34

decay law, The

51:34

probability of it decaying exactly at

51:36

noon is zero. Now

51:38

it decaying in

51:39

some infinitesimal region

51:42

around noon, that could be something more. But

51:45

noon itself, that

51:47

point gets probability

51:48

zero

51:49

or or throw a dart at a dartboard.

51:52

What's the probability that it hits

51:54

randomly, that it hits the

51:56

exact center?

51:58

Yeah. Okay. So I really didn't do much math at university. This

52:00

is this is this is this classic thing. So imagine

52:02

that we're gonna randomly choose a real number

52:04

or, let's say, any number between zero and

52:06

one. the probability of any

52:08

being any specific one of them -- Yep.

52:10

-- any specific number is zero because there's,

52:12

like, uncountably infinitely many numbers. Indeed,

52:14

I think the probability of you picking like

52:16

randomly picking a number that could ever even be written down is zero.

52:19

Right? Yeah. Well, you think it's computable? Yeah. And

52:21

it's not computable. Right? Yeah. And there are only

52:23

countably many of those. Okay.

52:25

Okay. So you're saying each of these, like, specific

52:27

numbers has a probability of being chosen of zero or

52:29

any any specific example that you pick a zero. And yet, we know

52:31

that one of them will be chosen. Yes. Okay.
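
A rough illustration (my own numbers and code, not Alan's) of the "uncountable lottery" point: under a uniform draw from the interval from zero to one, any exact value named in advance essentially never comes up, while an interval around it comes up roughly in proportion to its width — and yet some value is always drawn.

```python
# Brute-force illustration that an exact point has probability zero under a
# continuous uniform distribution, while intervals have probability ~ their width.
import random

def frequency_of_exact_hit(target: float, trials: int) -> float:
    """Estimate P(X == target) for X uniform on [0, 1]."""
    hits = sum(1 for _ in range(trials) if random.random() == target)
    return hits / trials

def frequency_in_interval(centre: float, width: float, trials: int) -> float:
    """Estimate P(|X - centre| < width/2); this is roughly the width itself."""
    hits = sum(1 for _ in range(trials) if abs(random.random() - centre) < width / 2)
    return hits / trials

if __name__ == "__main__":
    print(frequency_of_exact_hit(0.5, 200_000))                  # essentially always 0.0
    for width in (0.1, 0.01, 0.001):
        print(width, frequency_in_interval(0.5, width, 200_000))  # roughly equal to width
```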

52:34

So yeah. Okay. So it's a bit like the lottery

52:36

paradox. It is. It's like an uncountable

52:38

lottery paradox. And this is a

52:40

puzzling feature. And

52:42

there is this this rather strong intuition.

52:44

Come on. If if it can happen,

52:46

it should get some positive probability.

52:48

We want to distinguish it from

52:50

an impossible event and

52:52

it seems the way to do that is, well, it's got greater probability. And

52:54

actually, this is another way in

52:57

which we might go

52:58

away from this

53:00

numerical Kolmogorov axiomatization. You

53:03

might say more fundamental is

53:06

comparative probability. And now the

53:08

primitive is something like x is

53:10

more

53:10

probable than y or maybe

53:12

better still

53:13

w given x

53:15

is more probable

53:16

than y given z or

53:18

at least as probable, and now

53:20

make that the fundamental notion

53:23

and then

53:23

we can maybe recover the numerical

53:26

theory out of certain axioms and so

53:28

on on those fundamental

53:30

comparative probabilities. Yeah. hey, here

53:32

with another definition. We just use the term

53:34

primitive, a philosophical primitive.

53:36

In philosophy,

53:37

a primitive notion is

53:39

a concept that is kind of fundamental

53:42

and can't be broken down and defined in terms

53:44

of other previously defined

53:46

concepts.

53:46

In geometry, for example, a

53:48

point or a line or contains or

53:50

potentially space might be primitive notions that

53:53

one can't really break down and define in terms

53:55

of other notions. They're kind of the

53:58

starting point. Okay. Back

53:58

to the show. I'm just thinking about the the

53:59

the, like, choosing a specific number case. Okay. So so it's

54:02

it's actually quite a bad situation because

54:04

you're saying, So normally,

54:06

with Bayesianism, we're thinking we've got a prior, we

54:08

got like a prior probability that any number be chosen,

54:10

and then we update a lot based on the evidence.

54:12

Yeah. But because our prior and any any number is gonna

54:14

be zero. And when you, like, multiply anything by

54:17

zero, it's still zero. So so

54:19

random number gets chosen, you're like, you have

54:21

your prior, you observe the number, and you still think

54:23

it wasn't chosen, because it was impossible for any number to be chosen,

54:25

whatever it was. Well, it's like, quite bad. Impossible

54:27

in this probabilistic sense, and that's

54:29

the trouble that and it

54:31

seems that probability, so to speak,

54:34

has blurry vision. It does not

54:36

distinguish between the genuine impossibilities

54:38

-- Yeah. -- and these very

54:40

unlikely, but in some good

54:42

sense possible cases. And this

54:44

is related to my

54:46

worry about the ratio formula, again, you divide

54:48

by zero. Now what happens

54:50

if you learn in this

54:53

experiment where you randomly

54:55

choose a real number from say zero

54:57

one. You learn its value, but it's something

54:59

you antecedently gave

55:02

probability zero. If you just

55:04

looked at the ratio

55:06

as standardly understood as

55:09

involving

55:09

a ratio of real

55:11

numbers, it would be zero divided

55:12

by zero — it was probability zero that you'd get that particular

55:14

number. Now how do you update on

55:16

that? How do you, as we say, conditionalize

55:18

on something you originally gave?

55:21

probability zero. Now I have to

55:23

say that this quickly gets us into

55:25

very sophisticated mathematical refinements.

55:28

Kolmogorov himself was well aware of this problem

55:30

and he had sophistication. People bring on

55:32

hyperreal numbers. They bring on

55:33

richer number systems that are meant

55:35

to preserve this

55:38

intuition that if something's possible, it'd it'd better have positive

55:40

probability -- Yeah. -- and we can

55:42

let that debate unfold. But it's

55:44

it's

55:44

just to say, now this is Saturday,

55:47

you know, I'm saying, I'm just

55:49

trying to bring out there are various problems

55:51

with just this simple statement. It seems

55:54

straightforward: just

55:56

follow Kolmogorov. Probabilism says

55:57

your probabilities should

55:59

obey these axioms. Well, it's

56:02

more complicated. Should

56:03

we take conditional probability as primitive?

56:06

Do we have to worry about the probability

56:08

zeros? Do we enrich the

56:10

number system beyond

56:11

the reals? Or do we take comparative

56:14

probabilities as basic? Lots of

56:16

debates to be had. Yeah. That this might be

56:18

a stupid frivolous question. But is it is it

56:20

maybe that — I guess it's not

56:22

possible — is it possible to have a

56:24

probability that's like a negative

56:26

probability or like a probability above one

56:28

or like Well, I'm just thinking, you know, obviously, we have imaginary

56:30

numbers now, like, off the real number line. Yes.

56:32

Like, is is any of this coherent?

56:34

Yeah. Well, some people

56:36

think so. Feynman thought there could be

56:38

negative probabilities.

56:40

And, of course, you're not obeying

56:43

Kolmogorov then. The Kolmogorov

56:45

axiomatization says thou shalt have non-

56:47

negative probabilities. There

56:49

are some thoughts

56:52

along these lines that there's something that behaves probability

56:54

like some function that seems

56:56

to have probabilistic properties. And

57:00

then you see that it sometimes goes negative. There are meant to be some

57:02

physical examples of this. And then you

57:04

say, well, those are the negative

57:07

probabilities. There are some problems with

57:09

this,

57:09

just to take one,

57:12

consider the

57:13

usual formula for independence. Two

57:15

events A and

57:15

B are independent, just in

57:18

case the probability of A and

57:20

B equals the probability of A

57:22

times the probability

57:24

of B. now

57:24

suppose you have independent negative probability

57:27

events, their

57:29

product, something negative times,

57:31

something negative. Yeah. is

57:34

positive. Alright. Yeah. That seems to be

57:36

a problem if if you think the event

57:38

itself and maybe the conjunction of

57:40

it with something else has negative

57:42

probability. How does independence pan out?
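
Spelling out the arithmetic behind that worry — the minus-one-quarter figures below are illustrative numbers of mine, not from the conversation: two independent events that each had negative probability would have a positive probability of occurring together.

```latex
\[
P(A \wedge B) = P(A)\,P(B)
\quad\text{so, e.g.,}\quad
P(A) = P(B) = -\tfrac{1}{4} \;\Rightarrow\;
P(A \wedge B) = \left(-\tfrac{1}{4}\right)\!\left(-\tfrac{1}{4}\right) = +\tfrac{1}{16}.
\]
```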

57:44

Yeah. Huge, you know,

57:46

issue we could get into there too. Yeah. Are

57:48

there any other downsides to

57:50

Bayesianism that are important or weigh on you at all? Here's

57:52

one thing that just

57:53

bothers me a bit, and I'll throw

57:55

it out there. As a

57:57

slogan, I'll say, subjective

57:59

Bayesianism is anchoring

57:59

and adjustment, and I

58:01

need to explain what I mean

58:03

by that. Anchoring and

58:05

adjustment is a heuristic that people

58:07

often use when estimating some

58:10

quantity, they're given the so called

58:12

anchor some starting point for

58:14

thinking about the value of

58:16

that quantity and then they adjust until they reach an estimate

58:18

that they find plausible. The trouble

58:20

is that sometimes the

58:21

anchor is in highly

58:24

irrelevant to the quantity, and it just

58:26

should be ignored, yet it

58:28

still influences the

58:30

final estimate. the adjustment

58:32

is insufficient. And there are a couple of

58:34

classic examples, which I can give you. Tversky

58:36

and Kahneman had a famous study.

58:39

They asked people to watch the

58:41

spin of a roulette wheel, which was rigged to

58:43

land

58:43

on either ten or

58:46

sixty five. And

58:48

then they were asked whether the percentage of African countries

58:50

in the United Nations was

58:53

higher or lower

58:56

than

58:56

the number that they saw. And then they were asked

58:58

to estimate the percentage. Okay.

59:00

Those who saw a low number

59:03

tended

59:03

to give substantially lower estimates

59:06

for the percentage than those who

59:08

saw a high number.

59:10

And look, of course, they knew

59:12

that the roulette number, the anchor,

59:14

provided no information whatsoever

59:16

about the percentage, yet it still

59:18

influenced their estimate. And

59:20

that

59:20

just

59:22

seems absurd. That's just seems crazy.

59:24

There's another famous

59:25

study. It was Ariely and co.

59:27

They asked MBA students

59:30

at MIT to write

59:32

down the last two digits of

59:34

their Social Security number. And

59:36

then they were

59:36

asked whether they would pay this

59:38

number of dollars for some product

59:40

say, a bottle of wine box of fancy

59:43

chocolates and so

59:43

on. And then they were asked, what

59:46

was the maximum amount they were willing

59:48

to pay for

59:50

the product. Those who wrote down higher

59:53

two digit numbers were

59:54

willing to pay substantially more

59:58

And of course, they knew that their Social Security number

59:59

was completely uninformative

1:00:02

about the value of the product, but still

1:00:04

they anchored on it and

1:00:07

it influenced their final valuation. Okay. So the

1:00:09

idea is that the residue of the

1:00:11

anchor remained even after the

1:00:13

adjustment of thinking well,

1:00:15

how valuable is this product really? Okay.

1:00:18

Now these seem to be paradigm cases

1:00:20

of irrationality. Okay. But

1:00:22

now consider consider a putative

1:00:24

paradigm of rationality,

1:00:27

subjective

1:00:27

Bayesianism. Okay? Here

1:00:28

you start with a prior. That's

1:00:32

your initial probability distribution

1:00:34

before you get any information. And

1:00:36

the only constraint on this

1:00:38

is that it obeys the probability calculus.

1:00:41

That that's the version

1:00:43

I'm thinking of.

1:00:44

Okay. That's your anchor. Your

1:00:45

prior is your anchor. And

1:00:47

then you get some information and you

1:00:50

update by conditionalizing on

1:00:52

it, as we say. So your

1:00:54

new probabilities are your old

1:00:56

probabilities conditional on that

1:00:58

information? That's your

1:00:59

adjustment. But the trouble

1:01:01

is that your prior

1:01:03

has no evidential value. It's it's

1:01:06

not based on any

1:01:08

information and you know this. That's what

1:01:10

makes it a prior. and its

1:01:12

residue remains even

1:01:14

after the adjustment often.
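
For reference, the update rule being described — conditionalization — is standardly written like this (my rendering of the textbook form, not a formula displayed in the episode): after learning evidence E, your new credence in a hypothesis H is your old credence in H given E.

```latex
\[
P_{\text{new}}(H) \;=\; P_{\text{old}}(H \mid E) \;=\; \frac{P_{\text{old}}(H \wedge E)}{P_{\text{old}}(E)},
\qquad P_{\text{old}}(E) > 0 .
\]
```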

1:01:16

Now we can imagine that your

1:01:19

prior was even determined by the spin

1:01:21

of a roulette wheel or by your Social Security

1:01:23

number. As long as it obeys

1:01:25

the probability calculus, and

1:01:28

still it influences your final probabilities, your posterior

1:01:31

probabilities as we say. And now the

1:01:33

worry is, why isn't that

1:01:36

just as absurd as before. We were laughing

1:01:38

at the people in the

1:01:40

African countries in the United Nations experiment or

1:01:42

the wine and chocolate experiment.

1:01:46

what's the relevant difference? And look, there are

1:01:48

things that one can say, but I

1:01:50

just put that out there as something

1:01:52

that that needs some attention. I

1:01:55

think

1:01:55

in theory, if you get a lot of empirical

1:01:58

information over time, then the

1:01:59

influence of your

1:02:02

starting point. becomes more and more relatively irrelevant over time as this

1:02:04

kind of washed out by all these updates you're

1:02:06

making, you know, conditionalizing on things that you've

1:02:08

observed. So there's there's one thing, at least

1:02:10

if you you're around long enough and

1:02:12

collecting enough evidence to to move away

1:02:14

from the very first prior you started

1:02:16

with. I guess another difference that that I

1:02:18

imagine is that I think in theory you're meant to

1:02:20

start with something like an uninformative prior, which is

1:02:22

just to say a prior that is extremely

1:02:24

agnostic. I mean, like, isn't really pretending to

1:02:26

know all that much.

1:02:28

Now, I think it's a bit

1:02:30

hard to define exactly what is the appropriate, uninformed prior, but the hope is that

1:02:32

your views are gonna be very flexible

1:02:34

initially because it would be

1:02:38

presumptuous — it would be foolish to think that before you look at any evidence, you

1:02:40

should have a really strong view about things. So maybe that's

1:02:42

one way in which the prior case

1:02:45

is well, the priors should hopefully be doing not that

1:02:47

much work. Even if we do concede that it

1:02:49

is largely arbitrary. Yep.

1:02:50

Excellent. Both good replies. The

1:02:53

first one was the washing

1:02:55

out of the priors in the long run. And

1:02:57

there are these famous convergence

1:03:00

theorems for Bayesianism that

1:03:02

in the limit, under certain

1:03:04

appropriate conditions, effectively the

1:03:07

prior is completely washed out.

1:03:09

And that's good. But

1:03:10

then,

1:03:11

as Keynes said, in the long run we'll all

1:03:13

be dead. And at

1:03:16

any point in our

1:03:18

finite lives, we will

1:03:20

not have reached that limit.

1:03:22

And the worry is that

1:03:24

the residue of the prior will remain

1:03:27

at all of these finite stages, which

1:03:29

is all we ever have.
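
A small simulation of the "washing out" idea — my own sketch, Alan doesn't give one: two agents with very different priors over a coin's bias update on the same tosses; their posterior means drift together, but at every finite stage some residue of the prior remains. The Beta-prior parameters and the true bias are assumed for illustration.

```python
# Two agents with opposed Beta priors over a coin's bias, updated on shared data.
import random

def posterior_mean(prior_heads: float, prior_tails: float, heads: int, tails: int) -> float:
    """Mean of the Beta(prior_heads + heads, prior_tails + tails) posterior."""
    a = prior_heads + heads
    b = prior_tails + tails
    return a / (a + b)

if __name__ == "__main__":
    random.seed(0)
    true_bias = 0.5
    heads = tails = 0
    for n in (10, 100, 1_000, 10_000):
        while heads + tails < n:
            if random.random() < true_bias:
                heads += 1
            else:
                tails += 1
        optimist = posterior_mean(9, 1, heads, tails)    # prior heavily favouring heads
        pessimist = posterior_mean(1, 9, heads, tails)   # prior heavily favouring tails
        # The gap shrinks with n but never quite vanishes at any finite stage.
        print(n, round(optimist, 3), round(pessimist, 3), round(optimist - pessimist, 3))
```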

1:03:31

Okay? So it's nice to know that there are

1:03:33

these theorems that we'll get there in the end,

1:03:35

but the problem is we're never at the

1:03:37

end. And regarding the second reply, and that that's certainly a good one.

1:03:40

Yes. I think actually that's the way to go that you

1:03:42

want some sort

1:03:44

of constraints beyond the

1:03:46

probability calculus on the

1:03:48

priors. And as you said,

1:03:51

maybe uninformative priors as we might

1:03:54

say, maybe there are some

1:03:56

constraints on what the priors should be,

1:03:58

not just anything goes. And in fact,

1:04:00

there's the so called uniqueness thesis that

1:04:02

says, no, there's even exactly one.

1:04:04

There's only one rational starting

1:04:06

point, which is somehow, you know,

1:04:09

maximally uninformative, you might say,

1:04:11

For example, the principle of indifference might

1:04:13

kick in at that point. And now we

1:04:15

get into a good debate about

1:04:17

just how constrained priors should

1:04:19

be. So anyway, I raised the anchoring and adjustment worry

1:04:22

as a concern for that completely

1:04:26

unconstrained version of

1:04:28

Bayesianism that just said, thou shalt obey

1:04:30

the probability calculus. But beyond

1:04:32

that, it's okay, whatever you do.

1:04:35

it seems that you need to be a

1:04:37

bit more constrained than that. And then as

1:04:39

you say, the analogy to

1:04:41

anchoring and adjustment starts to go away.

1:04:43

For example, we can't just spin a roulette

1:04:46

wheel or we we can't just

1:04:48

get your Social Security number and

1:04:50

make that

1:04:52

your prior. That's not the right starting

1:04:54

point. We've only got about five minutes left

1:04:56

on stage. So I wanted to give you a chance to

1:04:59

run through for the audience a few criticisms of frequentism, which is

1:05:01

always fun. So firstly, like, what is frequentism? And

1:05:03

secondly, like, what are some of the problems that we haven't yet mentioned

1:05:05

with it? Okay. Frequentism.

1:05:08

Remember my taxonomy of

1:05:10

objective probability, subjective probability,

1:05:13

evidential probability, frequentism would be

1:05:15

a version of objective. probability

1:05:17

out there in the world, mind

1:05:20

independent, there are these

1:05:22

probabilities. What are they? They're

1:05:24

relative

1:05:24

frequencies. Mhmm. To take a simple case

1:05:27

what's the

1:05:28

objective probability

1:05:29

that the coin lands heads,

1:05:31

let's suppose it's a fair

1:05:33

coin? Well, you toss

1:05:35

the coin a number of times, see how it lands, and count the

1:05:37

number of heads over

1:05:39

the grand total. if

1:05:42

that number turns out to be a half,

1:05:44

then Bingo, you've got a chance

1:05:46

of a half for heads. Yeah. And

1:05:48

and now we generalize that. So

1:05:51

in general, the

1:05:52

probability of some, let's

1:05:54

call

1:05:55

it, an attribute relative

1:05:57

to some reference class So

1:05:59

attribute

1:05:59

a relative to reference class

1:06:01

r is the relative

1:06:04

frequency of a's

1:06:06

in that reference class. makes

1:06:09

sense, I think. And I think it may be still the most popular

1:06:11

account is among scientists.

1:06:14

Yeah. It's always scientist,

1:06:16

isn't it? They're the ones that They need to

1:06:18

talk to philosophers. Yeah. Absolutely. on so

1:06:20

many things. Yeah. Sorry. Okay. So this is kind of

1:06:22

a this has an intuitive appeal, and it's a bit like how

1:06:24

you're taught about probability in high school or something.

1:06:26

Yes, that's right. Yes. And we all know

1:06:28

that there's got to be some close

1:06:30

connection between probability

1:06:31

and frequency

1:06:34

and

1:06:34

frequentism posits the tightest connection of all

1:06:36

identity. And now we could distinguish

1:06:38

two kinds of frequentism.

1:06:40

You might call it actual frequentism,

1:06:43

you just look at what actually happens. You just,

1:06:45

in fact, toss the coin some

1:06:46

number of times -- Yeah. -- count the

1:06:49

number of heads divide

1:06:51

by the total of trials done. And then we

1:06:53

could

1:06:53

go hypothetical and

1:06:54

say, well, no, I really

1:06:56

meant some long run of heads

1:07:00

maybe I didn't get a long run,

1:07:01

so I go hypothetical, I imagine

1:07:04

counterfactually a long run

1:07:06

of trials and relative

1:07:08

frequency I would get maybe in the

1:07:10

limit. If I have infinitely many trials,

1:07:12

we have two versions. And

1:07:13

you probably want to ask why I don't

1:07:15

like -- Yeah. --

1:07:17

frequentism. I think each version

1:07:18

of frequentism has the following kind of

1:07:20

problem. I think it's just built

1:07:22

into the very notion of probability

1:07:25

that fixing

1:07:26

the probability of something is compatible with

1:07:29

just any pattern and

1:07:31

any frequency of the corresponding

1:07:34

outcomes. Let's let's do it. Yeah. Yeah. Let's

1:07:36

do it for the coin toss. Let's suppose we

1:07:38

have a fair coin by which

1:07:40

I

1:07:40

mean the chance of heads is a half.

1:07:42

I

1:07:43

say that's compatible with any distribution of

1:07:45

heads and tails, including heads on

1:07:47

every toss, tails on

1:07:50

every toss, and everything in

1:07:52

between. And I can even tell you what the

1:07:54

probabilities of those outcomes are.

1:07:56

You know, it's one over two to the n

1:07:58

for n trials for each

1:08:00

exact sequence. But

1:08:02

it seems you can't say that

1:08:04

if you're a frequentist because

1:08:06

if the coin lands heads every

1:08:08

time, then that's probability one according to

1:08:10

frequentism. So there are various problems. We we

1:08:12

talked about the the problem of the single

1:08:15

case -- Yeah. -- earlier. there's

1:08:17

the problem of the double case. If I

1:08:19

toss a coin twice, then I can only have probabilities

1:08:22

of zero, a half,

1:08:25

and one. Offhand, I would have thought there could be other biases. And

1:08:27

as I like to

1:08:30

point out, it turns

1:08:32

out that

1:08:34

you cannot toss a fair coin

1:08:36

an odd number of times

1:08:39

according to frequentism, at

1:08:41

least this actual frequentism. because just

1:08:43

by definition according to them, if it's

1:08:45

an odd number of times, you can't

1:08:47

have exactly a

1:08:50

ratio of half of heads. Yeah. That

1:08:52

just doesn't seem – it seems pretty

1:08:54

weird. That seems pretty, pretty weird.
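
A quick check of the actual-frequentist point — my own illustration, not from the episode: a coin that is fair by stipulation can produce all sorts of relative frequencies, and with an odd number of tosses the frequency literally cannot be one half.

```python
# Simulated tosses of a fair coin: the relative frequency of heads varies,
# and for odd n it can never be exactly 0.5.
import random

def relative_frequency_of_heads(n_tosses: int) -> float:
    heads = sum(1 for _ in range(n_tosses) if random.random() < 0.5)
    return heads / n_tosses

if __name__ == "__main__":
    random.seed(1)
    for n in (7, 10, 11, 100, 101):
        freq = relative_frequency_of_heads(n)
        print(n, freq, "exactly one half" if freq == 0.5 else "not exactly one half")
    # With n odd (7, 11, 101) the frequency cannot equal 0.5, so an actual
    # frequentist would have to deny that this stipulated-fair coin is fair.
```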

1:08:57

You'll also have I guess, a version of the the

1:09:00

gambler's fallacy. Suppose you

1:09:02

somehow know that

1:09:03

the

1:09:04

chance of heads is

1:09:06

a half of this coin and you

1:09:09

start tossing the coin and you

1:09:11

see a surprisingly long run

1:09:13

of heads as could happen. Yeah.

1:09:15

But if you know that somehow God tells you that

1:09:17

the chance of heads is a half,

1:09:20

you know that tails must be coming.

1:09:22

Oh, right. because they've got to make

1:09:24

up for the run of

1:09:26

heads that's happened so far. Again, that was -- Yeah. -- actual frequentism. Now let's go

1:09:28

to the other kind,

1:09:31

the hypothetical frequentism. Yeah. But

1:09:35

now we have some counterfactual. This is not what actually happens. If

1:09:37

you were to toss the coin some large

1:09:39

number of times, maybe

1:09:42

infinitely many times, this

1:09:43

is what would happen. I think

1:09:45

for that, this is a

1:09:47

very weird counterfactual.

1:09:49

Imagine tossing an actual

1:09:51

coin like the

1:09:51

twenty cent coin in my pocket infinitely

1:09:54

many times. What would that even mean,

1:09:57

you know, that this coin would would

1:09:59

disintegrate before

1:09:59

then. So so we'd have to violate the

1:10:02

laws of nature so that

1:10:03

it would survive infinitely

1:10:06

long. I think it's very strange to think there's a fact of

1:10:08

the matter of exactly how the the

1:10:10

coin would land in this infinite

1:10:14

sequence and even what its frequencies

1:10:15

would be. Yeah. So you're trying to make this

1:10:17

like

1:10:17

a very practical theory — that's kind of its

1:10:20

virtue — but then it

1:10:22

seems to involve a bizarre scenario that could never happen

1:10:24

to to back it up. Yeah.

1:10:26

Exactly. And I'll give actual frequentism

1:10:28

credit. At least, it was I

1:10:30

suppose practical anchored in the

1:10:33

world, gave you numbers that you

1:10:35

could ascertain. Yeah. Now when when the

1:10:36

problems start piling up and I gave

1:10:39

you a few of them, You start retreating

1:10:40

to this other version, hypothetical frequentism, but

1:10:43

this seems to have other problems. It's

1:10:45

not practical. It's

1:10:48

not ascertainable. I

1:10:48

can't make sense of these counterfactuals. They seem

1:10:51

to be about the wrong thing about some idealized coin that would

1:10:53

never disintegrate and

1:10:55

what have you

1:10:56

what have you and

1:10:57

so put it all together and I

1:10:59

think frequentism is pretty bad. I know there are lots

1:11:02

of frequentists out there. Read

1:11:05

my papers, the fifteen arguments against finite frequentism and fifteen arguments against

1:11:08

hypothetical -- Yeah. -- frequentism. I

1:11:10

I think it's in a bit of

1:11:12

trouble. Alright.

1:11:14

Well, we have to go upstairs because other people need the stage here. But,

1:11:17

yeah, we're gonna go chat about ways of

1:11:19

breaking expected value, counterfactuals, and

1:11:21

your many objections to

1:11:24

objective utilitarianism. But for now,

1:11:26

can everyone give a round of applause to Alan? Thanks so much.

1:11:34

Alright. We're back

1:11:36

in your office, Alan. So I I was just

1:11:38

saying before we started recording that. In general,

1:11:40

I have a prejudice against doing interviews

1:11:43

in front of live audiences. because my my experience

1:11:45

listening to other shows is that guests tend to kind of pander to the crowd

1:11:47

a little bit, especially if they're talking about, like, charged issues. They

1:11:49

don't have the courage to

1:11:51

say things the audience wouldn't like. I

1:11:53

hope I didn't pander too much. No, no. I think, fortunately, we're mostly safe on this topic, except

1:11:55

perhaps for frequentists, although -- Right. -- you

1:11:58

probably have, like, an unusually sympathetic

1:12:00

crowd for criticisms

1:12:02

of frequentism. Hopefully, yes. Cool. But now

1:12:04

we can do the really hard hitting critical challenging

1:12:07

stuff. Right — bring on the trouble. I'm

1:12:09

ready. Oh, okay. So before we came in

1:12:11

here, we were chatting about criticisms of frequentism.

1:12:13

I guess one that you didn't bring

1:12:15

up, that jumps to mind for me. is

1:12:17

it seems very odd that frequentism is saying, like, okay, we've got like

1:12:19

one particular coin and we're saying

1:12:22

like, what is the

1:12:24

chance? that this coin is gonna land

1:12:26

heads or tails. It seems like it depends then on like all these other coins or like the same

1:12:28

coin and like

1:12:31

other points in time why

1:12:33

should the properties of this coin or, like, our knowledge of them,

1:12:35

like, depend on things far away in space and time.

1:12:37

It's very odd

1:12:39

in that respect. Yep. Take this radium

1:12:42

atom — point to a particular atom -- Yeah. -- it decays with probability a half

1:12:45

in sixteen

1:12:48

hundred years that seems to be an

1:12:50

intrinsic property of this atom. It seems

1:12:51

a little odd that its probability depends

1:12:53

on how all

1:12:56

these other atoms maybe very far away in space and

1:12:58

time happen to go decaying or not. Yeah. But I guess

1:13:00

the atom case sharpens it because with a

1:13:02

coin, you can flip it many times, but

1:13:04

each radium atom

1:13:06

can only decay once. That's it. And so you can imagine a scenario where there was only one radium atom, or

1:13:08

like what if there was lots of

1:13:10

radium atoms and then you've got some frequency

1:13:14

then you, like, shrink it down such that now there's only one left. Yep. I

1:13:16

guess the frequentist has to say now there's no

1:13:19

probability left anymore because there's just

1:13:21

one or zero. Yeah. Exactly. At that point, Yeah. It

1:13:23

seems very strange that probability depends on these

1:13:25

very extraneous facts. You'd think

1:13:28

it's just the the protagonist

1:13:30

is right here and now — it's this atom

1:13:32

we're talking about -- Yeah. --

1:13:34

It reminds me a bit of Hume's theory of causation, about constant conjunction. And

1:13:38

take a paradigm case of causation like

1:13:40

I I put my hand

1:13:42

in a flame by accident and

1:13:46

I feel pain. It seems somewhat odd to me to say,

1:13:48

well, what makes that causal claim

1:13:50

true is a fact about,

1:13:53

the you know,

1:13:54

these very disparate events across space and time, whether

1:13:56

putting hands in flames

1:13:58

were followed by

1:13:59

pains across space

1:14:02

and time. No. It seems like the protagonists are

1:14:05

right here and now. Hey, listeners. Alan just

1:14:07

used the term constant conjunction. I

1:14:09

didn't know what that meant. But it turns out that constant conjunction

1:14:11

is a relationship between two events where

1:14:14

one event is always invariably

1:14:16

followed by the other. So if

1:14:17

the occurrence of a is always

1:14:19

followed by b, and a and b are said

1:14:21

to be constantly conjoined. So in this case, I guess, you got the putting your hand in the

1:14:23

fire is always associated with

1:14:26

it with it being burned. And

1:14:28

I think Hume suggested that causation is just a

1:14:30

matter of constant conjunction. Okay. Back to the show. I see. So

1:14:32

so this is

1:14:34

an account of causality where

1:14:36

saying, the flame

1:14:38

causes the pain -- Yeah. -- if in sufficiently, like, many cases or, like, hypothetical cases, the flame

1:14:41

and the

1:14:44

pain occur together really strongly. Yeah. Well — so

1:14:46

you're saying, like, what's that got to do with it? Yeah. So Hume had a version of causation like that,

1:14:48

this account involving constant

1:14:50

conjunction? I think that frequentism

1:14:54

actual

1:14:54

frequentism, especially, is quite a lot like that.

1:14:56

Mhmm. And

1:14:57

I have the same reaction just

1:14:59

as you did that, look,

1:15:01

we should just be

1:15:03

staring at my poor hand and

1:15:05

the flame or the the radium atom, it seems odd

1:15:08

that its its chance

1:15:10

depends on

1:15:10

these maybe very distant

1:15:13


1:15:14

facts and how they pan out. Yeah. Okay.

1:15:16

I guess causality is its own

1:15:19

whole can of worms. So that's a

1:15:21

big topic. Let's go back to probability.

1:15:23

Yeah. I think that's probably enough on frequentism, and people can go and check out

1:15:25

your your papers demolishing it. Yep. demolishing it, I

1:15:27

guess, thirty different ways and

1:15:29

maybe some extra. My original paper was called thirty

1:15:32

arguments against frequentism. Mhmm. And I

1:15:34

I sent it in and I was

1:15:36

told it was it was a good

1:15:38

paper, but it was much too long. In fact, it was twice as long as what they could publish. So they

1:15:40

said that you need to

1:15:42

cut it. It was easy. fifteen

1:15:46

arguments against finite frequentism, fifteen arguments

1:15:48

against hypothetical frequentism -- Right. -- going to

1:15:51

okay. So if people people will often

1:15:53

cite both, you get double the citation. Absolutely.

1:15:55

Yeah. I should've got a mobile that's some more

1:15:58

I actually well, apparently, this was a big

1:16:00

problem in genetics. where you would have people

1:16:02

who were doing like whole genome studies, and they realized that they could take the studies of,

1:16:04

like, the effects

1:16:07

of different genes and break them up,

1:16:09

like, into one paper for every chromosome. So they potentially get, like, thirty

1:16:11

different papers out of basically, like, exactly the same study. Yeah. That

1:16:15

that's how you you pump up your citation, your

1:16:17

various markers of

1:16:20

productivity. Yeah. Yeah. Okay. So, moving on

1:16:22

from frequentism: I'm wondering,

1:16:24

like Are there

1:16:26

other ways that, like, I could in practice start, like

1:16:28

reasoning differently? You know, start, like, in my daily life, thinking about the

1:16:30

probability of Boris Johnson being deposed in in different ways. I guess,

1:16:34

The adjustment that I'm, like, most familiar with seeing people make is going from these point estimates of probabilities to,

1:16:36

like, ranges and so on.

1:16:38

Yeah. Yeah. I'll I'll be glad

1:16:42

talk about that. That's right. Then you might say

1:16:45

this sharp

1:16:46

probabilism where you're

1:16:47

supposed to assign a

1:16:50

sharp real number is just psychologically

1:16:52

implausible. It's not something that we could do. Take

1:16:54

again the case of my probability of rain tomorrow. It's

1:16:57

not sharp to infinitely

1:16:59

many decimal places. what

1:17:02

do I say? What's the probability of

1:17:04

rain? Well,

1:17:05

it's zero point six

1:17:07

ish. It's in the region

1:17:09

of zero point six Now the

1:17:11

first move you might make is to say, well,

1:17:13

the probabilities should be intervals,

1:17:15

maybe point five to point seven

1:17:17

in my case. Yeah. But in

1:17:20

a way, that just makes the problem of

1:17:22

precision worse. So now you've got two precise numbers, the endpoints of the interval -- Right. -- point

1:17:25

five, zero, zero, zero, zero

1:17:28

and point seven, zero, zero, zero, zero. Yeah. That doesn't seem

1:17:30

to quite work. And that's the problem. Yeah. I suppose if

1:17:32

you wanted to defend it, you

1:17:35

would say, well, you've got two two numbers now

1:17:37

that you've chosen, but, like, they're not as important somehow. Yeah. Is that no.

1:17:39

Yeah. No, not all that convinced. Well, you could say

1:17:41

that or or you could say

1:17:43

these probabilities get determined

1:17:46

by something else. Like, maybe there

1:17:48

are

1:17:48

the judgments that I make, as van

1:17:50

Fraassen would say. Yes. And now

1:17:52

you just look at all of

1:17:54

the probability functions that

1:17:56

respect my judgments, and they

1:17:59

will form

1:17:59

a set, probably not just a

1:18:02

singleton set. but

1:18:03

the set itself will have some sharp boundaries and

1:18:05

it could well work out, but that

1:18:07

set has a boundary

1:18:09

point five and point seven and we we

1:18:11

represent my credence with that, in principle. Yeah.

1:18:14

And, yeah, I I may

1:18:16

not so easily access these

1:18:18

probabilities too. They may not be available

1:18:20

to introspection — as Williamson would say, maybe my

1:18:22

credences are not luminous -- Right. -- to myself.

1:18:24

But I might

1:18:27

still have them and

1:18:28

they they might have these forms like intervals

1:18:30

or or sets. Yeah. So it gets something that's

1:18:35

appealing about saying, you know, the probability of rain in Canberra tomorrow is between

1:18:37

point five and point seven on a in

1:18:39

a practical sense is that it

1:18:41

helps to indicate your level

1:18:44

of uncertainty to other people, whereas if you just say point

1:18:46

it's point six. Mhmm. Well, I mean, first of all, someone laughed at you because it sounds so precise. It

1:18:48

sounds like ridiculous to be to be so

1:18:50

sure. I mean, they're less likely to laugh at

1:18:53

you if if you give a range. Yeah. I guess,

1:18:55

actually, it it does potentially indicate something technical, which is like how quickly you

1:18:58

would update your beliefs as

1:19:00

you get new information. If you have like a very wide range and

1:19:02

you're saying, well, a wide range of point estimates would be plausible and reasonable.

1:19:04

And so, you know, as I see the weather

1:19:06

report, I'm gonna, like, shift a lot. I

1:19:10

also feel like it's interesting: if you say it's like point six, in some sense, you're making a claim that you're like a hundred percent sure that it's point

1:19:12

six. And then you'd have to say, well, you'll just

1:19:14

never change your mind about that even after it rains or

1:19:16

doesn't. I'm

1:19:19

not sure you're committed to that. I think it's okay.

1:19:22

You've got the point six

1:19:25

credence initially and then you conditionalize, as we

1:19:27

say, you update that, as the evidence comes in, like,

1:19:30

like, maybe you see the

1:19:31

rain for example. So, yeah,

1:19:33

that becomes a new

1:19:35

sharp value of one,

1:19:37

that's that's okay. But I guess it would be a

1:19:39

misunderstanding to

1:19:39

interpret someone who's saying the probability

1:19:42

of it raining in Canberra tomorrow

1:19:44

is sixty

1:19:46

percent as claiming that with one hundred percent probability. That's, like, not

1:19:49

what's being claimed. Yeah. Well, that brings

1:19:51

us to another

1:19:52

issue with another

1:19:54

choice point, by the way, for varieties of

1:19:57

bayesianism, whether you allow

1:19:59

higher order

1:19:59

probabilities, do you have, for example,

1:20:02

credences of credence probabilities or probabilities in

1:20:04

general. And you could say no, that

1:20:05

doesn't make sense or that maybe they

1:20:08

collapse down to

1:20:10

just the first order probabilities

1:20:12

But you could say, no, I

1:20:14

have probabilities about various ways the world could be, including

1:20:18

what

1:20:18

my credences are. Yeah.

1:20:20

Because that's part of the world. Why not have

1:20:22

intermediate credences for that too? Alright.

1:20:23

So maybe the thing that it feels

1:20:25

more like you're doing when

1:20:27

you introspect and try to, like, give

1:20:29

these ranges to actual questions is: you introspect — you, like, pull in

1:20:31

different numbers in your mind and then

1:20:34

see how much the mind

1:20:36

revolts at that number. So if you say,

1:20:38

like, the probability of rain tomorrow is one percent, then you're, like, no, that's, like, that's too crazy. And then you,

1:20:40

like, kind of, have a,

1:20:42

like, level of, like, craziness all

1:20:45

of these different numbers. And then that gives you some

1:20:47

sort of distribution of, like, which numbers are plausible — of what you think you ought to

1:20:49

believe, that's kind of maybe what you're measuring.

1:20:51

Yeah. And maybe we

1:20:55

could

1:20:55

compare your credences to say Lotteries. What do

1:20:57

you think is more probable? Rain

1:20:59

tomorrow or I, you

1:21:01

know, say, a hundred

1:21:04

ticket lottery tickets

1:21:04

one to sixty. One of

1:21:06

those is the winner. Yeah, it feels

1:21:09

maybe slightly higher

1:21:09

than that, but now I

1:21:12

make it one

1:21:14

to seventy or maybe slightly lower than that

1:21:16

and maybe I can somehow hone in

1:21:19

on what my probability is. That's a

1:21:21

way of eliciting -- Yeah. --

1:21:23

credences from myself. Yeah. Okay. So, yeah, are

1:21:25

philosopher's working on any adjustments to how we do these things that could

1:21:27

possibly affect how I reason about uncertain events,

1:21:29

like on a day to day basis,

1:21:31

that might like actually

1:21:34

help me live better. Starting I suppose with the imprecise credences, okay, maybe it's too much

1:21:36

to ask of you

1:21:38

to have the sharp probabilities

1:21:43

and update by conditionalizing all the time. But this

1:21:45

was meant to be a

1:21:47

friendly amendment. Jeffrey calls

1:21:49

it Bayesianism with a

1:21:52

human face. And

1:21:53

it seems more psychologically plausible that you've got these

1:21:55

ranges of probabilities, you know, intervals

1:21:56

or sets. So that's

1:21:59

one kind of

1:22:00

humanizing. guess

1:22:02

so so you were saying but but you objected to that or you

1:22:05

offered the objection that well, now you've just chosen

1:22:07

two numbers. You've made it, like, you now

1:22:09

you've got double double the problem. Yeah. Is is that a

1:22:11

good objection in your view? Well, I think it has to be taken

1:22:13

seriously, but but now maybe things

1:22:15

start getting worse. I

1:22:17

don't want to have this

1:22:19

exact sharp interval from

1:22:21

point five to point seven in

1:22:23

my example. Maybe what I should have is

1:22:26

a

1:22:26

probability distribution over the numbers.

1:22:28

It's smooth and

1:22:30

maybe it hits a peak somewhere in the middle near point six and it tapers off towards the edges and

1:22:32

maybe doesn't just stop it sharply

1:22:34

at point five and point seven.

1:22:39

but now we just raise the problem again. So, really,

1:22:41

I've got this sharp exactly

1:22:43

that sharp probability function over the

1:22:45

-- Yeah. -- range of values.

1:22:47

Usually, you're on you're unsure about the

1:22:49

value of the probability that you should give to each probability. And so -- Yeah. -- and and we have

1:22:51

us -- Okay. -- uncertainty all all the

1:22:54

way up. That's right or all the way

1:22:56

down. And

1:22:58

then one reply might be, well, look,

1:23:01

we're

1:23:01

just representing things. We're

1:23:03

providing models. So don't reify

1:23:05

this so much. Don't take this all literally that

1:23:07

this has to be in your head. But

1:23:09

these models, you know, with

1:23:11

various levels of sophistication,

1:23:13

may better or

1:23:15

worse represent what's

1:23:16

going on. Yeah. What my brain

1:23:18

is doing. Right. Right. Right. Hey, listeners. Rob here with another definition. We just

1:23:20

used

1:23:23

the term reifying something. Reification is

1:23:25

the mistake of imagining or representing that something

1:23:28

that is

1:23:30

really just an abstraction as if it's a material

1:23:32

or concrete thing that's that's actually real,

1:23:34

when it isn't. I mean, so

1:23:38

infinite regress is actually not always a problem. Right? You could just say, well, there are just uncertainty

1:23:40

distributions all the way up. Yeah.

1:23:42

Right. And maybe that's actually fine.

1:23:46

And it, like, cashes out. And I suppose you could, in fact,

1:23:48

represent those uncertainties, like, as far as

1:23:50

you like. And so they they could all

1:23:52

cash out to this point estimate. if you like, you

1:23:54

can go one level up and represent uncertainty there, and you can go another one. Yep. At some point, it doesn't feel like it's adding value to you to

1:23:56

do it any further. Yeah. If it

1:23:59

if it does, fine. Maybe

1:24:02

things stop at a fixed point or

1:24:05

maybe you get a bit more

1:24:07

information at each level you

1:24:09

go back. This is another of the heuristics, by

1:24:11

the way, you asked for

1:24:13

some more infinite regresses, a

1:24:16

technique often used. And often

1:24:18

infinite

1:24:18

regress is thought to be a bad thing and it's a fatal problem for a view if faces such

1:24:21

a regress. But

1:24:24

that's not always

1:24:26

clear, some some notions

1:24:28

seem to be well understood in

1:24:30

terms of infinite hierarchies like that.

1:24:32

Lewis' notion of convention is

1:24:34

like that in in terms

1:24:36

of common knowledge or common belief, which is

1:24:38

a recursive thing

1:24:39

about knowing of each other. what

1:24:43

they believe and so on for higher orders. And we don't

1:24:45

just say, well, that can't be right because we

1:24:47

have an infinite regress. I

1:24:50

mean, especially if the

1:24:51

if the regress converges on something where it's just like it just

1:24:53

becomes the same every time you're just like forever. Oh,

1:24:55

the fixed point. Okay. That's right. And then you

1:24:57

argued it's like, well, that's just fine. Yeah.

1:24:59

That's right. Yeah. Yeah. Okay. Interesting. Yeah. Okay. So

1:25:01

we've got these, like — what are these called, these ranges of probabilities? What's the term for that?

1:25:03

Yeah. Well, sometimes

1:25:07

Or a distribution on the probability? So

1:25:09

sometimes we call it the representor. Okay. The representor is the set of

1:25:12

probability functions. that

1:25:15

represent your credences, and they're meant

1:25:17

to be

1:25:18

all of the

1:25:19

precisifications of

1:25:23

your imprecise. credence, but that are faithful to it, but then

1:25:25

fill in in all of the precise ways.
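
A toy version of a representor — my sketch, not Alan's formalism: the imprecise credence in rain is modelled as the set of all sharp values from 0.5 to 0.7, and any derived quantity gets evaluated across the whole set, yielding an interval rather than a point. The umbrella-loss numbers are assumed for illustration.

```python
# Represent an imprecise credence as a set (grid) of sharp probability values,
# and evaluate quantities across every precisification.
from typing import Callable, List, Tuple

def representor(low: float, high: float, steps: int = 201) -> List[float]:
    """All precisifications of the interval credence, on a fine grid."""
    return [low + (high - low) * i / (steps - 1) for i in range(steps)]

def bounds(quantity: Callable[[float], float], credences: List[float]) -> Tuple[float, float]:
    """Lower and upper values of some quantity across every precisification."""
    values = [quantity(p) for p in credences]
    return min(values), max(values)

if __name__ == "__main__":
    rain = representor(0.5, 0.7)
    # Expected loss of leaving the umbrella at home: lose 10 if it rains, 1 if not.
    expected_loss = lambda p: 10 * p + 1 * (1 - p)
    print(bounds(expected_loss, rain))   # (5.5, 7.3) -- an interval, not a point
```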

1:25:27

Yeah. So this really does feel

1:25:29

like a friendly amendment to me. I feel

1:25:31

like -- Yeah. -- this is kinda it's the same spirit, and we're just gonna, like, get a little bit

1:25:33

better or, like, represent high levels of uncertainty. All

1:25:35

good. That that's right. And and

1:25:37

again, this takes me back to

1:25:39

earlier when I was saying that I'm

1:25:41

a Bayesian on Mondays, Wednesdays, Fridays and so on. Look, so many things deserve the

1:25:43

name Bayesianism,

1:25:44

and

1:25:48

I I shouldn't really say that I've

1:25:50

jumped ship on on Tuesdays and today, Saturday just because I

1:25:52

I make certain choices in

1:25:54

that tree of choice points. Yeah.

1:25:57

Yeah. Yeah. It makes sense. Are

1:25:59

there any more radical departures? Oh, I suppose, are there people who are

1:25:59

probability nihilists? yes,

1:26:05

I suppose you could say that all there is is

1:26:07

just these all or nothing states for start when

1:26:10

it comes to credences, beliefs, as

1:26:12

we might say. Look,

1:26:13

there's I mean, some of the kind of

1:26:15

people

1:26:15

who who are just

1:26:17

like, look, what is actual has probability

1:26:19

one, everything else probability zero. Yep. And that's kind of all that there is

1:26:22

to say about probability. And and you should believe the thing

1:26:24

that will happen, or the thing that is true.

1:26:26

Yeah. You shouldn't believe the thing that is

1:26:28

false. and that's kind

1:26:30

of the end of the story. And that

1:26:32

just shows you that objective norms are sometimes tough to

1:26:34

live up to. Yeah. Right. Not not easy.

1:26:37

You ought to be omniscient. So -- Yeah. -- it's

1:26:39

tough to be omniscient, but that's not the fault of the

1:26:41

norm. It just says the the norm

1:26:43

is hard to

1:26:47

live up to. Well, that

1:26:48

does seem a little extreme, I

1:26:50

guess. It's certainly not

1:26:52

it's it's it's unhelpful even

1:26:54

if it's it's not giving you good advice, for example. Okay.

1:26:56

So that is quite a lot on

1:26:58

probability. I guess, let's move down the

1:27:00

track now to, I guess, an application

1:27:03

of probability estimates, which is expected

1:27:05

value. Yeah. We've got a whole cluster of questions around that. I guess to start

1:27:07

though, what is expected value? It's a term that we throw around a

1:27:09

lot, but I think it's actually like

1:27:11

basically not used among

1:27:14

the general public. Like, it is actually

1:27:16

quite technical in a way. Yeah. That's right.

1:27:18

And it'll quickly turn into a discussion

1:27:21

of expected utility. But Mhmm. Alright. It comes

1:27:23

up to especially in decision theory,

1:27:26

expected utility theory. And

1:27:28

you

1:27:28

have a choice among certain

1:27:30

options. What should you do?

1:27:33

And

1:27:33

let's assume that the world could be various ways. There are various states of

1:27:35

the world, and you don't have control of of

1:27:37

what they are, but you

1:27:39

assign probabilities to the

1:27:42

various states. Yeah. And the

1:27:44

combination of a particular

1:27:46

action of yours and the

1:27:48

state of the world together

1:27:50

that

1:27:51

determines an outcome and you can value

1:27:53

the outcomes more or less. Yeah. And we

1:27:55

could put numbers, which measure

1:27:57

how much you

1:28:00

value them. Now

1:28:00

the expected value is a

1:28:01

weighted average of these values where the

1:28:04

weights are

1:28:06

the probabilities. So that turns out to be a

1:28:09

sum of products. You

1:28:11

take probability multiplied by

1:28:14

the value and add across all

1:28:16

of the states. And that weighted

1:28:18

average

1:28:18

is the expected value. And

1:28:22

now think of that as like a figure of merit.

1:28:24

That's how choice worthy

1:28:25

each of your actions is. Hey,

1:28:28

listen. One quick definition.

1:28:30

Choice

1:28:30

worthiness. This is a philosophy jargon term where choiceworthiness

1:28:32

corresponds to the strength of the

1:28:34

reasons for choosing a given option.

1:28:37

How good a choice is

1:28:39

this for us? Okay, back to the show.

1:28:41

'Choiceworthy' was another one of these. So you said no

1:28:43

no no no no more non philosophy. But would that That's

1:28:46

exactly where I was heading. Yeah.

1:28:49

And and now you should

1:28:51

maximize that quantity, do whatever action or actions because

1:28:53

maybe

1:28:54

it's not one. We'll

1:28:56

they will yield

1:28:58

the highest value along that score. Cool. So I I guess, like yeah. To to make that concrete

1:29:00

really

1:29:01

simple, if you

1:29:03

had a bet where

1:29:06

you got, like, one dollar if the coin comes up tails and two dollars if heads, it's a fair

1:29:09

coin, then the

1:29:12

expected value I

1:29:14

guess, in dollars in this case is one dollar and

1:29:16

a half, or a dollar fifty. That's right.
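(To pin down the arithmetic being described, a minimal sketch; the coin bet numbers are just the toy ones from the example.)

```python
# Expected value: a probability-weighted sum of payoffs.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * x for p, x in outcomes)

# Fair coin, one dollar on tails, two dollars on heads.
print(expected_value([(0.5, 1.0), (0.5, 2.0)]))  # 1.5
```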

1:29:18

And of course, so normally, we'd we'd rather talk

1:29:20

about utility or well-being or something because we don't value

1:29:23

dollars the same as as as especially as the amounts become larger or maybe you

1:29:25

don't value dollars at all. So utility is the

1:29:27

thing where it ultimately cashes out. That

1:29:29

that's right. And very soon, we'll

1:29:32

we'll talk about some motivations, I think, for for

1:29:34

why we should

1:29:34

really shift from, you know, payoffs just measured in terms

1:29:36

of dollars -- Yeah. -- to

1:29:38

to this other thing, utility. Yeah.

1:29:42

So,

1:29:42

yeah, expected value is just

1:29:44

super important to the project of effective altruism

1:29:46

and doing good because like any project you

1:29:48

take on is gonna have a very wide

1:29:50

range of possible outcomes. At least if you're doing anything remotely interesting -- Yes. -- that, you know, it could include, like, bad outcomes, potentially

1:29:53

lots of neutral outcomes,

1:29:55

possibly, like, things go wells.

1:29:58

You have a really good outcome. And you're just gonna

1:30:00

be absolutely screwed, absolutely paralyzed. If you

1:30:03

don't have a way of making

1:30:05

decisions under that uncertainty where you're able to

1:30:07

weigh up like, what should I do given that

1:30:09

things might go badly or might go well or

1:30:11

might might be neutral? And expected value provides this

1:30:13

extremely natural way of weighing those things up where

1:30:16

it says, well, if something is twice as likely, it's twice

1:30:18

as important. Something that's good. If it's twice as likely, it's twice as good and that should get, like, twice as much weight in

1:30:22

your in your decision. So in every in every day situations like where they go,

1:30:25

you know, what to buy at the shops, like, what

1:30:27

to watch on Netflix, the kind of expected

1:30:29

value approach produces like extremely natural

1:30:31

answers that seem sensible to basically

1:30:33

everyone, at least in principle, even though of course, we never

1:30:35

actually, well, we almost never consciously calculate

1:30:39

expected value. But I actually heard

1:30:41

from neuroscientists who've been doing research on, like, how we make decisions. And apparently, there is, like, a process in

1:30:43

the brain that

1:30:48

effectively represents expected value, basically. Well, like, particular neurons

1:30:50

will fire based on, like, their expected reward. And then, like, the one that fires most frequently

1:30:52

or, like, or harder in some way,

1:30:54

like, basically, ends up winning the decision.

1:30:58

some like choice point in the brain. Anyway, I probably

1:31:00

just said something that's completely wrong. Well, no.

1:31:02

In fact, I think it goes even further. I

1:31:04

think it's not just people, I think, nay. Even

1:31:06

bees are Absolutely. thought to do this too. I mean, it kind of maybe it maybe makes sense that

1:31:08

it has to be this way

1:31:10

because evolution's gonna push us towards

1:31:14

doing things that, like, get us the most food or whatever. And, yeah,

1:31:17

obviously, when you have, like, different options on the table,

1:31:19

yeah, the brain's gotta choose the

1:31:21

bigger one. Yep. And you will be selected against if

1:31:23

you don't if you don't make the right choices that roughly -- Yeah. --

1:31:25

that maximized

1:31:25

expected value or

1:31:26

close to it. Yeah. So maybe

1:31:29

we we can say, like, expected value is, like,

1:31:31

probably deeply embedded in how we're kind of wired in in

1:31:33

some sense, even though we're not actually doing the math most of

1:31:35

the time, it's embedded in the like, in our

1:31:37

instincts about what what risks to take and which

1:31:39

one's not to. But, yeah, philosophers have looked at a

1:31:41

whole bunch of much stranger situations to see if they can make expected value break, whether they

1:31:44

can, like, make it stop working

1:31:46

or at least, like, arguably, stop

1:31:48

working. I guess one of

1:31:50

these that you're really into is the Saint Petersburg Paradox,

1:31:52

which some people have heard of, but other people won't have. Yeah. Can

1:31:54

you can you layout what the Saint Petersburg Paradox is? Absolutely.

1:31:58

A fair coin will be

1:31:59

tossed repeatedly until

1:32:00

it lands heads for the first time,

1:32:03

and you will get

1:32:05

escalating rewards the longer it

1:32:07

takes, the better for you, as

1:32:08

follows. If the

1:32:10

coin lands heads immediately,

1:32:13

you'll get two dollars. If

1:32:15

it takes two two trials for

1:32:17

the first head, so it's tail and

1:32:19

then heads, you'll get four dollars.

1:32:21

If it takes three trials,

1:32:23

you'll get eight dollars In general, if it

1:32:25

takes N trials for the first heads, you'll get two to

1:32:27

the N dollars. Okay?

1:32:29

And

1:32:30

so on. All right. How good is

1:32:32

that at least in

1:32:34

terms of expected dollar amount?

1:32:36

Yes. Well, let's

1:32:39

do the calculation. with probability half, you

1:32:42

get two dollars and multiply those. Half times two is one.

1:32:44

With

1:32:45

probability quarter,

1:32:46

you get four dollars

1:32:50

quarter

1:32:50

times four is

1:32:52

one. And now we keep adding terms

1:32:54

like that. It's one plus one plus

1:32:56

one, one over two to the

1:32:58

n times two to the n is one

1:33:00

forever.

1:33:01

One plus one plus one added up

1:33:03

forever is infinity. So the

1:33:06

expected value of this

1:33:08

game in terms of dollar amount

1:33:10

is infinity. So prima facie, it looks like you should be

1:33:12

prepared to

1:33:14

pay any finite amount to

1:33:17

play the game just once -- Mhmm. -- and you should think you're

1:33:19

getting a fantastic deal. You're just paying a

1:33:24

finite amount, but that's very

1:33:26

unintuitive because very probably you'll

1:33:27

make a rather

1:33:28


1:33:30

small amount. For example,

1:33:32

most people wouldn't pay a hundred dollars

1:33:34

to to play this game. It seems because very probably they

1:33:39

will make a lot less than that. Yeah. And

1:33:40

therein lies a paradox. There are various

1:33:42

paradoxical aspects of this game. And

1:33:44

it is paradoxical that every

1:33:47

possible outcome is finite. Okay?

1:33:50

You know you'll get a finite outcome, finite payoff. And yes, you value

1:33:52

it infinitely.

1:33:55

How did that happen? Yeah.

1:33:58

Yeah. Something's gone wrong

1:33:59

here, it seems. Yeah.
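(A small sketch of the two facts just described: the partial sums of the expected value grow without bound, one dollar per term, while the chance of a big payout stays tiny. The $100 cutoff is just an illustrative choice.)

```python
# St. Petersburg: first heads on toss n (probability 1/2**n) pays 2**n dollars.
def partial_expected_value(n_terms):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

for n in (10, 20, 100):
    print(n, partial_expected_value(n))  # 10.0, 20.0, 100.0: a dollar per term, forever

# Yet the probability of winning more than $100 (first heads on toss 7 or later) is small.
print(sum(0.5 ** n for n in range(7, 200)))  # ~0.0156
```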

1:34:02

It reminds me of Garrison

1:34:04

Keillor's, is it

1:34:05

Lake Wobegon? Where all of the

1:34:07

children are above average. Yeah. And yeah.

1:34:09

So in in this case, all

1:34:11

of the payoffs are below average, so

1:34:13

to speak. I see. Right. Right. Okay.

1:34:15

a variety of the paradox that, like, most numbers are small or, like actually, almost all

1:34:18

numbers are small. Right? Right. Because, like, no

1:34:20

matter where the no matter what cutoff

1:34:22

you choose for, like, numbers being small below

1:34:24

that, most numbers aren't larger

1:34:26

than that. Okay. So the Saint Petersburg Paradox. Yeah. Seems weird. Seems like we should value this thing infinitely,

1:34:28

but we do not. What what like,

1:34:30

how how can we fix it? Absolutely.

1:34:35

just look at the the moving parts in expected

1:34:37

value theory,

1:34:38

and we could tweak

1:34:40

each of them. So I think

1:34:42

there are five different ways of replying

1:34:44

to the Saint Petersburg Paradox. We

1:34:46

could do something to the probabilities. So don't

1:34:48

just stick in

1:34:49

the probabilities as they

1:34:52

were before. maybe some function of

1:34:54

them. And Lara Buchak has a very nice theory, risk-weighted expected utility

1:34:56

theory, which will

1:34:59

tweak the probabilities. Now

1:35:02

look at the payoffs, look at these rewards, which were dollar amounts. Well,

1:35:04

maybe

1:35:04

we should

1:35:06

tweak that, and this

1:35:08

was Bernoulli's

1:35:11

solution, actually, that how much

1:35:13

you value things is not just

1:35:15

a linear function

1:35:15

in the in this

1:35:18

case, dollar amount. there's what we call

1:35:20

diminishing marginal utility. The richer you are,

1:35:22

the less, you know, you'll value some

1:35:24

incremental amount,

1:35:26

like an extra dollar Okay? And in fact, he thought that

1:35:28

how much you really value

1:35:29

the money goes more like by

1:35:31

the logarithm of

1:35:34

the dollar amount rather than by the face value

1:35:36

-- Mhmm. -- the dollars themselves. And

1:35:38

it turns out if you replace the

1:35:40

dollar amounts by the

1:35:43

logarithm of their amounts, then

1:35:44

you get a convergent series, you get a

1:35:46

sum that is finite. Yeah. Okay. Then we can

1:35:48

tweak

1:35:49

the the formula

1:35:50

the combination rule So

1:35:54

previously, it was this weighted average. Or

1:35:56

maybe we could do something to

1:35:58

that. Well, there

1:35:59

would be

1:36:00

more radical departures like maximin.

1:36:02

It's got nothing to

1:36:04

do with expected value, but

1:36:06

maybe that's

1:36:07

the decision rule we

1:36:10

should follow. An alternative that I like

1:36:12

was introduced by an

1:36:14

economist too and studied and

1:36:17

developed further by Chris

1:36:19

Bottomley, a former ANU student. Yeah. And a former student

1:36:21

of mine, Tim Williamson, not

1:36:23

not the famous

1:36:27

epistemologist at Oxford, but funnily the younger

1:36:28

Tim Williamson, and he's also at Oxford

1:36:30

now, and I'm sure he'll be famous

1:36:34

soon too. And they have been working on

1:36:37

a theory called weighted linear

1:36:39

utility theory. Oh, yeah. So what

1:36:41

are

1:36:41

what are the other options on

1:36:43

the Now hold fixed

1:36:45

the combination rule, which was this

1:36:47

sum of products and

1:36:50

have some different operation on what you do with it. Previously,

1:36:52

it was maximize that

1:36:54

thing,

1:36:55

that expectation.

1:36:56

Well, maybe we

1:36:58

don't have to maximize it.

1:37:00

it Well, an

1:37:01

alternative would be to minimize it. That would be that would be a

1:37:03

pretty crazy theory. Yeah. But more plausible would be that we

1:37:07

satisfice. So it's

1:37:09

it's

1:37:09

just good enough to

1:37:10

to get a sufficiently high expected

1:37:13

value. You don't have

1:37:15

to literally maximize it.

1:37:17

So, I mean, I can

1:37:19

see the appeal of satisficing. So, satisficing

1:37:21

would be like maximizing the probability

1:37:23

of being above some particular

1:37:25

value. Is that right? Well, may not necessarily maximize

1:37:27

it, but just provided your maybe

1:37:29

sufficiently high up in the

1:37:32

ordering of your

1:37:34

actions. Oh, that's good enough. I

1:37:36

see. Then you stop valuing any improvement beyond

1:37:38

that. Yeah. Something something like that. So is

1:37:40

this the same as kind of having

1:37:42

bounded utility in a way? Well, So so bounded

1:37:44

utility would be, say, well, my well-being can't go above some particular

1:37:47

level. So any any money beyond that would

1:37:50

be worthless. So I was thinking it was different that maybe

1:37:52

you were allowing unbounded utility,

1:37:54

but

1:37:55

you're not insisting

1:37:56

on maximizing this

1:37:58

overall quantity. You

1:37:59

just say, you know, near enough

1:38:02

is is good enough. Benson and Hedges cigarettes used to have this slogan

1:38:04

when

1:38:07

only the best will do.

1:38:09

And satisficing says, oh no, you don't need the best.

1:38:11

And in fact, Voltaire

1:38:15

said, that the best is the enemy of the good, and maybe you don't

1:38:17

always have to strive for the best. Yeah. So this

1:38:19

will get us out of the

1:38:21

Saint Petersburg Paradox because at

1:38:24

some point, rather than just adding plus one,

1:38:26

plus one, plus one, you'd say, well, beyond this point I would be above my satisficing level. And now I don't

1:38:28

value any additional winnings. Yeah.

1:38:30

And I, you know, if if I

1:38:34

instead

1:38:34

just received thirty,

1:38:36

dollars maybe that would be good

1:38:39

enough. And I

1:38:40

don't have to play the

1:38:42

game. Okay. So changing the decision rule. What what other options are there? And I've left for last, the one that I've

1:38:45

actually argued for

1:38:48

in print So I'd

1:38:50

better say something for it, which is biting the bullet. Yeah. And how do you do that? Well,

1:38:52

maybe it's not so

1:38:54

crazy after all to value

1:38:58


1:38:58

the Saint Petersburg game infinitely. And here's an

1:39:01

argument

1:39:01

for that. By the way, I've made

1:39:03

a switch to expected utility

1:39:05

theory where I'm now replacing the dollar

1:39:07

amounts, the values with how much you value

1:39:09

them in the utility sense. And that's

1:39:11

the quantity

1:39:12

and expected utility

1:39:15

that you're maximizing. Well, Let's

1:39:16

agree that expected utility theory is plausible

1:39:19

at least for the finite cases and somehow it

1:39:21

went wrong, it seems in

1:39:23

the infinite case. Well,

1:39:26

did it? Imagine

1:39:27

various truncations of the

1:39:30

Saint Petersburg game. For example,

1:39:32

if the first heads does

1:39:34

not happen by the tenth toss. We call the game off and we go home. That's the

1:39:38

end

1:39:39

of the game. Well, it

1:39:41

seems that the value of that game is ten.

1:39:43

Alright? Yeah. Now truncate at the

1:39:44

eleventh toss. It

1:39:45

seems the value

1:39:47

of that is eleven. Truncate at

1:39:50

the twelfth toss; the value is twelve

1:39:52

and so on. Now the Saint Petersburg

1:39:54

game is strictly better than all of these.

1:39:56

This is so called dominance reasoning. Yeah.

1:39:59

Come what

1:39:59

may the Saint Petersburg game

1:40:01

is at least as good as each

1:40:03

of these

1:40:04

and with

1:40:04

some probability

1:40:07

it's genuinely better. Okay. So the Saint Petersburg

1:40:09

game, it seems it's better than ten. It's better than eleven. It's better than

1:40:11

twelve. Dot dot dot, keep going. Yeah.

1:40:14

It's

1:40:14

better than all. Better than any integer,

1:40:17

better than any integer. So that, it seems, is what's left:

1:40:19

A reason to value it infinitely. And that's

1:40:21

that's a way of biting

1:40:24

the bullet. but

1:40:26

there's a

1:40:27

revenge problem. Okay. Maybe

1:40:29

we've solved the original Saint

1:40:32

Petersburg problem. But

1:40:34

there's a Saint Petersburg like game that we can easily introduce, which

1:40:37

will thwart even

1:40:40

this proposal Now

1:40:42

let the payoffs go up not exponentially as they did in Saint Petersburg.

1:40:45

Let them

1:40:48

go up super exponentially. The payoffs

1:40:50

are not two to the n. They're two to the two to the n. Now

1:40:52

take logs as Bernoulli

1:40:55

would have us do. take

1:40:58

the

1:40:58

log of two to the two to the n and you get two to the n,

1:41:01

you're right back where you

1:41:03

started. Yeah. And so now it's

1:41:05

a sequence again of, like, adding up yeah, one plus one

1:41:07

plus one plus one utility this time. Yes. Yeah.
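(A sketch of both moves at once: Bernoulli's log-of-the-payoff fix tames the original two-to-the-n payoffs, but not the revenge version with payoffs of two to the two to the n. The revenge term is written analytically, since 2**(2**n) itself would overflow a float almost immediately.)

```python
import math

# Original game with log utility: each term is (1/2**n) * log(2**n) = n*ln(2)/2**n,
# so the partial sums converge (to 2*ln(2), about 1.386).
def log_ev_original(n_terms):
    return sum((0.5 ** n) * math.log(2.0 ** n) for n in range(1, n_terms + 1))

print(log_ev_original(50))  # ~1.386

# Revenge game with log utility: log(2**(2**n)) = (2**n)*ln(2), so each weighted term
# is just ln(2) and the partial sums diverge again.
def log_ev_revenge(n_terms):
    return sum((0.5 ** n) * (2 ** n) * math.log(2) for n in range(1, n_terms + 1))

print(log_ev_revenge(10), log_ev_revenge(100))  # grows by ln(2) per extra term
```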

1:41:09

Yeah. That that's right. So it's

1:41:11

not enough just to have diminishing

1:41:14

marginal utility. It seems really what

1:41:16

this solution is asking

1:41:17

for is bounded utility. So I can't just keep

1:41:19

on ramping up the payoff

1:41:22

sufficiently to get arbitrarily high

1:41:26

utility. So that's the solution

1:41:28

now to to be discussed. Should

1:41:30

utility be bounded? And Arrow

1:41:33

and Aumann and

1:41:35

various other luminaries

1:41:36

have advocated this solution. And

1:41:38

it's

1:41:39

actually implicit in various

1:41:41

theories too that really utility is bounded.

1:41:43

Yeah. So this seems like a super compelling response.

1:41:45

Right? So that we don't value each dollar

1:41:47

equally as much. Yeah. This, like,

1:41:50

captures, I think, the intuition for

1:41:52

why people don't wanna pay so much because they're

1:41:54

like, well, even if there's some, like, infinitesimal probability of winning, like, an enormous amount of money,

1:41:56

I I just don't

1:41:59

value the

1:41:59

money enough to make it to make it worth it because

1:42:02

by then I'm so unfathomably rich, more money is, like, not worth very much to me. And that

1:42:06

may be true of humans and we probably have saturation point and

1:42:08

there's only so much we

1:42:11

can value things given

1:42:13

our finite heads

1:42:15

and so on. but there's still

1:42:17

I think an in principle problem. You might ask why should utility

1:42:19

be bounded when various

1:42:24

other quantities are not bounded. Like,

1:42:26

length is not bounded. Volume is not bounded time.

1:42:28

Space time curvature.

1:42:30

Mhmm.

1:42:30

Various things are not

1:42:33

bounded.

1:42:33

They're unbounded. And why should utility

1:42:35

be? And normally, when you do

1:42:37

have a bounded

1:42:39

quantity, you can say why

1:42:41

it is and you can say what the

1:42:44

bound is, like think of say angle. And

1:42:46

if you think of it one way angle

1:42:48

is bounded

1:42:49

by zero to three hundred and sixty degrees, and

1:42:51

it's easy to explain that.

1:42:52

greens

1:42:53

it's easy to explain that probabilities

1:42:55

bounded. Yeah. to

1:42:57

one. So top value of one, bottom value of zero. Not so easy to say that

1:42:59

in the case of utility. And the

1:43:03

problem gets worse if

1:43:06

we make utility depend

1:43:08

on one of these other unbounded quantities

1:43:10

in a what seems to be

1:43:12

an unbounded way. Here's a pointed

1:43:14

example, I I had a student, Celtic, who hated

1:43:16


1:43:17

his hometown

1:43:20

so much that

1:43:23

he

1:43:23

said that for him, utility was

1:43:25

distance from his hometown.

1:43:27

The further away, the

1:43:29

better. And then length being unbounded

1:43:31

than gave him unbounded utility. He was he

1:43:33

was joking, but you see the point

1:43:36

that -- Yeah. -- in principle,

1:43:38

you you there could be these

1:43:40

relationships between utility and some unbounded

1:43:42

quantity that would yield unbounded utility. Also, in

1:43:45

this case,

1:43:45

we're talking

1:43:46

about the rationality of individual

1:43:50

action, you know, what should

1:43:52

you do? But fairly

1:43:53

soon expected utility

1:43:54

like reasoning applies to say population

1:43:59

ethics. And you can imagine a Saint

1:43:59

Petersburg like Paradox where

1:44:02

the

1:44:02

loci of

1:44:03

value are different people in a population

1:44:06

-- Mhmm. -- and we we can

1:44:08

run a

1:44:09

paradox for that. And

1:44:11

now the analogous

1:44:11

replies don't seem so

1:44:14

good diminishing marginal value of

1:44:18

people doesn't sound so good. You

1:44:20

want each new person to

1:44:22

count equally.

1:44:23

Yeah. Yeah. And you certainly don't

1:44:25

wanna bound. It seems Yeah. The the total

1:44:27

value across people as as the population

1:44:30

grows.

1:44:30

Yeah. I mean, to start with

1:44:32

that, has the counterintuitive

1:44:34

conclusion that, like, how valuable it might be to create an extra person

1:44:36

on Earth might depend on, like, how many aliens there

1:44:38

are or something like that? Or is it like,

1:44:41

you have to know how many beings there

1:44:43

are in the entire universe in order to tell how

1:44:45

much, like, how good it is to add an extra person. Yeah.

1:44:47

Seems odd. That's that's it. So for all

1:44:50

these reasons, I I don't

1:44:51

like the bounding utility solution. That's why I

1:44:53

I look elsewhere -- Okay. --

1:44:55

and even bite

1:44:58

the ultimate bullet at least in

1:44:59

print that's maybe that verdict from expected

1:45:01

utility theory is

1:45:02

not so bad. Yeah.

1:45:05

Okay. So you tend towards just, like, biting the bullet

1:45:07

and saying, well, we should maybe just accept

1:45:09

this. But it doesn't seem like in an

1:45:12

actual practical situation. Like,

1:45:14

would you pay an infinite amount? Would you

1:45:16

pay any any finite amount if someone actually came up

1:45:18

and offered this to you? Like, it doesn't seem very

1:45:20

action guiding to bite the bullet in a sense.

1:45:22

Yeah. And yet, I found something attractive about that dominance --

1:45:25

Yeah. -- reasoning that was iterated. It makes

1:45:27

sense. Yeah. I see the argument.

1:45:30

Yeah. Yeah. So So I guess where did it leave us? So if if if you're willing

1:45:32

to bite the bullet on something like that. Uh-huh. Yeah.

1:45:34

So there's one thing is you might say,

1:45:36

well, in any practical situation

1:45:39

where someone came up and offered you this game,

1:45:41

you wouldn't believe them. No. No. It's like a Pascal's mugging, I think. Yeah. So

1:45:43

there's, like, this other thing, which some

1:45:45

people in the audience who'll be familiar with this

1:45:47

Pascal's mugging situation. But

1:45:50

this might be an escape or like a way of reconciling the

1:45:52

bullet with, like, not actually playing the game in practice.

1:45:54

Is that who would say, well, sure. If I

1:45:56

was, like, a hundred percent sure that I was playing

1:45:58

the Saint Petersburg Paradox game -- Yeah. -- with a particular kind of setup, then yes, I would value it at infinite dollars,

1:46:04

but because I don't believe that I ever

1:46:06

am, because it's, like, not even possible. I don't even think it's possible in the universe for someone to deliver these

1:46:08

gains, then I'm not actually compelled

1:46:10

in a real situation to take it.

1:46:14

Yeah.

1:46:14

Let me make the Saint Petersburg

1:46:16

game

1:46:16

more paradoxical. So in

1:46:19

the

1:46:19

first telling of

1:46:20

the story, you just

1:46:23

genuinely believe this offer and you you think

1:46:25

you

1:46:25

should take

1:46:26

it at any price. Richard Jeffrey

1:46:28

famously says anyone

1:46:29

who offers you the

1:46:31

Saint Petersburg game is

1:46:34

a liar -- Mhmm. -- because they're

1:46:36

pretending to have an indefinitely large bank account. And that

1:46:38

I suppose that's true. That's where

1:46:39

we we would think someone

1:46:42

who offers you this game is a liar. But

1:46:44

I don't think that gets us

1:46:46

out of the paradox so so

1:46:49

easily. Okay? because

1:46:49

the paradox hits you with its full force as long as you assign just some

1:46:52

positive

1:46:53

probability

1:46:56

to

1:46:56

the offer being genuine. Okay. Suppose

1:46:59

someone comes up to you and

1:47:00

offers you

1:47:02

the Saint Petersburg game and

1:47:05

you give probability point 00000000001

1:47:07

to

1:47:10

them telling the truth.

1:47:12

Okay? Yeah. As we would ordinarily

1:47:14

say, you think they're a liar. You give overwhelming probability that they're

1:47:18

not telling

1:47:19

the truth. But

1:47:21

now think about that point 00001 The paradox

1:47:23

hits you with its full force because you now

1:47:25

you multiply

1:47:27

that by infinity And

1:47:29

even that extremely low probability of the Saint Petersburg

1:47:31

game has expected infinite

1:47:34

expected value for you

1:47:37

given

1:47:37

you're assigning positive credence, however small. One in the Googleplex -- Yeah. -- will

1:47:40

still will still keep

1:47:42

the game alive enough that

1:47:46

you should value that prospect infinitely.

1:47:48

So it seems the only way

1:47:50

to escape the paradox is

1:47:53

to genuinely give probabilities

1:47:55

zero to it. This brings us back to

1:47:58

our earlier discussion that probability zero should be reserved for impossibility and

1:47:59

that anything that's

1:48:03

possible should get positive

1:48:05

probability, perhaps one in the Googleplex, but but but something

1:48:07

positive. I guess, some people might be inclined to

1:48:09

say, well,

1:48:12

an infinite pay off or infinite value isn't possible in

1:48:14

the universe as it is, but we don't know that. It could be, maybe it's impossible,

1:48:18

but, that's right, are you so sure that you

1:48:20

really can zero it out? And now the way

1:48:22

I like to put this, it's like

1:48:25

a dilemma, you

1:48:27

can either be so

1:48:30

to

1:48:30

speak practically irrational or

1:48:33

theoretically irrational. Mhmm. Now

1:48:35

if you assign any

1:48:37

positive probability whatsoever to the Saint Petersburg gamble to the game,

1:48:39

then the infinity

1:48:42

then

1:48:42

clobbers you, and you

1:48:45

think

1:48:45

you're already enjoying infinite expected utility right

1:48:48

now. And

1:48:52

that seems practically,

1:48:53

irrational and you'd be prepared to you think

1:48:55

that actually, you'll think

1:48:55

pretty much anything you do

1:48:58

is infinitely good because There's

1:49:01

some prospect that you'll be playing the

1:49:03

Saint Petersburg game at the end. In terms of expected value, it's infinitely

1:49:04

good. That's a

1:49:06

worry on

1:49:07

the practical side. But

1:49:09

now the solution I'm imagining is you give probability zero. But now

1:49:11

the worry is that that's theoretically

1:49:13

irrational because

1:49:14

your evidence is not

1:49:18

so decisive. I mean, can you really with

1:49:21

such

1:49:21

confidence -- Yeah. -- rule

1:49:23

it out. Namely, it it

1:49:25

has the same probability as

1:49:27

a genuine impossibility Yeah. It

1:49:28

seems well. It's a contingent matter. Yeah. You've you've

1:49:31

got all this evidence Inceivable. Yeah. It's conceivable. And it's

1:49:33

you've got all this

1:49:36

evidence against the

1:49:37

Saint Petersburg game, but

1:49:39

not so decisive it seems that it just absolutely rules it out. Yeah.

1:49:42

It feels like structurally

1:49:44

improve like structurally this

1:49:46

whole, like, this path down to

1:49:48

solving the Saint Petersburg Paradox has become extremely similar

1:49:50

to Pascal's wager. Absolutely. Oh, okay. Right.

1:49:52

Right. Yeah. So I guess, Many people will

1:49:54

have heard of Pascal's wager at this stage, but maybe

1:49:56

do you wanna, yeah, remember, repeat it, give everyone a refresher?

1:49:59

Delighted to, because I've thought

1:49:59

a lot

1:50:02

about Pascal's wager. This is Pascal's argument for

1:50:03

why you should believe in God or

1:50:06

cultivate belief in God. And just

1:50:08

to locate it

1:50:11

historically, we should contrast Pascal's

1:50:13

wager to predecessors, which

1:50:14

purported to establish the existence

1:50:17

of god, prove the

1:50:19

existence of god, And

1:50:21

I'm thinking of things like

1:50:24

the ontological argument. Saint Anselm and

1:50:26

Descartes had one.

1:50:26

Thomas Aquinas had five ways

1:50:28

Descartes

1:50:30

had a cosmological argument and

1:50:33

there the conclusion was,

1:50:35

yeah, God exists. Pascal won't

1:50:37

have a bar of this. He

1:50:39

says, reason can decide nothing here.

1:50:42

Okay? You can't just buy some clever

1:50:44

proof established the existence

1:50:46

of God. But he turned his attention to the attitude we should have to, the

1:50:48

existence of God. Should you

1:50:50

believe in God or not?

1:50:54

in particular. And

1:50:56

that's now a decision problem. And that that's

1:50:58

why it's relevant to our discussion about

1:51:01

decision theory. And

1:51:02

he argued that you should believe

1:51:04

in God or at least wager for God

1:51:06

as he said. Think of that as

1:51:09

cultivate belief in God. short

1:51:10

version of the argument is because it's

1:51:12

the best bet. And

1:51:14

in fact, Hacking writes that

1:51:16

this was the first ever exercise

1:51:19

of decision theory -- Alright. -- which is

1:51:21

interesting -- Uh-huh. -- because of

1:51:23

all the cases, this is such a

1:51:25

problematic -- Right. -- case for this

1:51:27

in theory. I see. Which is ironic. Yeah. opening with, like, the with

1:51:29

the paradox. Basically, we're opening with the

1:51:31

paradox. Anyway, Lakatos

1:51:34

said that every research program

1:51:36

was born refuted, and maybe you could

1:51:38

say that of this very case, decision theory, was born, refuted with

1:51:41

a problematic case. Here's

1:51:43

how the argument goes. There

1:51:46

are

1:51:46

two ways the world could be.

1:51:48

God exists. God does not exist. Two things

1:51:50

you could choose to do. Believe in God,

1:51:54

or not believe or as Pascal says,

1:51:56

wager for God

1:51:57

and wager against God.

1:51:59

And here are the payoffs. If

1:52:02

God exists and you believe, you get infinite

1:52:05

reward,

1:52:08

infinite utility, and

1:52:10

infinity of infinitely happy lives as

1:52:12

Pascal says. And now

1:52:13

in every other case, God

1:52:15

does not exist or you don't

1:52:17

Believe in God, you get

1:52:20

some finite payoff. And

1:52:21

there's some controversy about the

1:52:24

case where God

1:52:25

does exist and you don't believe maybe

1:52:28

you get negative infinity, maybe you have infinite damnation.

1:52:30

But I think actually -- What's zero, I

1:52:32

guess, So

1:52:34

is it sufficient to put zero there?

1:52:36

Well, isn't it? Oh, no. I

1:52:38

think Pascal himself in the text

1:52:41

is telling us that really that's a

1:52:43

finite term that wager against God or don't

1:52:44

believe in God. God exists

1:52:47

as

1:52:47

only finite in in

1:52:49

utility, not not infinitely

1:52:52

bad.

1:52:52

Okay. So

1:52:52

that's the first premise. That's

1:52:55

the decision matrix as we say. Those are the utilities, Infinity,

1:52:59

the reward for believe

1:53:00

in God, God exists or wager

1:53:02

for God, God exists, finite everywhere else. Then

1:53:06

then the premise about the probability. The probability

1:53:08

that God exists should be positive.

1:53:10

So your credence, as we would

1:53:12

say, should be positive.

1:53:14

Non zero. Non zero?

1:53:17

as

1:53:17

we might say, it's possible that God exists.

1:53:19

So you should respect that by giving a positive probability. Okay? This theme

1:53:22

keeps coming

1:53:23

up. And now

1:53:25

Pascal does what we

1:53:27

recognize as an expected

1:53:29

utility calculation and just

1:53:31

do the sum,

1:53:33

you've got infinity

1:53:34

times some positive probability, plus some finite stuff,

1:53:36

add

1:53:37

it

1:53:38

up, you get infinity.

1:53:40

So

1:53:42

it looks like wagering for

1:53:44

god, believing in god has infinite

1:53:46

expected utility. Mhmm. And wagering against god

1:53:49

not believing in God. The expected value was some finite stuff plus some

1:53:51

finite stuff which is finite. Infinity

1:53:56

beats finite. Therefore,

1:53:58

you should believe in

1:53:59

God. That's

1:54:00

Pascal's wager.
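(A minimal sketch of that calculation, using floating-point infinity for the infinite reward and made-up finite numbers for the other three cells of the matrix.)

```python
import math

def wager_expected_utilities(credence_god, u_believe_god=math.inf, u_believe_no_god=-5.0,
                             u_dont_god=0.0, u_dont_no_god=10.0):
    """Expected utility of wagering for God vs. against, for a given credence that God exists."""
    eu_wager = credence_god * u_believe_god + (1 - credence_god) * u_believe_no_god
    eu_dont = credence_god * u_dont_god + (1 - credence_god) * u_dont_no_god
    return eu_wager, eu_dont

# Any positive credence at all, however tiny, makes wagering come out infinite.
print(wager_expected_utilities(1e-100))  # (inf, 10.0)
```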

1:54:02

I see. So is this

1:54:05

actually analogous to the St. Petersburg paradox

1:54:07

as as you're biting the bullet on it? Or are are are there any differences here there? It's

1:54:12

it's structurally similar

1:54:14

in that infinite utility is is what you get in in

1:54:15

the punch line. Yeah. Notice we got to it

1:54:18

in a different way in Pascal's

1:54:20

wager from

1:54:23

Saint Petersburg. In Saint Petersburg, we

1:54:24

were adding finite terms. Every

1:54:27

possible payoff was finite, but

1:54:29

just because of

1:54:30

the the way they are summed,

1:54:33

you get infinity in Pascal's wager.

1:54:34

You just get this single hit of infinity. It's

1:54:39

this this one possible outcome that

1:54:42

just gets you the infinite utility in in one shot. Okay? So that's, I guess, structural

1:54:44

difference, but but

1:54:45

I think there are

1:54:47

other parallels. Yeah. Okay.

1:54:50

And I guess in both cases, the infinity is like

1:54:52

really messing us up here because even like a

1:54:54

tiny possibility of an infinity just like

1:54:57

swamps everything else. and -- Absolutely. -- unless

1:54:59

you unless you literally assign zero probability to it then

1:55:01

-- Exactly. -- it's basically the only thing that matters in

1:55:03

the entire process. Yeah. Yeah. Yeah. Now,

1:55:06

let's let's do it. A strict

1:55:09

atheist who gives probability zero

1:55:11

to God's existence, I

1:55:13

guess, is not going to be moved

1:55:15

by Pascal's wager, but that may seem overly

1:55:18

dogmatic, you know, surely you should give at

1:55:20

least

1:55:20

some probability to

1:55:22

God's existence may be extremely small. As as you might say, maybe one in the Googleplex, but not

1:55:28

that

1:55:28

small. And as soon as you

1:55:30

give it positive probability, you're toast. You're toast. Or you have to -- Yeah.

1:55:33

-- wager for

1:55:36

God. Yeah. But

1:55:37

I think there is now a

1:55:39

a revenge problem for Pascal. Okay. Yeah. As

1:55:39

I like

1:55:43

to point out, Pascal's

1:55:44

wager is invalid. Let's grant him

1:55:46

his premises. So we grant him the

1:55:49

matrix of utilities, as

1:55:51

I said before. grant

1:55:54

him that you should give positive

1:55:56

probability to God's existence. Grant him

1:55:58

that you should maximize expected

1:55:59

utility. Fine.

1:56:01

It does

1:56:02

not follow that you should wager

1:56:04

for God by those lights.

1:56:06

Why not? What's that?

1:56:08

Yeah. Here's an

1:56:10

alternative strategy. and Pascal's definitely not recommending this.

1:56:12

Toss a coin,

1:56:13

heads you believe

1:56:15

in God; tails,

1:56:17

you don't. k?

1:56:18

Yeah. What's the expected utility

1:56:20

by Pascal's

1:56:21

lights for that? But with

1:56:23

probability half the coin lands

1:56:25

heads, and then you get

1:56:27

the infinite expectation that Pascal was talking

1:56:30

about. With probability half,

1:56:30

you get some finite expectation. the

1:56:35

expectation of

1:56:35

this mixed strategy, as we say, is

1:56:38

still Infinity. So

1:56:40

by the lights of expected utility theory,

1:56:42

this is equally good. This is another

1:56:44

way to get infinite expected value.
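(A sketch of the mixed-strategy point: any strategy that leaves even a sliver of probability on ending up wagering inherits the same infinite expectation, and only a strictly zero chance escapes. The finite payoff of 10 is a placeholder.)

```python
import math

def strategy_eu(p_wager, eu_wager=math.inf, eu_dont=10.0):
    # 0.0 * inf is nan in IEEE floats, so a strictly zero chance is handled separately,
    # which is itself the point: only literal probability zero escapes the swamping.
    if p_wager == 0.0:
        return eu_dont
    return p_wager * eu_wager + (1 - p_wager) * eu_dont

print(strategy_eu(0.5))    # toss a coin: inf
print(strategy_eu(1e-9))   # wager only if your lottery ticket wins: still inf
print(strategy_eu(0.0))    # 10.0
```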

1:56:46

Beating him with his own baton. Well, that's it. That's right. And I've just started. Okay.

1:56:48

Yeah. Well, I was gonna say, won't you

1:56:50

have a problem? It's like, okay. So

1:56:54

flip the coin and you get whatever result says that you shouldn't believe in

1:56:56

God. Yeah. Now it seems like you wanna flip the

1:56:58

coin again. Yeah. Okay. And that's often a

1:57:01

problem with mixed strategies. Maybe you don't like. the way

1:57:03

the point lands, you want to have

1:57:05

another shot. But it is a

1:57:07

strategy that is alternative to

1:57:09

Pascal's. He was certainly not recommending the

1:57:11

same. This has the same expected

1:57:13

utility. And

1:57:14

now we just

1:57:16

run down the slippery

1:57:19

slope okay, now suppose you wager for god

1:57:21

if and only if your lottery

1:57:23

ticket wins

1:57:23

in in the next

1:57:25

lottery and let's suppose there's

1:57:28

a billion tickets in the lottery, one

1:57:30

in a billion times infinity, still infinity. And so do the calculation, infinite expectation

1:57:32

for that

1:57:36

strategy -- Yeah. -- the

1:57:38

lottery ticket. I

1:57:39

wait to see whether a meteor quantum

1:57:44

tunnels through this room before the

1:57:46

end of our interview, some tiny probability of this happening, I don't know, one in the Googleplex,

1:57:48

call it, multiply

1:57:51

that by infinity, that by infinity

1:57:54

And

1:57:54

I

1:57:55

have infinite expected utility for this

1:57:57

strategy, wager for God, if and

1:57:59

only

1:57:59

if the meteor happens. And now it

1:58:02

starts to look like whatever I do -- Yeah. -- there's some

1:58:03

positive probability that

1:58:06

I will

1:58:07

get

1:58:08

the -- Yeah.

1:58:10

-- infinite payoff. The infinite

1:58:12

payoff. And and that has infinite expected utility. Even

1:58:14

if I try to avoid belief in God, there's some probability that

1:58:16

I'll fail. Yeah. You'll accidentally

1:58:18

believe in God. And then I

1:58:22

get the infinite expected utility at

1:58:24

that point. And so

1:58:25

my conclusion was not only as

1:58:28

Pascal's wager invalid,

1:58:30

namely, the conclusion that you should wager for God does not follow from the premises;

1:58:32

It's invalid in, so

1:58:34

to speak, the worst possible

1:58:36

way:

1:58:36


1:58:38

namely, it recommends everything equally. Everything

1:58:40

by those lights should be

1:58:42

equally good. Whatever you do

1:58:45

has infinite expected

1:58:46

utility? Yeah. It is it's it's interesting

1:58:48

that the the power of the infinity to create

1:58:51

the paradox here is so powerful that

1:58:53

it also destroys the paradox or Just, like,

1:58:55

ends up producing some garbage result. Like, everything is

1:58:57

permissible. Yeah. That's it. So the the

1:58:59

great strength of Pascal's wager was

1:59:01

you didn't need to worry about

1:59:03

the probability that you assigned to God's existence. Infinity

1:59:06

just swamps it -- Yeah. -- as long as it's

1:59:08

positive. And

1:59:10

now the revenge problem is

1:59:13

well, that's swamping effect of infinity.

1:59:15

Now it affects just anything

1:59:17

you choose to. There'll be

1:59:19

some

1:59:19

probability that you wind up wagering for God and away we

1:59:21

go. Yeah. I guess you don't even have to

1:59:23

say

1:59:23

that you necessarily get the

1:59:26

infinite payoff via the original, like,

1:59:28

envisaged believing a God. It

1:59:30

could be like just anything you do might pay off in infinity. Right? That's that's right. There's all kinds of different ways that happen. Yeah.

1:59:32

That's it. Yeah. No. Okay. A Saint Petersburg

1:59:34

game may be waiting down the road. Exactly.

1:59:39

Yeah. You you you go past the church, go to

1:59:41

the casino, and yeah. Yeah. Yeah.

1:59:43

Okay. It seems like there's a

1:59:45

very natural response here, which is

1:59:48

to say, well, in

1:59:50

situations where you have, like, different possible paths to infinite positive

1:59:52

values. Yes. Wouldn't it

1:59:54

be very natural to choose

1:59:56

as a tie breaker, the one that has the highest probability of Yeah. So if

1:59:58

you're, like, now I'm weighing up the casino against a bit of a gamble in the

2:00:01

church, and then I kind of

2:00:03

have to make a decision one

2:00:05

has a higher probability of getting me getting me

2:00:07

infinite value. That's right. Yeah. Yep. Very natural reply. And a few people

2:00:09

have made it. Schlesinger, for example, has

2:00:11

a version of that. and

2:00:15

it's certainly very plausible. Well

2:00:17

first point, notice you've

2:00:18

added an extra premise, maybe that

2:00:20

you could put it this way.

2:00:22

It's like a lexical ordering. When some

2:00:25

action uniquely maximizes expected

2:00:28

utility, do that.

2:00:30

When you have ties,

2:00:32

you

2:00:33

need a tie breaker and you

2:00:35

offered a tiebreaker: now look at the probabilities and maximize the

2:00:36

probability of, in

2:00:38

this case, the infinite payoff.

2:00:42

That's

2:00:42

okay. That's fine. But

2:00:44

I think my

2:00:45

point still stood from before that

2:00:47

the argument as stated

2:00:49

was

2:00:50

invalid and it needed help from

2:00:52

this extra premise, this tie breaker -- Yeah.

2:00:54

-- when you have the ties. A lexical decision making

2:00:56

procedure

2:00:56

is one that

2:00:59

has multiple steps where you only proceed

2:01:01

to the later steps if there's a draw on the first process. It's a generalization actually, I think,

2:01:03

from lexical ordering

2:01:07

being alphabetical ordering, where when you're ordering

2:01:09

things alphabetically, you look at the first letter and you only bother to look at the second letter and then

2:01:11

order it based on the second letter

2:01:14

if the first letter is the

2:01:16

same. So with alphabetical

2:01:18

ordering, of course, you look at the first letter and you bring all of the a's together and then all of the b's together, only then do

2:01:20

you look at the second letter and rank them

2:01:22

by the second letter.

2:01:27

So: a decision-making process where you do the first step and then a second

2:01:29

step. Okay. Back to the show. This takes

2:01:31

us back to

2:01:33

our discussion of

2:01:35

heuristics before and you've got multiple ways of

2:01:37

realizing, in

2:01:38

this case, the maximal expected value.

2:01:40

And just to say

2:01:42

a bit more about this reply.

2:01:44

It's it's very intuitive what you said.

2:01:46

It'd be nice to to generalize what really

2:01:49

is the rule. So

2:01:51

in this case, we had something of

2:01:53

infinite utility, but what what's the general rule that we're applying?

2:01:56

Because it seems like so,

2:01:58

yeah,

2:01:58

you said lexical, which is kinda

2:02:00

like indicating there's two

2:02:02

steps here. It seems like the rule is: if infinite utility is possible, maximize the probability

2:02:04

of infinite -- Yeah.

2:02:06

-- payoff. If it's not,

2:02:09

then do normal expected value. Yeah. I guess a trouble here is gonna be that infinite value is on the

2:02:11

table. We said we assign it, like, non

2:02:14

zero value possibility in, like, any

2:02:16

action and

2:02:19

so actually we never get to the second step. Normal expected value never

2:02:21

arises because we're always stuck on the

2:02:23

first step. Just trying to

2:02:26

maximize the probability of infinite

2:02:28

payoff. Well, and then there are so

2:02:30

many routes to getting this infinite expected utility. And it

2:02:32

seems like you're going to be reaching for

2:02:34

the tiebreaker all the time now. There'll

2:02:36

be, there

2:02:38

are just too many ties. And

2:02:40

it'd be

2:02:40

nice to to really clarify what

2:02:43

the rule is, this lexical

2:02:45

priority rule, it

2:02:46

made sense in Pascal's wager as you said

2:02:48

it, but that seems to be a special case of some

2:02:50

more general thing and it would be nice to

2:02:54

give that some foundations. And utility theory

2:02:57

has it seems

2:03:00

some

2:03:01

firm

2:03:01

foundations in

2:03:04

terms of preference axioms and so called representation

2:03:06

theorem, which a lot of people appeal

2:03:08

to. It'd be nice to

2:03:10

have something parallel to that for

2:03:14

this enhanced decision theory where you have this lexical rule. And

2:03:16

I'll just put

2:03:19

in a plug for Paul

2:03:21

Barthor who has offered something like that, I I think he calls it relative

2:03:23

utility theory -- Okay. -- which

2:03:26

is kind of a generalization

2:03:30

of

2:03:30

the

2:03:31

familiar expected utility theory. It

2:03:33

involves ratios -- Uh-huh. -- utility ratios.

2:03:35

I see. It it's a

2:03:37

generalization into the infinite cases. That's right.

2:03:39

It's robust to them. Yes. So he he he,

2:03:41

I think, can deliver your intuition

2:03:43

and Schlesinger's rule. Yeah.

2:03:46

And it will fall out, which is

2:03:49

good because now it's giving a

2:03:51

more foundational support

2:03:52

for this rule. Okay.
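(A toy sketch of the lexical structure being gestured at here, not Bartha's actual relative utility theory: compare options first by their probability of an infinite payoff, and only fall back on the ordinary expected value of the finite outcomes when that first comparison ties. The option names and numbers are invented.)

```python
import math

def lexical_score(option):
    """option: list of (probability, utility) pairs."""
    p_infinite = sum(p for p, u in option if math.isinf(u) and u > 0)
    finite_ev = sum(p * u for p, u in option if math.isfinite(u))
    return (p_infinite, finite_ev)  # tuples compare lexicographically in Python

church = [(1e-9, math.inf), (1 - 1e-9, 1.0)]    # tiny shot at the infinite payoff
casino = [(1e-12, math.inf), (1 - 1e-12, 5.0)]  # even tinier shot, better finite prospects
dinner = [(1.0, 3.0)]                           # no shot at all

options = {"church": church, "casino": casino, "dinner": dinner}
print(max(options, key=lambda name: lexical_score(options[name])))  # church: wins on the first component
```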

2:03:53

So zooming out for a minute. We

2:03:56

got the Saint Petersburg game. We got the Pascal's wager, which, like,

2:03:58

they're both introducing infinities by different routes, and then they seem

2:03:59

to, like, just

2:04:03

really create an awful lot

2:04:03

of trouble for expected value. I guess, so

2:04:06

I wanna go out tonight and, like,

2:04:09

choose what movie to watch and, like, make

2:04:11

decisions based on expected value. And so,

2:04:13

like, waiting things by their probability,

2:04:15

clearly. Yeah. I wanna feel like I'm

2:04:17

doing the right thing here, but these people are coming up and

2:04:20

saying, hey, I've got these paradoxes for expected

2:04:22

value that produce, like, garbage results or at

2:04:24

least, like, require

2:04:26

totally rethinking it. Yeah. How comfortable should I

2:04:28

feel when I use expected value to

2:04:30

make decisions in in life? Yeah.

2:04:32

Are these are these, like, wacky

2:04:34

cases with convergences to infinities and putting infinities and things. Are

2:04:37

they, like, fundamentally a problem? Or are

2:04:39

they just more curiosities? Yeah.

2:04:42

Well, one one solution is I guess I mentioned it earlier, you

2:04:44

really do just zero out these crazy

2:04:46

cases. Yeah. You don't even give them

2:04:48

one in the Google plex, I

2:04:50

see. Credence. Yeah. And that that would

2:04:53

certainly quarantine them. Yes, I have made versions of

2:04:55

this worry in

2:04:55

a few places

2:05:00

how even

2:05:01

just everyday decisions seem to be contaminated

2:05:03

by, in this case, Infinity, I've also talked about it

2:05:05

in relation to a

2:05:07

game that has no

2:05:10

expectation at all, the so

2:05:12

called Pasadena game.

2:05:13

And the game

2:05:14

itself may seem pathological.

2:05:17

But if you give it any credence, then even

2:05:19

a simple choice, like, where should I go out for dinner tonight? Will be Chinese

2:05:24

or pizza? if you give

2:05:26

those prospects, if you give some probability to the crazy stuff happening

2:05:28

at the end, that

2:05:31

easy decision gets infected Yeah.

2:05:34

So so I guess you

2:05:36

have to do the dogmatic

2:05:38

thing and just say, look, I'm

2:05:40

just zeroing out. or even more even more

2:05:42

because you can go dogmatics. Well, you can choose your dogmatism. You can either be when things become sufficiently weird, I

2:05:44

give them zero probability -- Yeah. -- it

2:05:46

seems dogmatic. Or I guess you can be

2:05:50

I refuse to consider infinities. Just give them some finite

2:05:53

positive value and leave it at that. Yeah. Or

2:05:55

you just have to become a

2:05:57

fanatic who pursues infinite values -- Well, all the

2:05:59

time. -- well,

2:05:59

and you heard me before was putting

2:06:02

in an argument for for the crazy

2:06:04

thing. That's right.

2:06:06

And So for practical purposes, I I think you

2:06:08

you have

2:06:08

to be dogmatic. And maybe

2:06:10

even in some cases, not

2:06:13

just being dogmatic and giving probability

2:06:15

zero to scenarios. In some you just don't even consider them.

2:06:17

They're just not even in your space of

2:06:19

possibilities to begin with.

2:06:22

It's it's not that you you recognize it and give it probability

2:06:24

zero. This is one statisticians

2:06:26

reply that I've heard. You

2:06:30

just don't even put it in your model of the world.

2:06:32

Yeah. Yeah. I mean okay. So

2:06:34

to speak

2:06:34

up for being crazy for

2:06:37

a minute, like imagine that we we really did think

2:06:39

that infinite utility

2:06:40

was a a live possibility that we

2:06:42

had some, like, theory of the universe

2:06:45

in which, you know, we could

2:06:46

possibly let us say that we we didn't, for example, think that

2:06:49

the universe was gonna, like, peter out either become very

2:06:51

big or very small -- Yeah. -- and

2:06:53

such that, like, we're in a steady state universe and maybe

2:06:55

set up a system where you like do live forever. There's like nothing that interferes

2:06:57

with your life. And so maybe you could get

2:06:59

an infinite utility that

2:07:02

way. So we have some, like, theory that makes

2:07:04

it, like, feel not infinitesimally likely, but maybe,

2:07:06

like, one in a thousand likely. Mhmm. Then

2:07:09

it feels less crazy to say, well, you

2:07:11

should orient your life around trying to do that, trying to get the infinig utility forever because

2:07:14

the universe permits that.

2:07:17

Yeah. So maybe we can bite

2:07:19

the bullet. I guess yeah. Another way to go is to give infinity a

2:07:24

more nuanced treatment. Mhmm. So so far, I

2:07:26

was imagining it'll be hard to convey this just over the podcast,

2:07:28

but I'm I'm sort of drawing

2:07:30

the figure eight of infinity. It's that

2:07:33

figure eight on its side, infinity. And that's the un nuanced infinity that

2:07:35

seems to

2:07:36

have these problems.

2:07:39

If you halve it, or

2:07:42

you multiply it by one in the Google plex,

2:07:44

you still get the

2:07:46

same sideways, figure eight infinity

2:07:48

back. But if you had a

2:07:50

more mathematically

2:07:51

nuanced treatment of infinity

2:07:53

where halving something or multiplying

2:07:55

it by one in the

2:07:57

Googleplex made a difference then we

2:07:59

might

2:07:59

get the ordering

2:07:59

that we

2:08:00

want again. This is another way

2:08:02

of handling the problem by

2:08:04

the way of which led

2:08:07

to your lexical rule Maybe

2:08:09

if we just

2:08:10

distinguish among different infinities. Mhmm. God, I'm scared of it.

2:08:13

This

2:08:13

this just seems

2:08:16

like it's create more problems. And

2:08:18

that and it it's also scary just the sheer mathematics

2:08:20

of it -- Yeah.

2:08:23

-- is formidable. But it

2:08:25

turns out that there

2:08:28

are

2:08:28

these systems, for example,

2:08:30

the surreal numbers,

2:08:31

hyperreal numbers where you

2:08:33

have infinities and multiplying them makes a difference, multiplying by

2:08:36

half

2:08:36

or what

2:08:39

have you will change

2:08:41

the value will make it smaller in

2:08:43

this case. And so maybe now you've get the ordering

2:08:44

that you're hoping

2:08:46

for and you can choose

2:08:50

Chinese over pizza after all if

2:08:52

you keep track of the sizes of

2:08:54

all of these infinities. Yeah. And that's that's

2:08:57

been a bit of a cottage industry too

2:08:59

of -- Okay. -- doing these highly

2:09:01

technical, highly sophisticated refinements of decision theory.
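(A toy sketch of the "more nuanced infinity" idea. This is not real surreal or hyperreal arithmetic, just a value with an infinite part and a finite part, compared lexicographically, so that scaling by a small probability genuinely shrinks it.)

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Value:
    infinite_part: float  # coefficient on the infinite unit
    finite_part: float

    def scale(self, p):
        # Unlike float('inf'), multiplying by a probability actually changes the value.
        return Value(self.infinite_part * p, self.finite_part * p)

INFINITE = Value(1.0, 0.0)
chinese = INFINITE.scale(1e-9)    # tiny chance of the infinite prize
pizza = INFINITE.scale(1e-12)     # even tinier chance

print(chinese > pizza)            # True: the ordering is restored
print(chinese > Value(0.0, 1e6))  # True: still beats any merely finite value
```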

2:09:03

Okay. Yeah. Okay. Let let me

2:09:05

make another line of

2:09:08

argument here. Infinities

2:09:09

mess shit up. Yeah. So

2:09:10

okay. Some listeners might be familiar with

2:09:12

the Banach-Tarski

2:09:15

Paradox. Yeah. So basically, the

2:09:17

idea is, like, take take a sphere, a solid sphere. Yeah. If you divide it into an infinite number of

2:09:19

points, the mathematicians in the

2:09:22

audience might be annoyed by

2:09:25

divide it into an infinite number of points and

2:09:27

then move them around in some special way. And it seems like you can get two full spheres out of

2:09:29

the matter of the

2:09:32

other, like, the volume of the

2:09:34

original sphere. Yeah. So it's like you've doubled the amount of volume that you have just by like splitting something into infinite points and then putting

2:09:39

it together again. I don't think that

2:09:41

that could happen in the universe. Probably, it doesn't seem like that happens. And there's, like, maybe just whenever we

2:09:43

put infinities into

2:09:47

into these decisions, we're just gonna

2:09:49

find lots of problems and lots of things that won't that will never happen in the real world and we and

2:09:51

so we should be okay to dismiss infinities

2:09:56

and say, and, like, throw them out because

2:09:58

of, like, some just just other ways that they make a life unlivable. I know. Great. Yeah. Feynman was

2:09:59

told about the Banach-Tarski Paradox,

2:10:02

and it was presented to him

2:10:05

involving an orange. Yeah. You've got an orange of a certain size and by suitably

2:10:07

cutting it up, you can

2:10:11

create two oranges of

2:10:14

that size, in fact, you can

2:10:16

keep multiplying them. And Feynman

2:10:18

bet that that was just

2:10:20

nonsense. That wasn't true.

2:10:22

And then someone explained

2:10:23

to him how you do it. There's this infinitely precise

2:10:25

surgery and it involves non measurable

2:10:27

sets and so on.

2:10:29

And Feynman said, come on, I thought

2:10:31

you meant a real orange. Now, of

2:10:34

course, we understand that

2:10:36

reaction. But

2:10:38

I feel like saying, yeah,

2:10:40

but that doesn't really solve the paradox,

2:10:42

you know. Well, thank God, we can't

2:10:45

do infinitely precise surgery on

2:10:47

oranges; hence our theory of measure, okay, is safe. You

2:10:49

feel like saying: no. Of course,

2:10:51

this is highly implausible that

2:10:54

you can't actually do this.

2:10:56

But aren't you

2:10:57

worried that there's something wrong with our theory of measure

2:10:59

that it seems to allow this result?

2:11:01

And I feel like saying

2:11:03

something similar about Decision

2:11:05

theory. Notice that Richard Jeffrey's reply was rather like Feynman's

2:11:07

regarding

2:11:11

Banach-Tarski. Jeffrey

2:11:13

said with regard to the Saint Petersburg Paradox, anyone who offers

2:11:15

you the Saint Petersburg game is a liar. And

2:11:18

of

2:11:19

course, that's true no

2:11:22

one

2:11:22

in the real world is gonna offer you the

2:11:24

Saint Petersburg game genuinely. But

2:11:26

I still have that niggling

2:11:28

feeling: look, there's still something

2:11:31

wrong with just our theory of measure in the Banach-Tarski case,

2:11:33

and with expected utility

2:11:35

and rational decision

2:11:36


2:11:39

in the case of Saint Petersburg. And it'd be

2:11:41

nice

2:11:41

to solve that

2:11:43

problem. Yeah.

2:11:44

Maybe that's now

2:11:46

the philosopher in me rather than the physicist or the engineer in me.
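For listeners who want to see the niggle concretely, here is a small sketch assuming the standard Saint Petersburg setup (a fair coin is flipped until it lands heads; if the first head arrives on toss n, the payoff is 2^n dollars). Every term of the expected-value sum contributes 1, so the sum diverges, and a simulated running average never settles down. The code and seed are illustrative only.

```python
import random

def st_petersburg_payoff(rng: random.Random) -> int:
    """One play: flip a fair coin until heads; the payoff doubles for every tail before it."""
    payoff = 2
    while rng.random() < 0.5:   # tails, keep going
        payoff *= 2
    return payoff

# Each outcome contributes (1/2**n) * 2**n = 1 to the expectation,
# so the partial sums of the expected value grow without bound.
print(sum((0.5 ** n) * (2 ** n) for n in range(1, 41)))    # 40.0 after just 40 terms

rng = random.Random(0)
total, plays = 0, 0
for checkpoint in (10_000, 100_000, 1_000_000):
    while plays < checkpoint:
        total += st_petersburg_payoff(rng)
        plays += 1
    # The sample mean never settles: rare, enormous payoffs keep jolting it around.
    print(plays, round(total / plays, 2))
```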

2:11:48

Well, it's a very common theme, I guess, in

2:11:50

philosophy. That one flips between like the

2:11:53

sublime realm of ideas. Yeah. You're,

2:11:56

like, highly idealized situations, and then you, like, bring it

2:11:58

back into the world. And you have to say, like,

2:12:00

is this still relevant? It's like you

2:12:02

do a bunch of math and then you're left making judgment calls

2:12:04

on whether it's still relevant. Yeah. As

2:12:06

you've made it like stranger and stranger.

2:12:11

Yep. That's right. Philosophers often have highly

2:12:13

fanciful thought experiments

2:12:16

to make some philosophical

2:12:19

points like Frank Jackson imagined Mary

2:12:21

in a room and she she knows all the physical facts,

2:12:23

but she's never seen red

2:12:27

and then she sees red for the first time. It seems that she's

2:12:29

learned something. The

2:12:31

Chinese room from

2:12:32

Searle is a

2:12:34

famous thought experiment, Putnam, had

2:12:36

Twin Earth and so on.

2:12:38

Now it seems to me,

2:12:42

philosophically unsatisfying to reply,

2:12:43

well, there's no such room. There's

2:12:45

no room with Mary in it.

2:12:47

There's no Chinese

2:12:50

room. Twin Earth? Come on, this is all rubbish. There's

2:12:52

no Twin Earth. Yeah. Of course, we know

2:12:54

that. We never said there was. Yeah.

2:12:57

But you still feel these thought experiments

2:12:59

and Saint Petersburg are put in the same

2:13:02

category -- Yeah. -- like Banach-Tarski, they

2:13:04

are

2:13:05

putting pressure on some

2:13:07

entrenched notion of ours. Yeah. I mean, is it because you

2:13:09

want your theories of things to be,

2:13:11

like, to apply in all cases, to

2:13:14

not be contingent on, like, specific empirical

2:13:16

facts about how

2:13:18

finely you can cut things. Yeah. Exactly. Yeah. Yeah. In the case of decision theory, you

2:13:23

know, fingers crossed that there

2:13:25

are no sufficiently large bank accounts and then you look at the world. Okay. We're good. In the case

2:13:28

of Banach-Tarski,

2:13:32

Fingers crossed that you can't do

2:13:34

that surgery on oranges. Phew. Yeah. Turns

2:13:36

out you can't and

2:13:39

all is good. No.

2:13:40

You you still think the

2:13:42

mere possibility, the conceivability, as we were

2:13:43

talking about before or is

2:13:45

enough to

2:13:46

break it? Is enough to

2:13:48

make one worry -- Yeah. -- the thought experiments

2:13:51

are a problem enough. Okay. Well, we should move on

2:13:53

from expected

2:13:55

value. I mean, for people in the audience who have

2:13:57

reservations about expected value in, like, ordinary life: so many people hear about these cases

2:13:59

and they're like,

2:14:02

I'm now going to have serious reservations about using expected value

2:14:04

to make decisions in my life on what to

2:14:06

do. Yeah. I guess I probably know

2:14:09

a very outsized fraction of the number of people in the world

2:14:11

who actually do this. But do you have

2:14:14

any advice for them? It it feels like this

2:14:16

field hasn't wrapped

2:14:18

up yet. We haven't answered this one. Yeah. There are some

2:14:20

revisions of expected utility theory

2:14:22

that

2:14:22

might provide some therapy, like

2:14:27

the risk weighted theory or the weighted

2:14:29

linear utility theory, that might provide some relief. I'm

2:14:32

just guessing that

2:14:32

all of these are

2:14:33

gonna have their own paradoxes. But I don't

2:14:35

know. Actually,

2:14:38

in fact, let's just cut to the chase. They

2:14:40

do. And in fact, they'll

2:14:43

be either

2:14:43

reformulations or the, you know, the negative

2:14:46

version, like a negative Saint Petersburg game, that causes

2:14:48

trouble for some of these theories.

2:14:50

So the problems are still lurking. But in a way,

2:14:52

I

2:14:53

think we we can

2:14:55

quarantine the problem. It's

2:14:57

a little bit like what Hume

2:14:59

said. He said something along these lines: that when he leaves his philosophizing

2:15:01

and

2:15:01

he goes out

2:15:04

into the world,

2:15:06

he plays billiards or whatever.

2:15:09

You know, he leaves his

2:15:11

philosophical problems

2:15:11

behind. And and of

2:15:13

course, we we shouldn't be

2:15:16

paralyzed by the

2:15:16

Saint Petersburg game or what have you -- Yeah. --

2:15:18

I mean, I think they're important to think about

2:15:20

conceptually while

2:15:23

we're being philosophers. In the real

2:15:26

world, we will

2:15:26

not write out a decision matrix that'll have these problematic features. Pascal's

2:15:28

wager actually is interesting --

2:15:30

Yeah. -- in that case, I

2:15:34

think there is a possibility of creating, like, new

2:15:36

universes or, like, infinities

2:15:38

do seem possible maybe in the universe

2:15:40

itself. Sorry, maybe that's not the point

2:15:42

you want. But Yeah. And also, I mean, people really do

2:15:45

take seriously what Pascal

2:15:47

said. Mhmm. Okay. Yeah. Of course,

2:15:49

people do that. Yeah. There are

2:15:51

lots of Christians

2:15:52

who say that Pascal said it

2:15:55

exactly right, and not just Christians; there'll be other religions which

2:15:57

will acknowledge some source

2:15:59

of

2:15:59

infinite utility. And

2:16:02

so now this becomes a

2:16:04

practical problem. This is

2:16:05

not like Banach-Tarski and cutting up

2:16:08

oranges, you know, given certain

2:16:09

beliefs that people really have in the real world, it seems like a

2:16:11

live issue. Yeah.

2:16:13

Like a live issue.

2:16:15

Yeah. I guess, It seems like

2:16:17

we at least don't have an impossibility proof. We don't have an impossibility

2:16:19

theorem yet that shows that we can't have a satisfying

2:16:22

decision theory. So

2:16:24

the dream

2:16:26

remains. Keep working. Keep philosophers employed and keep GPI

2:16:31

research -- funded. -- We've got to pay higher salaries to get better people into research. Absolutely.

2:16:37

Look at how hard the problems are. Alright. Let's push

2:16:39

on and talk now about counterfactuals, which have

2:16:42

been one of your big recent passions.

2:16:44

Yeah. To start with yeah. Can you

2:16:46

explain what are counterfactuals? I guess we use the term loosely quite a lot, but, like,

2:16:51

yeah, precisely, what are they? Yeah. They're

2:16:54

conditionals. They're if-then statements, and typically: if it were

2:16:56

the case that P, it

2:16:58

would be the case that

2:17:00

Q; or if it had

2:17:02

been the case that P, it

2:17:05

would have been the case that

2:17:07

Q. And typically, they presuppose that the first

2:17:09

bit what we call the antecedent, the p bit

2:17:12

is false. and

2:17:15

then they have us say

2:17:17

something about a hypothetical scenario. Yeah.

2:17:19

I mean, sometimes we might allow the

2:17:22

antecedent to be true, but the typical

2:17:24

cases are where P, in 'if P then Q',

2:17:26

is false. So, say:

2:17:29

If I were able to fly, I could

2:17:31

travel to New York. The antecedent is I were

2:17:33

able to fly. Yep. But -- Yeah. -- there's a proposition that's

2:17:35

false in the actual world, that

2:17:38

I can fly and then

2:17:39

we imagine

2:17:40

a a situation in which that is realized. Yes, I see. Okay. So

2:17:42

that's the antecedent. And what's the

2:17:48

second bit? The consequent. Yes.

2:17:50

So in 'if P then Q', P is the antecedent and Q is the

2:17:52

consequent. Is

2:17:55

there, like, a classic, what do you think,

2:17:57

the paradigm example of a counterfactual that is used in philosophy when you're

2:17:59

teaching this to students? Yes. Well, I'll give you a couple. Actually,

2:18:03

this is a good way of

2:18:05

bringing out the difference between two kinds of conditionals, the counterfactual as

2:18:07

opposed to what's called indicative conditional. If

2:18:13

Oswald didn't kill Kennedy, then

2:18:15

someone else did. Now

2:18:17

that seems true, and this

2:18:20

is an indicative conditional,

2:18:22

So

2:18:22

we're assuming that Kennedy really was killed. We know that,

2:18:25

but we're not sure who did

2:18:27

it. If Oswald didn't kill Kennedy,

2:18:29

then someone else did, because one way or another he was killed. I see. Okay. Now compare

2:18:31

that to

2:18:35

if Oswald hadn't killed Kennedy,

2:18:37

someone else would have. Now that's

2:18:38

quite different, and maybe far less

2:18:43

plausible. I mean, that suggests

2:18:45

there was a backup assassin

2:18:47

maybe some conspiracy, and we could easily deny that. Yes.

2:18:51

Anyway, or maybe even

2:18:53

easier. How about: we're not completely sure who

2:18:55

wrote Hamlet. If Shakespeare didn't write

2:18:57

Hamlet

2:18:57

then someone else

2:18:59

did. That seems

2:19:01

true because we know

2:19:03

Hamlet was written. Yeah.

2:19:04

Okay. If Shakespeare hadn't written Hamlet

2:19:06

-- Someone else would have. -- Sounds crazy.

2:19:08

That sounds crazy as if Hamlet was just

2:19:10

fated to be written and Shakespeare just

2:19:14

happened to be the vehicle for it,

2:19:16

but someone else would have stepped in if need be

2:19:18

to put it together in their mind.

2:19:21

Yeah. So what what's the name

2:19:23

for these two different kinds? Okay. Indicative.

2:19:26

Yeah. Okay. And counterfactual. We use the subjunctive

2:19:28

conditional that

2:19:30

mood typically to express the counterfactual.

2:19:32

Yeah. We'll call

2:19:33

those counterfactuals where typically there's the presupposition that the antecedent

2:19:35

-- Yes. -- is false. Yeah.

2:19:39

What what's

2:19:39

what's the history of the study of counterfactuals?

2:19:41

I imagine, surely, I mean, people have been making statements of this kind -- Yeah. -- like, since humanity began speaking, you imagine.

2:19:44

Yes. But It

2:19:48

seems like discussion of this kind of would/could

2:19:51

if-then. It doesn't seem like that's

2:19:53

kind of in the classic pantheon of old

2:19:55

school philosophers. They they didn't seem to think

2:19:57

about very much. I think you're right. They did talk about conditionals,

2:19:59

for example, the

2:19:59

stoics, talked

2:20:05

about conditionals, Diodorus, Chrysippus, but often they

2:20:07

were talking more about what we'd now call the indicative conditional, for

2:20:09

example, they had what

2:20:12

we would now call

2:20:14

the material conditional. That has

2:20:16

a certain truth table. Basically, it's

2:20:19

true however P and Q turn

2:20:21

out except where P is true

2:20:23

and Q is false; then if

2:20:25

p then q is false, otherwise true

2:20:27

in every combination. Why did they care about that?

2:20:29

They were doing logic. They care about logic more

2:20:32

generally. And certainly

2:20:34

the material conditional is the standard part -- Yeah.

2:20:36

-- of logic, and they got onto it early. So

2:20:38

yeah. So when did the study of this kind of

2:20:41

thing first flourish? Counterfactuals? Yeah. I

2:20:43

would say, more recently. I'd

2:20:45

say it started to hit its heyday. Well, in

2:20:48

the forties, Chisholm and

2:20:50

Goodman began to write about

2:20:52

it.

2:20:52

Goodman wrote some

2:20:55

classic stuff, especially in Fact, Fiction,

2:20:57

and Forecast. That book has

2:20:59

a classic treatment of counterfactuals. And

2:21:01

then I think

2:21:02

the real heyday for counterfactuals came a

2:21:06

bit later in the 60s,

2:21:09

perhaps towards the end of

2:21:11

the 60s, especially Stalnaker and Lewis and their classic

2:21:15

possible worlds accounts, you know. Roughly the

2:21:17

idea is that 'if it were the case

2:21:18

that P, it would be the case that Q' is true just

2:21:22

in case, at the closest P-world,

2:21:25

Q is the case -- I see. -- By P-world, I mean

2:21:27

a world where P is true. Yeah.

2:21:31

And then they had this famous

2:21:33

debate about, well, is there a unique closest -- Mhmm. -- p

2:21:35

world? Stalnaker thought there was;

2:21:38

Lewis thought not. And

2:21:39

by the way, notice this

2:21:41

is a use of a heuristic from earlier:

2:21:43

when Stalnaker talks about the closest P-world, there

2:21:48

are

2:21:48

two ways you could challenge that. There could be

2:21:50

more than one P-world; well, there could be ties

2:21:53

-- Yes. -- and I think there are. If Bizet

2:21:55

and Verdi were compatriots, would they

2:21:56

have been French? Would they have been

2:21:58

Italian? Seems maybe that they're

2:22:00

equally close possibilities. And

2:22:02

going in the other direction, maybe

2:22:04

there's no Yeah.

2:22:05

No closest world, just ever closer worlds. So

2:22:07

I'll

2:22:07

give you an

2:22:09

example of that if I want

2:22:11

to. It could be relevant later.

2:22:13

Lewis, imagines

2:22:14

the following case. If I were taller

2:22:16

than

2:22:17

seven feet, how tall

2:22:19

would I be?

2:22:20

And let's imagine that

2:22:23

the closer I am hypothetically to my actual height, the

2:22:26

better

2:22:27

for closeness, in a sense

2:22:29

that's relevant here. would I be

2:22:31

seven foot one? Mhmm. Well, Lewis's thought is that that's a

2:22:33

gratuitous departure from my

2:22:35

actual height. Seven foot

2:22:37

half an inch would

2:22:40

be closer, seven foot a quarter of an inch

2:22:42

closer still -- Yeah. -- an infinite

2:22:44

sequence, a bit like Zeno's paradox --

2:22:46

Yeah. -- ever closer worlds, none closest,

2:22:48

and that's meant to be trouble for Stalnaker.

2:22:51

Perhaps later on, I'll argue it'll turn out

2:22:53

to be trouble for Lewis. But anyway, that was a classic

2:22:55

period in the study of counterfactuals. Is

2:22:59

this under the banner of like a modal

2:23:01

logic? Is this basically what this is? Yeah.

2:23:03

It's yeah. You could certainly say that; certainly counterfactuals seem to have a modal element. What

2:23:07

does modal mean here? It means

2:23:09

something like it's not just a matter of how things

2:23:12

are actually

2:23:14

somehow

2:23:15

possibility is involved or

2:23:17

perhaps necessity. Yeah. I would call probability a modality

2:23:19

too, but not just things

2:23:24

as they actually turn out. Okay. So why

2:23:26

should we care about counterfactuals and and conditionals

2:23:28

and stuff? Like, from one point of view,

2:23:30

it seems like all pretty straightforward. We like

2:23:32

to use terms all the time. You wanna get confused. Everything seems

2:23:34

fine. But, like, yeah, how does it relate to,

2:23:39

like, actually important questions? Very good. For

2:23:41

start, it's philosophically important to study counterfactuals

2:23:43

because so many philosophers

2:23:44

reach for counterfactuals in

2:23:47

the study of other things.

2:23:49

things

2:23:49

like causation,

2:23:51

dispositions, explanation, laws

2:23:54

of

2:23:55

nature, free

2:23:56

will,

2:23:58

perception, confirmation.

2:23:59

Philosophers are often

2:23:59

analyzing

2:24:01

things or at least

2:24:03

referring to counterfactuals in the

2:24:05

study of these other important philosophical concepts. So starting with

2:24:07

philosophy, all over the

2:24:12

place. Yes. Then science,

2:24:13

I think traffics in counterfactuals in various ways. You could

2:24:16

ask a question like

2:24:18

if I were to

2:24:20

drill a hole through

2:24:22

the world, through the earth, and

2:24:24

drop a ball, what would happen?

2:24:27

Oh, it would be a harmonic oscillator. The

2:24:29

physics textbook tells you. Okay? Yeah. That would

2:24:31

be a

2:24:31

counterfactual.
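That textbook answer can be checked in a few lines, under the usual idealisations (a uniform-density, non-rotating Earth, a frictionless straight tunnel through the centre, no air resistance). Inside a uniform sphere the pull of gravity is proportional to distance from the centre, so the dropped ball is a simple harmonic oscillator with period 2π√(R/g), roughly 84 minutes. The constants below are standard textbook values, not anything from the episode.

```python
import math

R = 6.371e6   # mean radius of the Earth, in metres
g = 9.81      # gravitational acceleration at the surface, in m/s^2

# Inside a uniform sphere, a(r) = -(g / R) * r: simple harmonic motion
# with angular frequency sqrt(g / R).
period = 2 * math.pi * math.sqrt(R / g)

print(f"full oscillation: {period:.0f} s (~{period / 60:.0f} minutes)")
# Roughly 5,060 s, about 84 minutes there and back; the one-way fall takes about 42 minutes.
```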

2:24:34

I think the social sciences

2:24:37

traffic in counterfactuals. History, for example: if

2:24:39

the archduke hadn't been

2:24:41

if the archduke hadn't been assassinated,

2:24:43

they wouldn't have been

2:24:45

World War one. Yeah. You

2:24:47

might say, economics, worries about the incremental benefit

2:24:50

of some commodity or the change

2:24:53

to the economy. Psychology -- Yeah. -- counterfactual is very

2:24:55

important. Think of say

2:24:56

regret. Regret

2:24:59

is often informed by a counterfactual.

2:25:01

Actually, I I'm somewhat prone to regret. And when I psychoanalyze myself, I think that's why I

2:25:03

got so interested in counterfactuals.

2:25:09

And that's why I especially want them to come

2:25:11

out false because then the

2:25:13

the counterfactuals that underpin my regret I

2:25:15

could I could banish. And we think

2:25:18

of relief you know, thank god that this happened because if it hadn't happened,

2:25:22

this bad thing would have happened.

2:25:24

So I think psychology is permeated

2:25:27

with counterfactuals.

2:25:27

Yeah. Okay. So it's, like, important in answering lots of other relevant philosophical questions. And

2:25:29

I guess also it's just, like, it's

2:25:31

such a core part of human reasoning

2:25:34

all the time. We constantly have to

2:25:36

think about how things would have been

2:25:38

in order to establish causation and think about what

2:25:40

is good and bad relative to other stuff. Exactly. And that

2:25:42

was gonna be my next point to just in daily life.

2:25:45

seems we need to be thinking

2:25:47

about counterfactuals, about decision making, and I

2:25:49

could add that to the philosophical list too, that that's an important

2:25:51

account of rational decision traffics

2:25:55

in counterfactuals. But never mind the philosophy,

2:25:57

just common sense, is permeated with thinking, well, if I were to do this, this would happen if I were to

2:26:00

do

2:26:00

that. something

2:26:05

else would happen? What should I do?

2:26:07

Okay. Daily life is

2:26:10

riddled with

2:26:11

counterfactuals. So when philosophers first, like, look at

2:26:14

counterfactuals, like, what is the problem that they identify?

2:26:16

Why are

2:26:18

they interesting or

2:26:18

potentially challenging? For a start, they can't just

2:26:21

be given the straightforward truth conditions of, for example,

2:26:23

the material conditional. That might be your first

2:26:25

stab at analyzing it: if it were

2:26:27

the case that P, it would be

2:26:29

the case that Q. What would that be?

2:26:31

It would be: that's true in every case

2:26:33

except where p is true and

2:26:36

q is false. Okay. But

2:26:37

that would be a disaster. Okay. Because

2:26:39

now all counterfactuals where they're

2:26:41

genuinely counterfactual -- Mhmm. --

2:26:43

antecedent is false. Yeah. That would just come out

2:26:45

true. I see. And you wouldn't be able to

2:26:48

make distinctions

2:26:50

consider

2:26:50

a counterfactual of the

2:26:51

form if I were to let

2:26:53

go of the cup it would fall and I don't actually let go

2:26:55

of the cup. Now

2:26:59

on

2:26:59

this material conditional analysis,

2:27:01

that comes out true because it has a false antecedent.

2:27:03

So far so good, I guess.

2:27:05

But now

2:27:07

if I were to let go of

2:27:09

the cup, I would finish up on the moon.

2:27:10

That would come out true as well because it still has a false antecedent

2:27:13

Okay.

2:27:16

So obviously, we want to make

2:27:18

distinctions among

2:27:19

these with false antecedents. Yeah.

2:27:21

I see. Some of them true,

2:27:23

some of them false. we're

2:27:25

gonna need some more sophisticated machinery.
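To see the disaster spelled out, here is the material-conditional reading in code: 'if P then Q' is false only when P is true and Q is false. Once the antecedent is false (I never let go of the cup), 'it would fall' and 'I would finish up on the moon' both come out true, so the analysis cannot separate sensible counterfactuals from absurd ones. A toy illustration only.

```python
def material_conditional(p: bool, q: bool) -> bool:
    """True in every case except P true and Q false (equivalently: not P, or Q)."""
    return (not p) or q

# The truth table.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5}  Q={q!s:5}  ->  {material_conditional(p, q)}")

i_let_go_of_the_cup = False   # the antecedent is false: I never let go

print(material_conditional(i_let_go_of_the_cup, True))    # "...it would fall"              -> True
print(material_conditional(i_let_go_of_the_cup, False))   # "...I'd finish up on the moon"  -> True
# Both counterfactuals come out true purely because the antecedent is false.
```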

2:27:27

Hey, listen. Just a

2:27:27

reminder that in

2:27:30

these kind of factual statements, we're

2:27:32

saying if

2:27:33

P, then Q, then

2:27:34

P is the antecedent and Q is

2:27:37

the consequent. So if I went

2:27:39

to Spain, I would have a

2:27:41

great time. Then me

2:27:42

going to Spain is the antecedent. It's the counterfactual thing that didn't happen.

2:27:46

And then me having a great time is

2:27:48

the consequent. The thing that would happen if

2:27:50

that happened. Okay. Back to the show. Got it. Okay. So with a statement like, if

2:27:53

I

2:27:55

let go of the cup, then this other

2:27:57

thing will follow. If you never let go of the cup, then possibly anything could come

2:27:59

after without the statement

2:28:02

being false because the if condition is not

2:28:04

met because you did not do it. Well,

2:28:06

And intuitively, that's the wrong answer. We want to be able to say, well, these are the

2:28:10

true ones, and these are the false ones.

2:28:12

Not just anything goes, if some false antecedent were

2:28:15

the case. Right. Right. Right. And

2:28:17

now we need to have some

2:28:19

subtle way of distinguishing between the

2:28:21

true ones and the false ones. Okay. Can't just be the material conditional. So how did

2:28:23

you try to do

2:28:27

that? Okay. Well, now philosophers like to

2:28:29

reach for possible worlds. And possible

2:28:30

worlds have been very influential, successful in

2:28:34

the study of modal logic,

2:28:37

like necessity, possibility. For example, we say that something's necessary if it's true

2:28:39

in all possible

2:28:43

worlds. Something's possible. If

2:28:44

it's true in some possible

2:28:47

world. we may have to restrict

2:28:47

the worlds suitably, but that's the first step.
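The possible-worlds idea for necessity and possibility can be put very directly in code: take a stock of worlds (here just toy dictionaries of facts) and quantify over them: necessary means true in all of them, possible means true in some of them. The worlds and propositions below are invented purely for illustration.

```python
# Each "world" is just a toy assignment of facts.
worlds = [
    {"cup_falls_when_released": True,  "kangaroos_have_tails": True},
    {"cup_falls_when_released": True,  "kangaroos_have_tails": False},
    {"cup_falls_when_released": False, "kangaroos_have_tails": True},
]

def necessary(proposition, worlds) -> bool:
    return all(proposition(w) for w in worlds)   # true in all possible worlds

def possible(proposition, worlds) -> bool:
    return any(proposition(w) for w in worlds)   # true in some possible world

print(necessary(lambda w: w["cup_falls_when_released"], worlds))   # False: one world bucks the trend
print(possible(lambda w: not w["kangaroos_have_tails"], worlds))   # True: some world has tailless kangaroos
```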

2:28:50

And now the thought is

2:28:51

let's do something

2:28:54

similar to that for counterfactuals.

2:28:56

and

2:28:56

the thought is, we don't

2:28:58

just want to look at all the

2:29:00

worlds. Let's look at

2:29:02

certain privileged worlds, the ones that

2:29:04

matter. And

2:29:05

the way we say that

2:29:07

is, well, the most similar worlds

2:29:10

where the

2:29:10

antecedent

2:29:11

is true. And roughly

2:29:12

this style of analysis says,

2:29:14

'P would Q': if P were

2:29:16

the case, Q would be the

2:29:18

case, is true just

2:29:19

in case. the most

2:29:21

similar P-worlds are Q, and

2:29:23

maybe all of them, all of

2:29:25

the most similar P-worlds are Q,

2:29:27

and then let the debate begin. So

2:29:28

hopefully, this

2:29:29

might make things clearer. So

2:29:31

you've got an issue where you

2:29:33

wanna say, if I let go

2:29:35

of the cup, then it would

2:29:37

fall down. But you're like, this leaves a

2:29:39

scenarios in which you, like, over it's like

2:29:42

a very wide range of possible scenarios. Lots of

2:29:44

things a bit different as well. That's right.

2:29:46

And, like, what if, for example, you let

2:29:48

go of the cup and also suddenly a table appears

2:29:50

underneath the cup to catch it. That's, like,

2:29:52

then it wouldn't fall down and hit

2:29:54

the floor. I make a lot of that very point

2:29:56

actually. That's right. So then you have to be, like, so

2:29:58

which out of the, like, vast space of possible,

2:30:01

like, counterfactual worlds in which you dropped the

2:30:03

cup. Yes. Are we actually talking about when we

2:30:05

make some counterfactual claim like this? Okay.

2:30:07

And you're saying, the standard account in philosophy is

2:30:09

to say, well, it's similar to the

2:30:11

actual world in every respect except for this one change

2:30:13

where you let go of the cup. Which sounds very intuitive. Yes. You have to make

2:30:15

some You can't

2:30:19

just tweak this one fact and keep

2:30:22

everything else as it was. because you might end up with some

2:30:23

inconsistency. That's right. Okay. Yep. You can't just insert, you know,

2:30:26

the one change, like,

2:30:28

I release

2:30:29

a cup where,

2:30:31

in fact, I didn't because

2:30:33

there'll be all sorts of

2:30:35

ramifications

2:30:35

of that, ripple effects as you

2:30:37

have to take into account. But now

2:30:39


2:30:41

the most similar world where all

2:30:42

of that stuff is taken care

2:30:45

of, or such worlds: the thought is those are the worlds

2:30:46

that matter to the evaluation of the counterfactual.
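Here is that orthodoxy as a toy evaluator, so the objections that follow have something concrete to bite on. Worlds carry a hand-assigned 'distance from actuality', and 'if P were the case, Q would be the case' is counted true when Q holds at all of the closest P-worlds. Everything here, the worlds and the distances alike, is invented for illustration; how to measure similarity is exactly what is in dispute.

```python
def counterfactual_true(worlds, antecedent, consequent):
    """'If P were the case, Q would be the case': Q holds at every closest P-world."""
    p_worlds = [w for w in worlds if antecedent(w)]
    if not p_worlds:
        return None                                   # vacuous: no antecedent-worlds at all
    best = min(w["distance"] for w in p_worlds)
    closest = [w for w in p_worlds if w["distance"] == best]
    return all(consequent(w) for w in closest)

worlds = [
    {"distance": 0.0, "let_go": False, "falls": False, "table_appears": False},  # actuality
    {"distance": 1.0, "let_go": True,  "falls": True,  "table_appears": False},  # minimal change
    {"distance": 5.0, "let_go": True,  "falls": False, "table_appears": True},   # gratuitous extra change
]

print(counterfactual_true(worlds,
                          antecedent=lambda w: w["let_go"],
                          consequent=lambda w: w["falls"]))
# True: the table-appears world is further away, so it never gets a vote.
```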

2:30:51

Many people listening will be like, yeah, obviously,

2:30:53

you wanna make the minimal change when you're producing a counterfactual. If the operation you're doing

2:30:56

is, say, a

2:30:59

counterfactual that changes that I let go of the cup,

2:31:01

You know, you shouldn't be adding a table underneath it

2:31:04

has problems. Well,

2:31:08

there are three things I don't like

2:31:10

about the most similar worlds account,

2:31:14

namely the most similar and

2:31:16

worlds. Yeah. Okay. And otherwise, I'm

2:31:18

right on the way. And and I

2:31:21

should say for years, I've

2:31:23

just assumed this philosophical,

2:31:24

let's call it, orthodoxy. But

2:31:26

more recently, I've come

2:31:27

to have my misgivings, which

2:31:29

I will now gladly share with

2:31:31

you. So let's go through them.

2:31:33

the

2:31:34

most. That means that we're supposed

2:31:35

to just attend to the

2:31:38

closest worlds, the ones that are

2:31:40

first in this ordering and we don't

2:31:41

look further back. We only look at the front row so to

2:31:43

speak of

2:31:45

the antecedent worlds. And I say,

2:31:48

well, no, sometimes you've got to look

2:31:50

further back.

2:31:50

I'll give you an example. Consider the

2:31:52

consider the last

2:31:55

US election and consider

2:31:57

this counterfactual: if Trump

2:31:59

or

2:31:59

Biden had won the

2:31:59

election,

2:32:00


2:32:02

the president would

2:32:03

be a Democrat. Does

2:32:05

that sound true? So that sounds intuitively wrong because it seems like in the cases where Trump won, it

2:32:07

wouldn't be. Exactly.

2:32:12

Spot on. Okay. But now let's run

2:32:14

it through the similarity semantics. What's the most

2:32:17

similar world where

2:32:19

the antecedent is true? Because

2:32:21

the one where Biden won. Right? It's the most similar one. It's right here. It's the actual world.

2:32:24

We're we're

2:32:27

we're standing in it.

2:32:28

And it is true in the actual

2:32:30

world that the president is a Democrat. Yeah. So this should come out true

2:32:33

as

2:32:36

according to this similarity account

2:32:38

whereas

2:32:38

it does seem intuitive that the most similar world to

2:32:41

the actual world is

2:32:43

itself. So it seems

2:32:45

like

2:32:45

in that case when

2:32:47

you're saying if Trump or Biden won, what you're trying to

2:32:49

do is make a claim about all

2:32:52

the worlds in which, like, either of those two facts hold -- Yeah. --

2:32:54

not just about the one world that's most similar. Yeah. So I

2:32:56

say you

2:32:58

have to yeah. As you say, you also

2:33:00

have got to consider the Trump world and that will push you

2:33:02

further back from the front row. Oh, and then I guess you've got a question of how

2:33:04

deep because

2:33:07

-- Okay. -- like, including all of them,

2:33:09

including the ones where the world exploded for no reason. Now

2:33:12

on the strict conditional account, you

2:33:14

have to look at all of the worlds.

2:33:17

And then maybe that's gonna be contextually circumscribed in some way, but that

2:33:19

-- Yeah. -- that does give you some impetus to look beyond

2:33:23

just the front row.

2:33:25

Yeah. Not just the closest. Yeah. Okay. And let me give you another

2:33:28

case. And I'm

2:33:30

glad we talked about Lewis' seven foot

2:33:33

example -- Mhmm. -- earlier because now

2:33:35

I think it's gonna backfire. Yeah. Again, remember according to him in that example, what

2:33:40

matters to similarity is just closeness

2:33:42

to my actual height. And he

2:33:45

had this sequence of ever

2:33:47

closer worlds getting

2:33:48

closer and closer to my actual

2:33:50

height, none closest. Alright. Now let me make the smallest tweak to the

2:33:54

apparent counterexample, and I

2:33:57

think it backfires on him. Yeah. If

2:33:59

I were

2:33:59

at least seven feet tall, how

2:34:02

tall would I be? So if

2:34:04

I were greater

2:34:04

than or equal to seven feet tall. Mhmm.

2:34:07

How tall would I be? Well, then it's

2:34:09

a hard bound, so it's easier. It's seven

2:34:11

feet. Yeah. Well, there you go. According

2:34:13

to that ordering, which he used, so I think it's fair

2:34:15

for me to now

2:34:18

use it

2:34:19

against him. The unique – well,

2:34:21

in this ordering, the closest worlds are going

2:34:23

to be the exactly-seven-foot ones: seven point zero zero zero

2:34:27

to infinitely many decimal places. Right.

2:34:29

Right. He told us that's what mattered in the ordering. So by

2:34:31

his lights, if I were at least

2:34:33

seven feet tall, I would

2:34:36

be seven point zero zero

2:34:38

to infinitely many decimal places. Exactly.

2:34:41

Exactly. And I say, whoa. Really,

2:34:43

that that comes as a surprise

2:34:45

to me. If anything, I I

2:34:47

would say, Well,

2:34:48

I suppose I might be, but it's

2:34:50

highly unlikely I'd be exactly so precisely

2:34:52

seven foot. I might be, you

2:34:54

know, a little bit more than

2:34:56

seven foot. I

2:34:57

think in these cases,

2:34:59

the similarity account is

2:35:01

giving implausibly

2:35:04

specific verdicts. Yeah. It's committed to the truth

2:35:06

of implausibly specific counterfactuals. Mhmm. In the

2:35:08

seven foot case, it was if I

2:35:10

were at least

2:35:10

seven feet tall, I'd be exactly

2:35:13

seven feet tall. Really? So

2:35:15

specific in the

2:35:16

Trump or Biden case, the specific

2:35:18

verdict that the president would be a Democrat

2:35:20

when

2:35:22

that doesn't take into account the

2:35:24

Trump possibility. And again, I think this is

2:35:26

just symptomatic of only looking at the front row

2:35:30

of the worlds, the closest worlds; sometimes

2:35:32

you need to look a bit further

2:35:34

back where, for example, I'm a bit taller than seven feet or where the alternative in

2:35:39

the disjunction, Trump winning the election has

2:35:41

to be taken seriously. Right. Yes. It seems like when we're describing lots of these antecedents,

2:35:44

the 'if Trump

2:35:47

or Biden won', 'if I were at least seven

2:35:50

feet tall', we're actually trying to indicate a range of possible worlds, and the 'most similar' just picks one out of them, like,

2:35:52

arbitrarily, in a sense.
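Both counterexamples can be pushed through the same 'closest antecedent-world' recipe to watch the implausible specificity appear. In the height case, similarity is just distance from the actual height, so the closest 'at least seven feet' world is exactly seven feet; in the election case, the actual world already satisfies the disjunctive antecedent, so it is trivially the closest, and the Trump world never gets considered. All the numbers and world descriptions are made up for illustration.

```python
# 1. "If I were at least seven feet tall, how tall would I be?"
actual_height = 6.0                                   # a stand-in for the actual height, in feet
candidate_heights = [7.0, 7.0001, 7.1, 7.5, 8.0]      # some antecedent-satisfying heights
closest_height = min((h for h in candidate_heights if h >= 7.0),
                     key=lambda h: abs(h - actual_height))
print(closest_height)   # 7.0: "...I'd be exactly seven feet tall"

# 2. "If Trump or Biden had won, the president would be a Democrat."
worlds = [
    {"distance": 0.0, "winner": "Biden", "president_is_democrat": True},   # actuality
    {"distance": 2.0, "winner": "Trump", "president_is_democrat": False},
]
antecedent_worlds = [w for w in worlds if w["winner"] in ("Trump", "Biden")]
closest_world = min(antecedent_worlds, key=lambda w: w["distance"])

print(closest_world["president_is_democrat"])                       # True at the closest world alone
print(all(w["president_is_democrat"] for w in antecedent_worlds))   # False once the Trump world counts
```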

2:35:57

Exactly. And one thing that might push

2:35:59

us towards

2:35:59

is the strict

2:36:02

conditional account where you look

2:36:04

at all of the antecedent worlds,

2:36:06

and they're perhaps contextually restricted in

2:36:08

some

2:36:08

way. And by the way,

2:36:10

earlier, I should have mentioned von

2:36:12

Fintel and Gillies as proponents of that kind of view. Or you

2:36:14

could go another way. This would be my way and this

2:36:19

might be getting to my positive view later.

2:36:21

where

2:36:21

I'd say you have to look at the worlds that

2:36:23

have positive chance at

2:36:25

the relevant time and that will take

2:36:28

us sometimes further back than just the

2:36:30

most similar worlds. Yeah. Okay. So that's your attack on 'most'.

2:36:35

Let's now object to 'similar'. Similar.

2:36:37

Lots of problems there, I think. I think in

2:36:39

the early days of this similarity approach,

2:36:41

it was

2:36:43

assumed that

2:36:44

similarity was a common sensical thing.

2:36:46

It's what the folk would regard as resemblance. Mhmm. And Kit Fine came up with

2:36:48

what seems to be

2:36:51

a devastating counter example

2:36:54

to that understanding of similarity

2:36:57

His example was: cast your

2:36:59

mind back to sixties, the

2:37:01

Cold War and consider the counterfactual

2:37:03

if Nixon had pressed the

2:37:06

button

2:37:06

on the nuclear bomb, there would

2:37:08

have been a holocaust. And that seems intuitively true.

2:37:10

Yeah. Let's say we want that to come out true

2:37:14

But holocausts make a big difference.

2:37:16

You know, the holocaust worlds are not

2:37:18

similar to our world where the holocaust didn't happen. Right. Okay?

2:37:22

more

2:37:23

similar would be a world

2:37:25

where Nixon presses

2:37:26

the button and then the mechanism just fizzles. No bomb is triggered and it's business pretty

2:37:28

much

2:37:29

as

2:37:32

usual. Okay. So

2:37:33

whatever disturbance is created by the

2:37:35

button not working is much much

2:37:37

smaller than the disturbance. to the

2:37:40

actual world created by the holocaust

2:37:42

versus not. That's right. So if you're assuming commonsensical similarity, it seems that

2:37:47

we're gonna get the wrong verdict. We'll judge

2:37:49

that counterfactual to be false -- Yeah. -- by the lights of commonsense similarity. Yeah. In fact, you could confidently make the

2:37:52

statement that if

2:37:56

Nixon had pressed the nuclear button, it wouldn't

2:37:58

have worked. Yeah. Exactly. It would have

2:38:01

fizzled. Yeah. because that would be the

2:38:03

most similar way of realizing Nixon pressing

2:38:05

the button. Wrong answer. Yeah. And Lewis took

2:38:07

this very seriously and then

2:38:10

fashioned a set

2:38:11

of priorities of what

2:38:14

matters to similarity, really to handle this Fine case. And in

2:38:15

fact, you might worry

2:38:18

that when he comes

2:38:20

up with

2:38:21

these, it's ad hoc,

2:38:23

you know, just reverse engineered to fit this particular

2:38:25

case. And you wonder how much it'll

2:38:27

generalize. And in fact, as we will

2:38:29

say in a moment, it doesn't seem

2:38:32

to generalize so well. Anyway, here's

2:38:34

what Lewis said. First priority

2:38:36

is to avoid big miracles,

2:38:38

as we might say, widespread diverse

2:38:40

violations of law.

2:38:42

Okay? First priority. Second

2:38:45

priority.

2:38:45

Maximize perfect match

2:38:48

of history. Third

2:38:48

priority, avoid small miracles,

2:38:50

small violations of law. And and there's

2:38:52

a bit more. But that that's the

2:38:54

main idea. This was supposed to handle

2:38:56

the

2:38:58

deterministic case. Interestingly, Lewis assumes determinism

2:39:00

in a way I don't want him

2:39:02

to because it seems like the

2:39:03

universe isn't Yes. In fact,

2:39:06

he himself in in other work

2:39:08

thinks that the actual world we live in

2:39:10

is indeterministic. So one doesn't want to assume that

2:39:12

it's deterministic

2:39:15

to handle this case. And to

2:39:17

be fair to

2:39:18

him, he did then later also consider the priorities

2:39:20

for indeterministic

2:39:22

worlds -- Mhmm. -- and he

2:39:24

introduces the notion of a quasi

2:39:26

miracle. I should tell you a bit about that. Well, quasi miracle is

2:39:30

something

2:39:31

that while it's consistent

2:39:32

with the laws of nature,

2:39:34

it's not a genuine miracle. It's somehow remarkable

2:39:36

It's like a

2:39:38

pattern of outcomes seems to be conspiring

2:39:41

in a surprising way. Now, it's a little

2:39:42

bit hard to pin that down exactly. And in

2:39:43

fact, that's perhaps

2:39:47

a problem, but this very notion of quasi

2:39:49

miracles is a little bit shaky. But anyway, to give you the sense of it, go

2:39:51

back to the Nixon example, and

2:39:56

imagine a world where all

2:39:58

traces of Nixon pressing the button are just erased. There's no trace that he pressed the

2:39:59

button. Now,

2:40:04

in a deterministic world, it seems that would

2:40:06

take a big miracle because you'd

2:40:07

have to

2:40:10

erase this trace and that trace and

2:40:12

another one over there. you'd need widespread

2:40:14

miracles to remove all of those. But

2:40:17

in a

2:40:18

chancy world, you

2:40:19

don't need any miracles quite

2:40:21

lawfully a

2:40:22

sequence of chancy events could collectively erase all

2:40:25

traces of the button pressing

2:40:28

just by chance Yeah.

2:40:29

Like thermodynamic miracles, that sort of thing, quantum mechanics

2:40:31

or whatever.

2:40:34

Yes. And then Lewis wants

2:40:36

to say that quasi miracles detract

2:40:39

from similarity. We don't want to say, for example, if Nixon had pressed the button, then all

2:40:44

traces would have been erased. And we

2:40:46

want to somehow banish the quasi miracles too. The quasi miracles, things that are consistent with the laws of physics, but I guess are

2:40:48

a tiny fraction

2:40:51

of worlds that

2:40:53

are consistent. And

2:40:55

they're somehow remarkable And so they detract from similarity and

2:40:58

virtue of that. Alright? And so now

2:41:00

we have the more complicated package. We've got,

2:41:02

sorry, big miracles. Big miracles. Big then similarity of

2:41:04

history. Then no small miracles. Then

2:41:06

no big match of history, then no match

2:41:08

of history. Okay. Then no small miracles. And the new priority

2:41:10

is no quasi miracle. Yeah. And actually, you might now wonder how these

2:41:14

priorities interact with each other,

2:41:16

the indeterministic and the deterministic

2:41:18

ones. So avoid the big miracles and then dispersed between

2:41:22

that and the avoid small miracles we've

2:41:25

got this other priority, maximize perfect match. Already, you might think that's

2:41:27

a bit strange. Like, where do I insert avoid

2:41:31

medium sized miracles? Yeah. It's

2:41:33

feeling definitely a bit arbitrary. Yeah. You'd think there's a continuum among

2:41:35

the miracles and it's little

2:41:39

odd that they're interrupted by this

2:41:41

different kind of priority. But now how do we square that stuff with the quasi miracle stuff?

2:41:44

How do we square that stuff with

2:41:46

the quasi miracle stuff? So how

2:41:48

do we prioritize, avoid big

2:41:51

quasi miracles as opposed to

2:41:53

avoid medium sized and genuine

2:41:55

miracles? And so it gets more and more complicated.

2:41:58

And and one point I want to make is notice we've come

2:42:00

some

2:42:02

distance from the original commonsensical resemblance. Now

2:42:04

-- Yeah. -- we've got this quite complicated

2:42:06

theory of how similarity works for counterfactuals. So taking a step back and looking at

2:42:10

the progression here, I guess, so we started

2:42:13

out with this kind of appealing, very simple, very clean statement, the

2:42:15

most similar world. If, like, if that worked in

2:42:18

all the cases, it'd be like beautiful. So

2:42:20

elegant, it's very parsimonious. Let's just keep that.

2:42:22

Now we've got this far more cumbersome theory. We've got, like, four different stages, and where, like, you're getting

2:42:25

to suspect that if we looked at

2:42:27

more cases, we'd find more problems. And

2:42:29

now we're gonna have

2:42:31

seven different steps in order to try to

2:42:33

make it work. And then you have a question.

2:42:36

As a philosopher, I suppose, you can say, on the one hand, maybe

2:42:38

just the way humans use language with counterfactuals is super messy and

2:42:40

complicated. in

2:42:42

fact, like, even seven wouldn't be enough because there'll be

2:42:45

some weird cases with that. And maybe, just in order

2:42:47

to actually know what people are referring to when they use counterfactuals, you

2:42:50

just have to have the full intuition of a

2:42:52

human mind to, like, understand what is being referred

2:42:54

to. On the other hand,

2:42:54

you might think, actually, counterfactuals should be cleaner than this. There should be a simpler statement.

2:42:59

And now we're in epicycles, where we're trying to

2:43:01

fix this theory that was broken fundamentally at its core and we need a different approach. Is

2:43:03

this kind of right that you could see

2:43:07

it, you could see what's going on in

2:43:09

two different ways. I think that's a good way

2:43:11

to put it. And I think even when we add the epicycles, as

2:43:14

we've been doing. We're still not

2:43:16

done. Okay. In fact, let's

2:43:18

Yeah. Let's do something. Here's more trouble.

2:43:21

So

2:43:21

let's take Lewis' priorities

2:43:23

and consider the following counterfactual.

2:43:26

The story is that I

2:43:28

did not scratch my finger

2:43:30

yesterday. Let yesterday be the twenty

2:43:32

four hour period that ended at midnight last night,

2:43:34

and I'll include midnight as part of yesterday.

2:43:39

okay,

2:43:39

I did not scratch my

2:43:41

finger yesterday. If I had scratched

2:43:43

my finger, I would have done so at midnight,

2:43:44

precisely at

2:43:46

midnight and no earlier. Yeah.

2:43:49

It seems comes out true

2:43:51

on the Lewisian priorities. Why is

2:43:53

that? Well,

2:43:54

we don't need any big miracles

2:43:56

to get the antecedent to come

2:43:58

out true. So we moved to the second priority. We now want to maximize perfect match of

2:43:59

history, but

2:44:03

we get to do that by delaying

2:44:06

my hypothetical scratch as late as possible because we get more and more match of what actually happened.

2:44:10

That means that the maximal match

2:44:12

will happen if I delay scratch until midnight. Exactly. That's

2:44:14

where I would have scratched exactly. That already seems

2:44:18

very Say that again.

2:44:19

If I had scratched my finger yesterday,

2:44:21

I would have done it at midnight exactly.

2:44:23

That's crazy. Yeah. That seems crazy. Notice it's another example of what I call implausible specificity.

2:44:29

Yeah. Why then, exactly? Now to make things

2:44:31

even worse, let's suppose that it's a somewhat probabilistic matter

2:44:36

when I scratch I start the day a

2:44:38

little bit itchy, I'm more likely to scratch earlier

2:44:42

in the day. And as the day

2:44:44

progresses, it becomes less and less

2:44:46

likely that I'll scratch and in fact midnight is the least likely time for me

2:44:52

to scratch. Nevertheless, by those

2:44:54

priorities, seems to come out true that if I'd scratched yesterday, I would have done so at midnight,

2:44:57

namely

2:45:00

the least probable time, and I think that

2:45:02

makes the problem even worse. So we've tried patching it, and now we've got this other problem that's appeared, another

2:45:04

case of excessive specificity.
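The midnight verdict falls straight out of the 'maximize perfect match of history' priority: since the actual day contains no scratch at all, the later the hypothetical scratch, the more of the day matches perfectly, so the favoured antecedent-world puts the scratch at the last possible moment, even if that is the least likely time. A toy version, with the day cut into minutes and the probabilities invented for illustration.

```python
MINUTES_IN_DAY = 24 * 60

def match_of_history(scratch_minute: int) -> int:
    """Minutes that perfectly match actuality (no scratch) before the hypothetical scratch."""
    return scratch_minute          # everything before the scratch is exactly as it actually was

best_minute = max(range(MINUTES_IN_DAY), key=match_of_history)
print(best_minute)   # 1439, the day's final minute: the scratch gets pushed right back to midnight

# Even if scratching gets less and less likely as the day goes on...
chance_of_scratching = [1.0 / (minute + 1) for minute in range(MINUTES_IN_DAY)]  # toy, decreasing
print(chance_of_scratching[best_minute] == min(chance_of_scratching))
# True: the priorities select the single least probable scratching time.
```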

2:45:06

Are there any other issues with

2:45:09

the Lewis attempted patch of

2:45:11

his theory? Yeah. Look, good on

2:45:13

him for going out on a limb and

2:45:15

telling us what matters to similarity. The trouble is

2:45:18

I think he'll get some counterexamples and they'll have this form of

2:45:20

implausible specificity.

2:45:22

Now you

2:45:23

might retreat again and say,

2:45:26

well, similarity, it's

2:45:26

context dependent, and it's vague, and

2:45:30

it's complicated and don't

2:45:32

really say what matters

2:45:34

to similarity. And Goodman, by the way, said similarity is

2:45:36

insidious. It's

2:45:40

sort of impostor. It's

2:45:42

a quack. And he thinks that similarity is not

2:45:44


2:45:46

suitable to ground

2:45:48

any philosophical project. Oh, wow. Yeah. Okay. Okay.

2:45:50

Well, let's come back to this in a second. Yeah. What are some other kinds of examples? Yeah. Well, so now I'm

2:45:51

imagining that we retreat

2:45:57

and

2:45:57

we we just say similarity. I won't tell you

2:45:59

much about it. It's

2:46:03

this context dependent,

2:46:04

vague, complicated thing. Well, now

2:46:05

I guess it's harder to come up with counter examples. But now it's not clear

2:46:08

what the

2:46:10

theory is saying. It's not making predictions. it's

2:46:12

not providing any explanation. And I like to put

2:46:14

this in the form of a dilemma.

2:46:17

And to explain this, I should just

2:46:19

say something about Harry Potter. I I went

2:46:21

to one of the Harry Potter movies. I

2:46:24

don't remember which one. This was a long

2:46:26

time ago. And to be honest, I didn't

2:46:28

like it as much as everyone else seemed

2:46:30

to. For the following reason: well, early on in the movie,

2:46:32

Harry

2:46:34

has got his magic wand and

2:46:36

you see him doing all these cool

2:46:38

magic tricks Then later on

2:46:39

in the same movie, Harry's in danger and we in

2:46:41

the

2:46:44

audience are supposed to feel scared

2:46:46

for him. But I

2:46:46

felt like yelling at the screen: just use the bloody wand! Yeah. Right.

2:46:49

And my complaint was I felt

2:46:51

that we weren't properly told the

2:46:53

rules of the game. You know?

2:46:56

Right. What could the wand do and not do

2:46:58

-- Mhmm. -- I didn't know that suddenly

2:47:00

the wand couldn't get him out of this difficult situation.

2:47:02

Yeah. Okay. So now I'm in problem of fantasy and

2:47:05

fiction, I think. Yeah. Yeah. That's it.

2:47:07

Now I think there's a worry about the

2:47:09

similarity account if you do this retreat. If you

2:47:12

just say, Oh, similarity. Well, I won't

2:47:14

tell you much about it. I won't tell you

2:47:16

the rules of the game. It's just whatever it needs to be to

2:47:18

make the counterfactual come out true, where we want them to come out true.

2:47:22

Well,

2:47:23

that's like this Harry

2:47:24

Potter wand theory. It's a

2:47:26

magic wand similarity. If ever you find yourself in a corner, you just like

2:47:31

change the rules and say, I know I meant something

2:47:33

else. Yeah. That's right. Or or you you just don't

2:47:35

specify the rules in the first place, and so there are no counterexamples. Mhmm. Okay. That's

2:47:39

the Harry Potter horn of the dilemma

2:47:42

-- Yeah. -- or if you do the the more, I think, philosophically honest thing like

2:47:44

Lewis did,

2:47:44

and

2:47:47

just say, look, I'm gonna try hard to

2:47:49

tell you what matters to similarity, then I worry there are gonna be counterexamples, like the scratching

2:47:52

case. Yeah. Okay.

2:47:55

So some people in the audience, I think,

2:47:57

might be thinking: like, we're being awfully fussy here about, you know, exactly

2:47:59

what counterfactual we're referring to in these cases, where in

2:48:05

actual reality, if two people are having a conversation and made statements

2:48:07

like this, there would be

2:48:09

no confusion about what they're referring to --

2:48:11

Yeah. -- to defend the fussiness for a minute.

2:48:13

The

2:48:14

challenge here is that in these, like, everyday cases where someone says, like, if Nixon pressed the button then Y, you know,

2:48:16

if I'd scratch my finger yesterday,

2:48:18

then said, intuitively, we know what

2:48:21

we're communicating through

2:48:23

all of the context. but

2:48:25

we're going to try to develop rules

2:48:27

about the nature of counterfactuals and, like,

2:48:29

what reasonable, like, logic you can

2:48:31

apply to that. from these cases. And then we're gonna

2:48:33

start applying it to, like, very unusual cases, to,

2:48:36

like, strange things like, you know, if we can

2:48:38

conceive of p-zombies, then some other thing -- Yeah. -- where we're not gonna have the same intuition about what the situation really

2:48:40

is or, like, what

2:48:42

logic can reasonably apply.

2:48:44

Yes. And if we

2:48:46

have if we develop rules

2:48:48

that, as it turns out, don't actually work in

2:48:50

all of the, like, cases where we do have an

2:48:53

intuition about what works and what doesn't. Yeah. Then we could,

2:48:55

like, extend this, like, incorrect logic all

2:48:57

these other cases where we're not gonna be able to see

2:48:59

the error. Yeah. I think that is like one reason why

2:49:01

we really would ideally like to pin down what is and isn't legitimate when you're

2:49:03

doing counterfactual reasoning. In

2:49:05

the cases where we might be able to see mistakes.

2:49:07

Mhmm. So then in case of what we can't see

2:49:09

the mistakes, we feel more more solid ground. That's right. By the way, this is a very general problem

2:49:12

in philosophy I

2:49:14

think that we often fashion

2:49:16

our conceptual analyses to familiar cases,

2:49:19

and then we hope that they still

2:49:22

apply to these maybe more recherché

2:49:24

cases. And sometimes, philosophers say, well, spoils to

2:49:26

the victor. You know, in some far-fetched case,

2:49:30

we haven't earned the right to

2:49:32

some firm intuition about these cases

2:49:34

because they're strange and just let the theory dictate what

2:49:38

we should say about those cases. On

2:49:40

the other hand, we talked about this earlier too. Sometimes we

2:49:42

really do look to the strange cases like Saint Petersburg to stress test

2:49:47

an analysis that was formed to handle more

2:49:49

familiar cases. Okay. But let me take the other angle for a minute. Like, language is

2:49:52

always messy.

2:49:55

Like, people are constantly saying things that, like,

2:49:57

technically, aren't true, but, like, they they communicate the broad thrust of the argument, and people

2:49:59

know what they're saying. Yes. here

2:50:03

you're applying, like, a real fine-toothed comb to everyone's

2:50:05

statements and saying, oh, this isn't precisely right. But that's because you're taking the kinds of things that people

2:50:07

might say in ordinary life and then treating them as if they're like, factual

2:50:13

to a level that they were never designed to be. That's

2:50:15

an important part of my overall

2:50:18

view about counterfactuals. I think that the

2:50:20

things we say, the counterfactuals that we

2:50:22

use in daily life are mostly loose talk and let me say something

2:50:24

about

2:50:27

that. I think that

2:50:29

they're false but approximately

2:50:30

true and close enough to the truth that they convey

2:50:34

useful information. And I think this

2:50:36

is perfectly familiar perhaps soon, I'll say something about

2:50:38

why I think most counterfactuals are false.

2:50:41

and And

2:50:42

people think this is some

2:50:45

crazy, radical view. And I say, no, come on. Most

2:50:47

of what we say could be false -- Yeah. -- for completely

2:50:48

intelligible

2:50:53

reasons. Sometimes we're joking. Sometimes we're

2:50:55

being ironic. Sometimes, yeah, we're exaggerating a bit, loose talk. It's a common explanation.

2:50:57

Just

2:50:58

think about the

2:51:00

cases not involving counterfactuals.

2:51:03

We say things like six

2:51:05

and a half million people

2:51:07

have died of COVID,

2:51:08

sadly. Well, we don't say really,

2:51:11

exactly six million five

2:51:13

hundred thousand? No. And of course,

2:51:15

we didn't intend to convey that

2:51:18

what we meant was something like

2:51:20

roughly six and a half million people have

2:51:22

died. And that's approximately true even though the

2:51:24

exact truth is something slightly different or

2:51:26

someone says, I'll be there at eleven.

2:51:28

Exactly. Yeah. Exactly. The probability of

2:51:30

arriving at any specific instant is zero.

2:51:33

That's it. See how a familiar phenomenon

2:51:35

this is. Mhmm. No one truly

2:51:37

arrived at eleven when they said

2:51:39

they'll arrive at eleven. Of course,

2:51:41

we charitably understand them: roughly eleven. We

2:51:44

think of the claims

2:51:46

say tennis balls are spherical.

2:51:48

Mhmm. Well, they're not

2:51:49

in the

2:51:51

mathematical sense, spheres because that has a

2:51:54

very specific meaning. They're approximately

2:51:56

spherical.

2:51:56

And close enough to

2:51:59

spherical that

2:51:59

we can treat them as if they're spheres

2:52:02

for most practical purposes. For

2:52:04

example, I'm packing a box with

2:52:05

tennis balls and I wanna calculate how

2:52:07

many tennis balls Can

2:52:09

I fit in the box? I

2:52:12

won't go far wrong if I

2:52:14

treat the balls as perfect spheres because they approximately

2:52:16

are. Then I

2:52:18

do the calculation using perfect spheres and

2:52:20

it'll work well enough for the approximate

2:52:22

spheres, the tennis balls really are. Yeah.

2:52:24

Okay. So if

2:52:25

we kind of relax our attempts to

2:52:27

come up with a super precise theory of

2:52:29

counterfactuals, like the most similar world, and now

2:52:32

we accept that they're fuzzy

2:52:34

and a bit messy and contextual and so

2:52:36

on. Where does that leave us as philosophers

2:52:38

or as people using counterfactual reasoning? First, I've distinguished the truth values of the counterfactuals

2:52:42

from the way we

2:52:44

use them, the assertability, the acceptability of them.

2:52:46

Should I go into each of those aspects? Yeah. Yeah. I'm not sure what this

2:52:47

means. Yeah. Alright. Good.

2:52:48

Maybe

2:52:52

first I'll hit you with a shocker -- Yeah.

2:52:54

-- that most counterfactuals are false. Mhmm.

2:52:57

And people will think I've lost my

2:52:59

philosophical marbles. And then I'll soften the blow

2:53:02

by saying something about assertability and -- Yeah.

2:53:05

-- acceptability. Go for

2:53:06

it. Great. For a start, most

2:53:08

counterfactuals are false. consider the

2:53:10

coin in my pocket. Let's assume it's a fair coin.

2:53:13

I'll never

2:53:14

toss it. If I were to

2:53:15

toss it, it would land heads.

2:53:17

Doesn't seem right? That doesn't seem right. Thank you. I don't

2:53:19

think that's

2:53:22

right. I think that's false. And why? Well, it's

2:53:24

a chancy coin I'm imagining -- Mhmm. -- and

2:53:26

if I were to toss it, it might land heads and might land tails. Alright. Now let's

2:53:32

make

2:53:32

the coin heavily biased to Heads, let's

2:53:34

say ninety nine percent chance of heads, one percent chance of tails. If I

2:53:39

were to toss the coin, it would

2:53:41

land heads, not tails? Still bad. Yeah. I say it still

2:53:43

might land tails. Consider a

2:53:47

huge lottery. Let's say it has

2:53:49

a million tickets that's never

2:53:51

played. if the lottery were

2:53:51

played, ticket number one

2:53:52

would lose. Mhmm.

2:53:54

I say no.

2:53:55

And and notice

2:53:57

by the way the

2:53:59

problem there If you say that of

2:54:01

ticket number one, it seems you better say about

2:54:04

ticket two, ticket three would lose, blah, blah, blah,

2:54:06

blah, ticket number one million would lose. It seems you'd have to say of every ticket that it would lose. But one has to win -- there's

2:54:08

got to be a winning

2:54:10

ticket. So that in

2:54:12

fact, you contradict yourself if

2:54:14

you said all of that. Yeah. Okay.
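
The lottery point can be made concrete with a small sketch: each individual "ticket i would lose" claim is overwhelmingly probable, yet the claims cannot all hold together, since some ticket must win. The million-ticket figure follows the example; the probability threshold is an illustrative assumption.

```python
# A small sketch of the lottery point: each "ticket i would lose" claim is
# overwhelmingly probable, but they can't all be true, since some ticket wins.
n_tickets = 1_000_000

p_single_ticket_loses = (n_tickets - 1) / n_tickets
p_every_ticket_loses = 0.0   # some ticket must win, so the conjunction fails

print(p_single_ticket_loses)   # 0.999999
print(p_every_ticket_loses)    # 0.0

# So a rule that counts a counterfactual as true whenever its conditional
# probability clears a threshold t < 1 endorses each conjunct while the
# conjunction is guaranteed false. The threshold here is an arbitrary choice.
t = 0.999
print(p_single_ticket_loses >= t, p_every_ticket_loses >= t)   # True False
```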

2:54:17

And now consider

2:54:18

your favorite intuitive,

2:54:20

common sensical counterfactual.

2:54:23

I'm holding a cup if

2:54:24

I were to release the cup,

2:54:26

it would fall. Now, I know it's

2:54:28

very tempting to say that's true.

2:54:31

I still say it's false. because

2:54:32

it's a lottery. It's a

2:54:34

chance process, I say. If the

2:54:36

cup were released,

2:54:37

it might not fall because someone might quickly place a table

2:54:39

under it, a very surprising

2:54:43

updraft of air might suddenly lift

2:54:45

it rather than letting it fall

2:54:47

opposite things might happen. I know

2:54:50

some of them extremely

2:54:51

improbable. I don't I don't

2:54:54

mind. Just as in the lottery

2:54:56

case, it was

2:54:58

extremely improbable that ticket number one

2:55:00

would be the

2:55:01

winner. Yeah. Okay? So these things

2:55:03

aren't like absolutely certain,

2:55:05

it's not true in, like, every possible

2:55:07

counterfactual world that the cup does fall. I guess, some

2:55:09

people might wonder, it's like, does it really matter that in some, like,

2:55:12

infinitesimally small fraction of

2:55:14

possible counterfactual worlds, the consequent doesn't

2:55:17

actually occur? Or is this, like -- are you being a bit obtuse here? Yeah.

2:55:19

III get that a lot. Yeah. Maybe

2:55:24

I am. I choose well,

2:55:27

I'm pedantic, but I I

2:55:29

do think that our use

2:55:31

of counterfactuals commits us to

2:55:33

this

2:55:33

kind of pedantry in various ways. Mhmm. For example,

2:55:36

look at

2:55:38

the logic of counterfactuals. Modus ponens

2:55:40

seems plausible, and that's that's

2:55:43

the rule: if p, then q; p; therefore q. So modus ponens will

2:55:47

fail. It seems if you lower

2:55:50

the bar for

2:55:50

the chanciness below one. So this is, like,

2:55:53

if p then probably

2:55:55

q. Yes. And then p, so q --

2:55:57

that doesn't go through. Yeah. That that's it. Right? If you thought all you needed

2:55:59

for the truth of the

2:55:59

counterfactual was

2:56:03

the probability, high probability of q given p,

2:56:06

something like that, then you could

2:56:07

easily have a case where p is true, the probability is high, and

2:56:11

then it didn't happen. Like, it was

2:56:13

very probable that ticket number one would

2:56:15

lose. Or I should say, better: the

2:56:17

ticket that, in fact, wins

2:56:20

in the

2:56:20

lottery, it was very improbable

2:56:23

that it would win but you

2:56:25

don't want to say if the lottery

2:56:27

were

2:56:27

played, it would lose just because its probability

2:56:29

of losing was high. Maybe this is a misplaced rant. But in general,

2:56:31

I I wonder whether logic

2:56:35

as it's like like formal logic as it's

2:56:37

taught, exactly, things like if p then q; p; so q. Whether that stuff is as

2:56:39

useful as people like to think, because the world just

2:56:41

so

2:56:45

rarely affords us the opportunity to use these, like,

2:56:47

certain logical rules. Like, if you're

2:56:49

in computer science or something, maybe fine.

2:56:51

Yeah. But, like, in actual everyday life,

2:56:53

all we ever almost all we ever have is, like, if p,

2:56:55

then probably q. Yes. p, so probably

2:56:59

q. Yeah. That's right. And so that

2:57:01

kind of more informal reasoning is basically

2:57:03

Yeah. All all the the luxury that we're ever afforded. And then it means that, for example, like, strict

2:57:05

logical fallacies are not

2:57:07

so interesting. Instead, you

2:57:09

wanna be looking at,

2:57:11

like, probabilistic fallacies -- like, ways in which, like, arguments

2:57:14

are not as powerful as they seem

2:57:16

like they might be and

2:57:17

so on. Yeah. That's right. Let's consider one logical rule, the

2:57:19

way that conjunction behaves.

2:57:22

If p is true and

2:57:24

q is true, then the conjunction

2:57:26

p an q is true and vice versa. Earlier, we talked about

2:57:28

some problems

2:57:31

to do with conjunction -- Mhmm. --

2:57:33

that was the lottery paradox and

2:57:35

the preface paradox, where we have the interaction of, say, rational belief and conjunction.

2:57:40

So you might say in that

2:57:42

case,

2:57:42

look, rational belief is not bound

2:57:45

by a strict conjunction rule

2:57:48

because we saw a case where you could

2:57:50

rationally believe each of the sentences in your

2:57:52

book, but not rationally believe the conjunction

2:57:54

of them. actually something very parallel to

2:57:57

this is relevant to counterfactuals.

2:57:59

There seems to be a sort of conjunction introduction rule for counterfactuals. If

2:58:04

I were to release the

2:58:05

cup, it would fall. If I were to release the cup, it would break. Therefore, if I were

2:58:07

to release the cup, it would

2:58:08

fall

2:58:14

and break. Mhmm. So that's a conjunction introduction in the consequent.

2:58:17

I think that's

2:58:18

valid, and I want to preserve it. --

2:58:20

Okay. -- And actually notice

2:58:22

how if you set the bar for probability less than one for the truth of the counterfactuals, you'll

2:58:28

violate this conjunction rule in the

2:58:30

consequent. I did it. In fact, for the lottery. If the lottery were played, ticket number

2:58:33

one

2:58:33

would lose, says

2:58:35

someone who thinks that

2:58:37

high probability is good

2:58:39

enough for truth. background.

2:58:40

So ticket number two would lose. And

2:58:42

now I conjoin: all of the tickets

2:58:44

would lose. And notice I I did that

2:58:46

and you didn't blink. You didn't stop

2:58:50

me and say, hey, Al, you can't

2:58:52

conjoin those consequents like that. I think

2:58:54

it's very intuitive that you can. I did. And

2:58:57

then you get the absurdity that if the

2:58:59

lottery were played, every ticket would lose --

2:59:01

Mhmm. -- which we all agree is false. Yeah. So most counterfactuals are false

2:59:03

in this respect.

2:59:05

What

2:59:07

does that imply for listeners? Or what does that

2:59:09

imply for people who are using counterfactuals in their

2:59:11

reasoning? Great. And this brings me to the next bit I wanted to talk about the assertability and

2:59:16

the acceptability of counterfactuals. When I

2:59:18

go around saying that most counterfactuals are false, a lot of philosophers

2:59:22

think I've lost my philosophical

2:59:24

marbles. This is just crazy. Actually, just

2:59:26

a little sociological observation. Some of them think I'm crazy, and some of them think, well,

2:59:28


2:59:31

i've just given good arguments and that's

2:59:33

it -- I'm exactly right. The former,

2:59:35

the ones who think I'm crazy. They

2:59:37

tend to be more philosophers of language

2:59:39

and philosophers of mind who think that

2:59:41

principles of charity and humanity are operative. And I'm

2:59:44

attributing to competence speakers

2:59:46

-- Mhmm. -- some deep confusion or

2:59:48

something. Yeah. And I must be just

2:59:50

getting how semantics works wrong. The latter think that the conclusion's right.

2:59:53

They they tend to

2:59:55

be more philosophers of

2:59:57

science or or maybe

3:00:00

metaphysicians who just take seriously,

3:00:02

well, this is what chance is -- what it

3:00:04

means, what it implies for counterfactuals. We live in

3:00:06

a chancy world. That's just what you get. Mhmm. Now

3:00:09

how do I soften the blow? I don't

3:00:12

think falsehood is so scary. You know? I

3:00:14

don't think it's so troubling if if stuff that we say turns out

3:00:16

false Lots

3:00:18

of things we say. Lots of

3:00:20

things. And in fact, Gilbert Harman

3:00:22

and Donald Davidson apparently did a bit of early x-phi, a bit of experimental philosophy.

3:00:28

I don't know if they ever published

3:00:30

this. They eavesdropped on conversations in

3:00:32

bars and cafes, just listening to

3:00:35

how much truth and falsehood do

3:00:37

people say in normal conversation. And they concluded that, as I remember, much

3:00:39

or most of what we say is false for

3:00:44

completely understandable reasons. I

3:00:47

mentioned some before, sometimes

3:00:49

we're joking. We're exaggerating

3:00:51

loose talk is very important here. Sometimes

3:00:53

we we just have false beliefs. Sometimes we have true beliefs, but we just

3:00:56

take short cuts.

3:00:58

We just don't wanna say the full thing.

3:01:01

It would be wordy. And so we just cut to the chase. So

3:01:03

I think it's like that with counterfactuals. I I think they're in fact false. but

3:01:08

they're assertable -- the ones that we think

3:01:10

are good. And I have a story

3:01:12

about that. Roughly well, first, I

3:01:14

should tell you my truth conditions for

3:01:17

counterfactuals themselves, and then you'll see the rest.

3:01:19

I think that if p

3:01:21

were the case, q would be the

3:01:23

case is true just in case the

3:01:25

chance of q given p is one at a suitable

3:01:27

time. And that

3:01:30

suitable time is when p was not

3:01:32

yet settled. things could have gone either way with respect to

3:01:34

p. Mhmm. It had positive chance and not p

3:01:38

had positive chance. Okay? Now that's very

3:01:41

demanding truth condition. And that's why I get results like if I were

3:01:43

to let go of the cup, it would fall is

3:01:46

false because the chance

3:01:48

isn't one -- of falling,

3:01:51

given I release it. It's very close to one.

3:01:54

It's approximately one. That's the

3:01:56

key to my understanding acceptability

3:01:58

of counterfactuals. They're acceptable if they're approximately true, if the conditional chance is

3:02:00

close enough to one.

3:02:03

And that's very much

3:02:06

like stuff I said about

3:02:08

the tennis balls and about the

3:02:10

COVID cases. You know, these claims were

3:02:13

false pretty obviously, but they were useful. They

3:02:15

were useful. They're approximately true good

3:02:18

to reason with acceptable. Acceptability is

3:02:20

a story about what's what's useful

3:02:22

to reason with. And they're assertable.

3:02:25

And now let let me

3:02:27

give the story about assertability

3:02:29

for counterfactuals. We take some possibility

3:02:31

seriously in our conversations and

3:02:34

that's a context sensitive matter.

3:02:36

And I say that if p were the

3:02:38

case, q would be the case, is assertable

3:02:42

just in case the conditional

3:02:44

chance is one and it's the

3:02:47

following conditional chance, q given p and the serious possibilities, the stuff that we think

3:02:53

are live possibilities in our

3:02:55

conversation. That's context dependent. Mhmm. I

3:02:57

don't think that the truth

3:02:59

of counterfactuals is context dependent -- Okay. --

3:03:01

nearly everyone says that except me --

3:03:04

but I think that the assertability is

3:03:06

and I think it depends on, like,

3:03:08

adding another conjunct, an additional condition. That's an extra

3:03:10

condition about what possibilities we're taking

3:03:13

seriously. In a normal context, we

3:03:16

don't take seriously the released

3:03:18

cup suddenly being saved by a table or an updraft.

3:03:22

But if I then draw

3:03:24

your attention to these possibilities,

3:03:26

then the context shifts. And I think now it would start adding a bunch of conjunctions saying,

3:03:30

and no thermodynamic miracles, and, like, no one

3:03:32

catches it. And -- Yeah. -- that's it. And

3:03:34

then it becomes even unassertable because in that context, we're taking seriously these rather

3:03:37

odd possibilities. Yeah.

3:03:39

But I I

3:03:42

now locate the context

3:03:44

sensitivity in the assertability

3:03:46

conditions and not the truth.
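
A minimal sketch of the three statuses being distinguished here -- truth, acceptability, assertability -- in terms of conditional chance. The numerical tolerance and the toy "serious possibilities" figure are illustrative assumptions, not values Alan specifies.

```python
# A sketch of the three statuses, keyed to conditional chance. The 0.01
# tolerance and the toy "serious possibilities" number are assumptions.

def true_counterfactual(chance_q_given_p: float) -> bool:
    # True just in case the conditional chance of q given p is exactly 1
    # (at a time when p was still unsettled).
    return chance_q_given_p == 1.0

def acceptable(chance_q_given_p: float, tolerance: float = 0.01) -> bool:
    # Acceptable when approximately true: conditional chance close to 1.
    return chance_q_given_p >= 1.0 - tolerance

def assertable(chance_q_given_p_and_serious: float) -> bool:
    # Assertable when the chance of q, given p AND the possibilities we are
    # taking seriously in this conversation, is 1. Which possibilities count
    # as "serious" is the context-sensitive part.
    return chance_q_given_p_and_serious == 1.0

# "If I released the cup, it would fall":
chance_fall_given_release = 0.9999           # not quite 1: tables, updrafts
chance_fall_given_release_and_serious = 1.0  # ignoring far-fetched saves

print(true_counterfactual(chance_fall_given_release))         # False
print(acceptable(chance_fall_given_release))                  # True
print(assertable(chance_fall_given_release_and_serious))      # True
```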

3:03:48

And I think we can explain away

3:03:50

the intuitions that these counterfactuals are true

3:03:54

My error theory is they're

3:03:56

false, but they're assertable. They're acceptable.

3:03:58

They're good to reason with. And that's because they're approximately

3:04:00

true and

3:04:03

that's because the conditional chance is

3:04:05

close to one. It's like the number of COVID deaths is close

3:04:07

to six point five million. So a

3:04:12

lot of this is currently kind of resting

3:04:14

on the view that we live in an indeterministic world, which I guess we have good reason to think that we do --

3:04:16

but hypothetically

3:04:20

if we -- and that's assumed in doing

3:04:22

counterfactuals. Yeah. If we lived in a

3:04:25

deterministic world, would a bunch of this problem

3:04:27

go away? I think it doesn't. And

3:04:29

so I think even under determinism, our counterfactuals will come up

3:04:31

mostly false. And it

3:04:35

is actually a live debate and

3:04:37

some physicists actually think the world is deterministic. Even quantum mechanics

3:04:39

can be given a deterministic interpretation. Yeah.

3:04:43

It's just that, on some interpretations, it turns out

3:04:45

that things collapse for reasons that we don't

3:04:47

yet know exactly what they are, and there's only one path. Or certain interpretations -- Bohmian mechanics -- are deterministic.

3:04:52

That's a live debate, and and

3:04:54

some physicists might say I've taken

3:04:56

the wrong turn here.

3:04:58

But now let let's take

3:05:00

the deterministic case Now I think there's a different

3:05:03

problem. It's not so much chanciness. It's

3:05:06

what I would call unspecificity. Let's take

3:05:08

a case that philosophers are

3:05:10

fond of talking about, Sophie sadly missed the big parade of,

3:05:13

I think, baseball

3:05:16

players. At a

3:05:18

certain point, Pedro danced. She

3:05:20

would have loved to see that. Sadly,

3:05:23

she missed that she didn't go to

3:05:25

the parade. If Sophie had gone to the parade, she would

3:05:27

have seen Pedro dance. Alright? Now

3:05:30

let this be a deterministic case. There's

3:05:32

no chanciness. And we don't know -- she might have gone to the

3:05:34

bathroom. She she might have gone to the bathroom exactly

3:05:39

just the wrong time or the case that

3:05:41

that is usually considered, she might have got

3:05:43

stuck behind a tall person. Mhmm. If Sophie had gone to the parade and got stuck behind a tall person, she

3:05:49

would have seen Pedro dance. That seems

3:05:51

false. She wouldn't have seen. Okay.

3:05:53

Now I I wanna make a lot

3:05:55

of that. If she'd gone to

3:05:58

the parade, somehow or other -- that was the antecedent -- she would have

3:06:00

seen specifically Pedro dance.

3:06:02

I say, well, no.

3:06:04

It depends on how

3:06:07

the antecedent is realized. If she'd gone

3:06:09

to the parade and got stuck behind a tall

3:06:11

person -- Yeah. -- she'd gone to the parade and

3:06:13

gone to the bathroom at the wrong time, it's not true

3:06:15

that she would have seen Pedro dance,

3:06:17

and that might have happened. These are ways

3:06:19

of realizing the antecedent. For the same reason you

3:06:22

want to add in, I'm guessing a whole bunch

3:06:24

of additional conditions to the antecedent

3:06:26

like: and she wasn't stuck behind a tall

3:06:28

person and she didn't go to the bathroom. All of the stuff that is being assumed

3:06:30

in normal language -- Yes. -- because we engage in loose talk.

3:06:35

but actually should be there if we

3:06:37

wanted to formalize it. Exactly right. And

3:06:39

now some people will say to me well, context will determine what should

3:06:42

be added or not. Mhmm. And I

3:06:45

say that's a matter of assertability. But as far as the truth

3:06:47

goes, I take the counterfactuals at face value. You know, you said if

3:06:52

she'd gone to the parade, she

3:06:54

would have seen Pedro dance. Now

3:06:56

you're telling me that a defeater

3:06:59

of her seeing Pedro dance is getting

3:07:01

stuck behind a tall person, and she might

3:07:03

get stuck behind a tall person. So I

3:07:05

say it's not true that she would have

3:07:07

seen Pedro dance. She might not have. She

3:07:10

might have got stuck. Yeah. Are there any

3:07:12

other approaches to counterfactuals

3:07:14

that it's worth people having in mind or

3:07:16

alternative ideas that people have put forward to make

3:07:18

sense of all of this? Sure. Yeah.

3:07:22

An important alternative approach to counterfactual involves

3:07:24

conditional probabilities. And I like

3:07:27

this approach. I'm thinking

3:07:29

of Adams, Edgington, Skyrms.

3:07:31

Now, the first idea is that

3:07:33

counterfactuals don't have truth values, but they can

3:07:35

be assigned probabilities.

3:07:38

According to Adams, for example,

3:07:40

your probability for if p

3:07:42

would q is your conditional probability of q given

3:07:46

p before learning that p

3:07:48

was false. And for

3:07:50

Edgington, the correct probability is the conditional chance of q given

3:07:55

p just before it turned

3:07:57

out that not-p. And Skyrms identifies

3:07:59

the counterfactual's assertability for

3:08:02

you with your

3:08:05

expectation of the

3:08:08

conditional chance. According to these accounts, counterfactuals don't

3:08:10

have truth conditions. Leitgeb does give

3:08:12

them truth conditions. He says if p

3:08:14

would q is true just in case

3:08:18

the conditional chance is high. And

3:08:20

my own account is influenced by all

3:08:22

of these. I like conditional probabilities. I really like

3:08:26

conditional chances for counterfactuals. And

3:08:28

I really, really like truth conditions given in

3:08:30

terms of those conditional probabilities, those conditional chances. So

3:08:36

that's an important alternative. The basic idea

3:08:38

here is taking counterfactuals away from

3:08:40

kind of the realm of strict logic

3:08:42

where you're like, if p then q,

3:08:44

and bringing it into the realm

3:08:46

of probability or Bayesianism, or making claims

3:08:49

about correlations or associations between things. So

3:08:51

it's causal relationships, maybe, we're

3:08:53

saying, if p then q is more likely or something like that. And

3:08:55

it's also a

3:08:59

a good approach to reasoning and

3:09:01

arguing with counterfactuals. And this goes hand in hand

3:09:03

with Adams' alternative

3:09:06

approach to thinking about reasoning. So

3:09:08

don't think in terms of

3:09:11

classical validity, which is truth preservation. Because again, the thought

3:09:14

is that these conditionals, in

3:09:17

this case, counterfactual, don't have

3:09:19

truth values. Mhmm. But we want to reason in such a way that

3:09:23

we go from high probability premises

3:09:26

to high probability conclusions. We don't want to have our premises being probable

3:09:31

and our conclusion being improbable. And

3:09:33

Adams has this nice theory

3:09:35

of so to speak high probability preservation. And fits in with

3:09:37

that whole alternative program.
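
A rough numerical illustration of the high-probability-preservation idea: for a classically valid argument, the conclusion's improbability is at most the sum of the premises' improbabilities, so a handful of near-certain premises keeps the conclusion near-certain, while a million of them -- the lottery again -- guarantees nothing. The probabilities below are made up for illustration.

```python
# For a classically valid argument, uncertainty(conclusion) is at most the
# sum of the premises' uncertainties, where uncertainty(x) = 1 - P(x).
# All probabilities here are invented for illustration.

def guaranteed_conclusion_probability(premise_probs):
    # Lower bound on the conclusion's probability for a valid argument.
    total_uncertainty = sum(1.0 - p for p in premise_probs)
    return max(0.0, 1.0 - total_uncertainty)

# Two near-certain premises still guarantee a near-certain conclusion:
print(guaranteed_conclusion_probability([0.99, 0.98]))            # ~0.97

# But a million premises at 0.999999 each give no guarantee at all:
print(guaranteed_conclusion_probability([0.999999] * 1_000_000))  # 0.0
```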

3:09:40

So it's not classical

3:09:42

validity. It's not truth values.

3:09:45

It's high probabilities. Okay. Let's push

3:09:47

on and think about maybe an

3:09:49

application of this set of objections.

3:09:52

I think you you you

3:09:54

reckon that this this sort of reasoning

3:09:56

about counterfactuals or recognizing the trouble that comes

3:09:58

with counterfactuals that can potentially present a problem for

3:10:01

a flavor of utilitarianism called objective utilitarianism

3:10:03

or, I guess, objective consequentialism of some

3:10:05

kind. Yes. I guess, broadly, consequentialism is when

3:10:07

you judge the

3:10:10

value or, like, the goodness of actions or

3:10:12

what would be right to do based on the consequences that

3:10:14

they have. Most people have heard of consequentialism in some form, but what is objective consequentialism? Roughly

3:10:20

this, Action one is objectively better

3:10:23

than action two, if and only if

3:10:25

the consequences of action one are better

3:10:27

than those of action two.

3:10:29

And I think here we're imagining really the long term consequences, not just the immediate consequences,

3:10:32

but really perhaps

3:10:37

to the end of history. Yeah. And now we get into

3:10:39

a big discussion, which

3:10:42

I know is is close to

3:10:44

the hearts of many

3:10:46

listeners, about the long term consequences of what we do. But anyway, what I'm

3:10:49

about to say, I

3:10:51

think, will generalize beyond

3:10:53

just objective consequentialism, but

3:10:55

that's a good place to start.

3:10:58

Alright. So let's take a case.

3:11:00

You have a choice. You could help

3:11:02

the old lady across the street or

3:11:05

something else. Go to the pub. Yeah.

3:11:07

What should you do? What's the right thing to

3:11:09

do? Now, let's suppose, in fact, you take the old lady across the street. You

3:11:12

help her. I

3:11:15

don't have any problem with taking a

3:11:18

total of all of the goodness, whatever it is -- the happiness or the

3:11:20

welfare after

3:11:23

that. I'm happy to allow that there's

3:11:25

a fact of the matter of the

3:11:27

total value, the total goodness, the consequences of that. but

3:11:30

what about the thing you didn't do? You did

3:11:32

not go to the pub. That's where my worry is

3:11:34

gonna kick in. Okay. First thing we we should make clear that this is a counterfactual the

3:11:39

way I I just stated it before,

3:11:41

notice the the carelessness of it. Action one is objectively better

3:11:43

than action two if and only if the consequences of action one are

3:11:48

better than those of Action two.

3:11:50

Well, in this case, Action two didn't happen. It was nonactual, and it didn't have any consequences.

3:11:56

So we must be talking about counterfactual

3:11:58

consequences. And now my worries of about counterfactuals

3:12:00

are gonna start to kick in. Yeah. Alright.

3:12:02

So let's take the thing you didn't do.

3:12:05

You didn't go to the pub. Well, case one,

3:12:07

the world is chancy. Well,

3:12:10

let's consider the very first

3:12:12

chancy coin toss that never

3:12:15

happened. How would it have landed? If that coin had been tossed, it would have landed heads,

3:12:21

not tails? No, I say. I find

3:12:23

that implausible. It might have landed tails. Consider the

3:12:26

first lottery that never

3:12:28

happened. If the lottery

3:12:31

had taken place, ticket number seventeen would have

3:12:33

won? No, I say. You can't say of any

3:12:36

ticket that it would have won.

3:12:38

Some other ticket might have won instead. Alright.

3:12:40

I've hardly started. Now I

3:12:43

I know in the cluelessness

3:12:45

industry that this worry

3:12:48

about consequentialism, there's a lot of

3:12:50

discussion of how our actions

3:12:52

have these far reaching consequences.

3:12:54

There are these ripple effects It's

3:12:56

not like ripples in a pond that

3:12:58

tend to dampen down as you go

3:13:00

further out. No. These these just keep on

3:13:02

rippling for the rest of history. you

3:13:05

know, unborn children, which children would have

3:13:07

been born or not. You know, depends acutely

3:13:09

sensitively on the timing of copulation. I think the

3:13:12

things that that

3:13:15

we do, you know, very minor changes

3:13:17

in what we do. Alright. So now let's go back to the pub,

3:13:19

the hypothetical visit to the pub. The first child to

3:13:24

have been born thereafter. Well, the

3:13:26

child to be conceived, hypothetically,

3:13:30

that depends on which

3:13:32

sperm fertilizes an egg and it's a

3:13:34

lottery which sperm wins the race to the

3:13:36

egg. So there would have to be

3:13:38

a fact of the matter of which

3:13:40

sperm wins the lottery to fertilize the egg

3:13:42

to make it this child that would have

3:13:45

been conceived and not some other one that

3:13:47

would have come from the winner of another lottery, a

3:13:49

different sperm winning the race. And I've still barely started. That's

3:13:51

the first child. But

3:13:54

now consider that child's children and grandchildren

3:13:56

and great grandchildren and now the

3:13:58

rest of history. And all of the people who they interact with, and all the people even slightly speed up or slightly

3:14:03

delayed. All of that. All of that. That's right.

3:14:05

Mhmm. Now, I find that wildly implausible that there is a fact of

3:14:07

the matter. We're still considering the Chancy

3:14:10

case where all of

3:14:12

these Chancy Chancy processes

3:14:15

would be resolved in one particular way

3:14:17

and no other way, but it

3:14:20

makes a huge difference to

3:14:22

how we evaluate these counterfactual histories

3:14:24

which way things go. So on

3:14:26

one hypothetical scenario, the children that

3:14:28

happen to be conceived, you know,

3:14:31

latter-day Gandhis and Einsteins,

3:14:33

and and a wonderful world follows.

3:14:35

And with just a small tweak, we

3:14:37

now get a different counterfactual

3:14:40

history with a

3:14:42

latter day Hitler followed by a latter

3:14:44

day Stalin, and a horrible world, and

3:14:46

everything in between. And all of this, again, is acutely sensitive to how

3:14:51

things are initiated and also how the

3:14:53

chance processes go. Yeah. So there's a lot here. Let's let's say

3:14:55

step by step. So we're considering that ethical

3:14:59

question, like, should you help this old

3:15:01

lady across the road or should you

3:15:03

go to the pub? And it turns out that from a, like, total,

3:15:07

like, forever consequentialist point of view, the question

3:15:09

of how good the counterfactual of you going to the pub is

3:15:11

going to depend on, like, the main timing

3:15:15

of exactly when you go to the pub, like,

3:15:17

what cars you happen to walk in front of and slightly delay, when you go in and, like, order a beer or not,

3:15:19

because that could, like, that

3:15:24

ultimately actually will, we think given the setup

3:15:26

of the world, end up affecting the identities of all people potentially in future times. It will

3:15:31

change the exact moments of fertilization and so

3:15:33

on. Yep. Now, given that the action of going to the pub

3:15:35

could be extremely good or extremely

3:15:38

bad based on like a one second difference of

3:15:40

when you do it. You're gonna say that we need

3:15:42

to specify much more precisely what the counterfactual is. We need to say

3:15:46

that it's this, like, extremely precisely specified set of

3:15:48

actions that you're going to go and engage

3:15:50

in, because otherwise it's just not defined. Yeah. Okay.
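
As a toy illustration of the "acute sensitivity" in play -- my own example, not anything from the conversation -- here is a standard chaotic map where two trajectories that start a hair apart soon diverge completely, which is why the precise realization of the antecedent matters so much.

```python
# A toy model of sensitive dependence: the logistic map at r = 4.
# Two "ways of going to the pub" that differ by one part in a billion
# quickly lead to completely different trajectories.

def trajectory(x0: float, steps: int = 40, r: float = 4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

path_a = trajectory(0.2)           # entering at "6:03 and 17 ms", say
path_b = trajectory(0.2 + 1e-9)    # entering at "6:03 and 18 ms"

for step in (0, 10, 20, 30, 40):
    print(step, abs(path_a[step] - path_b[step]))
# The gap starts at 1e-9 and typically reaches order one within a few dozen
# steps, so which precise realization occurs swamps everything downstream.
```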

3:15:53

Now so far I've been assuming

3:15:55

indeterminism, but now let's assume determinism,

3:15:57

which I think is the best -- Yeah. --

3:15:59

for objective consequentialism. And

3:16:02

now we're imagining if given a

3:16:04

precise specification of the initial conditions

3:16:07

and the laws of nature, we

3:16:09

get an entire history that's determined thereafter.

3:16:11

That's the best case. And now, maybe it's more

3:16:13

plausible that there's a fact of the matter of what would

3:16:16

have happened had

3:16:19

I gone to the pub,

3:16:21

but now the problem of unspecificity kicks in. This is not chanciness

3:16:23

at all. This is under

3:16:28

determinism if I'd gone to the

3:16:30

pub somehow or other, how exactly would

3:16:33

I have gone to the pub, and

3:16:35

it makes a difference, you know, would

3:16:37

I have entered the pub at 603

3:16:40

and seventeen

3:16:43

milliseconds rather than eighteen

3:16:45

milliseconds and given the acute sensitivity of what follows

3:16:47

thereafter, it matters

3:16:50

which of these initial conditions

3:16:52

is realized. So I say,

3:16:54

This parallels what I said about Sophie a moment ago. Even under

3:16:58

determinism, I just don't think there's a

3:17:00

fact of the matter given this loosely specified

3:17:02

and antecedent. I go to the pub somehow or other counterfactually,

3:17:08

that you get this very

3:17:10

specific consequence, it'd be this realization, you know, the seventeen

3:17:14

millisecond one perhaps, not some

3:17:17

other one. And this, by the way, is another example of

3:17:19

implausible specificity. I think even

3:17:22

under determinism, I think it's implausibly

3:17:24

specific. Loose antecedent, all-too-tight

3:17:27

consequence. Okay. So I'm gonna try

3:17:29

to rescue it here. So

3:17:31

the person says, well, when I meant

3:17:33

go to the pub, what I meant was this, like,

3:17:35

whole portfolio of outcome of, like, possible

3:17:37

ways that you could do that. Yes. And you'll

3:17:39

say, yeah, but that's like a massive mixture

3:17:41

of, like, extremely good and extremely bad outcomes

3:17:44

that, like, depending on exactly how you

3:17:46

specify those actions. Then we say, okay. Alright.

3:17:48

Well, Let's say that I could exactly specify

3:17:50

the exact path. I'm gonna, like, hypothetically, we could

3:17:52

write out in this book like perfect instructions of

3:17:54

the exact moments I should make each of

3:17:56

these movements, like, in space. That might rescue

3:17:58

this issue of, like, indeterminacy, although it is

3:18:01

now a bit weird. And I should say

3:18:03

more about the weirdness. Yeah. Yeah. Yeah. Okay.

3:18:05

So now let's very precisely specify

3:18:08

the way I go to the pub.

3:18:10

And now -- we're assuming determinism still,

3:18:12

which I'd rather not, but let's Let's

3:18:14

spot that. Okay. So now given the initial condition precisely

3:18:17

specified, determinism, I

3:18:19

am happy enough

3:18:22

that there is history

3:18:24

that follows, namely the one that's determined.

3:18:26

But now we have a different problem.

3:18:28

I'll call it clumsiness, and I'm trying

3:18:30

to riff off "cluelessness". Anyway, it's

3:18:36

hard for us to know or

3:18:38

to have confident, reasonable belief in which of these histories would be realized. I

3:18:40

think the problem is much worse

3:18:42

than that. I think it's a

3:18:45

metaphysical problem that there's just no

3:18:47

fact of the matter of

3:18:49

what would be realized. Now, we fixed

3:18:51

that one. And we fix that one

3:18:54

with the determinism and the exact initial condition. And I say, the problem is, I'll call it,

3:18:56

clumsiness. it's

3:18:59

just not under our control

3:19:02

to realize the initial conditions in one very precise way rather than another,

3:19:04

to enter

3:19:08

the pub exactly at

3:19:10

seventeen milliseconds after 603

3:19:13

rather than eighteen Yeah. It's

3:19:15

just -- somebody is as likely to do

3:19:17

the good one as the bad one, because they're so adjacent to one another. That's basically

3:19:20

it -- that it's

3:19:22

not under my volition to so

3:19:24

finely tune exactly what happens, you know, in this case

3:19:26

down to the millisecond, let alone all of extra

3:19:31

stuff, you know, exactly when I ordered the

3:19:33

beer and so on. Even if I knew what I ought to do,

3:19:35

it would be -- this is the trajectory I want to get on, let's

3:19:41

say, the seventeen millisecond one, not the eighteen one.

3:19:43

It's just not something that's

3:19:45

an option for me. I can't choose

3:19:47

seventeen rather than eighteen milliseconds after 603

3:19:50

So to to step back, put this in the form of a dilemma. Now what are the things that we're evaluating

3:19:57

what are the objects of moral evaluation? On the one

3:19:59

hand, let them be these rather

3:20:03

coarse grained statements like, I help

3:20:05

an old lady, as opposed to I go to the pub

3:20:07

somehow or other. So now it's

3:20:11

pretty plausible. These really are my options,

3:20:13

but now I say it's not plausible

3:20:15

that there's an entire counterfactual history would be

3:20:18

realized given that

3:20:22

unspecificity of the option.

3:20:24

That was the first horn

3:20:26

of the dilemma. Second horn

3:20:28

now let the options in inverted

3:20:30

commas, be these very precisely specified things,

3:20:33

you know, the exact millisecond I enter

3:20:35

the pub and so on. Now I think it's more

3:20:38

plausible that there is a fact of the matter of the entire

3:20:40

counterfactual history but

3:20:43

it's not plausible that that's an

3:20:45

option that anyone could actually act on it. That's right. Indeterminacy or

3:20:47

irrelevance. Yeah. Yeah. It's not the appropriate object of evaluation for

3:20:53

a moral theory. It's not something that I can realize

3:20:55

as an act of my

3:20:59

volition. Okay. So one way

3:21:01

of dealing with this problem would be, I guess,

3:21:03

to specify extremely precisely what

3:21:05

you're going to do. So with, let's

3:21:07

say, you know, you're gonna go to

3:21:09

the pub at this exact time with these exact movements or help the

3:21:11

person in these very

3:21:16

exact ways. And that can

3:21:18

potentially Maybe that's overly precise, but it might at least allow you then to say in principle

3:21:20

that there is a

3:21:23

specific consequence. I guess,

3:21:26

again, always assuming determinism here.

3:21:28

for the sake of for the

3:21:30

sake of the conversation. That's right.

3:21:32

Well, for start, what does determinism mean? It

3:21:34

means given an entire time slice of the world,

3:21:38

and the laws of nature,

3:21:40

the rest of history follows. But I

3:21:42

don't think that it's enough just to be told the

3:21:47

details of your going to the

3:21:50

pub, for example, even an exact specification of your pubbing, I say, doesn't determine a

3:21:52

unique history

3:21:56

thereafter, even under determinism. Because you're just

3:21:58

a tiny part of the world, you

3:22:01

know, in the sweep of world history,

3:22:03

you're just a speck I'm sorry

3:22:05

if this comes as news to

3:22:07

you. But even describing precisely the

3:22:09

details of your going to

3:22:11

the pub, you know, the exact

3:22:14

millisecond of your arrival and exactly

3:22:17

your movements thereafter -- that just

3:22:19

falls way short of determining the

3:22:21

entire world's initial conditions at that time and under determinism, it's the initial

3:22:23

conditions of the entire world. and

3:22:29

the laws that entail the rest

3:22:31

of history. So so

3:22:33

so put it powerfully, according

3:22:35

to determinism, a snapshot of the

3:22:37

world at the time will give you the

3:22:39

rest of history.

3:22:41

But I'm imagining we've only just

3:22:43

got a selfie. We've only just

3:22:46

got this little part of the world, you know, you entering

3:22:50

the pub. And that's not

3:22:53

enough to entail the rest of history.

3:22:55

You're just a tiny

3:22:57

part of a portion of

3:23:00

a small bit of a

3:23:02

fragment of a time slice of history. So so in order to specify sufficiently closely,

3:23:06

we would have to not only talk about

3:23:08

your actions, but I guess specify the entire

3:23:10

initial conditions of the universe, or all of the different atoms that could affect all of the different outcomes, which I guess is

3:23:17

becoming an absurdly bloated set of instructions. we need

3:23:19

not just information about you, we need to know about the hypothetical, you

3:23:24

know, other people and the hypothetical

3:23:26

dogs and frogs and bees and trees and photons and electrons and

3:23:31

everything else, you know, hypothetical

3:23:33

pandemics and natural disasters And I claim you don't

3:23:35

get all of that for free out of

3:23:40

the specification of your going to

3:23:42

the pub, even precisely specified. Okay. So this is all very clever,

3:23:44

but it doesn't

3:23:46

make it easier. Well,

3:23:49

let's just start. Let's just start

3:23:51

here. I'll take it. It does feel

3:23:53

like some kind of trick's being played here. Like, surely

3:23:55

we can patch this theory. So let's just, like, try

3:23:57

to do it in a kind of common sense way.

3:23:59

So what did someone really mean when they said, if I'd gone to the pub, it would have been better? What they were

3:24:02

talking about was like some portfolio

3:24:04

of different scenarios in which you

3:24:06

went to the pub and we might

3:24:09

sample from them. And then, like, hypothetically, we

3:24:11

could see what the full long term consequences

3:24:13

of them would be. And then, you're gonna

3:24:15

choose one of these different pathways somewhat at

3:24:17

random from this portfolio of things that

3:24:19

plausibly match going to the pub.

3:24:21

And then we're gonna average -- take the

3:24:24

expected value of the different ones that

3:24:26

you might sample from. And then you should do it

3:24:28

if, like, the expected value of the, you know, outcomes of these

3:24:30

different options that you might end up taking. In fact, in

3:24:32

practice,

3:24:34

would be positive versus negative. That seems

3:24:36

to fix it. It's, I think, a big improvement. But

3:24:38

notice you really had to change the view. Remember, the original view said

3:24:43

one should perform the action that has

3:24:45

the best consequences. It wasn't any probabilistic thing. It wasn't an expected value thing. Mhmm.

3:24:47

It was just the best consequences.

3:24:52

Yeah. And I've been challenging that

3:24:54

thought. Yeah. And in particular, the

3:24:56

thought that there's a fact or

3:24:58

the matter of the consequences for

3:25:00

something you didn't actually do. Now,

3:25:02

I like this much better, where you somehow probabilify this thing: you take the expected consequences

3:25:08

where that's a probabilistic notion. And by the

3:25:10

way, we're gonna have to talk about

3:25:13

what the nature of probability is. I

3:25:15

like how various topics that we've been talking

3:25:17

about, all of a sudden, but now the

3:25:19

interpretation of probability matters, I think here

3:25:21

the right way to go is it's

3:25:23

some sort of reasonable subjective probability

3:25:25

that that we're putting in at this point. But notice we have gone a long

3:25:27

way from the original objective

3:25:32

consequentialism. Maybe, yeah, for practical

3:25:34

purposes, maybe it feels quite

3:25:36

similar, but I suppose we

3:25:38

have now this quite foundational level of

3:25:41

the theory, we started sticking in

3:25:43

subjective probability judgments and expected value calculations. These things were originally, like, distinct philosophical

3:25:48

issues. Now our moral theory is this,

3:25:51

like, combination of consequentialism plus expected value, plus,

3:25:53

like, as to some view on, like,

3:25:55

how people ought to form views about

3:25:57

things will happen. Yes. So an unpleasant consequence of

3:25:59

this is that if

3:26:01

if we identify problems with, like, subjective

3:26:04

opinion forming or, like, how we aggregate

3:26:06

them to expected value or so on. Yes. That's this is, like, all really deep in the moral theory, and so it's gonna

3:26:08

create a problem for,

3:26:10

like, the the the

3:26:12

subjective utilitarianism as well.

3:26:15

Yeah. So my objection to objective

3:26:17

consequentialism was that it presupposes

3:26:19

this bizarre metaphysics that I find

3:26:21

impossible. To be fair, I should

3:26:23

say just very quickly, some,

3:26:26

you know, really serious good philosophers do,

3:26:28

I think, believe that there are these, which we might call,

3:26:30

counterfacts -- or some definitely do call them counterfacts.

3:26:34

That is, facts of the matter of what would

3:26:36

happen given any antecedent for the

3:26:38

entire world. This goes back to the Molinists, who thought that God

3:26:43

has these these knowledge of these

3:26:45

counterfactuals. And it seems

3:26:47

to go pretty well with the early Stalnaker view that there is

3:26:52

a unique closest possible world

3:26:54

where an antecedent is realized. John Hawthorne runs with that view, Sarah Moss,

3:27:00

I've mentioned Orri Stefansson, Richard Bradley -- there are a

3:27:02

number of people who have views, I think, in

3:27:05

this neighborhood. So III have to

3:27:07

take this view very seriously. even though I've said

3:27:09

I don't buy it. I don't buy it in the end. So

3:27:11

so far, I've

3:27:15

I've been making the point that

3:27:17

objective consequentialism presupposes this very questionable metaphysics. Mhmm. And and

3:27:19

even if it turns out

3:27:22

the metaphysics is correct

3:27:24

and it's defended by

3:27:26

all of these really good philosophers,

3:27:28

we should be upfront

3:27:30

that this ethical theory,

3:27:32

this theory about morality

3:27:35

has this deep metaphysical commitment. Now

3:27:37

I think you're making the next

3:27:39

point that if we fix the theory as

3:27:41

I think we should and we go probabilistic and

3:27:43

we involve, we get expected value

3:27:47

into the calculations. Now

3:27:49

the foundations are probability theory

3:27:51

and expected value theory And

3:27:54

we've talked earlier about some of

3:27:56

the problems there. And again, the moral theory has

3:27:58

got these foundational presuppositions. We want to make sure they're in good order. Yeah.

3:28:04

I see. So it makes I'd say,

3:28:06

like, solving the problems with that, like,

3:28:08

even more important. That's that's without that

3:28:10

consequentialism, like, just doesn't even, like,

3:28:13

really make make sense. Yeah. Yeah. And quite

3:28:15

a while back, you asked me why we should care about counterfactuals and why we should care about probability

3:28:20

in the expected value theory. And now

3:28:22

we see yet another implication that

3:28:25

some moral theories are gonna depend

3:28:28

on how counterfactuals pan out. Some

3:28:30

of them will now depend on how probability

3:28:33

and expected value turn out. And

3:28:35

by the way, just to finish an

3:28:37

earlier thought, I began by saying, I'll put my

3:28:39

objection as an objection to objective consequentialism -- Mhmm. -- but I

3:28:42

actually think it

3:28:45

will generalize beyond that.

3:28:48

Namely any moral theory that

3:28:50

takes consequences at least

3:28:52

somewhat seriously. So even

3:28:55

a deontologist should take some account

3:28:57

of consequences, especially when the stakes are sufficiently

3:28:59

high. Yeah. I think virtually all

3:29:01

do think that likely consequences matter

3:29:03

as well. Yeah. Yeah. And especially given this

3:29:06

more recent movement about how our actions

3:29:08

have these huge consequences for for the rest

3:29:10

of history. Right. It's like,

3:29:13

someone's like, oh, I shouldn't, like, steal

3:29:15

this thing. Of course, like, that ends up changing everyone's identity

3:29:17

for the same reason -- Yeah. -- going to the pub actually does. Yeah. You know, so it's like

3:29:20

massive consequence. That's

3:29:22

it. The deontologist who says, I ought

3:29:24

to keep a promise -- but now imagine some history where, in so doing

3:29:26

it -- right? -- in so doing it, you know, latter-day Stalins

3:29:32

are created. So even the

3:29:34

deontologist has to take some account of consequences. Similarly, a virtue ethicist -- virtue ethics is partly a matter, I hope,

3:29:41

of promoting good consequences for people -- but now we have to

3:29:43

take seriously these

3:29:46

worries about counterfactual consequences and much

3:29:48

of the stuff I said before

3:29:50

kicks in again. Yeah. Okay. Yeah. So it's gonna impact any theories that give some weight to consequences.

3:29:55

I suppose it might be avoided if you totally

3:29:57

constrain your moral picture to just being, like, it's wrong to steal, you shouldn't steal here and now. If it's sufficiently

3:29:59

constrained, it's probably, like -- it's not gonna

3:30:02

create a problem -- Yeah. -- because you can

3:30:04

say, if I took this

3:30:06

course of action, I would be much less

3:30:09

likely to steal. Although, as you say, now we're into

3:30:12

likelihood. But, like, you could say, yeah, well, I'm

3:30:14

gonna specify a path of action where that doesn't include stealing. And

3:30:17

I can specify that set of actions sufficiently closely that

3:30:19

I'll be confident that I won't steal -- Yes. --

3:30:21

and so that's safe. But as soon as you start considering the broader issues in play, then

3:30:23

this is gonna come back. That's

3:30:26

right. And now this takes us back

3:30:28

to long termism. So, yeah, maybe if we're myopic and

3:30:30

just look at the very short term consequences of what we do the

3:30:36

considerations that I've mentioned don't

3:30:38

kick in. But if we widen the horizon, then

3:30:43

I think it's a very live issue.

3:30:45

Slightly reminding me of the paralysis argument that I talked about with Will MacAskill in my second

3:30:47

interview with him a couple of years ago. yeah,

3:30:52

we guess we don't have to have to go into that now, but

3:30:54

it turns out if you're a deontologist and you

3:30:58

place different values on, like, harms that you actively

3:31:00

cause versus harms that you prevent through your actions. And you

3:31:02

can end up in a very sticky situation where a bunch

3:31:05

of lots and lots of actions are prohibited to

3:31:07

you because of their foreseeable unintended consequences, basically.

3:31:09

It has a similar flavor. Yeah. Yeah.

3:31:11

That's right. Although even

3:31:14

being paralyzed and just locking yourself

3:31:17

up at home so that you don't

3:31:19

have any effects on all these other

3:31:21

people that you can't control and you

3:31:24

can't foresee. Well, there'll still be consequences of your

3:31:26

locking yourself up at home. Both well,

3:31:30

unforeseen, there'll be cluelessness there.

3:31:32

clumsiness too by the way. That's right. The

3:31:34

exact way you you quarantine yourself.

3:31:37

And I think you you just

3:31:39

can't avoid it. Even what are you supposed

3:31:42

to do? Just stay at home and not even move. And

3:31:44

then you die and then people are gonna come

3:31:47

and -- oh, yeah, if you worry about that -- get

3:31:49

your body. Getting your body, that's gonna have consequences, such that the

3:31:51

rule -- it's gonna be unclear what it's saying. Yeah. Yeah. I think

3:31:53

that this is a

3:31:55


3:31:57

huge conversation. So yeah. People

3:32:00

who wanna follow up on what we were just talking about probably have

3:32:02

to listen to my my second interview with

3:32:04

Will. Yeah. But anyway, just to summarize, I guess, what

3:32:06

we've said here, I think it's underappreciated how these ethical

3:32:09

positions -- whether objective consequentialism

3:32:12

or some other kind of

3:32:14

consequentialism, maybe subjective with expected value,

3:32:18

and even, in the end,

3:32:20

deontology and virtue ethics -- they may

3:32:22

involve some very questionable metaphysics, I say, about

3:32:26

how the counterfactuals pan out.

3:32:28

Or if we now probabilify

3:32:30

them in terms of expected value,

3:32:33

then the foundations of probability

3:32:35

and expected utility theory, are very relevant. Yeah.
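
A minimal sketch of the "probabilified" version just summarized: treat a coarse option as a portfolio of precise realizations, put a (here entirely made-up) subjective distribution over the long-run value of each, and compare options by expected value rather than by "the" counterfactual consequence.

```python
# A sketch (my own illustration) of the probabilified comparison: coarse
# options are scored by the expected value of sampled precise realizations.
# The value distributions below are pure assumptions, not real estimates.
import random

random.seed(0)

def sampled_long_run_value(option: str) -> float:
    # Stand-in for "total long-run value if this precise realization of the
    # option were enacted" -- hypothetical numbers only.
    if option == "help the old lady across the street":
        return random.gauss(mu=10.0, sigma=50.0)
    if option == "go to the pub":
        return random.gauss(mu=5.0, sigma=50.0)
    raise ValueError(option)

def expected_value(option: str, n_samples: int = 100_000) -> float:
    return sum(sampled_long_run_value(option) for _ in range(n_samples)) / n_samples

for option in ("help the old lady across the street", "go to the pub"):
    print(option, round(expected_value(option), 2))
# Individual realizations swing wildly (big sigma), but the comparison is
# made at the level of expectations over the whole portfolio of realizations.
```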

3:32:37

Just as a reminder, people who might have forgotten, deontology

3:32:40

is the class

3:32:43

of ethical theories where it's it's like rule

3:32:45

based ethics, like, you know, you shouldn't lie, you shouldn't steal, I guess, classically, that that kind of thing,

3:32:48

especially prohibitions although

3:32:51

usually prohibitions that can sometimes be swamped by

3:32:53

other considerations that might be really important. Yep. That's right. Thank you. And I guess virtue ethics is kind of the theory that puts front

3:32:55

and center cultivation of,

3:33:01

like, good qualities of the actor. Like, you know, you should try to be a courageous person --

3:33:03

I don't know. You should

3:33:05

try to be a kind person, things like

3:33:07

that. That's right. And then when you spell out these things,

3:33:09

so that they're somewhat sensible, consequences will have to be part of

3:33:11

the picture. And

3:33:14

my worry is about the the

3:33:16

non actual consequences as before. Okay. So this has been

3:33:18

a very nice segment to, I guess, like, bring together almost all of the different threads that

3:33:21

we've that we've talked about through

3:33:23

the interview. It's been a lot

3:33:25

to take in. And I

3:33:28

would not be surprised if some people, including me,

3:33:30

might have to listen to this again to fully

3:33:32

grasp everything that's gone on. We should wrap

3:33:34

up though. We've got other sessions at the conference to get

3:33:36

to. I suppose yeah. Final

3:33:38

question. We managed to get through our session

3:33:40

on stage earlier without too much trouble. We didn't do anything too embarrassing. Or

3:33:42

if we did, I guess, listeners won't know because we'll have cut it out. Yeah.

3:33:47

You, of course, have done, like, tons of presentations. There's tons

3:33:49

of stuff on stage over the years. And I

3:33:51

wonder if anything has gone, like, either incredibly well or incredibly poorly for you. I

3:33:55

hope some of them have gone incredibly

3:33:57

well -- that's best for others to judge. I can tell you about

3:33:59

one that went very surprisingly, shall we say, This

3:34:04

happened in Argentina as at

3:34:06

a conference and I gave a talk and as is

3:34:10

often the case, there was an empty

3:34:12

glass in front of me and and a bottle, filled

3:34:14

bottle. Towards the end of the talk, I was getting thirsty. Mhmm. q

3:34:19

and a began, and I I

3:34:21

poured poured the bottle into the glass. And then I had a nice big drink from the

3:34:24

glass. Yeah. and

3:34:28

my head nearly exploded. And my

3:34:30

first reaction was just to

3:34:34

spit it out over the audience.

3:34:36

And I thought I can't do that.

3:34:39

So I just swallowed. What

3:34:41

was it? Well, my head was

3:34:43

hit by a sledgehammer. Okay. What

3:34:45

the hell do they put in their water here

3:34:47

in Argentina? Alright. Well,

3:34:49

I have to explain behind me was

3:34:52

a whiteboard and you were supposed to

3:34:54

use marker pens and then you erase

3:34:56

what you've written on the board with

3:34:58

a bit of eraser fluid which is

3:35:00

one hundred percent Pure ethanol. Jesus. So I just

3:35:02

just had a big drink of of pure

3:35:08

ethanol. So I was

3:35:10

smashed. Did people notice? Sure. Tell us. I mean, how did you avoid making a scene? And the first

3:35:12

question is still coming at

3:35:14

me and now, you know,

3:35:16

the the room is just

3:35:18

spinning for me. Okay. And I

3:35:21

said, hey folks, just a a

3:35:23

word of caution. Next time, you

3:35:25

put an empty glass and a bottle

3:35:27

next to the glass. make sure

3:35:29

it's not cleaner fluid. Make sure

3:35:31

it's not cleaner fluid. Water would be

3:35:33

better. And in a way, it's sort of worked out.

3:35:36

Now normally, I

3:35:38

maybe get a little nervous at talks and

3:35:40

maybe in Q and A, perhaps jump in a

3:35:42

little too fast or something. But I was so

3:35:46

mellow. Right? I was so relaxed, you

3:35:48

know, because I was smashed. I'm just gonna answer the

3:35:50

question when I damn well feel like it. Yeah. So now I'm thinking

3:35:52

maybe in

3:35:55

future, when I give a talk at

3:35:58

a conference, I should bring along a little hip flask of pure ethanol just to relax me. Seriously?

3:36:00

Anyway, that

3:36:05

was one of the more surprising things to happen when I gave a

3:36:07

talk. Yeah. Well, I'm

3:36:09

glad you didn't hurt yourself. Isn't

3:36:12

that dangerous? I think

3:36:14

it is dangerous. I was okay. Yeah. The story does end pretty well. Okay. Yeah. I think

3:36:19

I don't think there was permanent damage,

3:36:21

but it was a little scary. Yeah.

3:36:23

Yeah. I think we can say if there's any brain damage done, it has not been apparent to me. Thanks so

3:36:26

much. And fortunately,

3:36:29

we didn't have

3:36:32

any such mishap doing this interview. My guest on the podcast today has been

3:36:34

Alan Hayek. Thanks so much for coming on

3:36:36

the eighty thousand hours podcast, Alan. Thanks

3:36:38

so much for having me, Rob. If

3:36:40

you like that episode, you might

3:36:42

also like some of our other

3:36:44

more challenging episodes such as number

3:36:47

ninety eight, Christian Tarsney on future bias and a possible solution to moral fanaticism, or perhaps

3:36:49

episode eighty six, Hillary

3:36:51

Greaves on Pascal's mugging,

3:36:54

strong long termism and whether

3:36:56

existing can be good for us.

3:36:58

Sticking with the practical philosophy theme. If you've

3:37:00

made it this far, there's a decent chance

3:37:02

you've heard of the Oxford philosopher Nick Bostrom.

3:37:04

who is pioneer in thinking about existential

3:37:06

risk and humanity's potentially grand futures. A

3:37:08

friend of mine has started a

3:37:11

podcast feed called Radio Bostrom, features professional audio

3:37:13

readings of Bostrom's papers and articles

3:37:15

on those topics, as well as

3:37:17

Wise philanthropy, the ethics of AI,

3:37:19

transhumanism, and the case for speeding up some investments in technology while slowing down others. If that sounds interesting

3:37:21

to you, just search for radio

3:37:23

bostrom in any podcasting app or

3:37:25

visit radio bostrom dot com to

3:37:28

learn more. Finally, as

3:37:30

I mentioned in the intro, if you're interested

3:37:32

in working with us at eighty thousand hours, we are

3:37:34

currently hiring a recruiter in order to help grow our team.

3:37:37

Not being able to hire fast enough is actually

3:37:40

one of our biggest bottlenecks as an organization. And

3:37:42

the person in this role will help run hiring rounds, source candidates, and help improve our recruiting

3:37:44

processes. and

3:37:46

will probably be key to increasing our impact

3:37:49

in the coming years. This might be a particularly good fit

3:37:51

for someone excited about building a career in operations or HR. and

3:37:55

you don't need any previous experience with recruiting

3:37:57

to apply. If that sounds interesting to you, then you need to get your application in pretty quickly because applications close on November

3:38:00

the second. and

3:38:04

you can find out more about that position at eighty

3:38:06

thousand hours dot org slash jobs or

3:38:09

by checking out our blog eighty thousand hours

3:38:11

dot org slash latest. Alright. The eighty thousand

3:38:13

hours podcast is produced and edited by Karen Harris,

3:38:15

audio mastering and technical editing by Ben

3:38:17

Cordell and Ryan Kessler. Full transcripts and an extensive

3:38:19

collection of links to learn more are available on

3:38:21

our site and put together by Katie Moore. Thanks for joining. Talk to you

3:38:24

again soon.
