
Lloyd: What do you think the chances are of a guy like you and a girl like me... ending up together? … What are my chances?
Mary: Not good.
Lloyd: You mean, not good like one out of a hundred?
Mary: I'd say more like one out of a million.
Lloyd: So you're telling me there's a chance … YEAH!
(*Dumb and Dumber*)

When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
(Sherlock Holmes, *The Sign of Four*)

[Barbossa is about to kill Will, but Jack shows up.]
Barbossa: It's not possible!
Jack Sparrow: Not probable.
(*Pirates of the Caribbean*)

[Ali G, interviewing the Surgeon General C. Everett Koop]
Ali G: So what is the chances that me will eventually die?
C. Everett Koop: That you will die? 100%. I can guarantee that 100%: you will die.
Ali G: You is being a bit of a pessimist.

# A plea for the improbable

Alan Hájek

The improbable in science, philosophy, and elsewhere I will begin my case for the importance of improbable events—my plea for the improbable—by gesturing at a number of areas in which they have found application.

Statistical significance testing Null hypothesis H₀ vs. alternative hypothesis H₁. Reject H₀ if, by its lights, data at least as extreme as that observed are too improbable (typically probability less than 0.05 or 0.01). ‘Improbable’ must be relativized to a probability function.
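The rejection rule can be sketched with a toy example (my own numbers, not from the talk): H₀ says the coin is fair, H₁ says it is biased toward heads, and we observe 60 heads in 100 tosses. The p-value is the probability, by H₀'s lights, of data at least as extreme as that observed.

```python
from math import comb

def p_value(heads: int, tosses: int) -> float:
    """One-sided p-value under H0 ('the coin is fair'): the probability,
    by H0's lights, of at least this many heads."""
    return sum(comb(tosses, k) for k in range(heads, tosses + 1)) / 2 ** tosses

p = p_value(60, 100)
print(f"p = {p:.4f}")
# 'Improbable' relativized to H0's probability function, threshold 0.05:
print("reject H0" if p < 0.05 else "retain H0")
```

Note that the judgment is entirely relative to H₀: the same data get a different probability by the lights of a different hypothesis.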

Data ‘too good to be true’ As well as data fitting a given hypothesis too poorly, it can also fit it suspiciously well.

Data ‘too good to be true’ Fisher accused Mendel of cooking the books in the published results of his pea experiment. The probability that by chance alone the data would fit Mendel’s theory of heredity that well was 0.00003.

Cournot’s Principle Cournot’s principle: an event of small probability singled out in advance will not happen.

Cournot’s Principle It was advocated by Kolmogorov. And Borel: “The principle that an event with very small probability will not happen is the only law of chance.”

Cournot’s Principle The principle still has some currency, having been recently rehabilitated and advocated by Shafer.

Cournot’s Principle But I think that Aristotle got it right: “It is likely that unlikely things should sometimes happen.”

Cheney’s Principle “If there’s a 1% chance that Pakistani scientists are helping Al Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response.”

Cheney’s Principle USA had to confront a new kind of threat, that of a “low-probability, high-impact event”.

Cheney’s Principle Cournot effectively rounds down the event’s low probability, treating it as if it’s 0. Cheney rounds it up, treating it as if it’s 1.

Engineering and risky events Jet Propulsion Laboratory: the probability of launch failure of the Cassini spacecraft (mission to Saturn) was 1.1 × 10⁻³. According to the US Nuclear Regulatory Commission, the probability of a severe reactor accident in one year is imprecise: [1.1 × 10⁻⁶, 1.1 × 10⁻⁵].

The improbable in philosophy Improbable events have earned their keep in philosophy. A concern with improbable events has driven philosophical positions and insights.

The lottery paradox The lottery paradox puts pressure on the ‘Lockean thesis’: belief is sufficiently high degree of belief.

The lottery paradox For a given threshold (say, 0.95) consider a sufficiently large lottery (say, 100 tickets).
–Your degree of belief that ticket #1 will lose > 0.95.
–By the Lockean thesis (with that threshold), you believe that ticket #1 will lose.
–Likewise for ticket #2, …, ticket #100.
–But you believe that some ticket (of #1, …, #100) will win.
–You have inconsistent beliefs!

The lottery paradox For the Lockean thesis to be plausible, the putative threshold for belief must be high. Accordingly, the probabilities involved in the lottery paradox will be small. Lotteries cast doubt on Cournot’s principle. We see an important feature of small probabilities: they may accumulate, combining to yield large probabilities.
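The accumulation can be checked numerically, using the slides' numbers (100 tickets, threshold 0.95) in a toy model where exactly one ticket wins:

```python
from fractions import Fraction

N = 100
THRESHOLD = Fraction(95, 100)

# Each individual 'ticket i loses' belief clears the Lockean threshold:
p_lose = 1 - Fraction(1, N)          # P(ticket i loses) = 99/100
assert p_lose > THRESHOLD            # so each such proposition is believed

# But the small winning probabilities (1/N each) accumulate across tickets:
p_some_winner = N * Fraction(1, N)   # P(some ticket wins) = 1
assert p_some_winner > THRESHOLD     # so 'some ticket wins' is believed too
print(p_lose, p_some_winner)         # each belief probable; jointly inconsistent
```

The 100 "ticket i loses" beliefs plus "some ticket wins" cannot all be true, though each is individually more probable than the threshold.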

Skepticism about knowledge Vogel: I know where my car is parked right now. But I don’t know that I am not one of the unlucky people whose car has been stolen during the last few hours.

Why care about the improbable? I have pointed out many ways in which scientists and philosophers do care about the improbable. This does much to build my case that they should care…

Why care about the improbable? There are specific problems that arise only in virtue of improbability. We want a fully general probability theory that can handle them. We want a fully general philosophy of probability.

Why care about the improbable? Probability interacts with other things that we care about, and something being improbable can matter to these other things.

Why care about the improbable? There are problems created by improbable events that are similarly created by higher probability events; but when they are improbable we are liable to neglect them.
–Skepticism about knowledge (as Vogel argued)
–Counterfactuals (as I will argue)

What is ‘improbable’? I will count as improbable these propositions (‘events’):
1. Those that have probability 0.
2. Those that have infinitesimal probability—positive, but smaller than every positive real number.
3. Those that have small positive real-valued probability. ‘Small’ is vague and context-dependent, but we know clear cases when we see them, and my cases will be clear.
4. Those that have imprecise probability, with an improbable upper limit (as above).

What is ‘improbable’? There are various peculiar properties of low probabilities. I want to use them to do some philosophical work. I will go through these properties systematically, showcasing each with a philosophical application …

Probability 0 events Much of what’s philosophically interesting about probability 0 events derives from interesting facts about the arithmetic of 0. Each of its idiosyncrasies motivates a deep philosophical problem.

Open-mindedness To be sure, we could reasonably dismiss probability zero events as 'don't cares' if we could be assured that all probability functions of interest assign 0 only to impossibilities—i.e. they are regular/strictly coherent/open-minded.

Open-mindedness Open-mindedness is part of the folk concept of probability: ‘if it can happen, then it has some chance of happening’.

Open-mindedness Open-mindedness has support from some weighty philosophical figures (e.g. Lewis). We will see how much trouble probability-zero-but-possible events cause. It would be nice to banish them!

Closing open-mindedness? There are apparently events that have probability 0, but that can happen.

Closing open-mindedness? A fair coin is tossed infinitely many times. The probability that it lands heads every time HHH … is 0. We will revisit this claim later, but assume it for now.

Closing open-mindedness? A dart is thrown at random at the [0, 1] interval…

Closing open-mindedness? Various non-empty subsets get assigned probability 0:
–all the singletons;
–indeed, all the finite subsets;
–indeed, all the countable subsets;
–even various uncountable subsets.
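The dart example rests on length (Lebesgue) measure: the probability of hitting a region of [0, 1] is its total length. A minimal sketch of why singletons, and countable collections of them, get probability 0:

```python
from fractions import Fraction

def p_interval(a: Fraction, b: Fraction) -> Fraction:
    """Probability that a uniformly random dart on [0, 1] lands in [a, b]:
    the interval's length."""
    return b - a

# A singleton {x} is the degenerate interval [x, x]: probability 0,
# even though the dart can land there.
x = Fraction(1, 3)
print(p_interval(x, x))                          # 0

# Finitely (indeed countably) many points still sum to probability 0:
points = [Fraction(k, 10) for k in range(11)]
print(sum(p_interval(q, q) for q in points))     # 0
```

The uncountable cases (e.g. the Cantor set) also have measure 0, but cannot be reached by summing over points in this way.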

Probability 0 events So there are various non-trivial and interesting examples of probability 0 events. They create various philosophical problems, each associated with a peculiar property of the arithmetic of 0.

You can’t divide by 0: problems for the conditional probability ratio formula The ratio analysis of conditional probability: P(A | B) = P(A and B) / P(B), provided P(B) > 0.

You can’t divide by 0: problems for the conditional probability ratio formula What is the probability that the coin lands heads on every toss, given that the coin lands heads on every toss? 1, surely! But the ratio formula cannot deliver that result, because P(coin lands heads on every toss) = 0.

You can’t divide by 0: problems for the conditional probability ratio formula There are less trivial examples, too. The probability that the coin lands heads on every toss, given that it lands heads on the second, third, fourth, … tosses is ½. Again, the ratio formula cannot say this.
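The ratio formula's silence can be made vivid in a few lines (a sketch; the guard clause is exactly the ‘provided P(B) > 0’ proviso):

```python
def ratio_conditional(p_a_and_b, p_b):
    """Ratio analysis of conditional probability: P(A|B) = P(A and B)/P(B).
    Undefined (here: None) when P(B) = 0."""
    if p_b > 0:
        return p_a_and_b / p_b
    return None  # the formula simply has nothing to say

# Ordinary case: P(A and B) = 0.25, P(B) = 0.5 gives P(A|B) = 0.5.
print(ratio_conditional(0.25, 0.5))

# P(heads on every toss) = 0, so conditioning on it is undefined,
# although P(heads forever | heads forever) is surely 1.
print(ratio_conditional(0.0, 0.0))   # None
```

Popper functions take conditional probability as primitive instead, so such conditional probabilities can be defined directly rather than via the ratio.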

Trouble for conditionalization The zero-probability problem for the conditional probability formula quickly becomes a problem for the updating rule of conditionalization, which is defined in terms of it:
–Suppose that your degrees of belief are initially given by P_initial(–), and that you become certain of a piece of evidence E.
–P_new(X) = P_initial(X | E) (provided P_initial(E) > 0)

Trouble for conditionalization Suppose you learn that the coin landed heads on every toss after the first. What should be your new probability that the coin landed heads on every toss? ½, surely. But P_initial(heads every toss | heads every toss after first) is undefined, so conditionalization (so defined) cannot give you this advice.

Trouble for conditionalization To be sure, there are some more sophisticated methods for solving these problems. Various authors have written on this topic, including myself.
–Popper functions
–Kolmogorov: conditional probability as a random variable (conditional on a sigma algebra)

Trouble for conditionalization But something must be done – we can’t retain the Bayesian orthodoxy in the face of such cases. We have to assess the costs of these other approaches. And there are other, less familiar problems with Bayesian orthodoxy …

The multiplicative destroyer: problems for independence 0 is the multiplicative destroyer: multiply anything by it, and you get 0 back. This spells trouble for the usual definition of probabilistic independence. We want to capture the idea of A being probabilistically uninformative about B. A and B are said to be independent just in case P(A and B) = P(A) P(B).

The multiplicative destroyer: problems for independence According to this account of independence, anything with probability 0 is independent of itself: If P(X) = 0, then P(X and X) = 0 = P(X)P(X). But surely identity is the ultimate case of (probabilistic) dependence.

The multiplicative destroyer: problems for independence Suppose you are wondering whether the coin landed heads on every toss. Nothing could be more informative than your learning: the coin landed heads on every toss. But according to this account of independence, the coin landing heads on every toss is independent of the coin landing heads on every toss!

The multiplicative destroyer: problems for independence More generally, according to this account any proposition with probability 0 is independent of anything. This includes:
–its negation;
–anything that entails it, and anything that it entails.
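The multiplicative-destroyer point can be demonstrated directly (a sketch of the standard definition, with X a probability-0 proposition such as ‘the coin lands heads on every toss’):

```python
def independent(p_a, p_b, p_a_and_b):
    """Standard account: A and B are independent iff P(A and B) = P(A)P(B)."""
    return p_a_and_b == p_a * p_b

p_x = 0.0  # X = 'the coin lands heads on every toss'

# X and itself: P(X and X) = P(X) = 0, and P(X)P(X) = 0.
print(independent(p_x, p_x, p_x))        # True: X 'independent' of itself!

# X and its negation: P(X and not-X) = 0, P(not-X) = 1.
print(independent(p_x, 1 - p_x, 0.0))    # True: and of its own negation!

# Contrast a genuinely dependent pair with positive probability:
print(independent(0.5, 0.5, 0.5))        # False, as it should be
```

Because 0 annihilates any product, the equation P(A and B) = P(A)P(B) holds trivially whenever either relatum has probability 0, whatever the propositions' logical relations.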

The multiplicative destroyer: problems for independence The ratio account of conditional probability was guilty of a sin of omission. But this account of independence is guilty of a sin of commission.

The multiplicative destroyer: problems for independence We need to rethink probabilistic independence. Branden Fitelson and I argue that we should define it in terms of Popper functions.

The additive identity: problems for expected utility theory While 0 is the most potent of all numbers when it comes to multiplication, it’s the most impotent when it comes to addition and subtraction. It’s the additive identity: adding it to any number makes no difference. This creates problems for decision theory.

The additive identity: problems for expected utility theory Arguably the two most important foundations of decision theory are the notion of expected utility and dominance reasoning.

The additive identity: problems for expected utility theory Expected utility is a measure of how good an option is: the weighted average of the utilities associated with that option in each possible state of the world, the weights given by the probabilities of those states. (Weak) Dominance reasoning says: if one option is at least as good as another in every possible state, and better in at least one possible state, then it is preferable (assuming independence of options and states).

The additive identity: problems for expected utility theory Normally these two rules agree. Example: We toss a fair coin.
–If it lands heads, I give you \$1.
–If it lands tails, you give me \$1.
How good is the gamble from your point of view? Its expected utility (in dollars) is: 1·(1/2) − 1·(1/2) = 0.

The additive identity: problems for expected utility theory Now change one of the pay-offs.
–If the coin lands heads, I give you \$2.
–If the coin lands tails, you give me \$1.
Now the expected utility is: 2·(1/2) − 1·(1/2) = ½, better than before. Dominance reasoning agrees (and so does commonsense).

The additive identity: problems for expected utility theory And yet probability 0 propositions show that expected utility theory and dominance reasoning can give conflicting verdicts.

The additive identity: problems for expected utility theory Suppose that two options yield the same utility except on a proposition of probability 0; but if that proposition is true, option 1 is far superior to option 2.

The additive identity: problems for expected utility theory Suppose that we toss the coin infinitely many times. You can choose between these two options:
–Option 1: If it lands heads on every toss, you get a million dollars; otherwise you get nothing.
–Option 2: You get nothing.

The additive identity: problems for expected utility theory Expected utility theory says that these options are equally good: they both have an expected utility of 0. But dominance reasoning says that option 1 is better than option 2. Which is it to be? I say that option 1 is better. I think that this is a counterexample to expected utility theory, as it is usually understood.
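The conflict can be computed explicitly (a sketch with the slides' two options; utilities in dollars):

```python
P_HEADS_FOREVER = 0.0   # probability of heads on every toss

def expected_utility(payoff_if_heads_forever):
    """Both options pay nothing outside the heads-forever state."""
    return (P_HEADS_FOREVER * payoff_if_heads_forever
            + (1 - P_HEADS_FOREVER) * 0.0)

eu1 = expected_utility(1_000_000)   # Option 1
eu2 = expected_utility(0)           # Option 2
print(eu1, eu2)                     # 0.0 0.0 -- expected utility declares a tie

# Weak dominance disagrees: Option 1 is at least as good in every state
# and strictly better in the heads-forever state.
option1_weakly_dominates = (1_000_000 >= 0) and (1_000_000 > 0)
print(option1_weakly_dominates)     # True
```

Because 0 is the additive identity, the million-dollar term contributes 0·1,000,000 = 0 to the sum, so the weighted average is blind to everything that happens on the probability-0 state.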

Infinitesimal probabilities Infinitesimals to the rescue? (E.g. from the hyperreals.)

Infinitesimal probabilities We might say that the probability that the coin lands heads forever is not really 0, but rather an infinitesimal.

Infinitesimal probabilities Lewis: “Zero chance is no chance, and nothing with zero chance ever happens.” A version of Cournot’s principle, with zero probability counting as “small probability”. I think Lewis is trading on a pun on “no chance”. “… infinitesimal chance is still some chance.”

Infinitesimal probabilities But can we really uphold open-mindedness? Williamson has an argument that the probability of heads forever really is zero, even allowing infinitesimals.

Infinitesimal probabilities We can close open-mindedness even for a hyperreal-valued probability function by correspondingly enriching the space of possibilities. The dart is thrown at the [0, 1] interval of the hyperreals.

Infinitesimal probabilities Each point x is strictly contained within nested intervals of the form [x − ε/2, x + ε/2] of width ε, for each infinitesimal ε, whose probabilities are their lengths, ε again. So the point’s probability is bounded above by all these ε, and thus it must be smaller than all of them—i.e. 0. (Figure not to scale.)
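The final step can be made explicit (a sketch of the bounding argument, using only the fact that half an infinitesimal is infinitesimal):

```latex
% Suppose p = P(\{x\}) > 0. Since p \le \varepsilon for every infinitesimal
% \varepsilon > 0, p is itself infinitesimal (it lies below an infinitesimal).
% But then \varepsilon = p/2 is also infinitesimal, so
p \le \tfrac{p}{2} \;\Longrightarrow\; \tfrac{p}{2} \le 0 \;\Longrightarrow\; p \le 0,
% contradicting p > 0. Hence p = 0.
```

So even a hyperreal-valued probability function must assign the point probability 0, once the space of possibilities is enriched to match.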

Infinitesimal probabilities Kolmogorov’s axiomatization restricts the range of all probability functions: the real numbers in [0, 1], and not a richer set. Yet it permits arbitrarily large domains: they can be any sets you like, however large. Any axiomatization like Kolmogorov’s will restrict the range of all probability functions first, while remaining permissive about their domains.

Infinitesimal probabilities We can apparently make the set of contents of an agent’s thoughts as big as we like. But we limit the attitudes that she can bear to those contents—they can only achieve a certain fineness of grain. Put a rich set of contents together with a relatively impoverished set of attitudes, and you violate open-mindedness. Probability 0 events can’t be dismissed as ‘don’t cares’ in Bayesianism, or in probability theory more generally.

Infinitesimal probabilities We have seen what I take to be some serious vices of orthodox Bayesianism, and of orthodox probability theory more generally. One way or another, we need to go unorthodox.

Real-valued, small positive probabilities There is nothing weird about their mathematics. Yet they give rise to a host of philosophical problems in their own right.

Most counterfactuals are false Stare in the face of chance... ‘If the coin were tossed, it would land heads’ I submit that it is false. There is no particular way that this chancy process would turn out, were it to be initiated. Jeffrey: “that’s what probability is all about”. To think that there is a fact of the matter of how the coin would land is to misunderstand chance.

Most counterfactuals are false The argument goes through whatever the chance of Tails, as long as it is a possible outcome. Or consider a fair lottery. ‘If you were to play the lottery, you would lose’ is false no matter how many tickets there are in the lottery.

Most counterfactuals are false In an indeterministic world such as ours appears to be, lotteries—in a broad sense—abound. The evidence from quantum mechanics points that way. The indeterminism reaches medium-sized dry goods. Even billiard ball collisions, human jumps, … are indeterministic.

Most counterfactuals are false ‘If I were to jump, I would come down’ is false. To be sure, a closely related counterfactual is true: ‘If I were to jump, I would very probably come down’. But the probabilistic qualification makes all the difference. To ignore it is to make a Cournot-like rounding error.

Most counterfactuals are false There are various reasons why you may not be staring chance in the face… The trouble is that chance is staring at you. It is heedless of your ignorance, defiant of your ignorings.

Most counterfactuals are false Once you take seriously what quantum mechanics says, you should see chanciness almost everywhere. The world looks like a huge collection of lotteries. But whether or not you take seriously what the theory says, that’s apparently how the world is.

Most counterfactuals are false There are subtle issues here! Contextualists about counterfactuals will deny that most counterfactuals are false, insisting that they have context-dependent truth values. E.g. ‘if I were to jump, I would come down’ is true when uttered in normal contexts; but when I make weird quantum mechanical possibilities salient, I create a new context in which it is false.

Most counterfactuals are false I have various arguments against this kind of contextualism … For now, notice that even contextualists should agree that most counterfactuals are easily made false, by creating a context in which improbable possibilities are salient. Witness again the philosophical impact of the improbable!

Multiplication by extremely large utilities Even extremely small positive probabilities can be offset by multiplication by extremely large utilities when calculating expected utilities.

Pascal’s Wager

|                   | God exists | God does not exist |
|-------------------|------------|--------------------|
| Wager for God     | ∞          | f₁                 |
| Wager against God | f₂         | f₃                 |

f₁, f₂, and f₃ are finite utilities (no need to specify). Your probability that God exists should be positive. You should maximize expected utility. Therefore, you should wager for God.

Pascal’s Wager

|                   | God exists (p) | God does not exist (1 − p) |
|-------------------|----------------|----------------------------|
| Wager for God     | ∞              | f₁                         |
| Wager against God | f₂             | f₃                         |

Let p be your positive probability for God's existence. Your expected utility of wagering for God is ∞·p + f₁(1 − p) = ∞. Your expected utility of wagering against God is f₂·p + f₃(1 − p) = some finite value. Therefore, you should wager for God.

Pascal’s Wager But this argument is invalid! Pascal's specious step is to assume that only the strategy of wagering for God gets the infinite expected utility. He has ignored all the mixed strategies.

Pascal’s Wager Consider this mixed strategy:
–Toss a coin.
–If it lands heads, wager for God; if it lands tails, wager against God.
According to Pascal, the expected utility of this mixed strategy is: ∞·(1/2) + (finite value)·(1/2) = ∞, the same as that of outright wagering for God.

Pascal’s Wager Or consider this mixed strategy:
–Wait to see if your lottery ticket wins (1/1000000000 chance that it does, say).
–If it wins, wager for God; if it does not win, wager against God.
According to Pascal, the expected utility of this mixed strategy is: ∞·(0.000000001) + (finite value)·(0.999999999) = ∞.
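The arithmetic behind the mixed-strategy objection is just infinity's absorption under multiplication by any positive weight (a sketch; call it only with p > 0, since 0·∞ is indeterminate):

```python
from math import inf

def eu_mixed(p_wager_for, finite_alternative=0.0):
    """Expected utility of: wager for God with probability p_wager_for > 0,
    otherwise take the finite alternative."""
    return p_wager_for * inf + (1 - p_wager_for) * finite_alternative

print(eu_mixed(1.0))    # inf -- outright wagering for God
print(eu_mixed(0.5))    # inf -- the coin-toss mixed strategy
print(eu_mixed(1e-9))   # inf -- the lottery-ticket mixed strategy
```

Any positive probability of wagering for God, however small, yields the same infinite expected utility, so Pascal's argument cannot single out outright wagering.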

Pascal’s Wager Or consider even more esoteric mixed strategies. E.g. wager for God iff I quantum tunnel out of this auditorium before the end of this lecture…

Pascal’s Wager But this still understates Pascal's troubles. For whatever you do, you should apparently assign some positive probability to winding up wagering for God…

Pascal’s Wager If you are open-minded, you assign positive probability to all such non-Pascalian routes to wagering for God! Pascal's utility matrix implies that everybody who is open-minded enjoys maximal expected utility at all times!

Pascal’s Wager If Pascalian agents are open-minded, all practical reasoning is useless; if not, the earlier theoretical problems (for conditional probability, conditionalization, independence and decision theory) are alive and well! Your being open-minded apparently implies that you give some positive credence to your being a Pascalian agent! (Tricky issues here.) You had better not be open-minded!

Pascal’s Wager, reformulated But Pascal’s Wager can apparently be made valid.

Pascal’s Wager, reformulated

|                   | God exists | God does not exist |
|-------------------|------------|--------------------|
| Wager for God     | f          | f₁                 |
| Wager against God | f₂         | f₃                 |

Let p be your positive probability for God's existence. Your expected utility of wagering for God is f·p + f₁(1 − p). Your expected utility of wagering against God is f₂·p + f₃(1 − p) = some finite value. If f is large enough, you should wager for God.
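With f finite, the conclusion genuinely depends on how large f is relative to p (a sketch with toy numbers of my own: p = 0.001, f₁ = f₃ = 0, f₂ = 100):

```python
def eu_for(f, p, f1=0.0):
    """Expected utility of wagering for God, with finite salvation utility f."""
    return f * p + f1 * (1 - p)

def eu_against(p, f2=100.0, f3=0.0):
    """Expected utility of wagering against God."""
    return f2 * p + f3 * (1 - p)

p = 0.001
print(eu_against(p))        # 0.1
print(eu_for(1_000, p))     # 1.0  -- large enough f: wager for God
print(eu_for(10, p))        # 0.01 -- too small an f: wager against
```

Unlike the infinite version, this argument is valid but hostage to the actual magnitudes: a sufficiently small probability can no longer be swamped automatically.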

Pascal’s Wager, reformulated Some real-world decision problems look rather like this, because they involve sufficiently high stakes (relative to the associated probabilities).

Imprecise probabilities with small upper limit Sometimes our probabilities are imprecise due to lack of relevant information, or conflicting information. Think of imprecise probabilities as interval-valued: [x, y]. y may be small. But when the associated stakes are sufficiently high, there may still be cause for serious concern.

Flying in Europe during the volcano eruption All of Europe’s airports closed because of the risk to flights posed by the eruption of the volcano in Iceland. The probability of crashes was “small”.

Flying in Europe during the volcano eruption It was also imprecise (lack of information). Rare events especially lend themselves to this source of imprecision—their rarity typically means we have scant information about them on which to base our probabilities. But the stakes were so high that it was wise to cancel the flights.

Global warming Climate scientists differ in their probabilities of various scenarios of global warming (lack of information leading to conflicting information). It seems that our probabilities should be correspondingly imprecise.

Global warming We mainly hear about the most likely scenarios, which involve serious consequences, but various people argue that they are not catastrophic. Perhaps we should be more concerned with much less likely scenarios, but ones that involve truly catastrophic consequences. (Weitzman, Broome) This is so even when the corresponding probabilities are imprecise.

THE END

Thanks especially to Rachael Briggs, David Chalmers, John Cusbert, Kenny Easwaran, Renée Hájek, Leon Leontyev, Aidan Lyon, Daniel Nolan, Peter Vranas, and Lyle Zynda for very helpful comments; and to Carl Brusse and Elle Benjamin for lots of help with the slides.

Unifying ‘the improbable’ ‘The improbable’ may not pick out a natural kind. At least the four subcases (0, infinitesimal, small, and small-imprecise probability) are each more unified internally.

Unifying ‘the improbable’ They bear some important resemblances to each other:
–Cournot’s principle (not that I’m a fan)
–Low frequency (typically)
–Low betting prices
–Our usual neglect of them
I think ‘the improbable’ is unified enough to be a philosophical category of interest. Compare ‘the possible’.

More on Cournot’s Principle Start with the basic formulation. Add: ‘singled out in advance’. Add: it can only be used once. (Vovk) But: in the lottery, each person only uses it once! If you prohibit that, then how will you restrict it? Only use it once in all of history?!
–When?! (Choose carefully!)
–It loses the empirical traction that Borel lauded.

A Contextualist Cournot’s Principle? Make the threshold for what counts as ‘improbable’ context-dependent? E.g. in lottery contexts, the threshold must be very low, perhaps 0.

A Contextualist Cournot’s Principle? What your context happens to be makes no difference to what happens or not (setting aside events involving the context being such-and-such). Chancy events are like lotteries. Some of my concerns about contextualism about counterfactuals kick in.

Closing open-mindedness, even with infinitesimals Toss a first coin at times 1, 2, 3, 4, … and a second coin at times 2, 3, 4, …. Let U be: the first coin lands heads at every time from 1 on; I: the first coin lands heads at every time from 2 on; B: the second coin lands heads at every time from 2 on.
–P(U) = ½ P(I), since U is I together with an independent heads at time 1.
–P(I) = P(B), since the two coins are alike and tossed at the same times.
–P(U) = P(B), since each is a complete infinite run of heads for a fair coin.
So P(U) = ½ P(I) = ½ P(B) = ½ P(U), and hence P(U) = 0, even with infinitesimals available.

Finite probability models We want a fully general mathematics of probability. It’s ‘pure’. (Compare logic.) Infinite models are par for the course elsewhere (e.g. physics, economics). The real world may require them – e.g. radioactive decay. Probability is up to its neck in infinitude:
–Countable additivity: sums from n = 1 to ∞
–Limit theorems: n → ∞

Finite probability models There are assumptions of infinitude in the applications of probability –Axioms of decision theory (e.g. the splitting condition) –Idealized Bayesian agents

Chancy events are like lotteries Start at one end with a paradigm case of a lottery, at the other end with a paradigm case of a seemingly non-lottery-like event – e.g. my jumping. I will work my way in stages from each end, meeting in the middle.

Chancy events are like lotteries Start with a paradigm case of a lottery. –Make the chance process quantum mechanical. (That only improves it.) –Vary the chances of the outcomes. –Clump many of the outcomes into a single one – e.g. one person holds all tickets but #1, written on one big ticket. –Vary the outcomes—e.g. one winner gets a million dollars, another gets killed.

Chancy events are like lotteries Start with one of my chancy events—say, my jumping—and make it more lottery-like: –Subdivide my normal falling into zillions of very precise ways of falling that are equiprobable (e.g. cells in the relevant phase space). If you don’t like me analogizing jumps to lotteries, focus on their chanciness (which you should not deny). That’s what I appeal to.

‘Will’ vs ‘would’ My PROVES TOO MUCH scepticism. Some controversy about future contingents.

‘Will’ vs ‘would’ Regarding ‘will’: there is a crucial symmetry-breaker among the worlds: the actual world, @. ‘Wills’ face their day of reckoning. Staring in the face of chance creates no clash with ‘will’. There can be a fact of the matter of how things WILL turn out in a chancy process (e.g. on 4-dimensionalism), but not of how they WOULD.

‘Will’ vs ‘would’ You can agree that an event’s having a low chance is compatible with its happening in the future; but try saying that for ‘woulds’: –The coin has a tiny chance of landing heads, but it would land heads, if tossed. *

‘Will’ vs ‘would’ Say for each ticket i of the lottery: ‘Ticket i will win’. One of these statements is true. And there’s no mystery about which it is – it’s one that picks out the ticket that actually wins. Now do it for counterfactuals about each ticket in a lottery that is never played. It seems odd to say that one of these is true. Which one?

‘Will’ vs ‘would’ A Cournot-like principle would be silly: ‘the future will be realized in a most similar way’.

Reinterpretation strategies Perhaps p ☐→ q should be reinterpreted as: –p ☐→ probably q –(p and ceteris paribus) ☐→ q –(p and nothing weird happens) ☐→ q

Reinterpretation strategies I reply: –These reinterpretations seem weaker than the original counterfactual. They are not faithful translations. –They no longer clash with the ‘might not’ counterfactual. E.g. ‘if I were to jump, I would probably come down, but I might not come down’ is perfectly felicitous. –We may want to convey the difference between the original counterfactual and its reinterpretations. (Doctor: “if you were to take this pill, you would probably live”.) –Inconsistent lottery counterfactuals come out consistent.

Reinterpretation strategies Threat of triviality The following wrongly come out as necessary truths: –p ☐→ ceteris paribus –p ☐→ nothing weird happens Even when p says (or implies) ‘ceteris is not paribus’, or ‘something weird happens’!

A plea against the improbable: paranoia/paralysis/hyperactivity “If you dignify every high-stakes, low probability event with your attention, then you will live in constant fear.” “If you dignify every such event with your attention, then you will be unable to act – everything you might do has some positive chance of dire results”. “If you dignify every such event with your attention, you will be pulled hither and thither, trying to solve every problem under the sun.”

A plea against the improbable: paranoia/paralysis/hyperactivity Bite the bullet: walk through the list of nightmare scenarios, and try to assess their probabilities (imprecisely, to be sure). We have to do our best to figure out the utilities and probabilities, and to act accordingly. Few of them will have probability as high as the 10 degree Celsius warming scenario (e.g. the CERN super-collider creating a black hole doesn’t). Those that do are surely worthy of our attention, too! To ignore them is to play the ostrich.

A plea against the improbable: paranoia/paralysis/hyperactivity Some strategies are ‘robust’, dealing with many scenarios at once – e.g. building refuges, coordinating disaster response. Estimating probabilities better: Hanson’s futures markets!

A plea against the improbable: paranoia/paralysis/hyperactivity Do value-of-information calculations: for which things would it be especially important to have more knowledge of the relevant probabilities? Example: asteroid collision. (Relatively low cost to get the information, potentially huge benefit in having it.)

The additive identity: problems for expected utility theory (Re)interpret expected utility theory so that in the case of ties, the theory is silent? That’s a different kind of defect: incompleteness. What would you prefer: $1, or $1? The theory is silent? That’s an uncomfortable silence!

The additive identity: problems for expected utility theory Decision theory supplements expected utility theory with further rules? –If two options are identical, it doesn’t matter which you choose. –If option 1 (weakly) dominates option 2, then choose option 1.

The additive identity: problems for expected utility theory There will still be problems. Which do you prefer?: –Option 1: You get a million dollars iff the coin lands HHH … or THH … (you get two tickets) –Option 2: You get a million dollars iff the coin lands TTT... (you get one ticket) Option 1 is surely better, but we get silence from expected utility theory, and silence from dominance reasoning.
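A minimal arithmetic sketch of this silence, assuming a standard real-valued probability function on which each particular infinite sequence of fair-coin tosses gets probability exactly 0 (the variable names are mine, for illustration):

```python
# Under real-valued probability, every particular infinite toss sequence
# (HHH..., THH..., TTT..., ...) receives probability exactly 0.
p_sequence = 0.0

prize = 1_000_000  # dollars

# Option 1: win on either of two sequences (HHH... or THH...): two 'tickets'.
eu_option_1 = prize * (p_sequence + p_sequence)

# Option 2: win on one sequence (TTT...): one 'ticket'.
eu_option_2 = prize * p_sequence

# Expected utility theory is silent: both options get expected utility 0,
# even though Option 1 seems strictly better.
print(eu_option_1, eu_option_2)  # 0.0 0.0
```

Dominance reasoning is silent too: neither option pays more than the other at any state that gets positive probability.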

What is ‘improbable’? Strictly speaking, ‘improbability’ should be relativized to a probability space: a triple (Ω, F, P), where Ω is a set, F is an algebra of subsets of Ω, and P is a probability function defined on F. We can thus distinguish three grades of probabilistic involvement: a set of possibilities (proposition) may be recognized by such a probability space by –being a subset of Ω; –being an element of F; –receiving positive probability from P. These are non-decreasingly committal ways in which the probability space may countenance a proposition.

What is ‘improbable’? Many propositions may not even make the first grade for a given probability space. E.g., if Ω is the set of possible outcomes of a coin toss, the set of possibilities in which the coin turns into a Shetland sheepdog may not be a subset of Ω.

What is ‘improbable’? Some propositions may make the first but not the second grade: being subsets of Ω, but not receiving a probability assignment at all – e.g. ‘non-measurable’ sets. Some propositions may make the second but not the third grade: receiving probability 0 from P.
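The grades of probabilistic involvement can be illustrated with a toy finite probability space; the particular Ω, algebra, and measure below are an invented example, not from the talk:

```python
# A toy probability space (Omega, F, P). The algebra F is deliberately
# coarse, so some subsets of Omega are 'non-measurable'.
omega = frozenset({1, 2, 3, 4})
F = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega}
P = {frozenset(): 0.0, frozenset({1, 2}): 0.0,
     frozenset({3, 4}): 1.0, omega: 1.0}

def grade(proposition: frozenset) -> int:
    """Highest grade of probabilistic involvement (0 = not even recognized)."""
    if not proposition <= omega:
        return 0   # not even a subset of Omega
    if proposition not in F:
        return 1   # first grade only: subset of Omega, but non-measurable
    if P[proposition] == 0:
        return 2   # second grade only: measurable, but probability zero
    return 3       # third grade: receives positive probability

print(grade(frozenset({5})))      # 0: outside Omega (the sheepdog case)
print(grade(frozenset({1, 3})))   # 1: a 'non-measurable' set
print(grade(frozenset({1, 2})))   # 2: measurable with probability 0
print(grade(frozenset({3, 4})))   # 3: positive probability
```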

What is ‘improbable’? Assume that context makes clear which probability space is relevant. What is ‘improbable’ will be determined by the assignments of that space’s probability function.

Closing open-mindedness, even with infinitesimals Lewis: You may protest that there are too many alternative possible worlds to permit regularity. But that is so only if we suppose, as I do not, that the values of the function C are restricted to the standard reals. Many propositions must have infinitesimal C-values, and C(A|B) often will be defined as a quotient of infinitesimals, each infinitely close but not equal to zero. (See Bernstein and Wattenberg (1969).) Brian also cites this important paper. It shows there is an open-minded probability assignment to the dart experiment (with hyperreal values).

Closing open-mindedness, even with infinitesimals But that’s a very specific case, with a specific cardinality! We need to be convinced that a similar result holds for each set of possibilities, whatever its cardinality. In fact, by Lewis’s own lights (1973) the cardinality of the set of alternative possible worlds is greater than that—at least beth 2, the cardinality of the power set of the reals.

Infinitesimal probabilities We can close open-mindedness even for a hyperreal-valued probability function by correspondingly enriching Ω: the dart is thrown at the [0, 1] interval of the hyperreals. Each point x is strictly contained within nested intervals of the form [x − ε/2, x + ε/2] of infinitesimal width ε, for all possible ε, whose probabilities are their lengths, ε again. So the point’s probability is bounded above by all these ε, and thus it must be smaller than all of them.
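The bounding step can be displayed as a squeeze (my rendering of the argument above):

```latex
\{x\} \subset \left[x - \tfrac{\varepsilon}{2},\; x + \tfrac{\varepsilon}{2}\right]
\;\Longrightarrow\;
P(\{x\}) \;\le\; P\!\left(\left[x - \tfrac{\varepsilon}{2},\; x + \tfrac{\varepsilon}{2}\right]\right)
= \varepsilon
\qquad \text{for every infinitesimal } \varepsilon > 0.
```

Since this holds for every infinitesimal ε in the enriched range, P({x}) is smaller than each of them, and open-mindedness fails again.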

Closing open-mindedness, even with infinitesimals We have an arms race: for each commitment on the range of the probability function (reals, hyperreals, …), I can come up with a sufficiently big Ω to thwart open-mindedness. It is a curious fact about the axiomatization of probability that there is complete freedom in the domain of probability functions, but a commitment is made once and for all on their ranges. On the one hand, we can make the set of contents of an agent’s thoughts as big as we like; on the other hand, we hamstring the attitudes that she can bear to those contents—they can only achieve a certain fineness of grain. Put a rich set of contents together with a relatively impoverished probability scale, and voilà, you have irregularity.

Closing open-mindedness, even with infinitesimals The problems for the analysis of conditional probability, for conditionalization, for the analysis of independence, and for decision theory, seem to be very much alive.

[Dart-strike decision table: Option 1 and Option 2, with columns ‘Dart hits irrational number’ and ‘Dart hits rational number’]

[Number-line figure, 0 to 1: an unmeasurable set]

[S-curves figure]
