
1 Chapter 12 Uncertainty
Consider two lotteries, written as prize (probability):
L1: 500,000 (1)
L1': 2,500,000 (0.1), 500,000 (0.89), 0 (0.01)
Which one would you choose? Now consider another two lotteries:
L2: 500,000 (0.11), 0 (0.89)
L2': 2,500,000 (0.1), 0 (0.9)
Again, which one would you choose?
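As a quick check (not on the slide itself), the expected monetary values of the four lotteries can be computed directly; the Python sketch below simply encodes the lotteries as listed above.

```python
# Expected monetary value of each lottery, given as (prize, probability) pairs.
lotteries = {
    "L1":  [(500_000, 1.0)],
    "L1'": [(2_500_000, 0.10), (500_000, 0.89), (0, 0.01)],
    "L2":  [(500_000, 0.11), (0, 0.89)],
    "L2'": [(2_500_000, 0.10), (0, 0.90)],
}

for name, outcomes in lotteries.items():
    ev = sum(prize * prob for prize, prob in outcomes)
    print(f"{name}: expected value = {ev:,.0f}")
# L1' and L2' have the higher expected values (695,000 and 250,000),
# yet many people choose L1 in the first pair and L2' in the second.
```

The rest of the chapter develops why that common pattern of choices is a puzzle.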

2 Risk is a fact of life. In addition to lotteries, we face risks when crossing the street (especially in Taipei), making an investment, or even getting married. Because risks exist, outcomes are not deterministic; we assume they can be described by a probability distribution. Think of states of nature (s1, s2, …, sn) with probabilities (π1, π2, …, πn). In state i, consumption is ci.

3 Hence we can think of a contingent consumption plan (c1, c2, …, cn) and draw the consumer's preferences on the consumption plane. Notice that a contingent consumption plan is a specification of what will be consumed in each state of nature, and the preference over consumption plans may well depend on the probabilities of the states, (π1, π2, …, πn).

4 An example: a lottery ticket costs 1 dollar. A winning number is drawn from 1–100, each with equal probability. When you buy a ticket you choose a number; if your number matches the winning number you get 100 dollars, otherwise you get nothing. Your initial wealth is 200 dollars. Let state si be the event that number i is the winning number, so (π1, π2, …, π100) = (0.01, 0.01, …, 0.01).

5 If you do not buy any lottery ticket, your consumption does not depend on the state: (c1, c2, …, c100) = (200, 200, …, 200). If you buy one ticket and choose number 1, then (c1, c2, …, c100) = (299, 199, …, 199). If you buy one ticket on number 1 and another on number 2, then (c1, c2, …, c100) = (298, 298, 198, …, 198).
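A minimal sketch of how these contingent consumption plans are built, assuming the numbers from the example (wealth 200, ticket price 1, prize 100, 100 equally likely states); the function name consumption_plan is just an illustration:

```python
# Contingent consumption plan (c1, ..., c100) given the set of numbers bet on.
# Assumed values from the example: wealth 200, ticket price 1, prize 100.
WEALTH, PRICE, PRIZE, STATES = 200, 1, 100, 100

def consumption_plan(numbers_bought):
    base = WEALTH - PRICE * len(numbers_bought)      # tickets are paid for in every state
    return [base + (PRIZE if s in numbers_bought else 0)
            for s in range(1, STATES + 1)]

print(consumption_plan([])[:3])      # [200, 200, 200]      no tickets
print(consumption_plan([1])[:3])     # [299, 199, 199]      one ticket on number 1
print(consumption_plan([1, 2])[:4])  # [298, 298, 198, 198] tickets on numbers 1 and 2
```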

6 Another example: suppose a loss D occurs with probability 1−p, and there is an insurance contract that pays 1 dollar in the loss state in exchange for a premium of r < 1 dollars. The person's initial wealth is W. We can model this with two states (s1, s2), where state 1 is that no loss occurs and state 2 is that the loss occurs. Hence (π1, π2) = (p, 1−p).

7 If the consumer does not buy any insurance, his consumption is (W, W−D). If he buys K dollars of insurance coverage, he pays the premium Kr no matter which state occurs and is paid K dollars when the loss occurs. Hence his consumption becomes (W−Kr, W−D+K−Kr). So if we plot c1 on the x-axis and c2 on the y-axis, his budget line has slope −(1−r)/r.
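To make the slope concrete, here is a small sketch; the particular values of W, D, and r are illustrative assumptions, not from the slide:

```python
# Consumption in the two states as a function of insurance coverage K.
# State 1: no loss; state 2: the loss D occurs.  Premium rate r per dollar of coverage.
W, D, r = 35_000, 10_000, 0.02            # illustrative numbers

def consumption(K):
    c1 = W - r * K                        # no-loss state: only the premium is paid
    c2 = W - D + K - r * K                # loss state: premium paid, coverage K received
    return c1, c2

(c1a, c2a), (c1b, c2b) = consumption(0), consumption(1_000)
slope = (c2b - c2a) / (c1b - c1a)
print(slope, -(1 - r) / r)                # both equal -49.0, i.e. the slope is -(1-r)/r
```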

8 Given the budget line and the consumer's preferences over (c1, c2), we can derive his optimal consumption plan and therefore determine how much insurance he buys. Now, what does a "fair" insurance policy mean? It means that on average the insurance company breaks even: (1−p)K = Kr, or r = 1−p. This is intuitive, since the premium per dollar of coverage is simply the probability that the loss occurs.

9 We now turn to preferences over the consumption plans (c1, c2). As before, MRS12 = Δc2/Δc1: if you give me one more unit of c1, how many units of c2 am I willing to give up while staying indifferent? Intuitively, this depends on how likely I think the two states (s1, s2) are. For instance, if I think state s1 is very unlikely, I will not be willing to give up many units of c2.

10 This suggests that the preference over consumption plans (c1, c2) also depends on (π1, π2). Hence, in general, we write the utility function representing the preference over consumption plans as u(c1, c2, π1, π2). Some examples: u(c1, c2, π1, π2) = π1 c1 + π2 c2 (the expected value of consumption).

11 u(c1, c2, π1, π2) = c1^π1 · c2^π2; u(c1, c2, π1, π2) = π1 ln(c1) + π2 ln(c2) (the expected value of the log of consumption). The first and third forms are special: we say they have the expected utility form. In general, a utility function of the form u(c1, c2, π1, π2) = π1 v(c1) + π2 v(c2) is an expected utility function, i.e., the utility is the expected value of some utility function v(·) of consumption.

12 Let us examine whether your utility has the expected utility form. Suppose it does; then there exists a v(·) such that
EU(L1) = v(500,000)
EU(L1') = 0.1 v(2,500,000) + 0.89 v(500,000) + 0.01 v(0)
EU(L2) = 0.11 v(500,000) + 0.89 v(0)
EU(L2') = 0.1 v(2,500,000) + 0.9 v(0)
Hence L1 ≽ L1' ⟺ 0.11 v(500,000) − 0.1 v(2,500,000) − 0.01 v(0) ≥ 0.

13 Similarly, L2 ≽ L2' ⟺ 0.11 v(500,000) − 0.1 v(2,500,000) − 0.01 v(0) ≥ 0. So if you choose L1 over L1' but L2' over L2 (as many laboratory subjects do), or L1' over L1 but L2 over L2', then your preference cannot be represented by an expected utility function; this is the Allais paradox. There is nothing wrong with such choices; they simply show that an expected utility function cannot accurately represent your preference.
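The algebra can also be checked mechanically: under any Bernoulli function v, the two comparisons reduce to the same inequality, so an expected utility maximizer must rank both pairs the same way. A sketch, with a few arbitrary illustrative choices of v:

```python
import math

def eu(lottery, v):
    """Expected utility of a lottery given as (prize, probability) pairs."""
    return sum(p * v(x) for x, p in lottery)

L1  = [(500_000, 1.0)]
L1p = [(2_500_000, 0.10), (500_000, 0.89), (0, 0.01)]
L2  = [(500_000, 0.11), (0, 0.89)]
L2p = [(2_500_000, 0.10), (0, 0.90)]

# For ANY Bernoulli function v, EU(L1) - EU(L1') equals EU(L2) - EU(L2'):
# both reduce to 0.11 v(500,000) - 0.1 v(2,500,000) - 0.01 v(0).
for v in (math.sqrt, math.log1p, lambda x: x):
    d1 = eu(L1, v) - eu(L1p, v)
    d2 = eu(L2, v) - eu(L2p, v)
    print(round(d1, 4), round(d2, 4))   # the two differences coincide every time
```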

14 u(c1, c2, π1, π2) = π1 c1 + π2 c2 = π1 v(c1) + π2 v(c2), where v(c) = c.
u(c1, c2, π1, π2) = π1 ln(c1) + π2 ln(c2) = π1 v(c1) + π2 v(c2), where v(c) = ln c.
We can think of v(c) as the utility of consuming c for certain; in this sense, u(·) is the expected utility of the consumption plan (c1, c2). A utility function u(·) of this particular form is called a von Neumann–Morgenstern utility function, or an expected utility function.

15 The function v(·) is sometimes called the Bernoulli function. If we take an expected utility function u, multiply it by a positive constant a and add a constant b, so that f(u) = au + b, then F ≡ f(u) is also an expected utility function: F(c1, c2, π1, π2) = f(u(c1, c2, π1, π2)) = a u(c1, c2, π1, π2) + b = a(π1 v(c1) + π2 v(c2)) + b = π1(a v(c1) + b) + π2(a v(c2) + b) = π1 f(v(c1)) + π2 f(v(c2)), so F is an expected utility function with Bernoulli function f(v(·)).
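A minimal numerical check of this regrouping; the Bernoulli function v and the constants a and b below are arbitrary illustrations:

```python
import math

pi1, pi2 = 0.3, 0.7                     # state probabilities
v = math.sqrt                           # an illustrative Bernoulli function
a, b = 5.0, 2.0                         # positive affine transformation f(u) = a*u + b

def u(c1, c2):
    return pi1 * v(c1) + pi2 * v(c2)

def F(c1, c2):
    # a*u + b, regrouped as an expected utility with Bernoulli function a*v + b
    return pi1 * (a * v(c1) + b) + pi2 * (a * v(c2) + b)

for c1, c2 in [(100, 400), (250, 250), (10, 900)]:
    assert abs(a * u(c1, c2) + b - F(c1, c2)) < 1e-9   # the regrouping is exact
print("a*u + b and the regrouped expected utility agree")
```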

16 This kind of transformation (multiplying by a positive constant and adding a constant) is called a positive affine transformation. It turns out that applying a positive affine transformation to an expected utility function yields another expected utility function; moreover, any other kind of transformation destroys the expected utility property.

17 The most important property characterizing expected utility is the independence assumption. For instance, if u(c1, c2, c3, π1, π2, π3) ≥ u(c1', c2', c3, π1, π2, π3), then u(c1, c2, d3, π1, π2, π3) ≥ u(c1', c2', d3, π1, π2, π3). Think of it this way: with probability π3, state 3 occurs; in the first comparison the state-3 outcome is c3, while in the second it is d3. Yet it does not matter what the state-3 outcome is.

18 In state 3 some common outcome occurs, and since it is common it does not affect the preference: the ranking is determined solely by whether π1 v(c1) + π2 v(c2) ≥ π1 v(c1') + π2 v(c2'). This has the flavor of independence. Notice that in ordinary consumer theory, (c1, c2, c3) are consumed at the same time, so it could well be that while consuming c3 we prefer (c1, c2) to (c1', c2'), yet while consuming d3 the preference is reversed.

19 It is different now because if state 3 occurs, states 1 and 2 do not occur. Notice that with u(c1, c2, c3, π1, π2, π3) = π1 v(c1) + π2 v(c2) + π3 v(c3), MRS12 = MU1/MU2 = π1 v'(c1)/[π2 v'(c2)], which does not depend on c3. Is this reasonable? Consider going to Venice (V), watching a movie about Venice (M), and staying home (H). I may prefer the lottery V (0.99), H (0.01) to V (0.99), M (0.01), because the latter entails disappointment: my ranking depends on another outcome of the lottery, contrary to independence.

20 Moreover, comparing L1 (500,000 for sure) with L1' (2,500,000, 0.1; 500,000, 0.89; 0, 0.01), I may choose L1 because, had I chosen L1' and 0 been realized, I would regret not having chosen otherwise. On the other hand, no such clear-cut regret potential exists between L2 (500,000, 0.11; 0, 0.89) and L2' (2,500,000, 0.1; 0, 0.9): regret comes from comparing with the choice not made, and here neither forgone choice would have guaranteed a good outcome.

21 A person who prefers a certain outcome to a risky outcome with the same expected income is a risk averter. A person who is indifferent between the two is risk neutral. Finally, a person who prefers the risky outcome to the certain one is risk loving. Draw a figure with v(·) concave, linear, and convex, respectively, to illustrate the three cases.

22 Fig. 12.2

23 Fig. 12.3

24 Go back to the insurance example and assume a risk-averse expected utility maximizer. Then (1) along an indifference curve, when c1 is greater, c2 has to be lower; (2) |MRS12| = MU1/MU2 = π1 v'(c1)/[π2 v'(c2)]; since v(·) is concave, when c1 is greater and c2 is lower, v'(c1)/v'(c2) is lower and hence |MRS12| is smaller. So we have the usual indifference curves, convex to the origin (averages are preferred to extremes).
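One way to see this convexity numerically is to walk along an indifference curve and watch |MRS12| fall; the sketch below assumes v(c) = ln c and equal probabilities purely for illustration:

```python
import math

p = 0.5                                    # probability of state 1
v, vprime = math.log, lambda c: 1.0 / c    # concave Bernoulli function and its derivative

def mrs(c1, c2):
    return (p * vprime(c1)) / ((1 - p) * vprime(c2))   # |MRS_12|

# Walk along the indifference curve p*ln(c1) + (1-p)*ln(c2) = ubar:
ubar = p * v(100) + (1 - p) * v(100)
for c1 in (50, 100, 200, 400):
    c2 = math.exp((ubar - p * v(c1)) / (1 - p))        # solve for c2 on the curve
    print(c1, round(c2, 1), round(mrs(c1, c2), 3))     # |MRS_12| falls as c1 rises
```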

25 If insurance is fair, i.e. r = 1−p, then at the optimum |MRS12| = p v'(c1)/[(1−p) v'(c2)] = (1−r)/r = p/(1−p), so v'(c1) = v'(c2) and hence c1 = c2. In words, facing fair insurance, a risk-averse expected utility maximizer chooses to insure fully. On the other hand, if the insurance company makes a profit, then r > 1−p, i.e. p > 1−r, so v'(c1)/v'(c2) = (1−r)(1−p)/(pr) < 1. Since v'' < 0, this means c2 < c1: wealth when the loss occurs is lower, so the consumer is not fully insured.
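A numerical sketch of both cases, assuming v(c) = ln c and illustrative values of W, D, and p: with a fair premium the chosen coverage equals the full loss D, while with a loaded premium it is strictly smaller.

```python
import math

W, D, p = 35_000, 10_000, 0.99        # wealth, loss, probability of NO loss (state 1)

def expected_utility(K, r):
    c1 = W - r * K                    # no-loss state
    c2 = W - D + K - r * K            # loss state: premium paid, coverage K received
    return p * math.log(c1) + (1 - p) * math.log(c2)

def best_coverage(r):
    grid = range(0, D + 1)            # search coverage levels 0, 1, ..., D
    return max(grid, key=lambda K: expected_utility(K, r))

print(best_coverage(r=0.01))          # fair premium r = 1 - p = 0.01  -> K = 10,000 (full)
print(best_coverage(r=0.011))         # loaded premium r > 1 - p       -> K ≈ 6,800 (partial)
```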

26 On the 45-degree line c1 = c2, so with an expected utility function |MRS12| = p/(1−p), the likelihood ratio of state 1 to state 2. Now consider another example. A consumer has wealth w and is considering investing some amount x in a risky asset. The asset has a rate of return rg in the good state and −rb in the bad state; the good state occurs with probability p and the bad state with probability 1−p.

27 Plot the outcome in the good state on the x-axis and the outcome in the bad state on the y-axis. If the consumer invests x dollars, the outcomes are (w − x + (1+rg)x, w − x + (1−rb)x) = (w + rg x, w − rb x). Investing one more dollar decreases wealth in the bad state by rb and increases wealth in the good state by rg, so the slope of the budget line is −rb/rg. If the asset has a strictly positive expected return, then p rg − (1−p) rb > 0, or p/(1−p) > rb/rg.

28 The expected utility maximizer has |MRSgb| = p/(1−p) when he invests nothing in the risky asset and has wealth (w, w). This implies |MRSgb| is greater than the absolute slope of the budget line, rb/rg. Hence, when facing a better-than-fair gamble, a risk averter will invest at least a little in the risky asset, no matter how risk averse he is.
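A sketch of this claim, assuming v(c) = ln c and an illustrative better-than-fair asset; the point is only that the optimal x is strictly positive:

```python
import math

w, rg, rb, p = 1_000.0, 0.20, 0.15, 0.5      # wealth, good/bad returns, prob of good state
# Better than fair: p*rg - (1 - p)*rb = 0.025 > 0.

def eu(x):                                   # expected utility of investing x dollars
    return p * math.log(w + rg * x) + (1 - p) * math.log(w - rb * x)

print(eu(50.0) > eu(0.0))                    # True: a small investment beats investing nothing
print(max(range(0, 1001), key=eu))           # optimum is about 833, strictly positive
```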

29 This is the same logic we saw before. When the insurance company makes a profit, reducing coverage (in effect, selling a little insurance back) is like holding a better-than-fair risky asset, so a risk averter will not be fully insured. Now turn to the benefits of diversification. Suppose that this summer there is a one-half chance of rain and a one-half chance of sun. If it rains, every dollar you invest in a raincoat company becomes 2 dollars,

30 and every dollar invested in a sunglasses company becomes 0.5 dollars. Likewise, if the summer is sunny, every dollar invested in the raincoat (sunglasses) company becomes 0.5 (2) dollars. Call state 1 the rainy state and state 2 the sunny state. If you invest 100 dollars in the raincoat company, you get (200, 50) and your utility is 0.5 v(200) + 0.5 v(50). If you invest 50 in the raincoat company and 50 in the

31 sunglasses company, you get (125, 125) and your utility is v(125). A risk averter has v(125) > 0.5 v(200) + 0.5 v(50), so diversification pays off. Assets that move in opposite directions provide a kind of insurance, which is valuable to someone who dislikes risk. Likewise, the value of an asset depends on how strongly it moves in the opposite direction from the rest of your assets.
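The comparison on the slide in a few lines, using v(c) = √c as one illustrative concave Bernoulli function (any concave v gives the same ranking):

```python
import math

v = math.sqrt                           # one illustrative concave Bernoulli function
p_rain = 0.5

def eu(in_raincoats, in_sunglasses):
    # Each dollar doubles in that company's good state and halves in its bad state.
    rainy = 2.0 * in_raincoats + 0.5 * in_sunglasses
    sunny = 0.5 * in_raincoats + 2.0 * in_sunglasses
    return p_rain * v(rainy) + (1 - p_rain) * v(sunny)

print(eu(100, 0))   # all in raincoats: 0.5*v(200) + 0.5*v(50) ≈ 10.61
print(eu(50, 50))   # split 50/50:      v(125)                 ≈ 11.18  (higher)
```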

32 Risk spreading: each individual has wealth 35,000 and faces a 0.01 probability of a 10,000 loss. Suppose there are 1,000 such individuals and their losses are independent. Then there is room to spread the risk. Suppose the 1,000 individuals agree that whenever anyone incurs the 10,000 loss, each of the 1,000 individuals gives that person 10 dollars. On average 10 houses burn down, so most of the time each individual ends up with about 35,000 − 10×10 = 34,900.

33 If they do not insure each other, then with probability 0.99 each has wealth 35,000 and with probability 0.01 each has wealth 25,000; the expected wealth is again 34,900. Since a risk averter prefers the sure outcome to a risky outcome with the same expected value, this also explains why an insurance company can make a profit. In fact, you can even self-insure by saving over time.
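A sketch of the risk-spreading comparison with the slide's numbers and an illustrative concave v, treating the mutual-insurance wealth as the (nearly) sure value 34,900:

```python
import math

v = math.sqrt                          # illustrative concave Bernoulli function
W, LOSS, p_loss, N = 35_000, 10_000, 0.01, 1_000

# Without mutual insurance: wealth W with prob 0.99, W - 10,000 with prob 0.01.
eu_alone = (1 - p_loss) * v(W) + p_loss * v(W - LOSS)

# With the mutual arrangement, about N*p_loss = 10 losses occur, each costing every
# member LOSS/N = 10 dollars, so wealth is close to 34,900 in (almost) every state.
eu_shared = v(W - N * p_loss * (LOSS / N))

print(eu_alone, eu_shared)             # the (nearly) sure 34,900 gives the higher utility
```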

